Dear Friends,
I’m sharing a round-up of news analysis. I’m very happy to share that I’ll be speaking at the Interintellect Salon on December 6th in conversation with Megan Gafford. Tickets and information available here!
Best,
Sam
BUMBLING IN UKRAINE
One very minor consequence of the Trump era is that, for me — and I assume this is true for a lot of other people — I've lost my appetite for politics. Everything with Trump follows the same pattern: the bold statement, the outrage, the flurry of activity as it seems like everything is about to change, and then it all goes away again. Remember when we were supposed to conquer Greenland? How about Panama? Or Canada? What about DOGE?
This isn’t to say that nothing happens. Trump is very consistent on immigration and is driving through all kinds of punitive deportation measures. And the transformation of the government into a kind of personal piggy bank seems to be moving apace. But, already, we may be returning to the feeling of the first Trump administration — that the tax cuts and some of the immigration measures are real, and everything else is a circus swiveling around a weakening and diminishing state.
And that seems to be what's going on with the Ukraine peace proposal. We've had the phase of outrage — in which it emerges that the 28-point proposal was essentially dictated by Russia and that Witkoff was coaching Russian negotiators on how to speak to Trump. It seems like everything is at an end — with Zelensky giving his somber speech that Ukraine faced an "impossible choice" over whether to capitulate or to go it alone. And then the Europeans and the US State Department get together and roll everything back, so that, as Joshua Yaffa puts it in The New Yorker: "there are now essentially two plans, a Rubio proposal and a Witkoff proposal. One suits Russia, the other Ukraine."
The reality seems to be that neither side actually wants to wrap things up. Ukraine really can't concede territory. Russia has its new ally in place in Trump, but it's not clear that Trump can push what he wants through his own State Department — and, as Yaffa puts it, "from Moscow's vantage point, Trump is merely a temporary political phenomenon." The essential issue of the war — whether Ukraine will ever join NATO — is such an either/or proposition that it can't really be settled in any meaningful way by negotiation. Even if the lines are frozen where they are — one possible outcome of a settlement — Ukraine can't concede its right to bid for NATO membership, and it can't trust Russian assurances that forgoing membership buys anything like security. So we all keep going around the mulberry bush.
The enduring legacy of the Russia-Ukraine War — in addition to sowing what I can only imagine is irreconcilable hatred between the two countries — is what it does to the face of war. Earlier in November, C.J. Chivers had a bracing piece in The New York Times about what the war is like for one wounded Ukrainian soldier caught in no man's land. And what it is is a real-life version of "The Most Dangerous Game" — soldiers running and crawling for their lives to escape drones. As Chivers writes, the most likely fate for the soldier he profiles is to "lay helpless on farmland until a predatory drone descended and an explosion tore him apart, his last wretched seconds recorded for social media's 24-7 snuff-film deathstream." The drone clearly is to 21st-century warfare what the machine gun was to World War I, but more so — combining air power with precision weaponry, all of it inexpensive and at absolutely no risk to the operator.
What that means, then, is that we're in an incredibly unstable geopolitical period where the face of war is changing in ways that are all but unrecognizable. Is the future of war drone swarms, as seems likely? Is there some kind of predator that can get ahead of the drones? The advent of the drones likely doesn't particularly affect the calculation of Russia or Ukraine — they seem fairly balanced in their use of drones, which contribute heavily to the ongoing stalemate — but there must be a certain incentive for war planners worldwide to want to see how this all plays out.
And how it really plays out is that the development of AI is intricately tied up with developments on the battlefield. Autonomous agents seem to be the next turn of the wheel, and that means that AI has to move forward, as part of an international arms race, even as its commercial benefits look ever more questionable.
THE AI VIBE SHIFT
The narrative around AI has abruptly shifted, and the skeptics are having their day. As Carole Cadwalladr puts it, "And, then five weeks ago, there was, what felt like a species leap. I've watched as these views have gone from niche lone voices on the periphery to headlines in the business press." What Cadwalladr is talking about is the realization that, in a word, 'AI' is misnamed. It's not really 'artificial intelligence' — the thing that we think we understand so well from a million sci-fi movies. What it is is a cool text-predict-meets-chatbot gizmo that's impressive, in the way that Google Maps was impressive, but is inherently somewhat limited and a very different deal from the AGI that the LLMs are supposed to lead to.
In an interview with Dwarkesh Patel, Ilya Sutskever, former chief scientist at OpenAI — in other words, somebody who really knows what he’s talking about — acknowledged that there were underlying deficiencies in the models that scaling just wasn’t going to fix. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people. It’s super obvious. That seems like a very fundamental thing,” Sutskever said. “So it’s back to the age of research again just with big computers.”
In a write-up for The Atlantic on the three-year anniversary of ChatGPT, Charlie Warzel writes, "The world that ChatGPT built is a world defined by a particular type of precarity. It is a world that is perpetually waiting for a shoe to drop…. It is, we're told, a race—a geopolitical one, but also a race against the market, a bubble, a circular movement of money and byzantine financial instruments and debt investment that could tank the economy."
The utility of it all is coming ever more into question. In an interesting piece, Josh Anderson re-verifies the MIT finding that 95% of corporate AI initiatives fail — discovering that, after three months of letting Claude design a software program for him, he couldn't, despite his longstanding software expertise, modify the program when it needed to be modified. "Now when clients ask me about AI adoption, I can tell them exactly what 100% AI adoption looks like: it looks like failure," Anderson writes. "Not immediate failure—that's the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you've built."
And then there are the signs of the bubble bursting — Meta laying off workers from its Superintelligence lab just after some of its splashiest hires, having simply thrown money at recruitment without any real foresight.
But what’s interesting about the AI bubble is that it may really have nothing to do with the quality of its product or even whether the market can sustain it. As former Silicon Valley venture capitalist Roger McNamee told Cadwalladr, “I think they know they’re in a bubble, and I don’t think they care.” The point is that AI has become a political question, and a question of national pride. It’s become clear that it’s the only meaningful source of growth in the economy, and so as Cadwalladr puts it in her diatribe, “The media is part of the same mystical amalgam of bullshit and vibes that’s keeping the whole thing afloat, a circular economy of access journalism and tech hyperbole that masks something stinking and rotten at its core: OpenAI.” As Warzel argues, AI has all the hallmarks of a big ol’ scam. “Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con,” he writes of the current situation.
But the real point is that none of that may matter. The future of AI is being worked out in no man's land somewhere in eastern Ukraine. And even if autonomous agents aren't particularly more effective in drone warfare than human operators, the whole thing has spooked the defense establishment enough that it's become apparent AI is the future of defense — as well as of the economy. A somewhat hysterical article in Foreign Affairs — a fairly reliable barometer of elite opinion — argues that, to stay ahead of China, the US government will have to integrate itself far more tightly into AI development. "The tech sector can help the state make sense of and deploy AI. The state can help the tech sector continue to grow in a way that advances everyone's interests," Foreign Affairs writes of its proposed 'grand bargain.' The arms race — and the perception that AI is the future of warfare — trumps everything else. The grand bargain, as Foreign Affairs puts it, involves "AI businesses helping the United States incorporate their technologies into the national security apparatus." In return, it's implicit, 'guardrails' go out the window and it's full steam ahead for AI to transform the economy, regardless of whether the product actually improves anybody's life.
IS OUR CHILDREN LEARNING?
The terrain that all this really plays out in is education. New York Magazine has just the latest scare piece in a long-running series on the collapse of American education standards. What's tragic, in New York's accounting, is that, notwithstanding decades' worth of handwringing, American education standards had actually been steadily improving. "If you look at a chart of test scores from the 1990s to the present, it arcs happily upward until the middle of the last decade as each generation incrementally learned a little more," writes Andrew Rice. But then test scores fall off a cliff — with around 30-40% of students performing "below basic" on reading tests, a label that sort of obscures what's really going on. "You can't believe how low 'below basic' is," public school teacher Carol Jago helpfully tells Rice. "The things that those kids aren't able to do is frightening."
People like me, who are still annoyed about pandemic-era policies, would tend to point to school closures in the 2020-2021 school year. But, as one teacher tells Rice, “the pandemic didn’t do shit. It just stripped bare for suburban parents the reality of what was happening.” And somewhat more soberly, Rice concludes, “Something disastrous happened, and academics are nearly united in the opinion that the problem is not simply a product of the pandemic.”
Jonathan Haidt is jumping up and down somewhere in the background here to say that, of course, it's the phones; that, fundamentally, the phones represent brain rot, and the longer the kids are exposed to the phones, the more brain-rotted they are. AI is just a symptom of the larger problem of the phones; if you rely on the phone to do everything, then it's just a nice little feature that there's also a button that writes your term paper for you.
And that probably does, really, explain what’s happened over the last decade — the phones shredded everybody’s attention span and ‘remote learning’ during the pandemic exacerbated the already-worrying trend. The question is what to do next. The education fundamentalists are having their moment right now. Haidt has made surprising progress in convincing educators to keep phones out of schools during the school day. Articles like Rice’s or a similar piece by Michael Clune in The Atlantic argue for going back to the fundamentals — the kind of classical education that Euclid would easily have recognized. “The skills needed to thrive in an AI world might counterintuitively be exactly those that the liberal arts have long cultivated,” Clune writes. “Students must be able to ask AI questions, critically analyze its written responses, identify possible weaknesses or inaccuracies, and integrate new information with existing knowledge.”
So back we go to calculating the hypotenuse of triangles and asking what the green light at the end of the dock symbolizes in The Great Gatsby. And I guess that's a reasonable approach to the education crisis — certainly, the remote-learning experiment has empirically failed. But I wonder if a contrarian take might be in order — that the kids know something that we don't, which is that the time spent scrolling on their phones might be more beneficial for them than the Pythagorean theorem. The truth is that we are living in a digital world. I would argue that, from a market standpoint, my fancy education was basically worthless because I never learned a coding language, which meant that I couldn't really participate in the most significant development of my era — and also couldn't intelligently comment on it. When the kids are scrolling, they're not just scrolling — they're identifying their interests, sometimes tunneling very deep into an area of expertise, learning the particular social skills of communicating digitally (which is a social skill in the 21st century), and sometimes stumbling across market opportunities, as in the fast-developing influencer space. In some ways they are being much more elastic and innovative than the adults, who tend to lose sight of the fact that algebra and trigonometry actually aren't going to be very useful for most people, and never were, and are in place really just because they're easy to teach and to test and are connected to some superstition about 'learning how to learn.'
What really should happen is some sort of hybrid — some recognition by educators of the limits of the core curriculum and an attempt to update it for what students actually need to know (why trigonometry and pre-calculus, for instance, as opposed to statistics and economics?); some attempt to get real about the limits of digital education (no need for phones in English or math classes); at the same time, more integration of computer education into the school system; and then, most importantly, more tracking — with students indicating their career preferences from an earlier age and concentrated programs of study opening up for them, much the way they do at the university level.
Probably none of those things are really going to happen. Educators are stuck in their ways, budgets are limited for anything that's not readily testable math and reading comprehension, and the students are just going to drift inexorably away from programs of study that they suspect have nothing to do with their actual lives and which they can't understand anyway. It's not a pretty picture, and AI (which is effective for ghostwriting student essays but not for much beyond that) only adds another dose of confusion. Which leaves us where, exactly? Well, probably with lots of op-eds decrying the education system. The sense is that what's called for is vision, centralization, a real shake-up of the system, but that's obviously not going to happen either, and so we have atrophy, atrophy, atrophy, with the great big scam of AI the only thing that's moving anywhere and the only thing everybody seems willing to get behind.