Modern Meditations: Dwarkesh Patel
Dwarkesh’s views on an “intelligence explosion,” WWII, and 18th century seafaring.
Friends,
James Baldwin authored what is, to my mind, the most profound observation about history. In his essay, “Stranger in the Village,” Baldwin chronicles an unsettling stay in a remote Swiss village that has never seen someone of his complexion before. He interrogates the admixture of “genuine wonder” and dehumanization he receives from the locals, not least the children, and the memories it raises for him. “Joyce is right about history being a nightmare—but it may be the nightmare from which no one can awaken,” Baldwin reflects on his time in Switzerland. “People are trapped in history and history is trapped in them.”
Not all of us experience history as a nightmare in the profound way Baldwin outlined in his essay, but it is undeniable that we are all trapped in history and that it is trapped in us; we are both its products and its agents.
The subject of today’s Modern Meditations is someone who, I believe, is doing some of the most interesting and important work to help us understand this strange period of history in which we are trapped. Through his eponymous podcast, Dwarkesh Patel delves into the minds of the modern age’s most compelling thinkers, many of whom are not widely known but have illuminating ideas worth hearing. (I highly recommend it!)
Though Dwarkesh uses a satisfyingly large canvas—dancing between historians, researchers, and CEOs—he has become the standout interlocutor for deep conversations about artificial intelligence. What stands out most from a Dwarkesh episode is its sheer depth and fidelity, which allow the listener to understand the dynamics of this dazzling, daunting AI renaissance.
In today’s edition, Dwarkesh shares his thoughts on an impending “intelligence explosion,” World War II’s Asian Theater, Lyndon B. Johnson, the longitude problem, and the little-known AI researcher who informed his worldview. These are his meditations.
Modern Meditations: Dwarkesh Patel
What would you be doing if you didn’t work in tech media?
That’s tough to answer. I studied computer science, so by default, I’d be working as a software engineer or trying to start a company. I certainly wouldn’t have predicted that I’d become a podcaster; if you’d told me that in college, I would have been surprised. I expected to work directly on problems in science and technology rather than discuss them.
If I couldn’t work as a software engineer, ironically, I’d probably be doing something closer to my current job—some other media, maybe writing.
Which current or historical figure has most impacted your thinking?
Carl Shulman. He’s an independent researcher focused on AI who has studied it and many related fields—from economics to ethics—for a couple of decades. What makes him remarkable is his encyclopedic knowledge across virtually every field relevant to thinking about the future and his willingness to conduct first-order estimates and basic calculations that others don’t.
You may not realize it, but much of the AI discourse sits downstream of his work. He’s not as widely known because he doesn’t really like to write books or anything—he prefers to hop on phone calls with people who then write books and blog posts.
For example, the idea of a software-only singularity stems from him. In my conversation with Jeff Dean, Google’s Chief Scientist, and Noam Shazeer, one of the co-authors of the original Transformer paper, I asked them to assess the plausibility of a scenario in which each model helps to produce the next one. Essentially, Gemini-3 helping to write the training code for Gemini-4: you’d run multiple concurrent versions of Gemini to find architectural tweaks that each make the next model 10% better. Combine those gains and you get an explosive feedback loop that results in remarkable capabilities. Maybe it creates a Jeff Dean-level programmer or something.
Both Jeff and Noam said, “Yes, that could totally happen.” And that’s an idea that Carl came up with two decades ago.
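To make the compounding dynamic concrete, here is a purely illustrative Python sketch. The 10% per-generation gain and the assumption that capability proportionally speeds up the next training cycle are invented parameters, not anyone’s forecast:

```python
# Toy model of a software-only feedback loop. Illustrative parameters only:
# each generation is 10% more capable, and greater capability shortens the
# time it takes to research and train the next generation.

def simulate(generations: int = 10, gain: float = 1.10, base_months: float = 12.0) -> None:
    capability = 1.0   # capability of the current model, arbitrary units
    elapsed = 0.0      # total calendar time, in months
    for g in range(1, generations + 1):
        elapsed += base_months / capability  # smarter models iterate faster
        capability *= gain
        print(f"Gen {g:2d}: capability x{capability:.2f} after {elapsed:6.1f} months")

simulate()
```

Each generation arrives faster than the last, which is what makes the loop explosive rather than merely incremental.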
He’s also thought deeply about the economic implications of this kind of intelligence. If you have billions of extra AI “workers” that are each as smart as Jeff Dean, how much might our output grow? It might be as much as 20% a year for many years. That’s the natural implication and much more feasible than it sounds. If you look at China and Hong Kong, they’ve averaged 10% annual growth over multiple decades; this is just doubling that.
What I admire about Carl is that he arrives at his numbers through true first-order thinking. He’ll ask himself, “How fast can growth go? Well, the upper limit is basically the multiplication rate of E. coli, which reproduces every 20 minutes. How much solar flux reaches Earth, and how many times could you double our energy production before we run out of it? If you rely on nuclear energy, how much waste heat could you produce before it would boil the oceans?” He goes through area after area, thinking through the feedback loops, the processes involved, and the ultimate limits. By doing that, he’s put numbers and models in place for many of the biggest questions.
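That style of estimate is easy to reproduce. Here is a minimal sketch of the solar-flux calculation, using rounded public figures (a solar constant of about 1,361 W/m² and world primary energy use of roughly 19 terawatts); the numbers are mine, not Carl’s:

```python
import math

# Fermi-style limit in the spirit described above, using rounded public figures.
SOLAR_CONSTANT = 1361.0   # W/m^2 arriving at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # meters
WORLD_POWER = 1.9e13      # watts; roughly today's primary energy use (~19 TW)

# Earth intercepts sunlight over its cross-sectional disk, pi * r^2.
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # ~1.7e17 W
doublings = math.log2(intercepted / WORLD_POWER)

print(f"Sunlight intercepted by Earth: {intercepted:.2e} W")
print(f"Doublings of energy use before hitting that ceiling: {doublings:.1f}")  # ~13
```

About thirteen doublings: the kind of hard ceiling this style of reasoning surfaces.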
I first heard of Carl through my friend Leopold Aschenbrenner, who connected me to him. Leopold said, “Look, this guy’s got all the deep models of AI. You should try to interview him.” When I did, it was like being immersed in an entire worldview.
It ended up deeply informing my perspective on AI. Namely, we might have this period of a few years that is the most important in human history. It will be defined by AIs driving this incredibly fast progress: AI helping to develop other AIs, AIs helping you design more powerful chips, and so on. That effectively creates billions of extra AI “workers” who, in turn, help you procure more compute to bring online billions more AIs. AIs may still struggle with physical activities, so we’ll rely on humans with Meta glasses to run around our warehouses while AIs optimize the logistics.
Effectively, we may be heading toward an intelligence explosion and explosive economic growth.
What tradition or practice from another culture or era do you think we should widely adopt?
I was recently reading about some 18th-century sea voyages and admiring the courage and risk-taking of the people who participated in them.
This might sound strange because there are many aspects of these voyages that we shouldn’t want to emulate. Financially, these expeditions could incur huge debts, and they inflicted enormous suffering through scurvy and typhus. Working as a seaman on these ships was so unpleasant that press gangs would roam towns looking for men with tar on their hands (a sign they’d worked at sea) and kidnap them to man the boats. They simply couldn’t get enough people to do it otherwise.
Underneath these formidable problems was a culture of ambition, courage, and mission orientation that I don’t think we’ve fully retained. When faced with challenges of similar risk—perhaps space exploration—could we muster the capital and human resources to conquer them? I’m not sure.
A less complicated example from this period was the use of large prizes to galvanize action. In 1714, the British Parliament passed the Longitude Act, offering enormous rewards to anyone who could help ships reliably determine their longitude.
It was an unsolved problem until that point. Ships could find their latitude by observing the position of the stars, but calculating longitude was extraordinarily difficult. To do so, you needed to know the exact time both aboard the ship and at a fixed reference point, such as the port you had sailed from; because the Earth rotates 15 degrees every hour, the difference between those two times translates directly into degrees of longitude. However, clocks of the era did not function well on boats, with the ship’s motion throwing off their timekeeping. Not knowing one’s longitude created all kinds of problems, slowing transit at best and causing shipwrecks at worst.
Parliament’s prize had the intended effect. In response, the 18th-century clockmaker John Harrison devised the “marine chronometer,” a device that reliably kept time at sea.
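The payoff of that reliable clock is simple arithmetic. A minimal sketch, assuming only that the Earth turns 15 degrees per hour (the example numbers are illustrative):

```python
# Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour, so the gap
# between local solar time and the time kept at a reference meridian converts
# directly into longitude.

def longitude_from_times(local_solar_hour: float, reference_hour: float) -> float:
    """Degrees of longitude; negative values are west of the reference meridian."""
    return (local_solar_hour - reference_hour) * 15.0

# Example: it is local noon, but the chronometer, still keeping the reference
# port's time, reads 15:00. The sun reached us three hours "late", so we are
# 45 degrees west of that port.
print(longitude_from_times(12.0, 15.0))  # -45.0
```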
This doesn’t seem like the kind of thing we do anymore. Can you imagine Congress posting a $10 billion prize for something, or whatever the equivalent fraction of GDP is? Operation Warp Speed’s push to develop a COVID vaccine was similar, but I’d like to see us do it more often. To say: here’s a problem; if somebody solves it, it’s worth at least this amount of money to us, and you’ll be a rich man or woman.
What experiment would you run if you had unlimited resources and no operational constraints?
If I were an AI researcher with a bunch of grad students who were really good at prompt engineering, I’d take them to ordinary businesses to see just how much they could increase efficiency. I’m not talking about tech companies but insurance firms, warehouses, or retail stores that don’t necessarily have high-level coders. Their job would be to embed themselves and automate as much as possible, going through every step of the business and constructing a kind of automation pipeline. That might involve replacing some jobs but also increasing the efficiency of the remaining workers.
There’s a disconnect between people who understand cutting-edge AI and the daily operations of typical businesses. The guy running a carpentry shop in Des Moines isn’t current on the latest AI developments, and AI experts probably aren’t interested in the day-to-day improvements you can make across different fields.
I’d want to know: how far can we take it? One of the big questions in AI is: How much progress could we make if we only had our existing models and couldn’t develop any new ones? How much juice could we squeeze from them over the next few years and decades? By dropping AI people into one of these firms, I think we could get some compelling indications. Could they make it 50% more efficient? 100% more efficient? I’m super curious.
My intuition is that with just existing models, we could increase the GDP growth rate by at least half a percentage point for the next decade or two—about as big a deal as the internet, by that measure. It’s not obvious, though. We’re in a confusing period where AI capabilities have vastly outperformed even optimistic projections but have undershot when it comes to economic impact. It’s not clear why that’s the case.
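For a sense of scale, a quick compounding check, assuming an illustrative 2% baseline growth rate:

```python
# What an extra 0.5 percentage points of annual growth compounds to,
# against an assumed 2% baseline. Illustrative numbers only.
baseline, boosted, years = 0.02, 0.025, 20

gap = (1 + boosted) ** years / (1 + baseline) ** years - 1
print(f"After {years} years, GDP is {gap:.1%} larger than the baseline path")  # ~10%
```

Roughly 10% more output after two decades: a big deal, though well short of an explosion.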
My estimate is based on the fact that I don’t think people are trying that hard to implement AI yet. There are so many obvious use cases still to be deployed. It’s also based on my own experience: I’ve gone from finding these models basically useless a year ago to them making up about 60% of my computer use. I’m actually planning on spending next week making as many AI scripts as I can to help with different parts of my work, from production to brand scripts to research.
What piece of art can you not stop thinking about?
It’s a cliché answer, but Robert Caro’s biographies, particularly his works on Lyndon Johnson. There’s a quote in one of the biographies—maybe it’s the epigraph—that I think of often: “Explore a single individual deeply enough, and truths about all individuals emerge.”
What particularly stays with me is Johnson’s overwhelming focus and need to get things done—his absolute refusal to take no for an answer, his meticulous preparation, and relentless execution. LBJ was a teacher when he was 21, and he used to tell his students, “If you do everything, you’ll win.”
You see that play out in so many scenarios. LBJ does, in fact, do everything. To get a bill passed or get someone to agree with him, he doesn’t just do the things that are reasonably likely to help his cause but also things that are barely plausibly relevant, that have a 0.001% chance of impacting the outcome. For example, he was a candidate in a totally hopeless Texas race, and still, at the end of a day of campaigning, if he spotted a farm in the distance with its lights on, he’d drive out of his way to visit it and spend 30 minutes trying to win one vote. Even when the expected value was basically zero, LBJ still did these things.
Of course, Robert Caro is the same way when it comes to writing these biographies. He moved to the Texas Hill Country to understand it better; he wrote an entire book about just one Senate race, the one LBJ stole from Coke Stevenson. It’s probably my favorite of the series. Caro has the same mentality: you’ve got to do everything.
What is your most contrarian, high-conviction opinion?
That markets will remain important, powerful coordination mechanisms in the aftermath of an intelligence explosion. There’s often an assumption that the private sector will become less important once we develop artificial superintelligence (ASI) or something similar. The thinking goes that in that scenario, there will be inevitable state consolidation, with governments deciding how to leverage ASI and distribute the benefits.
I think people underrate just how powerful markets are. Just because it’s harder to imagine what trade will look like between billions or trillions of artificial entities that are much smarter than us doesn’t mean it won’t happen. Or that it won’t be a useful way to organize these AI entities. I think having a strong market-based order also reduces some of the high-stakes brinkmanship that might occur if only the American and Chinese governments control ASI and are staring each other down.
What are you obsessed with that others rarely talk about?
The Asian theater of World War II. There are just so many counterfactuals. One of the biggest is the possibility of China not ending up under communist control after the war.
It was not an inevitability. During the war, Japan devoted four-fifths of its troops to fighting China—not America. They were fighting against China’s nationalist government, led by Chiang Kai-shek, who hated the country’s communist party. By waging that war, Japan ended up significantly weakening the nationalists, even though Japan was, if anything, more ideologically opposed to China’s communists.
That gave Mao Zedong an avenue to total power, which is maybe the worst thing that ever happened in human history when you consider how many people died.
America could have done more to prevent this, too. We sent Chiang Kai-shek something like one-hundredth of what the Marshall Plan spent, and even that was bungled. George Marshall didn’t give Chiang the full amount upfront, making it conditional on various outcomes, such as stopping the civil war and forming a coalition government with Mao, even though that was implausible. These delays worked to Mao’s advantage, helping him consolidate power.
What might China have looked like under the nationalists? During the war, Manchuria was considered one of the most developed parts of Asia, behind Japan, Taiwan, and Korea. When you look at the gangbusters growth Japan achieved in the post-war years, it’s hard not to wonder whether China might have had the potential to do very well.
What will the next generation do or use that is unimaginable to us today?
Is superhuman intelligence too obvious an answer?
I think it’s hard for us to fathom just what it might mean to have so many of these AIs optimizing different aspects of our technological stack. It’s not just about having a few genius AIs – Jeff Deans, Einsteins, or von Neumanns in a server. That idea is well appreciated. What I think is relatively underappreciated is what the sheer volume of them – even if they’re not superintelligent – might mean.
If someone in the 18th century wanted to go to the moon, what technology would they have needed? In some sense, it’s a bad question to ask. It wasn’t that they needed a single innovation but that their entire technological base, from metallurgy to power to electronics and beyond, needed to be upgraded.
That’s what AI may do for us and the next generation. It could totally upgrade our entire technological stack.
What’s an underappreciated corner of the internet?
Epoch AI. It’s genuinely the best resource for keeping up with AI. A number of researchers keep track of things like: What are the biggest models being trained? How much data is being used? When will we run out of data? What are the limitations on scaling?
They answer the basic questions of where AI is headed in a way that doesn’t exist elsewhere.