A species’ intelligence does not always increase. Many parasites have evolved to become simpler than their ancestors, shedding neural complexity over generations. It has long been observed that domesticated animals like sheep, pigs, and dogs have smaller brains than their wild ancestors. As their environment changed and they no longer faced the same existential threats, their cognitive capabilities atrophied (though some meta-studies suggest the matter is more complicated). Researchers at the Max Planck Institute of Animal Behavior found that farmed mink had 25% smaller brains than their wild counterparts. Remarkably, feral populations that returned to the wild regained their ancestral brain size within fifty generations.
Our brains are not static, but dynamic, both within the course of our individual lives and the long arc of our species. We are capable of enhancing our capabilities or letting them atrophy, of elevating our kind or regressing to bland domestication. Which of these paths we follow depends on the choices we make and the environment we cultivate.
It is fair to say that we are living in a very different informational environment than we were even a few years ago. The advent of generative artificial intelligence has changed the way we find, process, and produce new information at a stunning pace. It is an authentic miracle, perhaps the most remarkable innovation in a lifetime that overlaps with the advent of the internet, mobile phones, instant money transfer, cloud computing, gene editing, electric vehicles, and mRNA vaccines.
There is every reason to believe AI can eclipse all of these inventions – that it will save many lives and eliminate much suffering. We will have new, better therapeutics capable of addressing unsolved illnesses. We will have safer cars that avoid collisions and prevent accidents. We should have more efficient doctors’ appointments, legal processes, and visits to the DMV, and relief from a thousand other great and tiny stresses. There is no reason we cannot have all of these things, which is not the same as saying they will be perfect.
Increasingly, I do not wonder whether AI will do these things, but what human intelligence will look like when it does. As our civilization upgrades its cognitive capabilities, will our species’ powers decline? Simply put, as AI gets smarter, will we get dumber?
We are not starting from the sturdiest foundation. Today’s cognitive environment makes for deeply depressing reading, and it is growing more so. A recent Financial Times piece outlined sharp declines in adolescent and adult cognitive performance across a range of benchmarks. A growing number of US 18-year-olds report having difficulty “thinking or concentrating” and “learning new things,” while more teenagers say they “hardly ever” read for pleasure. Adults fare little better: a growing share of those in high-income countries like the US, New Zealand, and the Netherlands are incapable of using “mathematical reasoning when reviewing and evaluating the validity of statements.”
A New York Times op-ed compiled another set of bleak statistics. Among them: schools are assigning fewer full books to students because they believe children no longer have the capacity to complete them. Instead of reading Great Expectations, students are given summaries, as if licking a stamp were a substitute for a proper meal. Meanwhile, the share of American adults who read even a single book per year fell to 46% in 2023. Approximately 30% of Americans read at the level of a 10-year-old.
Think about what this means. As you go about your daily life, you are constantly encountering adults who have read zero books in the last year, cannot reason numerically, and struggle with even basic literacy. You will meet young adults and teenagers who have been told that reading a whole book is too hard for them, and who believe they cannot concentrate well or learn new things. You will need to communicate, work, transact, and form relationships with these people. You may fall in love, start a company, or confront an emergency with these people. At some point, you will need their help, and they will need yours. You are these people, in some way, in some degree, and so am I.
Perhaps most worrying, these studies largely fail to capture the impact of the current AI eruption. In many instances, the most recent scores date back to 2022, the same year ChatGPT launched – meaning the declines began before AI could plausibly have caused them, and other factors set them in motion. The rise of “screen time,” the explosion of social media, educational “coddling,” political unrest, and a historic pandemic may all have played a role in our decline.
Is it possible that AI reverses this trend? Will we see a return to form in 2026 and beyond? It’s hard to imagine how this could be the case.
David Krakauer, President of the Santa Fe Institute, draws a distinction between “competitive” cognitive artifacts and “complementary” ones. The abacus is a classic complementary artifact. It cannot calculate on your behalf, even though it amplifies your abilities. To use it at all, you must hold a mental model of its workings; take the device away, and the practiced user remains a better calculator than before.
Consumer AI interfaces are not like this. They don’t want to think with you, but for you. Their explicit goal is to save you from the peskiness of thinking, from doing actual cognitive work, whether that’s writing a book report, drafting an email, analyzing a dataset, or even drawing a picture. You can ask them questions, prod them in one direction or another, but fundamentally, they are competing with you for the same unit of thought.
We know this is no competition at all. When given the chance to avoid work, especially annoying work, we are all too happy to take it. Already, we see what this looks like. Schools are desperately trying to detect AI-generated work but are ill-equipped to do so. A New York Magazine piece reported that students are “cheating their way through college,” with more than 90% of surveyed students saying they used ChatGPT on their assignments. We do not know what percentage used AI simply to do the assignments for them.
Meanwhile, governments around the world have embarked on “AI education” plans intent on teaching the next generation, starting as early as kindergarten. AI “literacy” – learning about the technology itself – is seen as imperative for countries like the US, as if it were difficult to figure out how to type into a textbox and press enter, as if this skill required exposure in early childhood. When, exactly, will the thinking happen? Does the fate of a generation rely on them remembering to flip on ChatGPT’s “Study Mode” every time they have a question?
It does not have to be this way. There are opportunities here and a rich design space for founders curious enough to look. There is no reason we cannot have AI teachers that inspire us to think more rigorously and test our assumptions. Alan Kay, the legendary computer scientist, elegantly captures the point of tension here with his question, “How much help is too much help?” We need machines that guide us toward the right answer, that teach us how to get there again, rather than simply providing it. It is a pity that so few AI founders seem motivated by this possibility; many have become obsessed with raw speed instead.
Adults need this as much as children. Children, at least, know that they are not supposed to do all of their work with AI; adults are actively encouraged to offload as much as possible. There are good reasons for this. AI can legitimately increase productivity and take low-value work off our plates. There is likely no cognitive benefit to copying numbers from one spreadsheet into another or typing out a calendar invitation. But we are not yet wise enough to use it this way. We frequently apply it to cognitively rich tasks, avoiding valuable mental effort. Every time we hand off one of these tasks, we allow our brains to atrophy a little more.
Something else happens in this process, too. Scan your emails, scroll social media, read breaking news, delve into the last few reports someone sent you, and you will see it. There is a creative numbness creeping into everything, a blank sterility that comes from asking a mammoth statistical agglomeration to think on your behalf. It must stay within its distribution; it is searching for the center, not the edge.
What is the grand impact of this? Does it matter if someone else chooses to outsource their thinking, to deliver outputs of bland proficiency? Yes. It matters because all of us rely on the competence of others. And it matters because our cognition is impacted by the information environment we collectively create. If every email you read, every podcast you listen to, every book you skim has been pushed through the same denaturing process, what wilderness can your mind hope to discover?
If you care about the content of your mind, you must protect it from domestication. We must, each of us, cultivate a feral intellect. Read difficult books. Sit down and write a terrible sentence yourself rather than confecting a decent paragraph with a machine. Sit in front of a spreadsheet full of numbers and figure out what they are saying. Brainstorm with a friend. Treat auto-generated summaries with suspicion. Monotask. Do some math, some of the time, in your head. Concentrate. Exhaust your ideas, then push a bit further. Ask an expert if you know one. Read and re-read. Write discursively. Watch a slow film. Ignore Sora. If you can, have your most important conversations with a human. Do not skim, not always. Consent to being bored. Sketch it out on paper. Respond as yourself, not an aggregation of selves “like you.” Teach yourself something. Allow yourself to think quietly. Synthesize. Debug it yourself. Try something new. Read a long article to the end. Draw something badly. Think about what you really want to ask someone. Find the source material.
Do all of these things, do none of these things, come up with your own. I don’t have the answers. “To think too much is a disease,” Fyodor Dostoevsky wrote. The greater risk today is thinking too little, and never noticing.