Saturday, April 18, 2026

The Dual-Use Instrument: AI's Golden Age of Breakthroughs and the Erosion of Human Cognition

Sam Kriss wrote that in 1931, Soviet neuropsychologist Alexander Luria trekked to the remote foothills of the Alai Mountains and discovered something profound: basic literacy didn't just teach people to read. Literacy rewired how they thought. Illiterate peasants grouped shapes by lived experience (a circle was the moon, a square a drinking bowl). A few years of Soviet schooling flipped the switch: the same people now grouped by abstract geometry and solved hypothetical syllogisms about white bears in the Far North. Literacy birthed a new mind, one capable of abstraction, counterfactuals, and the kind of imaginative leap that fuels revolutionary politics and scientific progress.


Fast-forward to 2026. We're living through Luria's observation in reverse. AI chatbots, transformer models, and large language models (LLMs) are delivering superhuman feats of synthesis, prediction, and creation. Yet they're quietly undoing the cognitive habits that literacy once instilled. The same tools that are saving doctors time and accelerating Nobel-worthy science are fostering what one researcher calls "cognitive surrender": the outsourcing of reasoning itself.


An interesting case comes from a Substack post by Gooch, a general practitioner (GP) and digital health leader who dove headfirst into ambient AI scribing for 18 months. The promise was intoxicating: walk into a 10-minute consultation, maintain eye contact, speak naturally, and emerge with a complete, structured note. Burnout dropped. Documentation time plummeted by ~26%. It felt like getting the consultation back.


Then the second-order effects hit. Consultations stretched because he stopped curating in real time: why steer when the machine records everything? Follow-up notes were accurate but alien: comprehensive transcripts lacked his clinical voice, his synthesis, and his "illness scripts" (the experiential mental models GPs build through thousands of encounters). The act of writing the note had been doing invisible cognitive work: prioritizing, reflecting, and reasoning. Offloading those thought processes broke the feedback loop. The doctor stopped using the tool not because it failed, but because it succeeded too well, causing subtle, deleterious side effects.

We're not returning to pre-literate sensory immediacy. We're entering something stranger: a world of infinite generated content where direct experience is mediated by AI, and where abstract, counterfactual thinking is disappearing.


The Golden Age: What AI Has Already Delivered

AI breakthroughs are breathtaking.


Transformer models and LLMs have supercharged science in ways that were science fiction five years ago. AlphaFold, whose creators shared the 2024 Nobel Prize in Chemistry, solved the 50-year protein-folding problem, predicting structures for over 200 million proteins with near-experimental accuracy. Its successor, AlphaFold 3, now models how proteins interact with DNA, RNA, small molecules, and ions, unlocking rational drug design at unprecedented speed. AI-powered pipelines are delivering drug candidates 75% faster than traditional methods. Autonomous AI agents like Kosmos are compressing six months of PhD-level research into a single 12-hour run.


In materials science, AI has screened millions of candidates to discover new battery chemistries, carbon-capture materials, and quantum-computing components. AI weather models now produce hyper-accurate long-range forecasts. Self-improving labs iterate experiments in real time, slashing waste and cost. From 2024 through early 2026 alone, AI contributed to breakthroughs in fusion plasma control, exoplanet classification, and even new mathematical insights.


Economically, the gains are compounding. Generative AI is projected to add 1.5% to U.S. GDP by 2035, rising to 3.7% by 2075 through productivity alone. Industries with high AI exposure saw 10% productivity jumps, 3.9% job growth, and 4.8% wage increases in 2024-2025. Documentation tools like the one Gooch used are slashing administrative drag across veterinary medicine, human medicine, law, and engineering. Code generation, creative ideation, and data synthesis are freeing humans for higher-order work.


These concrete accomplishments are not hype cycles; they are measurable, peer-reviewed revolutions in capability.


The Regressions: Literacy, Rationality, and the Extended Mind

Just as writing once pulled minds into abstraction, AI is pulling them back: "fake it 'til you make it" fluency without friction, answers without effort, and sensory immediacy without synthesis.


Literacy rates and deep reading have been sliding since 2014, and the backslide is accelerating with AI. Elite university students increasingly can't finish a novel or parse a complex sentence without AI. One Kansas study found English majors struggling with Dickens' Bleak House, treating its metaphorical Megalosaurus as a literal dinosaur in Victorian London. These students respond like Luria's illiterate peasants: tethered to immediate, concrete reality.


Cognitive science illuminates the relationship between literacy and cognition. "Extended mind" theory shows that writing isn't just recording thoughts; writing completes them. Gooch's experience mirrors these findings: AI notes broke the loop that built his clinical expertise. Broader studies show that AI users exhibit "cognitive offloading," which in turn causes declines in working memory, analytic reasoning, and critical thinking. People accept faulty AI outputs 73% of the time in controlled experiments, lending evidence to the deleterious effects of "cognitive surrender." Attention, skepticism, and the scientific method itself erode when LLMs do the synthesizing.


Steven Pinker's Rationality (2021) warned that we aren't born rational; we build rationality through deliberate practice in logic, probability, Bayesian reasoning, and causal inference. AI short-circuits that practice. Why wrestle with a syllogism when the model answers instantly? Why cultivate skepticism when the output is fluent and confident? The Enlightenment ideals of empiricism, skepticism, and evidence-based progress rely on the very cognitive muscles we're letting atrophy.
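The Bayesian practice Pinker has in mind can still be worked by hand. A minimal sketch in Python, with illustrative numbers (not figures from this essay): for a rare condition and an imperfect test, Bayes' rule shows why a positive result is far less conclusive than intuition suggests.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(condition | positive test)."""
    true_pos = prior * sensitivity                    # P(positive and sick)
    false_pos = (1 - prior) * false_positive_rate     # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# Illustrative: 1% base rate, 90% sensitivity, 9% false-positive rate.
p = posterior(0.01, 0.90, 0.09)
print(round(p, 3))  # 0.092: a positive test still leaves only ~9% probability
```

Wrestling with why the answer is ~9% rather than 90% is exactly the kind of friction an instant model answer removes.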


Backsliding in Human Potential

This phenomenon is not only about doctors or students. It's about the quiet diminishment of what makes a life meaningful.


In the arts, AI generates poems, paintings, and music at superhuman volume. Why labor through the frustration of original creation when the machine delivers polish? Personal artistic potential, the struggle that forges voice and vision, fades in our "Age of AI."


Economically, wealth creation has always come from human ingenuity: spotting unseen opportunities, iterating through failure, building novel systems. When AI handles the ideation and execution, the incentive to cultivate deep expertise or entrepreneurial grit weakens.


Philosophically, the great explorations of ethics, metaphysics, and epistemology demand solitary wrestling with ambiguity. AI offers instant summaries and counterarguments. The joy of building your own worldview erodes. We risk a generation fluent in AI outputs but starved of the internal scaffolding that once produced Socrates, Newton, Kant, Einstein, and da Vinci.


Politics itself may regress from reasoned debate over imagined futures to tribal sensory immediacy, with streamers repeating formulas rather than reasoning about abstract justice.


A Realistic, Guardedly Optimistic Future

None of these cognitive declines and their effects on society are inevitable.


We stand at an inflection point, and we can meet it with enlightenment, wisdom, and agency. The post-literate age does not inevitably mean cognitive collapse. The "Age of AI" can mean augmented humanity, if we treat AI as a seductively dangerous instrument rather than a prosthesis.


Imagine "AI literacy" curricula that teach not just prompting but deliberate cognitive preservation: handwritten notes alongside AI drafts, Socratic challenges to model outputs, mandatory "unplugged" reasoning drills. Doctors like Gooch can (and some do!) use AI for transcription while reclaiming note authorship as a sacred clinical craft. Scientists could let agents run rote experiments while humans focus on the counterfactual leaps AI can't yet replicate.


The same transformers accelerating AlphaFold could help restore rationality: personalized tutors that drill children in skepticism and Bayesian thinking, and tools that flag our own cognitive biases in real time. Economic gains could fund universal infrastructure (clean air, clean water, sanitation, energy, roads, railways, and telecommunications), freeing more people for wealth creation and the personal "pursuit of happiness," including creative and philosophical pursuits.


In the end, the future will be neither purely post-literate nor a return to pre-AI literacy. It will be hybrid: a new extended mind where AI handles the volume and humans reclaim the voice, the synthesis, the purpose. The kids raised by AI dolls won't inevitably lose abstraction; they will almost certainly evolve a meta-literacy we cannot yet imagine.


We know that society shapes thought. Now you and I get to shape this society. The instrument is in our hands. We must use it carefully, cutting away drudgery while sharpening the blade of the human mind. The breakthroughs and the progress are themselves accelerating.
