The day I couldn't remember `sorted()`
A senior developer I respect — fifteen years in production Python — opened a fresh terminal last week and couldn't recall the keyword argument to sort a list of dictionaries by a field. Not because he never knew it. Because he hadn't typed it himself in over a year. Cursor had typed it for him every time, and his fingers no longer made the shape on their own. He stared at the cursor, laughed at himself, asked Claude.
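For the record, the shape his fingers had forgotten. The data below is invented for illustration; `sorted` and `itemgetter` are plain standard library:

```python
from operator import itemgetter

releases = [
    {"name": "v2.0", "date": "2025-11-02"},
    {"name": "v1.0", "date": "2024-01-09"},
    {"name": "v2.1", "date": "2026-03-14"},
]

# The keyword argument in question: key=, a function that extracts
# the sort field from each item.
by_date = sorted(releases, key=lambda r: r["date"])

# Equivalent, and arguably the more idiomatic spelling:
by_date = sorted(releases, key=itemgetter("date"))

# ISO-8601 date strings sort correctly as plain strings, so no
# datetime parsing is needed for this field.
```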
That moment, multiplied across millions of developers, is what fuels the current wave of "AI is making us dumb" essays. The most-circulated one this week — James Pain's "God damn AI is making me dumb", published May 14 — hit 470 points on Hacker News before lunch. The Financial Times ran its own version on white-collar "AI brain fry." Both are right that something has changed. Both are aiming at the wrong target.
Here is the argument of this essay, stated plainly: AI hasn't made developers dumb. It has made the friction of development invisible — and that friction was always where the learning lived. The skill we are at risk of losing isn't typing `sorted(items, key=lambda x: x["date"])`. It's the slower, harder skill of noticing when Claude Code's 50-line patch is subtly wrong, because we stopped holding the model of the system in our head while it was being written. The fix isn't using less AI. It's using AI differently.
What the viral essays got right
The cognitive-offload research is real, recent, and serious. The headline study is from MIT's Media Lab — Kosmyna et al., 2025, "Your Brain on ChatGPT" — which wired up 54 participants with EEG sensors while they wrote essays, with and without LLM help. Their finding: "LLM users displayed the weakest connectivity" in the brain networks associated with working memory and language production, compared to participants who wrote unaided. More damning, when LLM users were then asked to write without AI, those networks didn't snap back. They stayed under-engaged. The study's authors call this "cognitive debt."
A separate piece of evidence comes from Microsoft Research and Carnegie Mellon (Lee et al., CHI 2025), which surveyed 319 knowledge workers across 936 GenAI-assisted tasks. The pull quote: "higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking." In English: the more you trust the model, the less you check its work. When the model is right 90% of the time, the missing 10% becomes effectively invisible.
So yes — some atrophy is happening. Pain is observing something true. The viral essays aren't wrong; they're just generic. They treat developers like every other knowledge worker. We aren't.
What the viral essays got wrong about developers specifically
Developers have always been cognitive offloaders. We offload to the compiler, which lets us forget the difference between an int and a register. We offload to the linter, which lets us forget which braces belong to which scope. We offload to autocomplete, type systems, Stack Overflow, language servers, and the surprisingly competent suggestion engine in every modern IDE. The job has never been "remember syntax." The job has been building accurate mental models of complex systems and verifying those models against reality. AI changes which surface we offload from, not whether we offload.
The data on output is unambiguous. Microsoft's Global AI Diffusion Report 2026 (Lavista Ferres, May 7, 2026) reports that "git pushes — through which software developers put coding changes online — increased 78% year over year globally." US software-developer employment is at roughly 2.2 million, up 8.5% year over year. We are producing more code than ever, and there are more of us doing it. Whatever AI is, it isn't shrinking the field — a point we made in our coverage of the Stanford AI Index 2026 report and our response to Anthropic's "end of software engineering" claim.
So the question "are we writing less code?" has a clear answer: no, we're writing far more of it. The question that actually matters is the one no one is asking out loud: are we understanding what we ship? Output is a poor proxy for skill. A 78% increase in git pushes is consistent with a workforce that is shipping faster and understanding less. Both can be true. The first is measured; the second has to be discovered the hard way, usually in a postmortem.
The three real cognitive costs (with receipts)
The atrophy that actually matters for developers is not generic dumbness. It is three specific, observable failure modes.
Loss of error-noticing. When Copilot writes fifty lines and you read them, your brain switches modes. You read for *does this run*, not for *is this right*. The Lee et al. survey captures the mechanism: as confidence in the model rises, the critical-thinking step gets shorter. The bug that would have been obvious if you'd typed the code yourself slips past, because you never built the local context that would have made it obvious. The kind of bug this produces is specific: it looks fine, runs fine on the happy path, and surfaces in a code review three weeks later when someone with fresh eyes asks "why is the retry loop swallowing exceptions?" You wouldn't have written that loop that way. But you accepted it.
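To make it concrete, here is a sketch of the kind of patch that sails through a does-it-run reading. The `client` object and the function are hypothetical; the pattern is not:

```python
import time

def fetch_with_retry(client, url, attempts=3):
    """Looks reasonable in review. Runs fine on the happy path."""
    for attempt in range(attempts):
        try:
            return client.get(url)
        except Exception:
            # The subtle bug: every failure, including auth errors and
            # malformed requests that will never succeed on retry, is
            # swallowed here without so much as a log line. After three
            # silent attempts the function falls through and returns
            # None, handing the failure to whoever uses the result.
            time.sleep(2 ** attempt)
    return None
```

Typed by hand, you would almost certainly have caught a narrower exception, logged the failure, or re-raised after the final attempt. Accepted from a suggestion, it reads as plausible and ships.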
Loss of mental-model maintenance. Modern agentic tools like Claude Code and Cursor will navigate the codebase, pull in the relevant files, make the edits, and run the tests — without you ever opening half the files involved. The feature ships. The next feature ships. Six features later, someone asks you a clarifying question about how the user-authentication flow interacts with the new billing module, and you realize you do not actually know. You shipped both. The agent knows. You don't. This is the failure mode most senior engineers I've spoken to are worried about, and it doesn't show up in any productivity metric. It shows up when something breaks and the person who shipped it can't reason about why. We saw a sharp version of this when Claude Code itself measurably regressed in April and a noticeable fraction of users couldn't tell whether the tool had changed or their workflow had.
Loss of struggle-driven learning. The productive frustration of being stuck — really stuck, banging on a problem for two hours — is where the deep neural grooves get cut. Kosmyna et al.'s finding that LLM users' brain connectivity stayed depressed even when they later wrote unaided is exactly what this looks like at the EEG level: the path of least resistance, repeatedly taken, becomes the only path you know how to take. The short-term productivity feels great. The long-term cost is that the next hard problem you face has no reflexes ready for it.
Why "use less AI" is the wrong answer
The ascetic reaction — uninstall Cursor, write everything by hand, regain your soul — is a trap. It is not economically viable for individual developers or for the teams that employ them. More importantly, it is not historically necessary. Every productivity tool that has ever entered software development was greeted with the same panic. Compilers were going to rot out our understanding of assembly. IDEs were going to rot out our understanding of terminals. Stack Overflow was going to rot out our understanding of fundamentals. Each fear contained a grain of truth, and each tool reshaped what deep skill meant rather than eliminating it.
The historical pattern is consistent. The tool first replaces a surface skill — typing `for (int i = 0; i < n; i++)` by hand, looking up syntax in a manual, writing memory-management code. Then a deeper skill takes its place — algorithmic thinking, system design, evaluation under uncertainty. The deeper skill is harder to teach, harder to measure, and far more valuable than the one the tool replaced. AI is doing the same thing, except the surface skill it's replacing is writing the first draft of code. The deeper skill being promoted is harder to name, but it lives somewhere between editing, system design, and forensic reading of generated artifacts. Anyone who can do those three things at a high level will be more valuable in three years, not less.
"AI hasn't made developers dumb. It has made the friction of development invisible — and that friction was always where the learning lived."
What actually works: five habits that preserve skill
The interventions worth bothering with are the ones that re-introduce just enough friction to keep your brain in the loop without giving up the productivity AI provides. None of these are listicle fluff. Each has a defensible mechanism.
Predict-then-prompt, ten minutes. Before asking Claude or Cursor to write a function, spend ten minutes predicting the shape of the answer. Not the exact code — just the structure, the edge cases, the likely failure modes. Then prompt. You stay engaged with the problem and the model becomes a verifier of your thinking rather than a replacement for it. The Lee et al. paper's mechanism — confidence-in-tool inversely correlates with critical engagement — is exactly what this habit defeats: you arrive at the AI call already having a model, so verifying its output is cheap.
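In practice the prediction can be as light as a stub with a docstring. This one assumes a hypothetical task, merging two time-sorted event streams:

```python
from typing import Iterable, Iterator, Tuple

Event = Tuple[float, str]  # (timestamp, payload)

def merge_streams(a: Iterable[Event], b: Iterable[Event]) -> Iterator[Event]:
    """Prediction, written before prompting.

    Shape: two-pointer walk over both streams, O(n + m), constant memory.
    Edge cases: one stream empty; equal timestamps (which side wins?);
    unsorted input (trust the caller, or assert?).
    Likely failure mode in generated code: materializing both streams
    and calling sorted() on the concatenation, which breaks on
    unbounded streams.
    """
    raise NotImplementedError  # the model writes this part; you verify it
```

Ten minutes of that, and the model's answer gets diffed against a commitment instead of against nothing.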
Read more than you generate. Read AI output the way you'd read a junior developer's pull request: line by line, asking *why this and not that?* If a function uses recursion, ask whether iteration would have been clearer. If a regex appears, run it in your head. The reading-over-generating ratio is a leading indicator of whether you're staying sharp. If you're shipping a hundred AI-generated lines for every ten you've actually read, you've stopped learning.
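One concrete instance of that *why this and not that?* reading, using a hypothetical generated validator:

```python
import re

# Reads fine at a glance: validate an ISO date.
def is_iso_date(s: str) -> bool:
    return bool(re.match(r"\d{4}-\d{2}-\d{2}", s))

is_iso_date("2026-03-14")              # True, as intended
is_iso_date("2026-03-14; not a date")  # Also True: re.match() anchors
                                       # only at the start; nothing
                                       # pins down the end.

# The line-by-line question, "why match() and not fullmatch()?", is
# exactly what catches it:
def is_iso_date_strict(s: str) -> bool:
    return bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", s))
```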
One no-tools hour a week. Pick a small problem — a Project Euler exercise, a refactor, a script — and solve it without AI. No autocomplete beyond standard editor completion, no Copilot, no chat tab open. Most engineers I know who've tried this find it shocking after six months of agent-assisted work. Things they used to know cold now take three tries. The hour exists to surface those gaps before they show up in an interview or an incident.
Forced re-derivation on review. When AI writes something nontrivial, close the editor and explain it back out loud in plain English. Not "what does it do" — why does it do it that way? If you can't, you haven't understood it, and shipping it is a bet you'll later regret. This is the verification step that the Microsoft + CMU survey kept showing was getting dropped.
System-design journaling before prompting. Write the architectural decisions down in English before you ask AI to implement them. A paragraph is enough. The act of writing the decision externalizes the mental model — it forces you to commit to a structure you can refer back to, audit later, and reason about when the system grows. This is the muscle that agentic Claude Code workflows most quickly erode, because the agent will happily implement a design you never quite specified.
The developers who will win in 2027
Eighteen months from now, the developer market is going to split visibly into two groups.
The first group uses AI as a verifier of their own thinking. They keep the system in their head, sketch the design, ask AI to produce candidate implementations, and read the output the way they'd read a junior PR. Their productivity is up. Their understanding is intact. Their judgement compounds, because they are still doing the practice that judgement is built on. These are the engineers who will be running architecture for organizations that depend on AI without trusting it blindly. They are also the engineers whose careers do not have a ceiling at the complexity AI can fully handle on its own.
The second group uses AI as a replacement for their own thinking. They prompt, paste, and ship. Their commit graphs look identical to the first group's for now. The split shows up when systems break, when requirements get subtle, when the model is confidently wrong about something the developer can't catch — because they don't have the mental model that would let them catch it. Their output volume is high. Their wages will compress, because what they produce is increasingly fungible. We touched on this when we reframed the "end of software engineering" claim around builders: the winner isn't the person who built the most. It's the person who can still tell when what got built is wrong.
The cognitive debt research is real. The viral essays are reacting to something true. But the fix isn't reverting. The fix is choosing, deliberately and a little uncomfortably, which group of developers you are going to be in 2027 — and starting today the small habits that put you there.