
Is AI Making Developers Dumb? What 18 Months of Copilot, Cursor, and Claude Code Actually Did to Your Brain

By Muhammad Tayyab · 11 min read

The day I couldn't remember `sorted()`

A senior developer I respect — fifteen years in production Python — opened a fresh terminal last week and couldn't recall the keyword argument to sort a list of dictionaries by a field. Not because he never knew it. Because he hadn't typed it himself in over a year. Cursor had typed it for him every time, and his fingers no longer made the shape on their own. He stared at the cursor, laughed at himself, and asked Claude.
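For the record, the idiom his fingers forgot is a one-liner. The field names below are invented for illustration; the `key=` argument is the part that goes missing:

```python
# Sorting a list of dicts by one field: key= takes a callable that
# extracts the comparison value from each item.
records = [
    {"name": "deploy", "date": "2026-03-02"},
    {"name": "rollback", "date": "2026-01-15"},
]

by_date = sorted(records, key=lambda r: r["date"])

# reverse=True flips the order; operator.itemgetter is the stdlib
# alternative to a lambda.
from operator import itemgetter
newest_first = sorted(records, key=itemgetter("date"), reverse=True)
```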

That moment, multiplied across millions of developers, is what fuels the current wave of "AI is making us dumb" essays. The most-circulated one this week — James Pain's "God damn AI is making me dumb", published May 14 — hit 470 points on Hacker News before lunch. The Financial Times ran its own version on white-collar "AI brain fry." Both are right that something has changed. Both are aiming at the wrong target.

Here is the argument of this essay, stated plainly: AI hasn't made developers dumb. It has made the friction of development invisible — and that friction was always where the learning lived. The skill we are at risk of losing isn't typing `sorted(items, key=lambda x: x["date"])`. It's the slower, harder skill of noticing when Claude Code's 50-line patch is subtly wrong, because we stopped holding the model of the system in our head while it was being written. The fix isn't using less AI. It's using AI differently.

What the viral essays got right

The cognitive-offload research is real, recent, and serious. The headline study is from MIT's Media Lab — Kosmyna et al., 2025, "Your Brain on ChatGPT" — which wired up 54 participants with EEG sensors while they wrote essays, with and without LLM help. Their finding: "LLM users displayed the weakest connectivity" in the brain networks associated with working memory and language production, compared to participants who wrote unaided. More damning, when LLM users were then asked to write without AI, those networks didn't snap back. They stayed under-engaged. The study's authors call this "cognitive debt."

A separate piece of evidence comes from Microsoft Research and Carnegie Mellon (Lee et al., CHI 2025), which surveyed 319 knowledge workers across 936 GenAI-assisted tasks. The pull quote: "higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking." In English: the more you trust the model, the less you check its work. When the model is right 90% of the time, the missing 10% becomes effectively invisible.

So yes — some atrophy is happening. Pain is observing something true. The viral essays aren't wrong; they're just generic. They treat developers like every other knowledge worker. We aren't.

What the viral essays got wrong about developers specifically

Developers have always been cognitive offloaders. We offload to the compiler, which lets us forget the difference between an int and a register. We offload to the linter, which lets us forget which braces belong to which scope. We offload to autocomplete, type systems, Stack Overflow, language servers, and the surprisingly competent suggestion engine in every modern IDE. The job has never been "remember syntax." The job has been building accurate mental models of complex systems and verifying those models against reality. AI changes which surface we offload from, not whether we offload.

The data on output is unambiguous. Microsoft's Global AI Diffusion Report 2026 (Lavista Ferres, May 7, 2026) reports that "git pushes — through which software developers put coding changes online — increased 78% year over year globally." US software-developer employment is at roughly 2.2 million, up 8.5% year over year. We are producing more code than ever, and there are more of us doing it. Whatever AI is, it isn't shrinking the field — a point we made in our coverage of the Stanford AI Index 2026 report and our response to Anthropic's "end of software engineering" claim.

So the question "are we writing less code?" has a clear answer: no, we're writing far more of it. The question that actually matters is the one no one is asking out loud: are we understanding what we ship? Output is a poor proxy for skill. A 78% increase in git pushes is consistent with a workforce that is shipping faster and understanding less. Both can be true. The first is measured; the second has to be discovered the hard way, usually in a postmortem.

The three real cognitive costs (with receipts)

The atrophy that actually matters for developers is not generic dumbness. It is three specific, observable failure modes.

Loss of error-noticing. When Copilot writes fifty lines and you read them, your brain switches modes. You read for does this run — not for is this right. The Lee et al. survey captures the mechanism: as confidence in the model rises, the critical-thinking step gets shorter. The bug that would have been obvious if you'd typed the code yourself slips past, because you never built the local context that would have made it obvious. The kind of bug this produces is specific: it's the one that looks fine and runs fine on the happy path, the kind that appears in a code review three weeks later when someone with fresh eyes asks "why is the retry loop swallowing exceptions?" You wouldn't have written that loop that way. But you accepted it.
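A sketch of that kind of bug, with all names invented for illustration: a retry helper that looks fine and runs fine on the happy path, while quietly collapsing every failure into a silent `None`:

```python
import time

def fetch_with_retry(fetch, retries=3, delay=0.1):
    """Plausible AI-generated retry loop -- the happy path works perfectly."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:      # the reviewer's question: why a bare except?
            time.sleep(delay)  # auth errors, bugs, and timeouts all retry alike
    return None                # all failures become a silent None downstream
```

Reading it fresh, you would probably narrow the `except`, log the exception, and raise after the final attempt rather than returning `None`. Accepting it unread, you ship all three problems.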

Loss of mental-model maintenance. Modern agentic Claude Code or Cursor will navigate the codebase, pull in the relevant files, make the edits, and run the tests — without you ever opening half the files involved. The feature ships. The next feature ships. Six features later, someone asks you a clarifying question about how the user-authentication flow interacts with the new billing module, and you realize you do not actually know. You shipped both. The agent knows. You don't. This is the failure mode most senior engineers I've spoken to are worried about, and it doesn't show up in any productivity metric. It shows up when something breaks and the person who shipped it can't reason about why. We saw a sharp version of this when Claude Code itself measurably regressed in April and a noticeable fraction of users couldn't tell whether the tool had changed or their workflow had.

Loss of struggle-driven learning. The productive frustration of being stuck — really stuck, banging on a problem for two hours — is where the deep neural grooves get cut. Kosmyna et al.'s finding that LLM users' brain connectivity stayed depressed even when they later wrote unaided is exactly what this looks like at the EEG level: the path of least resistance, repeatedly taken, becomes the only path you know how to take. The short-term productivity feels great. The long-term cost is that the next hard problem you face has no reflexes ready for it.

Why "use less AI" is the wrong answer

The ascetic reaction — uninstall Cursor, write everything by hand, regain your soul — is a trap. It is not economically viable for individual developers or for the teams that employ them. More importantly, it is not historically necessary. Every productivity tool that has ever entered software development was greeted with the same panic. Compilers were going to rot out our understanding of assembly. IDEs were going to rot out our understanding of terminals. Stack Overflow was going to rot out our understanding of fundamentals. Each fear contained a grain of truth, and each tool reshaped what deep skill meant rather than eliminating it.

The historical pattern is consistent. The tool first replaces a surface skill — typing `for (int i = 0; i < n; i++)` by hand, looking up syntax in a manual, writing memory-management code. Then a deeper skill takes its place — algorithmic thinking, system design, evaluation under uncertainty. The deeper skill is harder to teach, harder to measure, and far more valuable than the one the tool replaced. AI is doing the same thing, except the surface skill it's replacing is writing the first draft of code. The deeper skill being promoted is harder to name, but it lives somewhere between editing, system design, and forensic reading of generated artifacts. Anyone who can do those three things at a high level will be more valuable in three years, not less.

"AI hasn't made developers dumb. It has made the friction of development invisible — and that friction was always where the learning lived."

What actually works: five habits that preserve skill

The interventions worth bothering with are the ones that re-introduce just enough friction to keep your brain in the loop without giving up the productivity AI provides. None of these are listicle fluff. Each has a defensible mechanism.

Predict-then-prompt, ten minutes. Before asking Claude or Cursor to write a function, spend ten minutes predicting the shape of the answer. Not the exact code — just the structure, the edge cases, the likely failure modes. Then prompt. You stay engaged with the problem and the model becomes a verifier of your thinking rather than a replacement for it. The Lee et al. paper's mechanism — confidence-in-tool inversely correlates with critical engagement — is exactly what this habit defeats: you arrive at the AI call already having a model, so verifying its output is cheap.
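What a ten-minute prediction might look like on paper before the prompt goes out — every name, log format, and test case below is invented for illustration:

```python
# Hypothetical "prediction stub", written before prompting the model.
# Prediction: input is raw log lines ("<user> <event>"); output is a
# dict of event counts per user. Edge cases: blank and malformed lines.
def count_events_by_user(lines):
    raise NotImplementedError  # this is the part the model will fill in

# Committing to concrete cases up front is what makes the model's
# answer cheap to verify afterwards.
EXPECTED = {
    ("u1 login", "u2 login", "u1 logout"): {"u1": 2, "u2": 1},
    ("", "no-user-field"): {},
}

def verify(impl):
    """Run any candidate implementation against the predictions."""
    for lines, want in EXPECTED.items():
        assert impl(list(lines)) == want, (lines, want)
```

When the model's answer arrives, `verify` either passes or hands you a concrete disagreement to think about — which is the whole point of the habit.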

Read more than you generate. Read AI output the way you'd read a junior developer's pull request: line by line, asking why this and not that? If a function uses recursion, ask whether iteration would have been clearer. If a regex appears, run it in your head. The reading-over-generating ratio is a leading indicator of whether you're staying sharp. If you're shipping a hundred AI-generated lines for every ten you've actually read, you've stopped learning.
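A toy instance of the recursion-versus-iteration question, with illustrative names: two correct ways an AI might flatten a nested list, where the review question is which is clearer and what breaks first (the recursive one hits Python's recursion limit on deeply nested input):

```python
def flatten_recursive(xs):
    # Shorter and easier to read, but bounded by the recursion limit.
    out = []
    for x in xs:
        if isinstance(x, list):
            out.extend(flatten_recursive(x))
        else:
            out.append(x)
    return out

def flatten_iterative(xs):
    # An explicit stack of iterators: harder to read, depth-unbounded.
    out, stack = [], [iter(xs)]
    while stack:
        for x in stack[-1]:
            if isinstance(x, list):
                stack.append(iter(x))
                break
            out.append(x)
        else:
            stack.pop()
    return out
```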

One no-tools hour a week. Pick a small problem — a Project Euler exercise, a refactor, a script — and solve it without AI. No autocomplete beyond standard editor completion, no Copilot, no chat tab open. Most engineers I know who've tried this find it shocking after six months of agent-assisted work. Things they used to know cold now take three tries. The hour exists to surface those gaps before they show up in an interview or an incident.

Forced re-derivation on review. When AI writes something nontrivial, close the editor and explain it back out loud in plain English. Not "what does it do" — why does it do it that way? If you can't, you haven't understood it, and shipping it is a bet you'll later regret. This is the verification step that the Microsoft + CMU survey kept showing was getting dropped.

System-design journaling before prompting. Write the architectural decisions down in English before you ask AI to implement them. A paragraph is enough. The act of writing the decision externalizes the mental model — it forces you to commit to a structure you can refer back to, audit later, and reason about when the system grows. This is the muscle that agentic Claude Code workflows most quickly erode, because the agent will happily implement a design you never quite specified.

The developers who will win in 2027

Eighteen months from now, the developer market is going to split visibly into two groups.

The first group uses AI as a verifier of their own thinking. They keep the system in their head, sketch the design, ask AI to produce candidate implementations, and read the output the way they'd read a junior PR. Their productivity is up. Their understanding is intact. Their judgement compounds, because they are still doing the practice that judgement is built on. These are the engineers who will be running architecture for organizations that depend on AI without trusting it blindly. They are also the engineers whose careers do not have a ceiling at the complexity AI can fully handle on its own.

The second group uses AI as a replacement for their own thinking. They prompt, paste, and ship. Their commit graphs look identical to the first group's for now. The split shows up when systems break, when requirements get subtle, when the model is confidently wrong about something the developer can't catch — because they don't have the mental model that would let them catch it. Their output volume is high. Their wages will compress, because what they produce is increasingly fungible. We touched on this when we reframed the "end of software engineering" claim around builders: the winner isn't the person who built the most. It's the person who can still tell when what got built is wrong.

The cognitive debt research is real. The viral essays are reacting to something true. But the fix isn't reverting. The fix is choosing, deliberately and a little uncomfortably, which group of developers you are going to be in 2027 — and starting today the small habits that put you there.

Frequently Asked Questions

Is AI actually making developers dumber?
There is real evidence of cognitive offloading effects from AI tools. MIT's Kosmyna et al. (2025) EEG study found 'LLM users displayed the weakest connectivity' in working-memory networks, even when those users later wrote unaided. A Microsoft + Carnegie Mellon survey of 319 knowledge workers found that higher confidence in GenAI correlates with less critical thinking effort. But intelligence loss is not what's measured — it's skill atrophy in specific, observable areas: error-noticing, mental-model maintenance, and struggle-driven learning. Developers who use AI as a verifier of their own thinking can preserve those skills; developers who use AI as a replacement cannot.
Does using Copilot or Cursor reduce programming skill?
It depends on how you use them. If you accept Copilot or Cursor suggestions without reading them, the loss of error-noticing is real and measurable — you stop building local context around the code being written. If you treat AI output as a candidate to be reviewed line-by-line, the way you would review a junior developer's pull request, the productivity gain comes without the skill loss. The Microsoft + CMU study suggests the key variable is your level of self-confidence relative to your confidence in the tool. The higher your trust in AI, the lower your scrutiny — and the lower the scrutiny, the faster skills erode.
What does the MIT 'Your Brain on ChatGPT' study actually show?
Kosmyna et al. (arXiv:2506.08872) used EEG to monitor 54 participants writing essays with and without LLM assistance. The headline finding: LLM-assisted writers had the weakest connectivity in brain networks tied to working memory and language production. The more concerning finding was persistence — when LLM-using participants later wrote without AI, their connectivity did not recover to the level of participants who had always written unaided. The authors call this 'cognitive debt.' The study has limits (small N, lab task, short duration), but it provides one of the first physiological measurements of an effect most knowledge workers already report subjectively.
How can I use AI without losing my coding skills?
Five evidence-informed habits help: (1) Spend ten minutes predicting the answer shape before prompting — you arrive at AI as a verifier, not a replacement. (2) Read more code than you generate; review AI output line-by-line. (3) Do one no-tools coding hour per week to surface what's getting rusty. (4) Force yourself to re-derive non-trivial AI output in plain English before merging it. (5) Write architectural decisions down before asking AI to implement them — this preserves the mental-model muscle that agents most quickly erode.
Will AI replace developers?
The data does not support replacement, at least not yet. Microsoft's Global AI Diffusion Report 2026 reports git pushes are up 78% year over year globally, and US software developer employment is up 8.5% to roughly 2.2 million. The labor market is currently hiring more developers, not fewer — partly because AI lowers the cost of building software, which expands the addressable surface of things worth building. What is more likely than replacement is bifurcation: developers who use AI as a verifier of their judgment will compound their advantage, while developers who use AI as a substitute for judgment will see their output become increasingly fungible.
How do I know if I am too dependent on AI?
Three signals: (1) You can no longer write code in your primary language without AI autocomplete or a chat tab open — including for things you used to write fluently. (2) You ship features whose architecture you cannot fully explain back when asked. (3) The productive frustration of being stuck on a problem has become rare, because every dead end gets short-circuited by a prompt. The first signal is mostly cosmetic. The second and third are serious. If you have either, the no-tools hour per week and the forced-re-derivation habit are the most direct interventions.
Is AI making us mentally lazy?
It can, but the mechanism is more specific than general laziness. Cognitive offloading is a long-standing pattern — humans have always pushed routine mental work onto tools (paper, calculators, IDEs, search engines). What is new with AI is that the offloaded layer now includes judgment, evaluation, and creative synthesis, not just storage and retrieval. The Microsoft + CMU survey found that AI shifts critical-thinking effort from generation toward verification and integration. If you do the verification step, the effort is preserved, just moved. If you skip it because the output looks plausible, the effort goes to zero — and that is where the laziness, and the risk, live.
Written by

Muhammad Tayyab

CEO & Founder at Mergemain

Muhammad Tayyab builds free, privacy-first developer tools at DevPik. He writes about AI trends, developer tools, and web technologies.
