AI is working.
That's no longer the question.
The question is what it's doing to you while you use it.
The pattern is starting to show up in the research. A 2025 study from Microsoft Research and Carnegie Mellon (Lee et al., CHI 2025), surveying 319 knowledge workers across 936 real-world AI use cases, found that higher confidence in AI was associated with reduced critical-thinking effort during AI-assisted tasks. A separate 2025 study by Michael Gerlich (Societies), with 666 participants, found that frequent AI use correlated with weaker critical-thinking performance, with cognitive offloading acting as a key driver.
Neither study proves long-term skill loss. Both, though, point to the same mechanism — cognitive offloading. You hand thinking work to AI. The studies can't yet tell us what happens over years of doing that.
But the signal is consistent enough across methods and domains that ignoring it would be a bet.
This isn't about intelligence.
It's about how capability is built — and how AI interferes with that process. AI doesn't just make work easier. It changes where effort happens. And that matters.
AI removes useful struggle.
Not all struggle builds skill. But some does. The kind that builds capability looks like: attempt → failure → adjustment → insight.
AI compresses that loop. Instead of working through the problem, you verify a solution. That's efficient. But it shifts learning out of the process.
This dynamic isn't new. In 1983, cognitive scientist Lisanne Bainbridge described the "irony of automation": when systems handle routine work, humans lose the repetition that builds judgment — and are left responsible only for the edge cases they're least prepared to handle. AI extends that pattern.
AI resets your effort baseline.
Once you experience instant answers, high-quality drafts, and low friction, anything slower feels broken. So when AI isn't available, it's not that you can't do the work. It's that your tolerance for effort has changed.
AI optimizes for completion, not understanding.
AI systems are designed to produce outputs, resolve tasks, and move forward. They are not designed to build mental models, strengthen reasoning, or improve judgment. That work used to happen in the gap between attempts. Now that gap is disappearing.
"People said the same thing about calculators."
That's true. And they said it about spellcheck and Google too. Each of those tools reduced effort and increased output. And people adapted.
So what's different now? Two things.
Scope. Calculators replaced arithmetic. AI replaces multi-step thinking. Spreadsheets came closer — they automated multi-step work — but spreadsheets still forced you to specify the formula. AI lets you skip that step too.
Timing. Previous tools helped after you learned. AI helps instead of learning.
The effect isn't even.
Experts use AI to accelerate execution — they already have the mental models to detect when output is wrong.
Novices use AI to replace thinking — they don't yet have the frameworks to know what to question.
AI amplifies capability for people who already have it. It reduces capability growth for people who don't.
The labor data is starting to show this isn't just theory.
A 2025 Stanford Digital Economy Lab study (Brynjolfsson, Chandar, Chen), analyzing ADP payroll data across roughly 25 million workers, found a 13% relative decline in employment among workers aged 22–25 in AI-exposed occupations since late 2022. Over the same period, employment for workers aged 30 and over in those same occupations grew between 6% and 12%.
The pattern was strongest in roles where AI automates work rather than augments it.
That matters because those roles were never just output. They were training. Entry-level analysis. First drafts. Repetitive execution work. Remove that layer, and you remove pattern recognition development, problem decomposition reps, and judgment built through repetition.
The work still gets done. But you stop developing the same way.
Short term: faster teams, higher output, lower cost.
Long term: fewer people who can operate independently, reduced ability to handle edge cases, fragile decision-making under pressure.
The real risk isn't that AI stops working.
It's that capability decays underneath the dependency —
and you only notice when the system fails
and no one knows how to recover.
This isn't an argument against AI.
It's an argument for designing how you use it. Three moves.
Move 1: Add no-AI reps.
Pick one critical task per role. Once per week, that task gets completed without AI. Compare the result to the AI-assisted version — not just for quality, but for what you had to understand to get there.
Move 2: Separate thinking from drafting.
Require a two-step workflow. You define the approach. AI executes or refines. Not "Write a deal memo for the Acme account based on this transcript," but "Here's the deal-memo structure I think this account warrants — challenge my framing before drafting."
Move 3: Require output defense.
Before work is accepted, you must explain why this approach, what assumptions were made, and where it could fail. No explanation means no acceptance.
Most teams think the AI shift is about how good the models get.
It's not.
It's about what happens to your capability as you use them.
AI doesn't just replace your work.
It replaces the process that makes you better at it.