
Learning with AI

AI can help you learn faster. It can also make you worse at the parts of programming that matter when things break. The difference is not whether you use it. The difference is how much thinking you hand over.

There is a difference between finishing a task and actually learning from it. AI can close the first gap while widening the second.

The Anthropic study found a 17% comprehension gap between developers who used AI heavily and those who did not, even when output quality looked similar. METR also found experienced open-source developers were slower with AI than they expected to be. The message is consistent: speed gains are real, but they are not automatic, and they do not guarantee learning.

The mechanism is not mysterious. Learning needs struggle, recall, and correction. Heavy delegation short-circuits all three. You get the answer, but less of it sticks.

See Anthropic AI coding learning RCT and OECD Digital Education Outlook 2026 for the strongest primary sources behind this.

Instead of: “Write a function that parses this JSON and returns only active users.”

Try: “Explain how I’d approach filtering a JSON array in JavaScript, then I’ll implement it.”

Or: “I wrote this implementation. What did I get wrong?”

The second framing keeps you in the loop. You are using AI to patch gaps in your mental model, not replace the model entirely.
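If you take the conceptual route and then implement it yourself, you might end up with something like this minimal sketch. It assumes a JSON string containing an array of user objects, each with an `active` boolean flag; the field names and payload shape are illustrative, not from the original task:

```javascript
// Parse a JSON string into an array of user objects,
// then keep only the ones whose `active` flag is true.
function activeUsers(json) {
  const users = JSON.parse(json); // throws SyntaxError on malformed input
  return users.filter((user) => user.active === true);
}

const payload = '[{"name":"Ada","active":true},{"name":"Bob","active":false}]';
console.log(activeUsers(payload)); // [ { name: 'Ada', active: true } ]
```

Writing even a five-line version yourself forces you to decide where parsing can fail and what the predicate should check, which is exactly the mental model the delegated version skips.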

Shen & Tamkin (2026) found developers who used AI for conceptual questions retained significantly more than those who delegated code generation. The output looked similar. The understanding did not.

The biggest gap in the Anthropic coding study was not writing code. It was debugging.

AI writes code -> you do not build a mental model ->
the code breaks -> you cannot debug it ->
you ask AI to debug it -> it fixes the symptom ->
you still do not have a mental model

This is how developers become dependent on AI for problems they used to be able to reason through themselves.


Debugging is a skill you only develop by getting stuck and finding your own way out.

Treat AI-free sessions like training without a calculator. You’re not proving anything to anyone. You’re building the muscle.

| AI-assisted sessions | Solo sessions |
| --- | --- |
| Exploring unfamiliar APIs | Debugging your own logic |
| Boilerplate and scaffolding | Implementing algorithms from scratch |
| Reviewing and refactoring | Writing tests without hints |
| Learning new frameworks fast | Internalizing patterns you’ve seen before |

Neither mode is better. Both are necessary.

One practical rule: keep at least one regular session each week where AI is off and the job is manual reasoning, debugging, or code reading.

The “illusion of competence” (IJRSI 2025) is subtle. AI’s fluent, confident output feels like your own understanding. It isn’t.

The test: if you can’t explain the code without looking at it, you haven’t learned it.

A practical technique: after AI generates code, close the chat. Wait 10 minutes. Try to rewrite it from memory. The gaps in your rewrite are exactly what you need to study. Don’t skip this step because it’s uncomfortable. That discomfort is the learning.

Another practical test: if you cannot explain why the code works, what assumptions it makes, and how you would debug it when it fails, you have not learned the technique yet.

One of the strongest patterns across the research is that AI helps learning more when it asks questions or scaffolds thinking than when it simply gives answers (Park et al., 2024). The format matters as much as the content.

You can prompt your way into this mode.

Prompts that ask for explanations, reviews, or hints make the tool act more like a tutor and less like a vending machine. The answers feel slower. That is usually a good sign.

This table draws on the studies cited on this page, especially the Anthropic learning RCT, Shen and Tamkin’s conceptual-vs-delegated-use distinction, and the broader literature on productive struggle. It is a practical framework, not a direct taxonomy from one paper.

| Pattern | Outcome |
| --- | --- |
| “Explain this pattern” | Preserves understanding |
| “Review my attempt” | Improves mental model |
| “Give me a hint” | Keeps productive struggle alive |
| “Write this for me” | Fast output, weak retention |
| “Fix it” loops | Weak debugging skill, high dependence |

“Novice programmers using AI often skip the ‘productive struggle’ phase of learning.” — Prather et al. (2024)

The struggle is the learning. When you hit a wall, the instinct is to immediately ask AI. Resist it for 10 to 15 minutes. Sit with the problem. Try things. Be wrong.

If you still need help after that, ask for a hint, not a solution. “What direction should I be looking?” is a better prompt than “Fix this.”

Copying AI code and moving on teaches nothing. PNAS (2025) found students using AI without guardrails performed worse on assessments than those who learned traditionally, even though their submitted work looked better.

If you copy AI-generated code, you owe yourself three things before moving on:

  1. Explain every line out loud or in a comment
  2. Modify it for a different use case
  3. Break it deliberately and debug it

If you can’t do all three, you don’t understand it. You just have it.
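As a concrete walk-through of the three steps, take the active-users filter task from the prompt example earlier. The function names and payload below are hypothetical, just one way the exercise could look:

```javascript
// Step 1: explain every line. JSON.parse turns the string into real
// objects; filter walks the array and keeps entries whose predicate
// returns true.
function activeUsers(json) {
  return JSON.parse(json).filter((u) => u.active === true);
}

// Step 2: modify it for a different use case: keep inactive users instead.
function inactiveUsers(json) {
  return JSON.parse(json).filter((u) => u.active !== true);
}

// Step 3: break it deliberately. Malformed input makes JSON.parse
// throw a SyntaxError before filter ever runs -- now debug from the error.
try {
  activeUsers('not json');
} catch (err) {
  console.log(err instanceof SyntaxError); // true
}
```

If any of the three steps surprises you, that surprise is pointing at the part of the code you only had, not understood.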

Research-backed practices that don’t require banning AI:

  • Require explanation of AI-generated code in PRs. Not a summary, an explanation. “This function does X because Y” (Kazemitabaar et al., 2025).
  • Include unassisted coding in assessments. The OECD Performance Trap only surfaces when you test without the tool. If you never test without it, you won’t see the gap until it matters.
  • Use AI for onboarding scaffolding. Tutor CoPilot showed the biggest gains for less-experienced users when AI was used to scaffold, not to replace, the learning process.
  • Set AI-free practice time. Frame it as training, not punishment. Senior developers who are good at their jobs practice fundamentals. This is the same thing.

Use AI as a thinking partner, not an answer machine.

The broad pattern is pretty clear: the people who benefit most from AI are still doing a lot of the thinking themselves. They use it to move faster on the boring parts and to challenge their understanding. They do not use it to dodge the hard parts entirely.

The tool doesn’t determine the outcome. Your habits do.

See Learning Impacts for the evidence behind these recommendations.