
Common Mistakes

These mistakes waste time and produce poor results. Learn to recognize and avoid them.

Mistake 1: Safety-Net Use (Only Asking When Stuck)

The mistake: Only using AI when you’re completely stuck and have exhausted all options.

Why it fails: If you don’t understand the problem, you can’t give good context or verify the output.

The fix: Use AI to speed up tasks you already know how to solve. This builds intuition for its capabilities and limitations.


Mistake 2: Context Rot (Information Overload)

The mistake: Dumping entire codebases into context, thinking “more info = better results.”

Why it fails: Too much context dilutes the model’s attention. It gets distracted by irrelevant detail and output quality drops.

The fix:

  • Give the model search tools instead of pre-loaded code
  • Provide only relevant files
  • Stay under 40% of context window
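As a rough sanity check on the 40% guideline, you can estimate token usage before pasting files in. This is a minimal sketch: the 4-characters-per-token ratio and the 200k-token window are assumptions, not properties of any particular model.

```python
# Sketch of a pre-flight check for the "stay under 40%" guideline.
# Assumptions: ~4 characters per token (rough English/code average)
# and a 200,000-token context window; adjust both for your model.
CONTEXT_WINDOW_TOKENS = 200_000
BUDGET_TOKENS = int(0.40 * CONTEXT_WINDOW_TOKENS)  # 80,000 tokens

def estimated_tokens(text: str) -> int:
    """Crude heuristic: about one token per 4 characters."""
    return len(text) // 4

def within_budget(files: list[str]) -> bool:
    """True if the combined file contents fit inside the 40% budget."""
    return sum(estimated_tokens(f) for f in files) <= BUDGET_TOKENS
```

In practice you would point this at the files you plan to attach; if `within_budget` returns False, reach for search tools instead of pasting everything.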

Mistake 3: Broken Environment (Ghost Errors)

The mistake: Trying to prompt-engineer around a broken dev environment (pre-existing type errors, misconfigured linters).

Why it fails: Agents have no long-term memory. They rediscover the “ghost error” every session, try to fix it, fail, and get confused.

The fix: Fix your environment first. If typecheck or lint fails before you start, the agent will struggle too.


Mistake 4: MCP Hell (Tool Maximalism)

The mistake: Loading dozens of MCP servers, complex plugins, and massive rule files.

Why it fails: Adds complexity and failure points without solving the core issue (bad context/prompting).

The fix: Keep it simple. Stock configurations often outperform “tool maximalist” setups. Start with zero plugins. Add only what you need after hitting a specific limitation.


Mistake 5: The Append Trap

The mistake: When AI fails, repeatedly asking “fix it” and appending to the conversation history.

Why it fails: The context now contains bad instructions and broken code. Each failed attempt pollutes the context further.

“Correcting Over and Over: Failed approaches accumulate. Solution: After 2 corrections, /clear and rewrite prompt.” — Anthropic

The fix:

  1. Clear context (/clear, new chat, or Cmd+K)
  2. Revert changes (git checkout . or undo)
  3. Rewrite your prompt with better context
  4. If same failure happens 3+ times, stop and rethink entirely

Mistake 6: The Kitchen Sink (Mixing Tasks)

The mistake: Mixing unrelated tasks in one session.

Why it fails: Context gets polluted with irrelevant information from previous tasks.

The fix: Clear context between unrelated tasks:

Task 1: Fix the auth bug
/clear
Task 2: Add the export feature
/clear
Task 3: Refactor the database layer

Mistake 7: Blind Trust

The mistake: Accepting AI output without review because “it looks right.”

Why it fails: AI produces plausible-looking code that may be subtly wrong.

The fix:

  • 100% of AI-generated code gets human review
  • Never merge without running tests
  • If you don’t understand it, don’t ship it

Mistake 8: Lazy Testing

The mistake: Having AI write both the code AND the tests, then assuming passing tests = working code.

Why it fails: AI-generated tests often:

  • Test the implementation, not the requirements (tautological tests)
  • Miss edge cases the AI also missed in the code
  • Assert what the code does, not what it should do
  • Have the same blind spots as the code they’re testing

“If the same AI writes the code and the tests, and neither understands the requirements correctly, you have two artifacts that agree with each other but not with reality.”

The fix:

  • Write tests FIRST (or have AI write them), then commit before writing code
  • Review AI-generated tests as critically as AI-generated code
  • Ask: “Would this test fail if the code had [specific bug]?”
  • Tests should encode YOUR understanding of requirements, not the AI’s

Clear context when:

  • Switching to an unrelated task
  • After 2+ failed fix attempts
  • When the AI starts “forgetting” earlier instructions
  • When responses become repetitive or circular

Mistake              Signal                      Fix
Safety net use       Only ask when stuck         Start with familiar tasks
Context rot          AI seems confused           Reduce context, use tools
Broken environment   Same error every session    Fix linter/types first
MCP hell             Too many plugins            Strip to defaults
Append trap          Repeated “fix it”           Clear and rewrite
Kitchen sink         Mixing tasks                Clear between tasks
Blind trust          No review                   Always read diffs
Lazy testing         AI writes code + tests      Write tests first, review critically