
Workflow Archetypes

Most AI coding frustration is not about model quality. It is about using the wrong workflow for the job. These four patterns cover most of the work people actually do.

Bug Hunt

Use this when the problem is already visible and you want the smallest safe fix.

  1. Reproduce the bug with a failing test, command, or screenshot.
  2. Give the agent the error, the expected behavior, and only the relevant files.
  3. Ask for root-cause analysis before implementation.
  4. Make the smallest fix that resolves the failure.
  5. Re-run the verification signal.

A good prompt sounds like: “Here is the failing test and the relevant files. Explain the root cause, then fix it without changing unrelated behavior.”

The usual mistake is throwing the whole repo at the model and hoping it guesses right.

Full worked example: Scenario - Fix a Bug

Feature Build

Use this when you are adding new behavior and the shape of the work is still easy to change.

  1. Write a short spec with requirements, constraints, and acceptance criteria.
  2. Ask the agent to read the spec and discuss the approach first.
  3. Break execution into small slices: schema, core logic, tests, UI, docs.
  4. Verify each slice before moving on.
  5. Update the spec when scope changes.

A good opening prompt is: “Read spec.md. Tell me what questions you have and propose the implementation plan before writing code.”

Why it works: it turns vague intent into a sequence the model can follow and you can check.
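To make the spec-to-slices flow concrete, here is a minimal sketch of one slice. `RateLimiter`, its API, and the acceptance criteria are invented for illustration; a real spec.md would define its own.

```python
# Hypothetical slice from an invented spec: core rate-limiting logic only --
# no storage, no UI, no docs yet (those are later slices).

class RateLimiter:
    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self.calls = []

    def allow(self, now):
        # Drop calls that fell out of the window, then check capacity.
        self.calls = [t for t in self.calls if now - t < self.window_seconds]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

# Acceptance criteria from the spec, written as checks so the slice can be
# verified (step 4) before moving on:
limiter = RateLimiter(max_calls=2, window_seconds=60)
assert limiter.allow(now=0) is True
assert limiter.allow(now=1) is True
assert limiter.allow(now=2) is False   # over the limit inside the window
assert limiter.allow(now=61) is True   # window rolled past the first call
```

Writing the acceptance criteria as executable checks is what makes "verify each slice" a mechanical step rather than a judgment call.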

Full worked example: Scenario - Add a Feature

Refactor

Use this when the code works, but living with it is getting expensive.

  1. Capture current behavior with characterization tests.
  2. Set explicit non-goals: no feature changes, no opportunistic fixes.
  3. Refactor one seam at a time.
  4. Re-run tests after every small step.
  5. Stop when readability or maintainability improves enough.
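Step 1 is the load-bearing one. A minimal sketch of characterization tests, with `format_name` standing in as a hypothetical legacy function:

```python
# Hypothetical legacy function being preserved. Its current behavior --
# quirks included -- is the contract the refactor must not break.

def format_name(first, last):
    if not first and not last:
        return "(unknown)"
    if not first:
        return last.upper()
    return f"{last.upper()}, {first}"

# Characterization tests: assert what the code does today, not what it
# "should" do. Any failure after a refactoring step means behavior changed.
assert format_name("Ada", "Lovelace") == "LOVELACE, Ada"
assert format_name("", "Lovelace") == "LOVELACE"
assert format_name("", "") == "(unknown)"
```

Re-running these after every small step (step 4) is what turns "preserve behavior" from a hope into a check.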

A good prompt is: “This is a refactor. Preserve behavior. Do not change public interfaces unless the test or spec requires it.”

The trap here is mixing bug fixing, cleanup, and feature work into one big edit.

Full worked example: Scenario - Safe Refactor

Codebase Explorer

Use this when you do not understand the codebase well enough to touch it confidently.

  1. Ask the main agent to map the relevant subsystem.
  2. Use subagents to explore the database layer, API layer, and tests in parallel if needed.
  3. Ask for file paths, patterns, and conventions, not broad summaries.
  4. Write down the useful findings.
  5. Only then move into a bug, feature, or refactor workflow.

A good prompt is: “Investigate how X works in this repo. Report back with entry points, key files, and patterns. Do not modify anything.”
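The same read-only stance can be scripted. A minimal sketch, assuming a Python repo; `map_subsystem` is an invented helper, and real exploration would also cover tests and conventions:

```python
import pathlib
import re

def map_subsystem(root, keyword):
    """Read-only pass: list files mentioning keyword, plus their top-level
    functions as candidate entry points. Nothing is modified."""
    findings = {}
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if keyword in text:
            findings[str(path)] = re.findall(r"^def (\w+)", text, re.MULTILINE)
    return findings
```

The output is the kind of artifact step 4 asks for: concrete file paths and entry points you can write down before moving into a bug, feature, or refactor workflow.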

| Situation | Start With |
| --- | --- |
| A broken test or clear bug report | Bug Hunt |
| New endpoint, feature, or UI path | Feature Build |
| Messy code with stable behavior | Refactor |
| Unfamiliar system | Codebase Explorer |
Whichever workflow you pick, the same principles apply:

  • verification matters more than confidence
  • smaller steps are easier to review than giant prompts
  • selective context works better than maximal context
  • when the model gets stuck, a fresh session usually beats another long rescue attempt