Subagent Architectures
Subagents are separate AI instances that handle scoped tasks while your main agent stays focused on implementation.
Why Subagents
Problem: Research pollutes context. Looking up docs, exploring codebases, and investigating options fills your context window with information that is useful once and distracting afterward.
Solution: Delegate research to subagents that run in separate contexts.
When to Use Subagents
- Exploring unfamiliar codebases
- Looking up documentation
- Investigating multiple approaches
- Any task that’s “read a lot, summarize a little”
What They Are Good At
- Codebase exploration
- External documentation lookup
- Pattern finding across multiple modules
- Comparing implementation options before you commit to one
They are usually bad at owning the whole task indefinitely. Use them to reduce noise, not to create management overhead.
This caveat matters. Research and architecture writeups both suggest multi-agent systems help most when the agents have genuinely different scopes, tools, or capabilities.
Evidence tags: Research-backed (METR uplift update, Anthropic: Effective harnesses for long-running agents); Practitioner-backed (Workflow Archetypes).
Example Usage
In Claude Code
Use subagents to investigate how authentication is implemented in this codebase. Report back with file paths and patterns.
In Cursor
Use Background Agents for research tasks. Keep your main Composer session focused on implementation.
Benefits
| Benefit | Why It Matters |
|---|---|
| Clean main context | No research pollution |
| Parallel investigation | Multiple angles at once |
| Focused summaries | Get answers, not raw exploration |
| Separate context budgets | Each subagent spends its own context window, not the main agent's |
The Orchestrator Pattern
The main agent should act like an orchestrator:
- decide what needs to be discovered
- send narrow research tasks to subagents
- collect concise findings
- implement in the main context
This works best when search and implementation are different jobs.
Architecture Patterns
Section titled “Architecture Patterns”Research + Implementation
Main Agent ─┬─> Subagent: "Research auth patterns"
            │     └─> Returns: "Found JWT in /auth, sessions in /middleware"
            └─> Main continues with focused implementation
Parallel Exploration
Main Agent ─┬─> Subagent 1: "Explore database layer"
            ├─> Subagent 2: "Explore API layer"
            └─> Subagent 3: "Explore test patterns"
            └─> Main synthesizes findings
Best Practices
- Give clear scope — “Investigate X in these files”
- Ask for summary — “Report back with key findings”
- Set constraints — “Don’t modify any files”
- Time-box — Use for investigation, not implementation
- Separate research from build — Let subagents search, let the main agent write code
- Request file paths and patterns — Summaries are better when they point back to evidence
Supporting Evidence
- Context Engineering
- Productivity Research
- Agent architecture analyses showing orchestration overhead and context-isolation benefits
When Not to Use Them
- Tiny one-file edits
- Simple syntax or typo fixes
- Tasks where the overhead is larger than the search space
Next Steps
- Workflow Archetypes — where subagents fit in real workflows
- Agent Harness — keeping long-running work stable