
Core Concepts

Before diving deeper, let’s establish shared vocabulary.

| Term | What It Means |
|------|---------------|
| Model | The AI itself (e.g., GPT-5.2, Claude Sonnet 4.5). Determines capability and quality |
| Provider | Company hosting the model’s API (e.g., OpenAI, Anthropic). Determines pricing and terms |
| Agentic | AI that acts autonomously: reads files, runs commands, iterates on errors |
| Context Window | How much text the AI can “see” at once (measured in tokens). More ≠ better |
| BYOK | Bring Your Own Key. Use your own API keys instead of a subscription |
| MCP | Model Context Protocol. A standard for connecting AI tools to external services |
| Skill | A reusable instruction set, playbook, or convention that helps an agent perform a class of tasks more reliably |
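Because context windows are measured in tokens rather than characters, it helps to have a quick way to estimate usage. The sketch below uses a rough ~4 characters-per-token heuristic for English text — an assumption for illustration, not a real tokenizer; use your provider’s tokenizer for exact counts:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: ~4 chars/token for English text.

    Heuristic only — for exact counts, use the provider's tokenizer
    (e.g., tiktoken for OpenAI models).
    """
    return max(1, round(len(text) / chars_per_token))

prompt = "Summarize the architecture decisions in docs/adr/"
print(estimate_tokens(prompt))  # → 12
```

Even a crude estimate like this is enough to notice when a prompt or pasted file is about to eat a large slice of the window.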
| Term | What It Means |
|------|---------------|
| Context Engineering | Building systems to provide the right information and tools to the model |
| Context Rot | When too much irrelevant context makes the AI “dumber” |
| Context Budget | A practical limit for how much context to load before quality starts to degrade |
| Subagent | A separate AI instance launched for investigation, keeping the main context clean |
| Term | What It Means |
|------|---------------|
| Composer/Agent Mode | Multi-file editing mode (vs. single-file autocomplete) |
| Prompt | Your instruction to the AI. Quality of prompt = quality of output |
| Verification | Having the AI check its own work (tests, linter, type checker) |
| Close the Loop | Designing the workflow so the agent can verify its own output |
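“Closing the loop” can be sketched as a generate-verify cycle: ask the agent for a change, run the checks, and feed any failures back in. Everything below is hypothetical scaffolding — `generate_patch` stands in for a real model call and `run_checks` for your actual test or linter run:

```python
from typing import Callable

def close_the_loop(
    task: str,
    generate_patch: Callable[[str, list[str]], str],
    run_checks: Callable[[str], list[str]],
    max_rounds: int = 3,
) -> str:
    """Ask the agent for a patch, verify it, and feed failures back."""
    errors: list[str] = []
    for _ in range(max_rounds):
        patch = generate_patch(task, errors)   # model call (stub)
        errors = run_checks(patch)             # e.g., pytest/linter output
        if not errors:
            return patch                       # checks pass: loop closed
    raise RuntimeError(f"still failing after {max_rounds} rounds: {errors}")

# Toy stand-ins: the "agent" fixes its mistake once it sees the error.
def fake_agent(task: str, errors: list[str]) -> str:
    return "return a + b" if errors else "return a - b"

def fake_checks(patch: str) -> list[str]:
    return [] if "a + b" in patch else ["test_add failed: expected 3, got -1"]

print(close_the_loop("implement add()", fake_agent, fake_checks))  # → return a + b
```

The point of the design is that verification output is machine-readable and fed straight back to the agent, so no human has to sit in the middle of every iteration.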

There is no strong primary-source basis for a universal 40% threshold. Treat hard percentages as heuristics, not laws.

The safer idea is simpler: quality drops before the window is full, especially when context is noisy. Use selective retrieval, compaction, and project context files instead of stuffing everything into one prompt.
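Selective retrieval under a budget can be sketched as a greedy loop: rank candidate snippets by relevance and stop adding once a token budget is spent. The relevance scores, snippet contents, and ~4 chars/token cost estimate are all illustrative assumptions:

```python
def select_context(snippets: list[tuple[float, str]], budget_tokens: int) -> list[str]:
    """Greedy selection: most relevant snippets first, until the budget is spent.

    Token costs use a rough ~4 chars/token heuristic (assumption, not a tokenizer).
    """
    chosen: list[str] = []
    used = 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = max(1, len(text) // 4)
        if used + cost > budget_tokens:
            continue  # skip snippets that would blow the budget
        chosen.append(text)
        used += cost
    return chosen

docs = [
    (0.9, "README: how to run the test suite"),
    (0.2, "CHANGELOG: ancient release notes " * 50),  # long and irrelevant
    (0.7, "auth.py: token refresh logic"),
]
# Keeps the two short, relevant snippets; skips the long CHANGELOG dump.
print(select_context(docs, budget_tokens=50))
```

A real system would compute relevance with embeddings or keyword search, but the budget discipline is the same: load less, and load the right things first.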

If you want a working rule, think in ranges rather than one magic number. The practical target depends on the model, the task, and how clean the context is.

Models determine capability — how smart the AI is, how fast it responds.

Providers determine terms — pricing, privacy policy, legal jurisdiction.

The same model can be available through multiple providers:

  • Claude Sonnet 4.5 is available via Anthropic, AWS Bedrock, and Google Vertex
  • GPT-5.2 is available via OpenAI and Azure OpenAI

This matters for enterprise (data residency, compliance) and cost optimization.