Choosing a Model

Pricing changes faster than most documentation gets updated. Use this page for decision logic, not static dollar figures.

If your main need is one of the following, optimize for the paired quality:

  • Long-running agent loops: reasoning quality and tool use
  • Quick edits and completions: latency
  • Visual/UI implementation: multimodal strength
  • Sensitive code: local execution or trusted provider boundaries
  • Large investigations: long context plus strong context hygiene
Reach for a high-capability reasoning model when:

  • the task spans many files
  • the change has architectural consequences
  • you need the model to recover from failures and keep a plan straight

Reach for a fast, low-latency model when:

  • you are iterating quickly
  • the task is local and well-scoped
  • autocomplete quality matters more than deep planning

Reach for local execution or a trusted provider boundary when:

  • your data cannot leave your environment
  • you need predictable operational boundaries
  • you can accept some capability tradeoffs for control
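The mapping above can be sketched as a small lookup. This is an illustrative sketch only; the function and dictionary names are hypothetical, not part of any tool's API.

```python
# Hypothetical sketch of the decision logic above: map a dominant
# workflow need to the property worth optimizing for.
OPTIMIZE_FOR = {
    "long-running agent loops": "reasoning quality and tool use",
    "quick edits and completions": "latency",
    "visual/ui implementation": "multimodal strength",
    "sensitive code": "local execution or trusted provider boundaries",
    "large investigations": "long context plus strong context hygiene",
}

def pick_optimization(primary_need: str) -> str:
    """Return the property to optimize for, given the dominant workflow."""
    return OPTIMIZE_FOR[primary_need.lower()]

print(pick_optimization("Sensitive code"))
# → local execution or trusted provider boundaries
```

The point of the lookup is that the key is a workflow, not a model name: when the workflow changes, the optimization target changes with it.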

Two tools can expose the same model under very different operating constraints. Common access paths include:

  • direct provider access
  • aggregator access such as OpenRouter
  • cloud-platform access such as Bedrock, Vertex, or Azure OpenAI
  • local model serving via Ollama, LM Studio, or vLLM

This matters because provider choice changes retention policy, logging surface, and enterprise deployment options.
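As an illustration of why the access path matters mechanically: several of these paths expose an OpenAI-compatible chat completions API, so the same request body can cross very different trust boundaries depending only on the base URL. The endpoints below are the documented defaults for OpenRouter and a local Ollama server; the model names and helper function are illustrative placeholders, and the request is assembled but never sent.

```python
# Sketch: one OpenAI-compatible request shape, two access paths with
# different retention and logging surfaces. Nothing here performs I/O.
ACCESS_PATHS = {
    "aggregator": "https://openrouter.ai/api/v1/chat/completions",
    "local": "http://localhost:11434/v1/chat/completions",  # Ollama default
}

def build_request(path: str, model: str, prompt: str) -> dict:
    """Assemble (but do not send) a chat completion request."""
    return {
        "url": ACCESS_PATHS[path],
        "json": {
            "model": model,  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }

local_req = build_request("local", "llama3", "Summarize this diff.")
print(local_req["url"])
# → http://localhost:11434/v1/chat/completions
```

Because the payload shape is identical, switching from an aggregator to a local server is a one-line configuration change, while the retention policy, logging surface, and deployment story change completely.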

Do not choose a model from a stale table. Choose it from the workflow you need to support, then verify the live benchmark and access details before committing.