OpenClaw External AI Models Integration (GPT, Claude, Gemini, Llama)
OpenClaw supports multiple LLM providers through a single interface. Choosing the right provider is a balance of cost, quality, latency, and privacy requirements. Below we configure a multi-provider setup with fallback logic.
Supported Providers
OpenAI (GPT-4o, GPT-4o-mini): the best quality/availability ratio for most tasks. Use GPT-4o-mini for high-frequency simple requests; it is roughly 15x cheaper than GPT-4o.
Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku): the best choice for long-context tasks (200K tokens) and document work. Claude 3 Haiku is fast and cheap for simple tasks.
Google (Gemini 1.5 Pro/Flash): Gemini 1.5 Pro offers a 1M-token context window and strong multimodal support.
Self-hosted (Llama 3, Mistral, Qwen): served via Ollama, vLLM, or LM Studio. Complete privacy and no per-token costs, but requires your own GPU infrastructure.
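The provider options above can be captured in a small registry. This is an illustrative sketch, not OpenClaw's actual configuration format; the `ProviderConfig` type and all field names are hypothetical, and the OpenAI and Llama context sizes are assumptions rather than figures from this guide.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderConfig:
    name: str            # provider identifier (illustrative)
    model: str           # default model for this provider
    context_window: int  # max context in tokens
    self_hosted: bool = False

# Hypothetical registry mirroring the provider list above.
PROVIDERS = {
    "openai":    ProviderConfig("openai", "gpt-4o-mini", 128_000),       # 128K context assumed
    "anthropic": ProviderConfig("anthropic", "claude-3-5-sonnet", 200_000),
    "google":    ProviderConfig("google", "gemini-1.5-pro", 1_000_000),
    "ollama":    ProviderConfig("ollama", "llama3", 8_000, self_hosted=True),  # 8K assumed
}
```

A registry like this keeps routing and fallback logic data-driven: adding a provider is a one-line change rather than a new code path.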
Multi-Provider Setup
Router logic: route complex multi-step tasks to expensive models (GPT-4o); classification and simple answers to cheap models (GPT-4o-mini, Claude 3 Haiku); confidential data to self-hosted models.
Fallback: if OpenAI is unavailable, switch to Anthropic. This reduces the risk of agent downtime.
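A minimal sketch of that fallback, assuming each provider is wrapped in a callable that raises on failure (the function name and provider callables are hypothetical, not OpenClaw's API):

```python
from typing import Callable

def complete_with_fallback(
    prompt: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> tuple[str, str]:
    """Try providers in order; return (provider_name, response) from the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # in production, catch provider-specific errors
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Demo with fake providers: "openai" is down, so the call falls through to "anthropic".
def openai_down(prompt: str) -> str:
    raise ConnectionError("provider unavailable")

name, reply = complete_with_fallback(
    "hello",
    [("openai", openai_down), ("anthropic", lambda p: p.upper())],
)
```

In practice you would also add timeouts and retry-with-backoff before falling through, so a single slow request does not immediately burn through the whole provider list.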