How to Switch Models & Configure Providers in OpenClaw
Complete guide to switching between AI models (Claude, Gemini, Ollama, DeepSeek, OpenRouter) and configuring providers in OpenClaw. Covers local models, Docker setups, and common configuration issues.
## ⚠️ The Problem

You've switched providers or models in OpenClaw (Claude, Gemini, Ollama, DeepSeek, OpenRouter), but the change doesn't take effect: the old model keeps answering, requests hang with no response, or the gateway reports config validation errors.
## 🔍 Why This Happens
Model switching usually fails for one of these reasons:

1. Missing `baseUrl` for custom providers
2. Gateway not restarted after config changes
3. Stale session state caching the old model
4. Docker networking issues preventing container-to-container communication
5. Incorrect model name format
6. Provider API compatibility settings missing (like `api: "openai-responses"` for Ollama)

## ✅ The Fix
## Quick Model Switch (No Config Changes)
The fastest way to switch models without editing config files:
```bash
# Switch model for current session
/model claude-sonnet-4-20250514

# Or use the CLI
openclaw models set anthropic/claude-sonnet-4-20250514
```

If `/model` says "not allowed", your config may have model restrictions. Use the configure wizard instead.
## Method 1: Interactive Configuration Wizard
The easiest way to reconfigure your provider:
```bash
openclaw configure
```

This walks you through selecting a provider, entering API keys, and choosing a default model.
## Method 2: Direct Config Editing
Edit your config file at `~/.config/openclaw/openclaw.json5` (or `~/.openclaw/clawdbot.json` for legacy installs):
{ "models": { "defaults": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" }, "providers": { "anthropic": { "apiKey": "$ANTHROPIC_API_KEY" } } }}Important: After editing, always restart the gateway:
```bash
openclaw gateway restart
```
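One note on the `$ANTHROPIC_API_KEY` reference above: assuming OpenClaw substitutes it from the environment (an assumption, not something this config shows), the variable has to be set in the shell or Docker environment the gateway runs in, for example:

```bash
# Assumption: the "$ANTHROPIC_API_KEY" placeholder in the config is read from the environment
export ANTHROPIC_API_KEY="sk-ant-..."
```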
## Configuring Ollama (Local Models)

Ollama provides free local models. Here's how to set it up:
```bash
# 1. Install Ollama from https://ollama.ai and pull a model
ollama pull llama3.3

# 2. Set the API key (any value works, Ollama doesn't validate)
export OLLAMA_API_KEY="ollama-local"

# 3. Verify Ollama is running
curl http://127.0.0.1:11434/api/tags
```

Add to your config:
{ "models": { "providers": { "ollama": { "baseUrl": "http://127.0.0.1:11434/v1", "apiKey": "ollama-local", "api": "openai-responses" } }, "defaults": { "provider": "ollama", "model": "llama3.3" } }}Critical: The api: "openai-responses" line is required for Ollama to work properly. Without it, you'll see 0/200k tokens and no response.
## Ollama in Docker
When running OpenClaw in Docker with Ollama:
```yaml
# docker-compose.yml
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest
    environment:
      - OLLAMA_API_KEY=ollama-local
      # Use host.docker.internal for Docker Desktop
      # Or use the service name if Ollama is in the same compose file
```
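If you go with the "same compose file" option mentioned above, here is a minimal sketch of what an Ollama service could look like; the service name, volume name, and image tag are illustrative, not taken from OpenClaw's docs:

```yaml
# Sketch: run Ollama as a companion service next to openclaw
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama

volumes:
  ollama-data:
```

With a service named `ollama`, the config below can use the service-name `baseUrl` (`http://ollama:11434/v1`).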
{ "models": { "providers": { "ollama": { // Docker Desktop (Mac/Windows) "baseUrl": "http://host.docker.internal:11434/v1", // OR Linux (use network_mode: host) // "baseUrl": "http://127.0.0.1:11434/v1", // OR same compose file (use service name) // "baseUrl": "http://ollama:11434/v1", "apiKey": "ollama-local", "api": "openai-responses" } } }}Common Docker error:
```
Error: Config validation failed: models.providers.ollama.baseUrl: Invalid input: expected string, received undefined
```

This means you set `apiKey` but forgot `baseUrl`. Both are required for explicit Ollama config.
## Configuring Gemini (Google)
```bash
# Set your Google API key
export GOOGLE_API_KEY="your-google-api-key"

# Or use environment variables in Docker
GOOGLE_MODEL=gemini-2.0-flash
CLAWDBOT_DEFAULT_MODEL=google/gemini-2.0-flash
```

Config file:
{ "models": { "defaults": { "provider": "google", "model": "gemini-2.0-flash" } }}Common error: 403 Forbidden - Usually means your API key doesn't have Gemini API enabled in Google Cloud Console.
## Configuring OpenRouter
OpenRouter provides access to many models through a single API:
```bash
# Get your key from https://openrouter.ai/keys
export OPENROUTER_API_KEY="sk-or-..."

# Onboard with OpenRouter
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "your-api-key"
```

Config:
{ "env": { "OPENROUTER_API_KEY": "sk-or-..." }, "agents": { "defaults": { "model": { "primary": "openrouter/anthropic/claude-sonnet-4", "fallbacks": ["openrouter/google/gemini-2.0-flash"] } } }}## Configuring DeepSeek (Direct API)
## Configuring DeepSeek (Direct API)

DeepSeek has an OpenAI-compatible API:
{ "env": { "DEEPSEEK_API_KEY": "your-deepseek-key" }, "models": { "providers": { "openai-compatible": { "baseUrl": "https://api.deepseek.com/v1", "headers": { "Authorization": "Bearer $DEEPSEEK_API_KEY" } } } }, "agents": { "defaults": { "model": { "primary": "openai-compatible/deepseek-chat" } } }}## Troubleshooting: Model Won't Change
## Troubleshooting: Model Won't Change

Symptom: You configure a new model but the old one still appears.
Fixes in order:
```bash
# 1. Restart the gateway (most common fix)
openclaw gateway restart

# 2. Start a fresh session (clears cached state)
/new

# 3. Verify your config is valid
openclaw doctor --fix

# 4. Check which models are available
openclaw models list
```

## Troubleshooting: 0/200k Tokens, No Response (Ollama)
Symptom: TUI shows 0/200k tokens, prompt runs forever, no response.
Root cause: the `api: "openai-responses"` setting is missing from your Ollama config.
Fix:
{ "models": { "providers": { "ollama": { "baseUrl": "http://127.0.0.1:11434/v1", "apiKey": "ollama-local", "api": "openai-responses" // THIS LINE IS CRITICAL } } }}Also verify Ollama is actually responding:
```bash
curl http://127.0.0.1:11434/api/tags
# Should return a list of installed models
```

## Recommended Local Models by Hardware
| Hardware | Recommended Models |
|----------|-------------------|
| 6GB VRAM (RTX 2060) | llama3.2:3b, phi-4, qwen2.5:3b |
| 8GB VRAM | llama3.1:8b, qwen2.5:7b |
| 16GB+ VRAM | qwen2.5-coder:32b, deepseek-r1:14b |
| 32GB+ RAM (CPU) | llama3.1:8b (slower but works) |
Tip: Run `ollama run <model>` first to confirm the model responds before configuring OpenClaw.
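For example, with the model pulled earlier:

```bash
# Quick smoke test: Ollama answers a one-off prompt before OpenClaw is involved
ollama run llama3.3 "Reply with one short sentence."
```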
## 📋 Quick Commands
| Command | Description |
|---|---|
| `openclaw configure` | Interactive wizard to configure provider and model |
| `openclaw models list` | List all available models from configured providers |
| `openclaw models set <provider/model>` | Set the default model (e.g., `anthropic/claude-sonnet-4-20250514`) |
| `/model <model-name>` | Switch model for current chat session |
| `/new` | Start a fresh session (clears cached model state) |
| `openclaw gateway restart` | Restart gateway to apply config changes |
| `openclaw doctor --fix` | Diagnose and fix common configuration issues |
| `ollama list` | List locally installed Ollama models |
| `ollama pull <model>` | Download a model for local use |