
# How to Switch Models & Configure Providers in OpenClaw

Complete guide to switching between AI models (Claude, Gemini, Ollama, DeepSeek, OpenRouter) and configuring providers in OpenClaw. Covers local models, Docker setups, and common configuration issues.

## ⚠️ The Problem

You want to switch AI models or configure a new provider (Ollama, Gemini, OpenRouter, DeepSeek), but the model won't change, the session shows 0 tokens with no response, or configuration commands fail with validation errors.

## 🔍 Why This Happens

Model switching issues typically stem from:

1. Missing required configuration fields, like `baseUrl` for custom providers
2. The gateway not being restarted after config changes
3. Stale session state caching the old model
4. Docker networking issues preventing container-to-container communication
5. An incorrect model name format
6. Missing provider API compatibility settings (like `api: "openai-responses"` for Ollama)

## The Fix

## Quick Model Switch (No Config Changes)

The fastest way to switch models without editing config files:

```bash
# Switch model for current session
/model claude-sonnet-4-20250514

# Or use the CLI
openclaw models set anthropic/claude-sonnet-4-20250514
```

If /model says "not allowed", your config may have model restrictions. Use the configure wizard instead.

## Method 1: Interactive Configuration Wizard

The easiest way to reconfigure your provider:

```bash
openclaw configure
```

This walks you through selecting a provider, entering API keys, and choosing a default model.
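
Once the wizard finishes, it's worth confirming the change actually took effect before starting a long session. A quick sanity check using commands from the Quick Commands list later in this guide:

```bash
# Confirm the new provider/model are registered and the config is valid
openclaw models list       # the new provider's models should be listed
openclaw doctor --fix      # catches common configuration mistakes
openclaw gateway restart   # apply the new defaults to the running gateway
```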

## Method 2: Direct Config Editing

Edit your config file at `~/.config/openclaw/openclaw.json5` (or `~/.openclaw/clawdbot.json` for legacy installs):

```json5
{
  "models": {
    "defaults": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514"
    },
    "providers": {
      "anthropic": {
        "apiKey": "$ANTHROPIC_API_KEY"
      }
    }
  }
}
```

Important: After editing, always restart the gateway:

```bash
openclaw gateway restart
```

## Configuring Ollama (Local Models)

Ollama provides free local models. Here's how to set it up:

```bash
# 1. Install Ollama from https://ollama.ai and pull a model
ollama pull llama3.3

# 2. Set the API key (any value works, Ollama doesn't validate)
export OLLAMA_API_KEY="ollama-local"

# 3. Verify Ollama is running
curl http://127.0.0.1:11434/api/tags
```

Add to your config:

```json5
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-responses"
      }
    },
    "defaults": {
      "provider": "ollama",
      "model": "llama3.3"
    }
  }
}
```

Critical: The `api: "openai-responses"` line is required for Ollama to work properly. Without it, you'll see 0/200k tokens and no response.
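
If you still get no response after adding that line, it can help to confirm Ollama's OpenAI-compatible endpoint is answering at all, independently of OpenClaw. A minimal sketch, assuming you pulled llama3.3 as above:

```bash
# Call Ollama's OpenAI-compatible chat endpoint directly
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.3",
    "messages": [{"role": "user", "content": "Reply with one word."}]
  }'
# A JSON completion here means Ollama is fine and the problem is in the OpenClaw config
```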

## Ollama in Docker

When running OpenClaw in Docker with Ollama:

```yaml
# docker-compose.yml
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest
    environment:
      - OLLAMA_API_KEY=ollama-local
    # Use host.docker.internal for Docker Desktop
    # Or use the service name if Ollama is in the same compose file
```
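
If Ollama itself isn't running on the host yet, the official ollama/ollama image is a common way to run it next to OpenClaw. A minimal sketch (container name, volume, and port mapping are illustrative):

```bash
# Run Ollama in its own container and pull a model into it
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
docker exec -it ollama ollama pull llama3.3
```

If both services live in the same compose file, the service name becomes the hostname, which is why the config below can use http://ollama:11434/v1.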

Config for Docker:

```json5
{
  "models": {
    "providers": {
      "ollama": {
        // Docker Desktop (Mac/Windows)
        "baseUrl": "http://host.docker.internal:11434/v1",
        // OR Linux (use network_mode: host)
        // "baseUrl": "http://127.0.0.1:11434/v1",
        // OR same compose file (use service name)
        // "baseUrl": "http://ollama:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-responses"
      }
    }
  }
}
```

Common Docker error:

```
Error: Config validation failed: models.providers.ollama.baseUrl: Invalid input: expected string, received undefined
```

This means you set `apiKey` but forgot `baseUrl`. Both fields are required when configuring Ollama explicitly.
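
Once both fields are set, it's worth confirming the OpenClaw container can actually reach Ollama before digging further into the config. A rough check, assuming curl is available inside the image and you're using the Docker Desktop URL:

```bash
# From inside the OpenClaw container, hit the same URL the config points at
docker compose exec openclaw curl -s http://host.docker.internal:11434/api/tags
# "connection refused" or a timeout points to Docker networking, not OpenClaw
```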

## Configuring Gemini (Google)

```bash
# Set your Google API key
export GOOGLE_API_KEY="your-google-api-key"

# Or use environment variables in Docker
GOOGLE_MODEL=gemini-2.0-flash
CLAWDBOT_DEFAULT_MODEL=google/gemini-2.0-flash
```

Config file:

```json5
{
  "models": {
    "defaults": {
      "provider": "google",
      "model": "gemini-2.0-flash"
    }
  }
}
```

Common error: 403 Forbidden, which usually means the Gemini API isn't enabled for your key's project in Google Cloud Console.
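
To rule out OpenClaw entirely, the key can be tested against Google's public model-listing endpoint. A quick sketch:

```bash
# A 403 here confirms the key or its Google Cloud project is the problem
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"
```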

## Configuring OpenRouter

OpenRouter provides access to many models through a single API:

```bash
# Get your key from https://openrouter.ai/keys
export OPENROUTER_API_KEY="sk-or-..."

# Onboard with OpenRouter
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "your-api-key"
```

Config:

```json5
{
  "env": {
    "OPENROUTER_API_KEY": "sk-or-..."
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/anthropic/claude-sonnet-4",
        "fallbacks": ["openrouter/google/gemini-2.0-flash"]
      }
    }
  }
}
```

## Configuring DeepSeek (Direct API)

DeepSeek has an OpenAI-compatible API:

```json5
{
  "env": {
    "DEEPSEEK_API_KEY": "your-deepseek-key"
  },
  "models": {
    "providers": {
      "openai-compatible": {
        "baseUrl": "https://api.deepseek.com/v1",
        "headers": {
          "Authorization": "Bearer $DEEPSEEK_API_KEY"
        }
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-compatible/deepseek-chat"
      }
    }
  }
}
```

## Troubleshooting: Model Won't Change

Symptom: You configure a new model but the old one still appears.

Fixes in order:

```bash
# 1. Restart the gateway (most common fix)
openclaw gateway restart

# 2. Start a fresh session (clears cached state)
/new

# 3. Verify your config is valid
openclaw doctor --fix

# 4. Check which models are available
openclaw models list
```

## Troubleshooting: 0/200k Tokens, No Response (Ollama)

Symptom: TUI shows 0/200k tokens, prompt runs forever, no response.

Root cause: Missing `api: "openai-responses"` in the Ollama provider config.

Fix:

```json5
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-responses"  // THIS LINE IS CRITICAL
      }
    }
  }
}
```

Also verify Ollama is actually responding:

```bash
curl http://127.0.0.1:11434/api/tags
# Should return a list of installed models
```

## Recommended Local Models by Hardware

| Hardware | Recommended Models |
|----------|-------------------|
| 6GB VRAM (RTX 2060) | llama3.2:3b, phi-4, qwen2.5:3b |
| 8GB VRAM | llama3.1:8b, qwen2.5:7b |
| 16GB+ VRAM | qwen2.5-coder:32b, deepseek-r1:14b |
| 32GB+ RAM (CPU) | llama3.1:8b (slower but works) |

Tip: Run `ollama run <model>` first to confirm the model responds before configuring OpenClaw.
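
For example, a quick pre-flight before pointing OpenClaw at a local model might look like this (the model name is just one of the suggestions above):

```bash
ollama list                        # confirm the model is actually installed
ollama run llama3.2:3b "Say hi."   # should print a short reply and exit
```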


## 📋 Quick Commands

| Command | Description |
|---------|-------------|
| `openclaw configure` | Interactive wizard to configure provider and model |
| `openclaw models list` | List all available models from configured providers |
| `openclaw models set <provider/model>` | Set the default model (e.g., `anthropic/claude-sonnet-4-20250514`) |
| `/model <model-name>` | Switch model for current chat session |
| `/new` | Start a fresh session (clears cached model state) |
| `openclaw gateway restart` | Restart gateway to apply config changes |
| `openclaw doctor --fix` | Diagnose and fix common configuration issues |
| `ollama list` | List locally installed Ollama models |
| `ollama pull <model>` | Download a model for local use |


## Still stuck?

Join our Discord community for real-time help.
