
# Setting Up Ollama & Local Models with OpenClaw

A complete guide to configuring Ollama for local LLM inference with OpenClaw, covering Docker setup, WSL2 configuration, and troubleshooting of common issues such as a token counter stuck at 0/200k and empty responses.

## ⚠️ The Problem

You're trying to use Ollama with OpenClaw for local LLM inference, but the integration isn't working: the TUI sits at "0/200k tokens" with no response, models don't appear in the provider list, or you get empty responses even though the connection appears successful. Common symptoms include:

- The TUI shows tokens 0/200k (0%) and never progresses
- "Say hello in one sentence" produces no output
- `openclaw doctor` passes but the model doesn't respond
- The Docker setup fails with validation errors
- The model works with Python/curl but not with OpenClaw

## 🔍 Why This Happens

There are several common causes for Ollama integration failures:

1. **Missing or incorrect API format**: Ollama requires `api: "openai-responses"` or `api: "openai-completions"` in your config, not the default format.
2. **Wrong baseUrl format**: The URL must include the `/v1` suffix: `http://127.0.0.1:11434/v1`, not just `http://127.0.0.1:11434`.
3. **Docker networking**: When running OpenClaw in Docker, `127.0.0.1` refers to the container, not your host. You need `host.docker.internal` (Docker Desktop) or `--network=host` (Linux).
4. **Incomplete config validation**: Setting `apiKey` alone without `baseUrl` causes: `Config validation failed: models.providers.ollama.baseUrl: Invalid input: expected string, received undefined`
5. **Model too large for hardware**: Large models (32B+) consume RAM needed for context, causing timeouts or empty responses.
6. **Ollama not running**: The Ollama service must be actively serving, not just installed.
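
If you want to separate cause 6 (Ollama not serving at all) from causes 1 and 2 (wrong API path) before touching any config, a quick probe of both the native and the OpenAI-compatible endpoints helps. This is a minimal sketch for illustration, not part of OpenClaw; it assumes Ollama's default port 11434, its native `/api/tags` route, and the `/v1/models` route that recent Ollama versions expose for OpenAI compatibility.

```python
"""Quick triage: is Ollama up, and is the OpenAI-compatible /v1 path reachable?"""
import json
import sys
import urllib.error
import urllib.request

BASE = "http://127.0.0.1:11434"  # adjust for Docker or WSL2 setups

def probe(path: str) -> None:
    url = BASE + path
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = json.loads(resp.read().decode())
            print(f"OK   {url} -> keys: {sorted(body)}")
    except urllib.error.URLError as exc:
        print(f"FAIL {url} -> {exc}")
        if path == "/api/tags":
            sys.exit("Ollama is not reachable at all; start it with `ollama serve`.")

probe("/api/tags")   # native Ollama API: proves the service is running
probe("/v1/models")  # OpenAI-compatible API: the path OpenClaw's baseUrl relies on
```

If the first probe fails, fix Ollama itself (Step 1 below); if only the second fails, your Ollama build may be too old to expose the OpenAI-compatible API.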

## The Fix

## Step 1: Verify Ollama is Running

First, confirm Ollama is running and has models available:

```bash
# Check if Ollama is running and list models
ollama list

# If not running, start the service
ollama serve

# Test that the API responds
curl http://127.0.0.1:11434/api/tags
```

You should see your installed models listed. If ollama list fails, Ollama isn't running.

## Step 2: Configure OpenClaw Correctly

The most critical setting is the API format. Add this to your ~/.openclaw/openclaw.json:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-responses",
        "models": [
          {
            "id": "llama3.2:latest",
            "name": "Llama 3.2",
            "contextWindow": 32768,
            "maxOutput": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.2:latest"
      }
    }
  }
}
```

Critical points:

- `baseUrl` MUST end with `/v1`
- `api` MUST be set to `"openai-responses"` or `"openai-completions"`
- The model `id` must match exactly what `ollama list` shows (including the `:tag`)
- `apiKey` can be any non-empty string (Ollama doesn't validate it)
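
Before restarting the gateway, you can verify that the exact `baseUrl` and model `id` from this config answer a chat request. The sketch below is an illustration rather than an OpenClaw command: it posts directly to Ollama's OpenAI-compatible `/v1/chat/completions` endpoint and times the reply, since the first request after a cold start can be slow while Ollama loads the model.

```python
"""Smoke test: hit the same URL and model id that OpenClaw will use."""
import json
import time
import urllib.request

BASE_URL = "http://127.0.0.1:11434/v1"  # must match models.providers.ollama.baseUrl
MODEL_ID = "llama3.2:latest"            # must match `ollama list` exactly, tag included

payload = {
    "model": MODEL_ID,
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer ollama-local",  # Ollama ignores the key, as noted above
    },
)

start = time.time()
with urllib.request.urlopen(req, timeout=300) as resp:  # generous timeout for cold loads
    body = json.loads(resp.read().decode())

print(f"{time.time() - start:.1f}s: {body['choices'][0]['message']['content']}")
```

If this script works but OpenClaw still stalls, the problem is in the OpenClaw config or networking; if it hangs or errors here too, look at the model id or hardware sizing (Step 5) first.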

## Step 3: Docker-Specific Configuration

If running OpenClaw in Docker, you need different networking:

```bash
# For Docker Desktop (macOS/Windows)
docker compose exec openclaw-gateway node dist/index.js config set 'models.providers.ollama={"baseUrl":"http://host.docker.internal:11434/v1","apiKey":"ollama-local","api":"openai-responses"}'
```

Alternative: Environment variable approach (add to docker-compose.yml):

```yaml
environment:
  - OLLAMA_API_KEY=ollama-local
  - OLLAMA_BASE_URL=http://host.docker.internal:11434/v1
```

For Linux without Docker Desktop:

- Use `--network=host` on the container, OR
- Add Ollama's container IP to `/etc/hosts` and reference it by hostname
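
Whichever option you choose, it is worth confirming from inside the container that the host you put in `baseUrl` actually resolves and answers. A minimal sketch, assuming Python 3 is available in the image (slim images may only have `curl` or `wget`, which work just as well):

```python
"""Run inside the OpenClaw container: can it resolve and reach the Ollama host?"""
import socket
import urllib.request

HOST = "host.docker.internal"  # or the hostname/IP you added for Linux, as described above
PORT = 11434

try:
    ip = socket.gethostbyname(HOST)
    print(f"{HOST} resolves to {ip}")
except socket.gaierror:
    raise SystemExit(f"{HOST} does not resolve inside the container - see the Linux notes above")

with urllib.request.urlopen(f"http://{HOST}:{PORT}/api/tags", timeout=5) as resp:
    print(f"Ollama reachable from the container (HTTP {resp.status})")
```

For example, `docker compose exec openclaw-gateway python3 check_ollama.py` (the filename is arbitrary) runs the check in the gateway container, assuming the image ships Python.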

## Step 4: WSL2-Specific Setup

WSL2 has networking quirks. Use these settings:

```bash
# Create/edit ~/.openclaw/.env
echo 'OLLAMA_API_KEY=ollama-local' >> ~/.openclaw/.env

# Ensure Ollama is accessible from WSL2
curl http://localhost:11434/api/tags
```

If localhost doesn't work, try 127.0.0.1 or check your WSL2 networking configuration.
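
Where Ollama runs matters here: inside the same WSL2 distro, `localhost`/`127.0.0.1` should work, but if Ollama runs on the Windows side you may need the Windows host's address instead. The sketch below assumes a default NAT-mode WSL2 setup, where the `nameserver` entry in `/etc/resolv.conf` typically points at the Windows host; it simply probes each candidate address.

```python
"""Find which address reaches Ollama from inside WSL2."""
import re
import urllib.request

candidates = ["127.0.0.1", "localhost"]

# In default NAT-mode WSL2, the resolv.conf nameserver is usually the Windows host.
try:
    with open("/etc/resolv.conf") as f:
        candidates += re.findall(r"^nameserver\s+(\S+)", f.read(), re.MULTILINE)
except OSError:
    pass

for host in candidates:
    url = f"http://{host}:11434/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=3):
            print(f"OK   {url}  <- use this host in your baseUrl")
            break
    except OSError as exc:  # URLError and timeouts are OSError subclasses
        print(f"FAIL {url} ({exc})")
```

If only the Windows-host address is expected to work and it still fails, Ollama on Windows may be bound to localhost only; its OLLAMA_HOST setting controls the bind address.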

## Step 5: Choose the Right Model Size

Hardware recommendations from community experience:

| VRAM | RAM | Recommended Models |
|------|-----|--------------------|
| 6GB (RTX 2060) | 16GB | Llama 3.2 3B, Phi-4, Qwen 2.5 3B |
| 8GB | 16GB | Llama 3.1 8B, Mistral 7B |
| 12GB+ | 32GB | Qwen 2.5-coder:14B, DeepSeek-R1:14B |
| 24GB+ | 64GB+ | 32B models (with limited context) |

Warning: A 32B model on 32GB RAM leaves little space for context, causing empty responses or timeouts.
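
As a rough sanity check, you can compare the on-disk size of each installed model against total system memory: when the weights alone approach your RAM, little is left for the KV cache backing the context window. A sketch for illustration only (the `size` field comes from Ollama's `/api/tags` response; the "half of RAM" threshold is an arbitrary rule of thumb, not a guarantee):

```python
"""Flag installed models that look large relative to system RAM."""
import json
import os
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as resp:
    models = json.loads(resp.read().decode())["models"]

# Total physical RAM on Linux/WSL2; other platforms can use psutil.virtual_memory().total
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

for m in models:
    size_gb = m["size"] / 1024**3
    verdict = "tight - expect timeouts or empty replies" if size_gb > ram_gb / 2 else "ok"
    print(f"{m['name']:<32} {size_gb:5.1f} GB weights / {ram_gb:.0f} GB RAM -> {verdict}")
```

On GPU setups the VRAM column above is usually the tighter limit; `ollama ps` (listed in Quick Commands below) shows whether a loaded model is running fully on the GPU or spilling over to the CPU.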

## Step 6: Restart and Verify

```bash
# Restart the gateway to apply changes
openclaw gateway restart

# Run diagnostics
openclaw doctor --fix

# Verify model is configured
openclaw models list

# Check model status shows your Ollama model
openclaw models status
```

## Step 7: Test the Connection

```bash
# Start the TUI and send a test message
openclaw tui

# Type a simple prompt
> Say hello in one sentence.
```

If you still see 0/200k tokens after 30+ seconds, check the gateway logs:

```bash
# In another terminal
openclaw gateway logs --follow
```

Look for error messages like connection refused, timeout, or model loading errors.

## Alternative: Implicit Auto-Discovery

If you don't want to explicitly define models, OpenClaw can auto-discover them:

```bash
# Just set the API key, don't define providers.ollama explicitly
export OLLAMA_API_KEY=ollama-local

# OpenClaw will discover models at http://127.0.0.1:11434 automatically
openclaw models list
```

This works when you DON'T have an explicit models.providers.ollama block in your config.


## 📋 Quick Commands

| Command | Description |
|---------|-------------|
| `ollama list` | List installed Ollama models and verify the service is running |
| `ollama serve` | Start the Ollama service if not running |
| `curl http://127.0.0.1:11434/api/tags` | Test that the Ollama API is responding |
| `openclaw gateway restart` | Restart the gateway after config changes |
| `openclaw doctor --fix` | Run diagnostics and auto-fix common issues |
| `openclaw models list` | Show all configured models, including Ollama |
| `openclaw models status` | Check model connectivity and auth status |
| `openclaw gateway logs --follow` | Stream gateway logs to debug connection issues |
| `ollama ps` | Show currently loaded/running models in Ollama |



## Still stuck?

Join our Discord community for real-time help.
