🦞 OpenClaw Guide
Models

Using DeepSeek, MiniMax, Kimi & Alternative Models with OpenClaw

Configure alternative model providers like DeepSeek, MiniMax, Kimi K2.5, and Groq with OpenClaw. Troubleshoot 'no output' issues and learn direct API vs OpenRouter setup.

⚠️ The Problem

You're trying to use alternative model providers like DeepSeek, MiniMax, Kimi K2.5, or Groq with OpenClaw, but encountering issues such as:

- No output appearing in the UI or terminal despite API keys being configured
- Models showing as connected but returning 0 tokens
- Models not appearing in the `clawdbot configure` wizard

🔍 Why This Happens

Alternative models require specific configuration that differs from the default Anthropic/OpenAI setup. Common causes include:

1. Model not specified correctly in config - each provider has its own model ID format
2. Gateway not restarted after adding or changing API keys
3. Environment variables not loaded properly
4. Using the wrong provider format (e.g., trying to use Kimi directly instead of through OpenRouter)
5. Rate limits on free tiers being exhausted
6. Model IDs not matching what the provider expects - these change frequently
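Cause (3) is the quickest to rule out. Below is a minimal sketch for checking which provider keys are actually visible in your current shell; the variable names are the common ones used in this guide, so adjust the list for your provider:

```bash
# Report which provider API keys are exported in the current shell.
check_keys() {
  local var
  for var in OPENROUTER_API_KEY DEEPSEEK_API_KEY GROQ_API_KEY MINIMAX_API_KEY; do
    if [ -n "${!var}" ]; then
      printf '%s: set\n' "$var"
    else
      printf '%s: MISSING\n' "$var"
    fi
  done
}

check_keys
```

Keep in mind the gateway only sees the environment it was started with, so a key exported in one terminal won't reach a gateway launched from another.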

The Fix

## Diagnosing 'No Output' Issues

If the TUI shows your model as connected but you get no response (the token counter stays at 0/131k (0%)), run these diagnostic commands:

```bash
# Check if gateway is running and model is configured
clawdbot status

# Verify model configuration
clawdbot models status

# Watch logs in real-time while testing
clawdbot gateway logs --follow

# Check what model is actually set
clawdbot config get agents.defaults.model
```

## Setting Up DeepSeek

### Option 1: Via OpenRouter (Recommended)

OpenRouter provides access to DeepSeek models with a unified API:

```bash
# Get an API key from https://openrouter.ai/keys
clawdbot onboard --auth-choice apiKey --token-provider openrouter --token "sk-or-your-key"
```

Then configure in `~/.clawdbot/clawdbot.json5`:

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      model: {
        primary: "openrouter/deepseek/deepseek-chat",
        fallbacks: ["openrouter/deepseek/deepseek-r1:free"]
      }
    }
  }
}
```

### Option 2: Direct DeepSeek API

If you have a paid API key directly from DeepSeek, use their OpenAI-compatible endpoint:

```json5
{
  env: {
    DEEPSEEK_API_KEY: "your-deepseek-api-key"
  },
  models: {
    providers: {
      "openai-compatible": {
        baseUrl: "https://api.deepseek.com/v1",
        headers: {
          "Authorization": "Bearer $DEEPSEEK_API_KEY"
        }
      }
    }
  },
  agents: {
    defaults: {
      model: { primary: "openai-compatible/deepseek-chat" }
    }
  }
}
```

## Setting Up Kimi K2.5

### Via OpenRouter

Kimi is from Moonshot AI. The correct OpenRouter model ID is:

```json5
{
  agents: {
    defaults: {
      model: { primary: "openrouter/moonshotai/kimi-k2.5" },
      models: {
        "openrouter/moonshotai/kimi-k2.5": { alias: "kimi" }
      }
    }
  },
  env: {
    OPENROUTER_API_KEY: "sk-or-your-key-here"
  }
}
```

### Via NVIDIA Free API

NVIDIA offers free Kimi K2.5 API keys. Configure with a custom base URL:

```json5
{
  models: {
    providers: {
      "nvidia-kimi": {
        baseUrl: "https://integrate.api.nvidia.com/v1",
        headers: {
          "Authorization": "Bearer $NVIDIA_API_KEY"
        }
      }
    }
  },
  agents: {
    defaults: {
      model: { primary: "nvidia-kimi/kimi-k2-5" }
    }
  }
}
```

Note: NVIDIA's free tier can be slow (minutes to respond). OpenRouter is faster for production use.
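If the NVIDIA latency is a problem but you want to keep the free key around, one option (a sketch that assumes the `nvidia-kimi` provider defined above plus an OpenRouter key) is to make NVIDIA a fallback behind OpenRouter:

```json5
{
  agents: {
    defaults: {
      model: {
        // Fast primary via OpenRouter; slow free NVIDIA endpoint as backup
        primary: "openrouter/moonshotai/kimi-k2.5",
        fallbacks: ["nvidia-kimi/kimi-k2-5"]
      }
    }
  }
}
```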

### Via Direct Moonshot API

Get an API key from https://www.moonshot.cn/ (the Moonshot AI website):

```json5
{
  env: { MOONSHOT_API_KEY: "your-key" },
  models: {
    providers: {
      "moonshot": {
        baseUrl: "https://api.moonshot.cn/v1",
        headers: {
          "Authorization": "Bearer $MOONSHOT_API_KEY"
        }
      }
    }
  }
}
```

## Setting Up MiniMax

MiniMax requires specific model naming:

```json5
{
  env: { MINIMAX_API_KEY: "your-minimax-key" },
  agents: {
    defaults: {
      model: { primary: "minimax/abab6.5s-chat" }
    }
  }
}
```

Important: Always restart the gateway after config changes:

```bash
clawdbot gateway restart
```

## Setting Up Groq

Groq offers fast inference for open models:

```json5
{
  env: { GROQ_API_KEY: "gsk_..." },
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.1-8b-instant" }
    }
  }
}
```

Warning: Groq's free tier has strict rate limits that are easy to exhaust. If you see 0 tokens being sent, you may have hit the rate limit.
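One way to soften rate-limit interruptions (a sketch, assuming you also configured OpenRouter as shown earlier) is to declare a fallback model so the gateway can fail over when Groq rejects requests:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "groq/llama-3.1-8b-instant",
        // Free OpenRouter model as a backup when Groq is rate limited
        fallbacks: ["openrouter/deepseek/deepseek-r1:free"]
      }
    }
  }
}
```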

## Troubleshooting No Output

### Check 1: Verify Environment Variables

```bash
# For Groq
echo $GROQ_API_KEY

# For OpenRouter
echo $OPENROUTER_API_KEY

# For MiniMax
echo $MINIMAX_API_KEY
```

### Check 2: Verify Model is Listed

```bash
clawdbot models list
```

The model ID you configured should appear in this list. If not, the provider isn't configured correctly.

### Check 3: Watch Gateway Logs

```bash
clawdbot gateway logs --tail 50
```

Look for errors like:

- 401 Unauthorized - API key is wrong or expired
- 403 Forbidden - Rate limited or key doesn't have access
- 404 Not Found - Model ID doesn't exist on that provider
- Connection refused - Provider endpoint is wrong
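To pull just those error classes out of a longer log, pipe the logs through grep. The log line below is an illustrative sample, not real gateway output, so the filter itself is easy to verify:

```bash
# In practice you would run:
#   clawdbot gateway logs --tail 200 | grep -E '401|403|404|Connection refused'
# Demonstrated here on a sample line:
sample_log='gateway: upstream responded 401 Unauthorized'
echo "$sample_log" | grep -Eo '401 Unauthorized|403 Forbidden|404 Not Found|Connection refused'
```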

### Check 4: Test Model Probe

```bash
clawdbot models status --probe
```

This will attempt to connect to each configured provider and report any errors.

## Switching Between Models

Once configured, you can switch models using:

```bash
# In terminal
/model openrouter/deepseek/deepseek-chat

# Or use an alias if configured
/model kimi
```

You can always change models later - nothing is permanent. Experiment with different providers to find what works best for your use case.


📋 Quick Commands

| Command | Description |
| --- | --- |
| `clawdbot models list` | List all configured models and their providers |
| `clawdbot models status` | Check authentication status for each provider |
| `clawdbot models status --probe` | Test connection to each provider with a live probe |
| `clawdbot gateway restart` | Restart the gateway after config changes (required!) |
| `clawdbot gateway logs --follow` | Watch gateway logs in real time for debugging |
| `clawdbot config get agents.defaults.model` | Check which model is currently configured as default |
| `/model <model-id>` | Switch to a different model in chat |



Still stuck? Join our Discord community for real-time help.