Ollama Local LLM — No Output or Broken Tools
Local models via Ollama show empty responses or can't use tools. Configuration tips.
⚠️ The Problem
You've set up Ollama with a local model, but:
```
Answer ✔️ Completed
(no actual response text)
```
Or the model hallucinates tool calls instead of actually calling them.

🔍 Why This Happens
Local LLMs (especially smaller ones) have limitations:
1. Weak tool support — Many models can't reliably call tools
2. Low max tokens — Response cut off before completing
3. Wrong API format — Model expects different prompt format
4. Reasoning models — Some models output thinking but no final answer
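To narrow down which of these applies, it helps to inspect the model you actually have installed. A minimal check, assuming llama3:8b is the model in your config (the exact fields shown vary by Ollama version):

```bash
# Confirm the Ollama daemon is reachable and the model is installed
ollama list

# Print the model's parameters, prompt template, and (on newer versions) context length
ollama show llama3:8b
```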
✅ The Fix
First, increase the max tokens in your model config:
```json
{
  "models": {
    "providers": {
      "ollama": {
        "models": [{ "id": "llama3:8b", "maxTokens": 8192 }]
      }
    }
  }
}
```

For better tool support, use a model known to work well:
- llama3:8b or llama3:70b
- mistral:7b
- codellama:13b (for coding tasks)
Avoid models designed primarily for reasoning or coding without chat capability.
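If you switch models, pull the new one and give it a quick smoke test before pointing your agent at it. A minimal sketch, assuming a default local Ollama install:

```bash
# Download the model weights (several GB on first pull)
ollama pull llama3:8b

# One-shot prompt to confirm the model actually produces output
ollama run llama3:8b "Reply with the single word: pong"
```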
If tools still don't work, you can disable them for this model:
```json
{
  "agents": {
    "defaults": {
      "models": {
        "ollama/yourmodel": { "tools": { "enabled": false } }
      }
    }
  }
}
```

Test the model directly:
```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3:8b", "messages": [{"role": "user", "content": "Hello"}]}'
```
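If plain chat works but tool use is still flaky, you can probe tool calling through the same OpenAI-compatible endpoint. This is a rough sketch (the get_weather function is a made-up example, not part of any real config); a model with working tool support should return a tool_calls entry instead of describing the call in prose:

```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3:8b",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }]
  }'
```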
📋 Quick Commands
| Command | Description |
|---|---|
| ollama list | List installed models |
| ollama run llama3:8b | Test model directly |
| openclaw models status | Check model configuration |