# Ollama Local LLM — No Output or Broken Tools
Local models via Ollama show empty responses or can't use tools. Configuration tips.
## ⚠️ The Problem
You've set up Ollama with a local model, but:
The session ends with `Answer ✔️ Completed` but no actual response text, or the model hallucinates tool calls (writing them out as text) instead of actually calling them.
## 🔍 Why This Happens
Local LLMs (especially smaller ones) have limitations:
- Weak tool support — Many models can't reliably call tools
- Low max tokens — Response cut off before completing
- Wrong API format — Model expects different prompt format
- Reasoning models — Some models output thinking but no final answer
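A quick way to tell a real tool call from a hallucinated one is to look at the response structure: a model with working tool support returns a structured `tool_calls` field (in the OpenAI-compatible format Ollama's `/v1` endpoint uses), while a weak model just writes tool-call-looking JSON into `content`. A minimal sketch (the helper name and heuristic are ours, not part of any API):

```python
import json

def classify_tool_use(message: dict) -> str:
    """Classify an assistant message from an OpenAI-style chat response.

    Returns "real" if the model emitted a structured tool_calls field,
    "hallucinated" if it only wrote tool-call-shaped JSON into content,
    and "plain" otherwise.
    """
    if message.get("tool_calls"):
        return "real"
    content = message.get("content") or ""
    # Heuristic: content that parses as a JSON object with a function name
    # is a tool call written as prose instead of a structured call.
    try:
        parsed = json.loads(content)
        if isinstance(parsed, dict) and ("name" in parsed or "function" in parsed):
            return "hallucinated"
    except ValueError:
        pass
    return "plain"

# A model with working tool support:
real = {"content": None,
        "tool_calls": [{"id": "call_1", "type": "function",
                        "function": {"name": "read_file", "arguments": "{}"}}]}
# A weak model writing the call out as text:
fake = {"content": '{"name": "read_file", "arguments": {}}'}

print(classify_tool_use(real))  # real
print(classify_tool_use(fake))  # hallucinated
```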
## ✅ The Fix
First, increase the max tokens in your model config:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "models": [{ "id": "llama3:8b", "maxTokens": 8192 }]
      }
    }
  }
}
```

For better tool support, use a model known to work well:
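To confirm that a cut-off answer really is a token-limit problem, check the `finish_reason` on the response: OpenAI-compatible endpoints, including Ollama's `/v1` API, report `"length"` when generation stopped at the token cap instead of `"stop"`. A minimal sketch against a canned response (the helper name is ours):

```python
def was_truncated(response: dict) -> bool:
    """True if the first choice stopped because it hit the token limit."""
    choices = response.get("choices", [])
    return bool(choices) and choices[0].get("finish_reason") == "length"

# Shape of an OpenAI-compatible /v1/chat/completions response (abridged):
resp = {"choices": [{"message": {"content": "The answer is"},
                     "finish_reason": "length"}]}

if was_truncated(resp):
    print("Response was cut off - raise maxTokens in the model config.")
```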
- `llama3:8b` or `llama3:70b`
- `mistral:7b`
- `codellama:13b` (for coding tasks)
Avoid models built primarily for reasoning, as well as coding models that lack chat capability.
If tools still don't work, you can disable them for this model:
```json
"agents": {
  "defaults": {
    "models": {
      "ollama/yourmodel": {
        "tools": { "enabled": false }
      }
    }
  }
}
```

Test the model directly:
```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3:8b", "messages": [{"role": "user", "content": "Hello"}]}'
```
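The same direct test can be scripted from Python using only the standard library, assuming Ollama is running on its default port. A minimal sketch (the function names are ours; the live call is commented out because it needs a running server):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for an OpenAI-compatible chat completion."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def chat(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# chat("llama3:8b", "Hello")  # requires a running Ollama server
```

If this returns a normal reply but the agent still shows nothing, the problem is in the agent configuration rather than the model itself.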
## 📋 Quick Commands
| Command | Description |
|---|---|
| `ollama list` | List installed models |
| `ollama run llama3:8b` | Test model directly |
| `openclaw models status` | Check model configuration |