
# Memory & Semantic Search — Setup & Configuration

A complete guide to configuring OpenClaw's memory system, semantic search, and embeddings, and to fixing common issues such as context overflow, memory files not being written, and voice message failures.

## ⚠️ The Problem

Users experience various memory-related issues, including:

- Context overflow errors preventing the bot from responding
- Daily memory logs not being written to the /memory folder
- Semantic vector search causing rate limiting with free-tier providers
- The bot losing context mid-conversation or after gateway restarts
- Voice messages being ignored when session memory is enabled

Common error messages and symptoms include:

- `Context overflow: prompt too large for the model. Try again with less input or a larger-context model.`
- `⚠️ Agent failed before reply: No API key found for provider "anthropic"`
- Memory files showing incorrect timestamps or missing days
- The bot saying messages were "forgotten / compacted" or having no memory of recent conversations
- Voice messages being silently dropped with an 80%+ failure rate when `memorySearch.experimental.sessionMemory` is enabled
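If you're not sure which of these symptoms you're hitting, a quick pass over the gateway logs usually narrows it down. This is a minimal sketch, assuming `openclaw gateway logs` works as shown in Step 6 below and that the error strings above appear verbatim in the log output:

```bash
# Dump recent gateway logs once, then look for the known failure signatures
openclaw gateway logs --lines 200 > /tmp/openclaw-recent.log

grep -q "Context overflow" /tmp/openclaw-recent.log && echo "context overflow -> see Step 2"
grep -q "No API key found for provider" /tmp/openclaw-recent.log && echo "missing provider API key -> check provider config"
grep -qiE "429|rate.?limit" /tmp/openclaw-recent.log && echo "embedding rate limiting -> see Step 3"
```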

## 🔍 Why This Happens

Memory issues stem from several root causes:

1. **Aggressive compaction settings:** The default `safeguard` compaction mode summarizes conversations when hitting token thresholds, causing unexpected context loss. The `memoryFlush` feature can be too aggressive, truncating history at soft thresholds.
2. **Rate limiting with free-tier embedding providers:** Using the Gemini free tier or another rate-limited provider for vector memory search causes the embedding service to fail, which cascades into bot failures.
3. **Gateway not running or crashing:** Memory logs are written by the gateway service. If it isn't running, crashes silently, or restarts unexpectedly, daily logs won't be created and sessions lose context.
4. **File permission or path issues:** Especially on Windows, the bot may lack permission to write to the memory directory, or path separators cause problems.
5. **Session memory + voice message bug:** The experimental `sessionMemory` feature has a known conflict with voice message processing in recent releases (particularly post-January 2026), causing voice messages to be silently dropped.
6. **Incorrect pruning configuration:** The `contextPruning` settings (`mode`, `ttl`, `keepLastAssistants`) directly affect how much conversation history is retained.
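To see which of these root causes applies to your install, it helps to look at the raw config before changing anything. A rough sketch, assuming your config lives at `~/.openclaw/config.json` as in the examples below (it's JSON5, so a plain `grep` is used rather than a strict JSON parser like `jq`):

```bash
# Settings most often behind memory problems, straight from the config file
CONFIG="$HOME/.openclaw/config.json"

grep -nE '"(contextTokens|contextPruning|keepLastAssistants|ttl)"' "$CONFIG"   # pruning (causes 1, 6)
grep -nE '"(compaction|memoryFlush)"' "$CONFIG"                                # compaction (cause 1)
grep -nE '"(memorySearch|provider|sessionMemory)"' "$CONFIG"                   # embeddings / voice bug (causes 2, 5)

# Is the gateway up at all? (cause 3)
openclaw gateway status
```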

## The Fix

## Step 1: Diagnose Your Memory Issue

First, identify which type of memory issue you're experiencing:

```bash
# Check gateway status (is it running?)
openclaw gateway status

# Check your OpenClaw version
openclaw --version

# Verify the memory feature is enabled
openclaw configure --get features.memory

# Check that the memory directory exists, has correct permissions, and the disk isn't full
ls -la ~/.openclaw/memory/
df -h ~/.openclaw/
```

## Step 2: Fix Context Overflow Issues

If you're seeing `Context overflow: prompt too large for the model`, try these fixes in order:

**Quick Fix - Reset the Session:**

```bash
# In Telegram/Discord, use the reset command
/reset
```

**Increase Context Limits:**

```json5
// In ~/.openclaw/config.json
{
  "contextTokens": 400000,        // Double your current limit
  "contextPruning": {
    "mode": "cache-ttl",
    "ttl": "24h",                 // Longer TTL
    "keepLastAssistants": 100     // Keep more history
  }
}
```

**Disable Aggressive Compaction:**

```json5
{
  "compaction": {
    "mode": "archive",            // Preserves the full conversation
    "memoryFlush": {
      "enabled": false            // Disable aggressive flush
    }
  }
}
```

**Nuclear Option - Disable All Pruning:**

```json5
{
  "contextPruning": {
    "mode": "disabled"            // No automatic pruning at all
  }
}
```
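Whichever option you pick, it's worth backing up `config.json` before editing and sanity-checking the result before restarting the gateway. A minimal sketch, assuming the config is JSON5 and that Node.js is available (the `json5` npm package is used here only for validation and is not part of OpenClaw):

```bash
CONFIG="$HOME/.openclaw/config.json"

# Keep a timestamped backup so a bad edit is easy to roll back
cp "$CONFIG" "$CONFIG.bak.$(date +%Y%m%d%H%M%S)"

# Edit the file, then confirm it still parses as JSON5 before restarting
npx json5 "$CONFIG" > /dev/null && echo "config OK" || echo "config has a syntax error - restore the backup"

openclaw gateway restart
```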

## Step 3: Fix Rate-Limited Embedding Providers

If you're using the Gemini free tier or another rate-limited provider for vector search and getting failures:

**Option A - Disable Vector Memory Temporarily:**

```bash
openclaw configure --section memory
# Set provider to "none" or "local"
```

**Option B - Edit Config Directly:**

```json5
// In ~/.openclaw/config.json
{
  "memory": {
    "provider": "none"   // or "local" for local embeddings
  }
}
```

**Then restart:**

```bash
openclaw gateway restart
```

**Option C - Switch to a Non-Rate-Limited Provider:**

```json5
{
  "memorySearch": {
    "enabled": true,
    "provider": "openai",   // More reliable, requires an API key
    "query": {
      "maxResults": 30,
      "minScore": 0.15
    }
  }
}
```
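Before switching providers, it's worth confirming that rate limiting is actually the failure mode. A quick sketch, assuming the gateway logs include the provider's HTTP errors (the exact wording of the log lines is an assumption):

```bash
# Look for HTTP 429s or explicit rate-limit messages from the embedding provider
openclaw gateway logs --lines 500 | grep -iE "429|rate.?limit|quota" \
  && echo "embedding provider is being rate limited - apply Option A, B, or C" \
  || echo "no rate-limit errors found - the problem may be elsewhere (see Step 6)"
```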

## Step 4: Fix Memory Files Not Being Written

If daily memory logs aren't appearing in /memory:

**Check Gateway Status:**

```bash
openclaw gateway status
```

**Verify Memory Directory and Permissions:**

```bash
# Linux/macOS
ls -la ~/.openclaw/data/
mkdir -p ~/.openclaw/memory
chmod 755 ~/.openclaw/memory
```

```powershell
# Windows (PowerShell as Admin)
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.openclaw\memory"
```

**Check for Path Issues (Windows):** Windows users may have issues with file path separators. Ensure your config uses forward slashes or properly escaped backslashes.

**Force Memory Flush:**

```bash
# Restart the gateway to trigger a memory write
openclaw gateway restart
# Wait up to 24 hours for the next daily log (logs are written at midnight UTC)
```
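If the logs still don't show up, a small check script can tell you whether today's file was ever written and whether the gateway is up. This is a sketch only: the daily file name pattern (`YYYY-MM-DD*`) is an assumption and may not match what your OpenClaw version actually writes.

```bash
MEMDIR="$HOME/.openclaw/memory"
TODAY=$(date -u +%F)   # logs are written at midnight UTC, so compare in UTC

# Hypothetical file name pattern - adjust to whatever your install actually writes
if ls "$MEMDIR"/"$TODAY"* >/dev/null 2>&1; then
  echo "today's memory log exists"
else
  echo "no memory log for $TODAY - checking the gateway"
  # Assumes `status` exits non-zero when the gateway is down
  openclaw gateway status || openclaw gateway restart
fi
```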

## Step 5: Fix Voice Messages Being Ignored

If voice messages work only ~20% of the time or are silently dropped:

**Disable Experimental Session Memory:**

```json5
{
  "memorySearch": {
    "experimental": {
      "sessionMemory": false   // Disable this - known conflict with voice
    }
  }
}
```

**Downgrade to a Stable Version (if the issue persists):**

```bash
npm install -g openclaw@2026.1.15   # Known stable for voice
openclaw gateway restart
```

**Check Voice Pipeline Configuration:**

```json5
{
  "voice": {
    "transcription": {
      "provider": "openai",   // Whisper is most reliable
      "model": "whisper-1"
    }
  }
}
```
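To confirm the workaround actually took effect, check the installed version and whether `sessionMemory` is still enabled anywhere in the config. A small sketch, assuming the config lives at `~/.openclaw/config.json`:

```bash
# Version check: anything newer than 2026.1.15 may still carry the voice conflict
openclaw --version

# Make sure sessionMemory is off (a plain grep, since the config is JSON5)
grep -n "sessionMemory" "$HOME/.openclaw/config.json"

# Apply the change
openclaw gateway restart
```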

## Step 6: Fix Context Loss Mid-Conversation

If the bot keeps "waking up" fresh or loses context unexpectedly:

**Check for Gateway Crashes:**

```bash
# Check system logs for crashes
journalctl -u openclaw --since "24 hours ago" | tail -50

# Check gateway logs
openclaw gateway logs --lines 200

# Look for OOM (Out of Memory) kills
dmesg | grep -i "killed process"
```

**Verify Session Persistence:**

```bash
# Check for session resume files
ls -t ~/clawd/memory/session-resume-*.json 2>/dev/null

# Check memory file timestamps
ls -la ~/.openclaw/memory/
```

**Increase System Resources:** If running on a Raspberry Pi or a low-memory VPS, the gateway may be OOM-killed:

```bash
# Add swap space
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```
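If the gateway is being OOM-killed or crashing, having systemd restart it automatically limits the damage until you fix the underlying resource problem. A sketch for Linux only, assuming OpenClaw runs as a systemd unit named `openclaw` (the same unit name the `journalctl` command above uses):

```bash
# Add a drop-in so the gateway restarts itself after a crash
sudo mkdir -p /etc/systemd/system/openclaw.service.d
sudo tee /etc/systemd/system/openclaw.service.d/restart.conf > /dev/null <<'EOF'
[Service]
Restart=on-failure
RestartSec=10
EOF

sudo systemctl daemon-reload
sudo systemctl restart openclaw
```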

## Step 7: Recommended Memory Configuration

Here's a battle-tested config for maximum memory retention:

```json5
// ~/.openclaw/config.json
{
  "contextTokens": 400000,
  "contextPruning": {
    "mode": "cache-ttl",
    "ttl": "24h",
    "keepLastAssistants": 100
  },
  "compaction": {
    "mode": "archive",
    "memoryFlush": {
      "enabled": false
    }
  },
  "memorySearch": {
    "enabled": true,
    "provider": "local",        // or "openai" for better quality
    "query": {
      "maxResults": 30,
      "minScore": 0.15
    },
    "experimental": {
      "sessionMemory": false    // Disable until the voice bug is fixed
    }
  },
  "features": {
    "memory": true
  }
}
```

**After making changes:**

```bash
openclaw gateway restart
```
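After the restart, a quick verification pass confirms the new settings are live and memory is actually being written. This reuses only the commands already shown in Steps 1 and 6:

```bash
# Confirm the gateway came back up with the new config
openclaw gateway status
openclaw configure --get features.memory

# Watch recent logs for compaction or rate-limit errors
openclaw gateway logs --lines 100

# Check that memory files keep getting written
ls -la ~/.openclaw/memory/
```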


## 📋 Quick Commands

| Command | Description |
| --- | --- |
| `openclaw gateway status` | Check if the gateway service is running (required for memory features) |
| `openclaw gateway restart` | Restart the gateway to apply config changes and fix stuck states |
| `openclaw configure --get features.memory` | Verify the memory feature is enabled in your configuration |
| `openclaw configure --section memory` | Interactive configuration for memory settings |
| `openclaw --version` | Check your OpenClaw version (critical for debugging) |
| `/reset` | Reset the current session in Telegram/Discord (clears context overflow) |
| `/new` | Start a new session (alternative to `/reset`) |
| `ls -la ~/.openclaw/memory/` | List memory files and check timestamps |
| `journalctl -u openclaw --since "24 hours ago"` | Check system logs for gateway crashes (Linux) |


## Still stuck?

Join our Discord community for real-time help.