Want complete control over your AI assistant? Self-hosting means your data stays on your hardware, your conversations remain private, and you can customize everything. This guide covers everything from hardware requirements to daily maintenance.
Why Self-Host Your AI Assistant?
Before we dive into the how, let's talk about the why. Self-hosting your AI assistant offers advantages that cloud-based solutions simply can't match.
Privacy first: Your conversations, commands, and data never leave your network. No third party sees your emails, calendar, or personal information. See our private AI assistant guide for more on this.
Cost control: Pay for hardware once, then just API costs. No monthly subscriptions piling up. Most users spend $5-20/month on API calls.
Customization: Tweak every setting. Add custom skills. Integrate with any tool. You're not limited by what a company decides to support.
Reliability: No outages because a cloud service went down. Your AI runs when you want it to run.
Learning opportunity: Understanding how AI works at this level makes you a more capable technologist.
Hardware Requirements: What You Actually Need
Good news: you probably already have hardware that can run an AI assistant. Here's the breakdown:
Minimum requirements:
- Any modern computer (Mac, Windows, Linux)
- 4GB RAM (8GB recommended)
- 10GB free disk space
- Stable internet connection
Recommended setup:
- Mac mini, Intel NUC, or old laptop
- 16GB RAM
- SSD storage
- Wired ethernet connection
For power users:
- Dedicated mini server
- 32GB+ RAM (for local LLM inference)
- NVIDIA GPU (optional, for local models)
My recommendation: Start with what you have. An old MacBook Air from 2018 runs OpenClaw perfectly. Upgrade only when you hit limits.
The key insight: you're not running the AI model locally (unless you want to). You're running the *assistant* locally — it calls cloud APIs for the actual AI work. This means hardware requirements are modest.
The Architecture: What You're Actually Running
A self-hosted AI assistant has several pieces working together. Understanding them helps with troubleshooting.
The core components:
1. OpenClaw Gateway — The brain. Handles conversations, memory, and tool execution.
2. Node.js runtime — Powers the gateway. Install once and forget it.
3. Messaging bridge — Connects to Telegram, WhatsApp, or Discord.
4. API keys — For the AI provider (Anthropic, OpenAI) and any integrations.
5. Storage — SQLite database for memory and settings. No complex database setup needed.
Optional components:
- Home Assistant for smart home control
- Email server access for inbox management
- Calendar integration for scheduling
Step-by-Step: Initial Setup
Let's get your self-hosted AI running. Follow along with your terminal open.
Step 1: Install Node.js
Mac (with Homebrew):
`brew install node`
Windows: Download from nodejs.org
Linux:
`curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -`
`sudo apt-get install -y nodejs`
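Quick sanity check before moving on: both commands should print a version number (v22.x for Node if you used the NodeSource script above).
```
node --version
npm --version
```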
Step 2: Install OpenClaw
`npm install -g openclaw`
Step 3: Run the setup wizard
`openclaw setup`
The wizard will ask for:
- Your Anthropic API key (get one at console.anthropic.com)
- Which messaging app to connect
- Basic preferences
Step 4: Start the gateway
`openclaw start`
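Before moving on, it's worth confirming the gateway came up cleanly. A status check and a glance at the logs (both commands show up again in the troubleshooting section) will tell you:
```
openclaw status
openclaw logs
```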
That's it for basic setup. See our 30-minute setup guide for detailed walkthroughs of each step.
Running 24/7: Keeping Your AI Always Available
An AI assistant is most useful when it's always available. Here's how to keep it running.
Option 1: Background daemon (simplest)
`openclaw start --daemon`
This keeps OpenClaw running in the background. It restarts automatically if it crashes.
Option 2: System service (most reliable)
On Mac, OpenClaw can install itself as a launch agent:
`openclaw service install`
This starts the assistant when your Mac boots and keeps it running.
On Linux, create a systemd service:
```
[Unit]
Description=OpenClaw AI Assistant
After=network.target
[Service]
ExecStart=/usr/bin/openclaw start
Restart=always
User=youruser
[Install]
WantedBy=multi-user.target
```
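Save that as /etc/systemd/system/openclaw.service (adjust the ExecStart path if `which openclaw` points somewhere else), then reload systemd and enable it at boot:
```
sudo systemctl daemon-reload
sudo systemctl enable --now openclaw
sudo systemctl status openclaw
```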
Option 3: Docker (for the container-curious)
`docker run -d --name openclaw -v /path/to/data:/data openclaw/openclaw`
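If you prefer Compose, here's a minimal sketch based on the run command above. The ANTHROPIC_API_KEY variable name is an assumption; check the image's documentation for the exact environment variables it expects:
```
services:
  openclaw:
    image: openclaw/openclaw
    restart: unless-stopped
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}  # assumed variable name
    volumes:
      - ./data:/data
```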
What about sleep/hibernate?
If your computer sleeps, so does your assistant. For true 24/7, either disable sleep or use dedicated hardware like a mini PC that stays on.
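On a Mac that stays plugged in, tell it never to sleep; on Linux, mask the sleep targets:
```
# macOS: never sleep while on AC power
sudo pmset -c sleep 0

# Linux (systemd): disable suspend/hibernate entirely
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
```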
Integrating Tools and Services
The power of self-hosting is integration. Connect your AI to everything.
Email integration
Your AI can read and send email. Configure IMAP/SMTP access:
`openclaw setup email`
Works with Gmail, Outlook, and any standard email provider. Full guide: Set up AI email assistant.
Calendar sync
Connect Google Calendar or Apple Calendar:
`openclaw setup calendar`
Your AI can then check your schedule, create events, and send reminders. See Automate calendar with AI.
Smart home (Home Assistant)
If you use Home Assistant, connect it:
`openclaw setup homeassistant`
Now "turn off the lights" and "what's the temperature?" just work. Guide: Control smart home with AI.
Task management
Connect Todoist or other task apps:
`openclaw setup todoist`
Custom integrations
OpenClaw supports custom skills. If it has an API, you can integrate it.
Memory and Context: Making Your AI Smart
Self-hosting gives you full control over AI memory. This is where the magic happens.
How memory works
Your AI stores conversations, facts, and preferences in a local database. Unlike ChatGPT, that memory never leaves your machine: only the prompts you send for inference go to your chosen API provider, and nothing at all leaves if you run a local model.
Configuring memory
Edit memory settings:
`openclaw config edit`
Key options:
```
memory:
  enabled: true
  retention: forever    # or "30d", "90d", etc.
  maxTokens: 100000     # how much context to keep
```
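After editing, confirm the new values took effect with the same command the troubleshooting section uses later:
```
openclaw config show
```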
Teaching your AI
Tell your AI things you want it to remember:
- "My wife's name is Sarah"
- "I prefer morning meetings"
- "I'm allergic to shellfish"
It will remember and use these facts appropriately.
Memory hygiene
Occasionally review what your AI knows:
"What do you know about me?"
"Forget that I mentioned the secret project"
Full deep dive: AI that remembers: How memory changes everything
Security Best Practices
Self-hosting means you're responsible for security. Here's how to stay safe.
API key protection
- Never share your API keys
- Store them in environment variables, not config files
- Rotate keys periodically
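For example, export the key from your shell profile instead of hard-coding it anywhere. ANTHROPIC_API_KEY is the variable name Anthropic's own tooling conventionally reads; adjust for your provider:
```
# ~/.zshrc or ~/.bashrc: keep real keys out of version control
export ANTHROPIC_API_KEY="sk-ant-..."

# For a systemd service, load keys from a root-only env file instead:
#   EnvironmentFile=/etc/openclaw/env   (path is illustrative)
```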
Network security
- Don't expose OpenClaw to the public internet
- Use messaging apps (Telegram, WhatsApp) as the interface instead
- If you must expose it, use a reverse proxy with authentication
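A lighter-weight option than a reverse proxy for occasional remote access: an SSH tunnel gives you authenticated access without opening a public port. This assumes the gateway listens on its default port 3000, as in the troubleshooting section below.
```
ssh -N -L 3000:localhost:3000 you@your-home-server
```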
Data encryption
- Enable full-disk encryption on your host machine
- Use HTTPS for any web interfaces
Access control
- Only authorized users should be able to message your bot
- Set up user whitelists in your messaging app configuration
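What that looks like in config depends on the messaging bridge. A sketch of a Telegram allowlist might look like this (the key names are illustrative; check `openclaw config edit` for the actual schema):
```
telegram:
  allowedUsers:        # illustrative key name
    - 123456789        # your Telegram user ID
```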
Regular updates
`npm update -g openclaw`
Keep your software updated for security patches.
Backup strategy
Back up your data directory regularly:
`cp -r ~/.openclaw ~/backups/openclaw-$(date +%Y%m%d)`
Or use your existing backup solution (Time Machine, etc.).
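If you'd rather have compressed, dated archives with simple retention, something like this works (it assumes ~/backups already exists, as in the command above):
```
# Compressed, dated snapshot of the data directory
tar -czf ~/backups/openclaw-$(date +%Y%m%d).tar.gz -C ~ .openclaw

# Drop archives older than ~60 days
find ~/backups -name 'openclaw-*.tar.gz' -mtime +60 -delete
```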
Troubleshooting Common Issues
Things will break. Here's how to fix them.
"OpenClaw won't start"
1. Check Node.js is installed: `node --version`
2. Check for port conflicts: `lsof -i :3000`
3. Look at logs: `openclaw logs`
"My messages aren't getting through"
1. Verify messaging app connection: `openclaw status`
2. Check API keys are valid
3. Restart the gateway: `openclaw restart`
"AI responses are slow"
1. Check your internet connection
2. Try a different AI provider or model
3. Reduce context size if memory is huge
"Memory not working"
1. Check memory is enabled: `openclaw config show`
2. Verify database isn't corrupted: `openclaw db check`
3. Rebuild if needed: `openclaw db rebuild`
"Integration stopped working"
1. Re-authenticate: `openclaw setup [integration]`
2. Check the service's status (is Gmail down?)
3. Review integration-specific logs
Getting help
- Check the documentation
- Join the community Discord
- See full troubleshooting guide
Advanced: Local LLM Support
Want to go fully local? Run the AI model itself on your hardware.
When to consider this:
- Maximum privacy (no API calls at all)
- Offline usage requirements
- Cost optimization at high volume
- Experimentation with different models
Hardware requirements for local LLMs:
- 16GB+ RAM minimum
- 32GB+ RAM recommended
- NVIDIA GPU with 8GB+ VRAM (optional but helpful)
- 50GB+ disk space per model
Options for local models:
- Ollama: Easy setup, good model selection
- llama.cpp: Direct GGUF model support
- LM Studio: GUI for local models
Configuring OpenClaw for local:
```
provider: ollama
model: llama3:8b
endpoint: http://localhost:11434
```
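Before pointing OpenClaw at it, make sure the endpoint from that config actually answers. Pull the model, then hit Ollama's API directly:
```
ollama pull llama3:8b

# Smoke test against the endpoint in the config above
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:8b",
  "prompt": "Say hello in five words.",
  "stream": false
}'
```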
Reality check: Local models are getting better but still lag behind Claude and GPT-4 for complex tasks. Many users run local for simple queries and cloud for heavy lifting.
See our detailed guide: How to run AI locally
Maintenance and Updates
A self-hosted system needs occasional care. Here's the maintenance routine.
Weekly:
- Check logs for errors: `openclaw logs --since 7d`
- Verify integrations are working
- Quick test message to make sure everything responds
Monthly:
- Update OpenClaw: `npm update -g openclaw`
- Check disk space usage
- Review memory/database size
- Back up your data
Quarterly:
- Rotate API keys
- Review security settings
- Clean up old logs
- Test restore from backup
Annually:
- Evaluate hardware needs
- Review and clean up AI memory
- Update Node.js to latest LTS
Automating maintenance
Set up a cron job for updates and backups:
```
# Weekly update and dated backup (% must be escaped as \% inside a crontab)
0 3 * * 0 npm update -g openclaw && cp -r ~/.openclaw ~/backups/openclaw-$(date +\%Y\%m\%d)
```
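Install the job by opening your crontab and pasting the line in:
```
crontab -e
```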
Most of this is set-and-forget. The beauty of self-hosting done right: it just works.
Is Self-Hosting Right for You?
Self-hosting isn't for everyone. Here's how to decide.
Self-hosting is for you if:
✓ Privacy is a top priority
✓ You enjoy tinkering with technology
✓ You want maximum customization
✓ You have spare hardware or a home server
✓ You're comfortable with command-line basics
Maybe skip self-hosting if:
✗ You want zero maintenance
✗ Technical setup intimidates you
✗ You need 100% uptime guarantees
✗ You're happy with cloud services
The middle ground: Start with a simple self-hosted setup using the 30-minute guide. You can always go deeper later.
What's next?
- Set up your AI assistant in 30 minutes
- 10 ways I use my AI assistant
- OpenClaw vs ChatGPT
Real People Using AI Assistants
“I was intimidated by self-hosting at first, but the setup was easier than expected. Now I have an AI assistant that knows my whole work context and never shares data with anyone.”
“Running my AI on a Mac mini that stays on 24/7. It's like having a personal assistant that's always there, completely private. Best tech project I've done in years.”
“The privacy aspect sold me. As a lawyer, I can't have client information going through third-party AI services. Self-hosting solved that completely.”