2026-04-27
There's a certain kind of optimism that comes with reading a project's README. "Personal AI with 100+ skills." "Connect Claude and other LLMs to your messaging apps." "Most users complete the initial setup in under 10 minutes."
"Most users"...
The Premise
OpenClaw is an open-source AI assistant framework that lets you run a local LLM – via Ollama – and interact with it through your existing messaging apps: Discord, iMessage, WhatsApp, Telegram, and more. The pitch is compelling: a private, self-hosted AI that lives in your chat apps, runs on your hardware, and doesn't send your data anywhere.
I wanted to try it. I have a MacBook Air M3 with 24GB of RAM. How hard could it be?
Act One: The Port Problem
It started before I even got to OpenClaw. The setup wizard asked me to install Ollama, so I did. Then it asked me to restart setup. Then Ollama wouldn't start.
Error: listen tcp 127.0.0.1:11434: bind: address already in use
Something was already sitting on port 11434. Running lsof -i :11434 revealed an orphaned Ollama process – PID 26121, very much alive, doing absolutely nothing useful. A quick kill -9 26121 sorted it out. First obstacle cleared. Confidence: high.
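For the curious, the lsof-and-kill dance can be scripted. A minimal sketch, assuming `lsof` is available (it ships with macOS):

```python
import os
import signal
import subprocess

def parse_pids(lsof_output: str) -> list[int]:
    """Parse the output of `lsof -ti tcp:<port>` (one bare PID per line)."""
    return [int(tok) for tok in lsof_output.split() if tok.isdigit()]

def free_port(port: int) -> None:
    """SIGKILL whatever is listening on the port, mirroring `kill -9`."""
    try:
        out = subprocess.run(
            ["lsof", "-ti", f"tcp:{port}"],  # -t prints bare PIDs only
            capture_output=True, text=True,
        ).stdout
    except FileNotFoundError:  # lsof not installed
        return
    for pid in parse_pids(out):
        os.kill(pid, signal.SIGKILL)

# free_port(11434)  # Ollama's default port
```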
Act Two: The Version Gap
OpenClaw's launcher uses ollama launch openclaw. My freshly installed Ollama responded with:
Error: unknown command "launch" for "ollama"
Turns out ollama launch is a newer feature requiring v0.15.3 or above. My "fresh" install had pulled v0.5.4. After updating, the command worked.
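If you're scripting a preflight check for this, note the trap: version strings can't be compared as strings, because "0.5" sorts *after* "0.15" lexicographically. A sketch of a numeric comparison (the v0.15.3 minimum is the one I hit; verify against current Ollama release notes):

```python
MIN_LAUNCH_VERSION = "0.15.3"  # first release with `ollama launch`, per the error I hit

def version_tuple(v: str) -> tuple[int, ...]:
    """'v0.5.4' or '0.5.4' -> (0, 5, 4), comparable element-wise."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def supports_launch(installed: str) -> bool:
    # Tuple comparison gets 0.5.4 < 0.15.3 right; string comparison would not.
    return version_tuple(installed) >= version_tuple(MIN_LAUNCH_VERSION)
```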
Act Three: The JSON Wars
Config files are not to be trifled with.
I wanted to add iMessage as a channel, which meant editing ~/.openclaw/openclaw.json. I added the iMessage block (which I had to figure out myself). The config parser responded:
JSON5: invalid character '{' at 68:1
I'd added a second root-level JSON object instead of merging into the existing one. I fixed that. Then I got:
JSON5: invalid character '\"' at 67:3
Missing comma after the wizard block. After several rounds of cat -n and nano, I eventually gave up on surgical edits and just rewrote the whole file with a heredoc. Sometimes the right move is to start over.
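The safer route for this kind of edit is to let a parser do the merging, so a second root object or a missing comma is impossible by construction. A sketch, assuming the config is plain JSON with a top-level `channels` key (the real OpenClaw schema may differ, and true JSON5 features like comments would need a `json5` library rather than the stdlib `json`):

```python
import json

def add_channel(config_text: str, name: str, block: dict) -> str:
    """Merge a channel block into the existing root object.

    The failure mode from above: appending a second `{...}` after the
    root object, which is invalid. Parse, mutate, re-serialize instead.
    """
    config = json.loads(config_text)
    config.setdefault("channels", {})[name] = block
    return json.dumps(config, indent=2)

# Usage: read ~/.openclaw/openclaw.json, run it through add_channel,
# write the result back.
```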
Act Four: Permissions, macOS Style
iMessage needs Full Disk Access to read ~/Library/Messages/chat.db. Granting this should be simple: System Settings → Privacy & Security → Full Disk Access → add the binary.
Except imsg is a command-line binary, not a .app bundle, so it doesn't appear in the list. The file picker's "Go to Folder" shortcut (⌘⇧G) didn't work inside System Settings either. Dragging the binary in from Finder did work eventually, but the real fix turned out to be simpler: grant Full Disk Access to iTerm itself, not the binary. Run imsg chats --limit 1 from iTerm, and the process inherits iTerm's permissions.
[3020] (+1**********) last=2026-04-27T21:48:13.727Z
One chat, retrieved. Small victory.
Act Five: Discord
Setting up the Discord bot required creating an application in the Discord Developer Portal, generating an OAuth2 URL, and inviting the bot to a server. The gotcha: the default "Guild Install" scope only includes applications.commands, not bot. Without the bot scope, the bot joins the server invisibly – no member list entry, no ability to receive messages.
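For reference, here's a sketch of assembling the invite URL with both scopes; the client ID and permission bits are placeholders you'd get from the Developer Portal:

```python
from urllib.parse import urlencode

def invite_url(client_id: str, permissions: int = 0) -> str:
    """Build an OAuth2 invite link that includes BOTH scopes.

    With only `applications.commands`, the app installs "invisibly":
    no member-list entry, no message events. Adding `bot` fixes that.
    """
    params = {
        "client_id": client_id,
        "scope": "bot applications.commands",
        "permissions": permissions,  # compute the bitfield in the Portal
    }
    return "https://discord.com/oauth2/authorize?" + urlencode(params)
```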
Once that was fixed, the bot appeared. It showed as online. It showed a typing indicator when messaged. It never actually responded.
The logs revealed the problem: stuck session: state=processing age=121s. The model was timing out. The config was still pointing at kimi-k2.6:cloud – a cloud-hosted model that never actually downloaded. The bot was trying to query a model that didn't exist.
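This one is worth automating: confirm the configured model actually exists before the bot ever tries to use it. A sketch that parses `ollama list` output, assuming its current tabular layout (model name in the first column under a header row):

```python
def installed_models(ollama_list_output: str) -> set[str]:
    """Parse `ollama list`: first column of each non-header row."""
    lines = ollama_list_output.strip().splitlines()
    return {line.split()[0] for line in lines[1:] if line.split()}

def model_available(config_model: str, ollama_list_output: str) -> bool:
    # Would have caught the bot pointing at a model that never downloaded.
    return config_model in installed_models(ollama_list_output)
```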
After updating to llama3.2 (and later llama3.2:1b for speed), and keeping Ollama warm in a separate terminal tab, the bot finally responded:
{"version": "1.0", "status": "success", "data": {"memory": "This is your long-term memory."}}
When asked, "How can you help me?", it responded with:
{"OpenClaw did not find any relevant results for 'Pi vs Claude Code'. Try adjusting the query or checking other resources."}}
Not exactly "How can I help you today?" – but it was something. We were in business.
What I Learned
The hardware isn't the bottleneck. An M3 MacBook Air with 24GB RAM is more than capable of running local LLMs. The issues were entirely software and configuration.
Small models have real limits. llama3.2:1b is fast but struggles to follow complex tool-calling instructions. If you want OpenClaw to actually be useful, you need at least a 7B model.
Keep Ollama warm. The model needs to be loaded into memory before the first message arrives, or it'll time out. Running ollama run <model> in a separate terminal before messaging the bot makes a real difference.
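Instead of babysitting a terminal tab, you can ask Ollama's HTTP API to preload the model: per the Ollama API docs, a /api/generate request with no prompt just loads the model, and keep_alive controls how long it stays resident. A sketch against the default local endpoint:

```python
import json
from urllib.request import Request, urlopen

OLLAMA = "http://127.0.0.1:11434"  # Ollama's default address

def warmup_payload(model: str, keep_alive: str = "30m") -> dict:
    # No "prompt" key: an empty generate request only loads the model.
    return {"model": model, "keep_alive": keep_alive}

def warm_up(model: str) -> None:
    body = json.dumps(warmup_payload(model)).encode()
    req = Request(f"{OLLAMA}/api/generate", data=body,
                  headers={"Content-Type": "application/json"})
    urlopen(req).read()  # returns once the model is in memory

# warm_up("llama3.2:1b")  # run before messaging the bot
```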
The OpenClaw community exists for a reason. This is genuinely complex software with a lot of moving parts. The docs are good but the gap between "docs working" and "your specific setup working" can be wide. Their Discord community is probably worth joining before you start, not after you're stuck.
The Current State
Two hours later, I have:
- A Discord bot that responds to messages (slowly, with a small model)
- iMessage connected with Full Disk Access granted (though I've since removed the iMessage channel from the config, because it was cluttering the logs with errors)
- A gateway that stays running as long as I don't close the terminal
- A sore back
Next steps: download a properly capable bigger model (llama3.3... 42GB) when I have emotional bandwidth to spare, set up the gateway as a background service so it survives reboots, and actually try some of the 100+ skills OpenClaw advertises.
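For the background-service step, a launchd agent is the macOS-native route. A sketch of the plist, with the binary path and arguments as placeholders (I haven't verified OpenClaw's actual gateway command):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>local.openclaw.gateway</string>
  <key>ProgramArguments</key>
  <array>
    <!-- Placeholders: substitute the real gateway launch command. -->
    <string>/usr/local/bin/openclaw</string>
    <string>gateway</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```

Save it as ~/Library/LaunchAgents/local.openclaw.gateway.plist and load it with launchctl load ~/Library/LaunchAgents/local.openclaw.gateway.plist; KeepAlive restarts the gateway if it crashes, and RunAtLoad brings it up after reboots.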
~edit~
I did download that bigger model (llama3.3) and my laptop had a conniption fit. So: no locally run fun for me, and since I'm not willing to pay $$ for tokens, I guess we wait until local models get smarter. Oh well. That was fun.