[{"data":1,"prerenderedAt":1967},["ShallowReactive",2],{"blog-post-wsl2-windows-ai-agents-setup":3,"related-posts-wsl2-windows-ai-agents-setup":378},{"id":4,"title":5,"author":6,"body":10,"category":355,"date":356,"description":357,"extension":358,"featured":359,"image":360,"imageHeight":361,"imageWidth":361,"meta":362,"navigation":363,"path":364,"readingTime":365,"seo":366,"seoTitle":367,"stem":368,"tags":369,"updatedDate":356,"__hash__":377},"blog/blog/wsl2-windows-ai-agents-setup.md","How to Install WSL2 on Windows for AI Agents: OpenClaw, Hermes, and Claude Code (2026)",{"name":7,"role":8,"avatar":9},"Shabnam Katoch","Growth Head","/img/avatars/shabnam-profile.jpeg",{"type":11,"value":12,"toc":335},"minimark",[13,17,20,23,26,29,34,41,44,55,58,61,67,74,77,83,90,93,103,109,116,119,123,128,135,146,155,159,165,168,171,175,181,184,195,201,204,208,215,230,238,242,248,251,254,257,260,267,271,274,277,280,283,286,296,300,304,307,311,314,318,321,325,328,332],[14,15,16],"p",{},"All three major agent frameworks recommend WSL2 as the primary Windows path. Here's the unified setup that works for all of them, the 4 Windows-specific traps that break your agent, and the option that skips WSL2 entirely.",[14,18,19],{},"A developer on the Blink blog summarized it perfectly: \"You just read through Docker setup, WSL2 configuration, systemd, Node.js version management, firewall rules, CVE patches, security hardening, and three different auto-start strategies. None of that is the agent. All of it is infrastructure.\"",[14,21,22],{},"That's the Windows AI agent experience in one paragraph.",[14,24,25],{},"OpenClaw's official docs say: \"WSL2 is recommended. Native Windows might be trickier.\" Hermes calls WSL2 \"our most battle-tested Windows path.\" Claude Code requires a Unix terminal. 
On Windows, that means WSL2.",[14,27,28],{},"Here's the unified setup that gets all three frameworks running on Windows, the four traps that will waste your afternoon if you don't know about them, and the alternative that makes all of this unnecessary.",[30,31,33],"h2",{"id":32},"the-wsl2-setup-10-minutes-one-command-to-start","The WSL2 setup (10 minutes, one command to start)",[14,35,36],{},[37,38],"img",{"alt":39,"src":40},"WSL2 setup in four steps: PowerShell install, Ubuntu username, Node.js 22, and agent framework install","/img/blog/wsl2-setup-4-steps.jpg",[14,42,43],{},"Step 1: Enable WSL2. Open PowerShell as Administrator. Run:",[45,46,51],"pre",{"className":47,"code":49,"language":50},[48],"language-text","wsl --install\n","text",[52,53,49],"code",{"__ignoreMap":54},"",[14,56,57],{},"This enables WSL2 and installs Ubuntu 24.04. Restart when prompted. After restart, Ubuntu launches automatically. Create a Linux username and password (separate from your Windows login).",[14,59,60],{},"Step 2: Verify WSL version. In PowerShell:",[45,62,65],{"className":63,"code":64,"language":50},[48],"wsl --list --verbose\n",[52,66,64],{"__ignoreMap":54},[14,68,69,70,73],{},"You should see Ubuntu with VERSION 2. If it shows VERSION 1, upgrade: ",[52,71,72],{},"wsl --set-version Ubuntu 2",".",[14,75,76],{},"Step 3: Install Node.js 22. Inside Ubuntu (open from Start Menu):",[45,78,81],{"className":79,"code":80,"language":50},[48],"curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -\nsudo apt-get install -y nodejs\n",[52,82,80],{"__ignoreMap":54},[14,84,85,86,89],{},"Verify: ",[52,87,88],{},"node --version"," (should show v22.x).",[14,91,92],{},"Step 4: Install your framework. 
This is where the paths diverge.",[14,94,95,96,99,100],{},"For OpenClaw: ",[52,97,98],{},"npm install -g clawdbot"," then ",[52,101,102],{},"openclaw setup",[14,104,105,106],{},"For Hermes: ",[52,107,108],{},"curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash",[14,110,111,112,115],{},"For Claude Code: ",[52,113,114],{},"npm install -g @anthropic-ai/claude-code"," then set your API key",[14,117,118],{},"The unified WSL2 advantage: One Linux environment runs all three. Install OpenClaw for always-on messaging, Hermes for goal-tracked automation, and Claude Code for coding. They share the same Ubuntu, the same Node.js, the same filesystem. No conflicts.",[30,120,122],{"id":121},"the-4-windows-specific-traps-read-this-before-debugging-for-3-hours","The 4 Windows-specific traps (read this before debugging for 3 hours)",[124,125,127],"h3",{"id":126},"trap-1-cross-filesystem-io-is-10-20x-slower","Trap 1: Cross-filesystem I/O is 10-20x slower",[14,129,130,131,134],{},"What happens: You store your agent data on your Windows drive (accessible inside WSL as ",[52,132,133],{},"/mnt/c/Users/...","). Every file read and write is 10-20x slower than native Linux filesystem access. Your agent feels sluggish. Memory file access crawls.",[14,136,137,138,141,142,145],{},"The fix: Always work in your WSL home directory (",[52,139,140],{},"/home/yourusername/","). Never store agent data on ",[52,143,144],{},"/mnt/c/",". 
The speed difference is dramatic and invisible unless you know to look for it.",[14,147,148,149,154],{},"For the ",[150,151,153],"a",{"href":152},"/blog/openclaw-not-working","general OpenClaw troubleshooting guide",", our guide covers the broader diagnosis including WSL-specific issues.",[124,156,158],{"id":157},"trap-2-wsl-ip-changes-on-every-restart","Trap 2: WSL IP changes on every restart",[14,160,161],{},[37,162],{"alt":163,"src":164},"WSL2 IP changes on every restart, breaking port forwarding for external access","/img/blog/wsl2-ip-changes-trap.jpg",[14,166,167],{},"What happens: WSL2 uses a virtual network adapter. The IP address changes after every Windows restart. If you set up port forwarding to access your agent from your phone, the forwarding rule breaks after reboot.",[14,169,170],{},"The fix: Create a PowerShell script that detects the current WSL IP and updates the forwarding rule. Run it at startup. Or use Tailscale inside WSL2 for persistent access without port forwarding.",[124,172,174],{"id":173},"trap-3-wsl2-eats-your-ram","Trap 3: WSL2 eats your RAM",[14,176,177],{},[37,178],{"alt":179,"src":180},"WSL2 defaults to 80% of system RAM; .wslconfig caps it at 4GB so Windows stays responsive","/img/blog/wsl2-ram-config.jpg",[14,182,183],{},"What happens: By default, WSL2 can use up to 80% of your system RAM. Your Windows machine becomes sluggish while your agent runs. Background tasks slow to a crawl.",[14,185,186,187,190,191,194],{},"The fix: Create ",[52,188,189],{},".wslconfig"," in ",[52,192,193],{},"C:\\Users\\YourName\\",":",[45,196,199],{"className":197,"code":198,"language":50},[48],"[wsl2]\nmemory=4GB\nprocessors=2\n",[52,200,198],{"__ignoreMap":54},[14,202,203],{},"This limits WSL2 to 4GB RAM and 2 CPU cores. Enough for one agent. 
Adjust based on your hardware.",[124,205,207],{"id":206},"trap-4-systemd-isnt-always-enabled","Trap 4: systemd isn't always enabled",[14,209,210,211,214],{},"What happens: OpenClaw's daemon mode (",[52,212,213],{},"--install-daemon",") and Hermes's gateway need systemd to auto-start. Some older WSL2 versions don't enable systemd by default.",[14,216,217,218,221,222,225,226,229],{},"The fix: Add ",[52,219,220],{},"[boot] systemd=true"," to ",[52,223,224],{},"/etc/wsl.conf"," inside Ubuntu. Restart WSL: ",[52,227,228],{},"wsl --shutdown"," then reopen.",[14,231,232,233,237],{},"If setting up WSL2, configuring systemd, managing RAM limits, refreshing port forwarding on every reboot, and debugging cross-filesystem performance sounds like more Windows infrastructure work than building agent workflows, ",[150,234,236],{"href":235},"/openclaw-alternative","BetterClaw eliminates the WSL2 requirement entirely",". No WSL2. No Linux kernel. No PowerShell scripts. The agent runs in the cloud. You access it from any browser on any operating system. Free tier with 1 agent and BYOK. $19/month per agent for Pro. 60-second deploy.",[30,239,241],{"id":240},"which-framework-to-install-first-the-decision","Which framework to install first (the decision)",[14,243,244],{},[37,245],{"alt":246,"src":247},"Decision tree for picking your primary AI agent framework: OpenClaw/BetterClaw, Hermes Agent, Claude Code, or all three","/img/blog/wsl2-framework-decision-tree.jpg",[14,249,250],{},"OpenClaw if you want a messaging agent with 50+ channel support and the largest skill ecosystem (230K+ stars). Self-hosted. You manage updates, security (138+ CVEs), and infrastructure.",[14,252,253],{},"Hermes if you want goal tracking, multi-agent Kanban, and self-maintaining skills. Fewer channels (20) but more durable task completion. Reportedly more stable than OpenClaw.",[14,255,256],{},"Claude Code if you want a coding agent in your terminal. Different category entirely (coding vs life automation). 
Most developers run Claude Code alongside one of the other two.",[14,258,259],{},"All three coexist in the same WSL2 Ubuntu without conflicts. OpenClaw uses npm. Hermes uses Python (uv). Claude Code uses npm with a separate global package. No collisions.",[14,261,148,262,266],{},[150,263,265],{"href":264},"/compare","detailed comparison of OpenClaw vs Claude Code",", our comparison covers the category distinction.",[30,268,270],{"id":269},"the-honest-question-do-you-actually-need-wsl2","The honest question (do you actually need WSL2?)",[14,272,273],{},"Here's the take nobody else gives you.",[14,275,276],{},"WSL2 is a workaround. The frameworks need Linux. Your machine runs Windows. WSL2 bridges the gap. But the gap exists because you're running a server workload on a desktop operating system.",[14,278,279],{},"The always-on agent use case (gateway running 24/7, responding to messages while you sleep) doesn't belong on a Windows laptop with WSL2. It belongs on a server. Your laptop sleeps. Your agent stops. Your gateway loses its websocket connections. Your cron jobs miss their schedules.",[14,281,282],{},"WSL2 is excellent for development and testing. It's fragile for production. The RAM limits, the IP changes, the systemd quirks, the cross-filesystem penalty. All of these exist because WSL2 is a compatibility layer, not a native runtime.",[14,284,285],{},"If you want a production always-on agent, you have three options: a VPS ($5-10/month, you manage everything), dedicated hardware ($80-700, always on at home), or a managed platform ($0-19/month, no infrastructure).",[14,287,288,289,295],{},"If the managed path interests you, ",[150,290,294],{"href":291,"rel":292},"https://app.betterclaw.io/sign-in",[293],"nofollow","give BetterClaw a try",". Free tier with 1 agent and BYOK. $19/month per agent for Pro. No WSL2. No Linux. No Windows infrastructure. The agent runs on our servers. You interact through your chat apps. 
The operating system on your laptop doesn't matter.",[30,297,299],{"id":298},"frequently-asked-questions","Frequently Asked Questions",[124,301,303],{"id":302},"do-i-need-wsl2-to-run-ai-agents-on-windows","Do I need WSL2 to run AI agents on Windows?",[14,305,306],{},"For OpenClaw and Hermes, WSL2 is the recommended path. OpenClaw's docs call WSL2 \"recommended for best performance.\" Hermes calls it \"our most battle-tested Windows path.\" Claude Code requires a Unix terminal (WSL2 on Windows). Native Windows support exists for OpenClaw and Hermes but with known bugs and limited features. BetterClaw doesn't need WSL2 at all (cloud-based).",[124,308,310],{"id":309},"how-long-does-wsl2-setup-take-for-ai-agents","How long does WSL2 setup take for AI agents?",[14,312,313],{},"The WSL2 installation itself takes 5-10 minutes (one command + restart + Node.js install). Installing an agent framework adds 2-5 minutes. The traps (filesystem performance, RAM limits, systemd, port forwarding) can add hours of debugging if you don't know about them. Total with traps avoided: 15-20 minutes. Total without knowing the traps: potentially an afternoon.",[124,315,317],{"id":316},"can-i-run-openclaw-and-hermes-in-the-same-wsl2-instance","Can I run OpenClaw and Hermes in the same WSL2 instance?",[14,319,320],{},"Yes. They use different runtimes (OpenClaw uses Node.js, Hermes uses Python/uv) and don't conflict. Claude Code also coexists as a separate npm global package. Many power users run all three in the same Ubuntu WSL2 environment for different purposes: OpenClaw for messaging, Hermes for goal-tracked automation, Claude Code for coding.",[124,322,324],{"id":323},"does-wsl2-work-for-always-on-agents","Does WSL2 work for always-on agents?",[14,326,327],{},"For testing: yes. For production: not recommended. WSL2 shuts down when Windows sleeps. The IP address changes on restart. RAM is shared with Windows. 
A VPS ($5-10/month), dedicated hardware (Pi or mini PC), or a managed platform like BetterClaw ($0-19/month) is more reliable for 24/7 always-on agents.",[124,329,331],{"id":330},"whats-the-alternative-to-wsl2-for-windows-ai-agent-users","What's the alternative to WSL2 for Windows AI agent users?",[14,333,334],{},"BetterClaw runs entirely in the cloud. No WSL2, no Linux, no local installation. The agent runs on managed infrastructure accessible from any browser. Free tier with 1 agent and BYOK. $19/month per agent for Pro. For users who want self-hosted without WSL2, Docker Desktop on Windows is an option but adds its own complexity (Hyper-V, container management, port mapping).",{"title":54,"searchDepth":336,"depth":336,"links":337},2,[338,339,346,347,348],{"id":32,"depth":336,"text":33},{"id":121,"depth":336,"text":122,"children":340},[341,343,344,345],{"id":126,"depth":342,"text":127},3,{"id":157,"depth":342,"text":158},{"id":173,"depth":342,"text":174},{"id":206,"depth":342,"text":207},{"id":240,"depth":336,"text":241},{"id":269,"depth":336,"text":270},{"id":298,"depth":336,"text":299,"children":349},[350,351,352,353,354],{"id":302,"depth":342,"text":303},{"id":309,"depth":342,"text":310},{"id":316,"depth":342,"text":317},{"id":323,"depth":342,"text":324},{"id":330,"depth":342,"text":331},"Guides","2026-05-15","OpenClaw, Hermes, and Claude Code all need WSL2 on Windows. 
Here's the unified setup, the 4 traps that waste your afternoon, and the option that skips it.","md",false,"/img/blog/wsl2-windows-ai-agents-setup.jpg",null,{},true,"/blog/wsl2-windows-ai-agents-setup","10 min read",{"title":5,"description":357},"WSL2 for AI Agents: OpenClaw, Hermes, Claude Code","blog/wsl2-windows-ai-agents-setup",[370,371,372,373,374,375,376],"WSL2 AI agents","WSL2 OpenClaw","WSL2 Hermes Agent","Claude Code Windows","WSL2 setup Windows 10","AI agent Windows","WSL2 guide 2026","a_k6r3O8wlq58qwd_uAK0BiOJb9FDvhAjQcoBjhKRMg",[379,714,1121],{"id":380,"title":381,"author":382,"body":383,"category":355,"date":696,"description":697,"extension":358,"featured":359,"image":698,"imageHeight":361,"imageWidth":361,"meta":699,"navigation":363,"path":700,"readingTime":701,"seo":702,"seoTitle":703,"stem":704,"tags":705,"updatedDate":696,"__hash__":713},"blog/blog/free-openclaw-agent-openrouter-setup.md","How to Run a Free OpenClaw Agent in 5 Minutes Using OpenRouter",{"name":7,"role":8,"avatar":9},{"type":11,"value":384,"toc":684},[385,391,394,397,400,403,406,409,412,416,425,428,431,437,441,448,451,454,460,464,472,475,478,481,485,488,495,501,507,513,516,524,528,531,534,547,550,553,557,560,563,566,569,577,581,584,587,590,593,601,605,608,611,614,622,626,629,632,639,642,644,649,652,657,660,665,668,673,676,681],[14,386,387],{},[388,389,390],"em",{},"No API bill. No credit card. No infrastructure headaches. Here's exactly how we did it.",[14,392,393],{},"Someone dropped a comment on one of our Reddit threads last week that stopped me mid-scroll.",[14,395,396],{},"\"BYOK sounds great but what if I don't want to pay for an API key either?\"",[14,398,399],{},"Fair. Really fair.",[14,401,402],{},"We've been saying \"bring your own API keys\" like it's the generous option. 
But for someone who just wants to test whether an AI agent is actually useful before spending a dollar, even getting an OpenRouter key feels like one more step in a wall of friction.",[14,404,405],{},"So we tried something. We set up a completely working OpenClaw agent, on BetterClaw's free tier, using only free models from OpenRouter.",[14,407,408],{},"$0 total. Not \"$5 free credits.\" Not \"basically free.\" Zero.",[14,410,411],{},"Here's exactly what we did, what we ran into, and what you should know before you try it.",[30,413,415],{"id":414},"step-1-get-a-free-api-key-from-openrouter-2-minutes","Step 1: Get a Free API Key from OpenRouter (2 Minutes)",[14,417,418,419,424],{},"Go to ",[150,420,423],{"href":421,"rel":422},"https://openrouter.ai/",[293],"openrouter.ai",". Sign up. That's it.",[14,426,427],{},"You now have access to 30+ free models. The ones worth knowing about for agent work: Llama 3.3 70b, DeepSeek R1, and Qwen3 Coder 480b. The base limit is 50 requests per day on most free models, but some go up to 1,000 free requests per day.",[14,429,430],{},"For a daily briefing agent or a lightweight personal assistant? 1,000 requests per day is more than enough. Most real-world agent usage runs 10 to 30 requests per session.",[14,432,433],{},[37,434],{"alt":435,"src":436},"OpenRouter free tier dashboard showing Llama 3.3 70b, DeepSeek R1, and Qwen3 Coder 480b with daily request limits ranging from 50 to 1,000 free requests per day","/img/blog/free-openclaw-agent-openrouter-setup-openrouter-models.jpg",[30,438,440],{"id":439},"step-2-sign-up-for-betterclaw-free-tier-2-minutes","Step 2: Sign Up for BetterClaw Free Tier (2 Minutes)",[14,442,418,443,447],{},[150,444,446],{"href":291,"rel":445},[293],"the BetterClaw app",". No card. No trial countdown. Takes about 2 minutes.",[14,449,450],{},"When it asks for your API key, paste in the OpenRouter key you just generated. 
Then select one of the free models as your default.",[14,452,453],{},"Which free model should you pick? Honestly, for most agent tasks, Llama 3.3 70b or DeepSeek R1 handle daily briefings, summarization, email triage, and basic research just fine. They're not Claude Sonnet. But for a free agent doing routine tasks, they're more than good enough.",[14,455,456],{},[37,457],{"alt":458,"src":459},"BetterClaw LLM configuration screen showing OpenRouter API key field and free model dropdown with Llama 3.3 70b selected as default for the agent","/img/blog/free-openclaw-agent-openrouter-setup-llm-config.jpg",[30,461,463],{"id":462},"step-3-connect-your-channel-1-minute","Step 3: Connect Your Channel (1 Minute)",[14,465,466,467,471],{},"BetterClaw connects to 15+ platforms out of the box. For a free setup, Telegram is the cleanest option. Takes about 60 seconds to get a bot token from BotFather and paste it into the channel config. For the ",[150,468,470],{"href":469},"/guide/integrate-telegram-with-betterclaw","step-by-step Telegram walkthrough",", our setup guide covers BotFather and pairing.",[14,473,474],{},"If you want to connect to Slack, Discord, or WhatsApp instead, the process is similar. The agent starts responding on whatever channel you pick.",[14,476,477],{},"And that's it. You're done.",[14,479,480],{},"Five minutes from nothing to a working AI agent. Cost: $0.",[30,482,484],{"id":483},"whats-the-catch-with-free-models","\"What's the Catch With Free Models?\"",[14,486,487],{},"Here's where we'll be straight with you.",[14,489,490,494],{},[491,492,493],"strong",{},"Free models are slower than paid models."," You'll notice the latency. Not unbearable, but noticeable. Expect 3 to 8 seconds per response instead of under 2.",[14,496,497,500],{},[491,498,499],{},"Complex multi-step tasks get shaky."," If you need your agent to run a 10-step research chain with tool calls at each step, free models stumble. 
They'll misinterpret instructions, skip steps, or hallucinate a tool result. Single-step and two-step tasks? Totally fine.",[14,502,503,506],{},[491,504,505],{},"Rate limits exist."," You're on shared infrastructure. During peak hours you might get queued. Not often, but it happens.",[14,508,509,512],{},[491,510,511],{},"Quality varies by model and by day."," Some days DeepSeek R1 is sharp. Some days it rambles. You learn which model handles which task better over time. This is the honest part of \"free.\"",[14,514,515],{},"But for a \"try before you spend\" setup or a simple daily assistant use case, free models work better than most people expect. We were surprised.",[14,517,518,519,523],{},"If you're curious about how different models perform on OpenClaw tasks generally, we've done a ",[150,520,522],{"href":521},"/blog/best-llm-for-openclaw-glm-5-1-claude-sonnet-minimax","detailed LLM comparison for OpenClaw use cases"," that goes deeper into benchmarks and tradeoffs.",[30,525,527],{"id":526},"will-i-hit-the-100-task-limit","\"Will I Hit the 100 Task Limit?\"",[14,529,530],{},"With this setup? Probably not in month one.",[14,532,533],{},"Here's the rough math:",[535,536,537,541,544],"ul",{},[538,539,540],"li",{},"1 daily briefing cron: ~30 tasks per month",[538,542,543],{},"1 weekly report cron: ~4 tasks per month",[538,545,546],{},"15 to 20 ad-hoc requests per week: ~70 tasks per month",[14,548,549],{},"Total: roughly 100. Tight, but workable if you're not running 5 daily crons.",[14,551,552],{},"If you find yourself constantly hitting the limit, that's the signal the agent is useful enough to upgrade. Month one on free? You'll be fine.",[30,554,556],{"id":555},"why-we-built-this-option","Why We Built This Option",[14,558,559],{},"We've talked to a lot of people who got interested in OpenClaw after seeing it hit 230,000+ GitHub stars and land on the front page of Hacker News. 
They followed a setup tutorial, ran into Docker issues or YAML configs, and quietly gave up.",[14,561,562],{},"The OpenClaw maintainer himself once warned: \"if you can't understand how to run a command line, this is far too dangerous of a project for you to use safely.\"",[14,564,565],{},"That's a real barrier. Not everyone needs to cross it just to try an AI agent.",[14,567,568],{},"BetterClaw exists because we think more people should be able to experience what a well-configured autonomous agent actually feels like, without the infrastructure tax. The free tier plus OpenRouter's free models is the lowest-friction version of that we've been able to build.",[14,570,571,572,576],{},"If you've been curious about ",[150,573,575],{"href":574},"/openclaw-hosting","what managed OpenClaw hosting actually includes"," versus spinning something up on a VPS yourself, that page walks through the full comparison.",[30,578,580],{"id":579},"the-moment-it-actually-starts-feeling-useful","The Moment It Actually Starts Feeling Useful",[14,582,583],{},"Here's what we didn't expect to be true: the first time your free-tier agent quietly runs a morning briefing, summarizes your overnight Slack threads, or answers a question without you lifting a finger, something clicks.",[14,585,586],{},"It's not the technology that lands. It's the time.",[14,588,589],{},"Most people who try this setup report the same thing. The agent does something useful. They go back to their day. Then they check the output an hour later and think, \"I would have spent 20 minutes on that.\"",[14,591,592],{},"That's when the \"is this worth paying for\" question answers itself.",[14,594,595,596,600],{},"If you want your OpenClaw agent running in 60 seconds with your own API keys and no usage caps, ",[150,597,599],{"href":598},"/pricing","BetterClaw's Pro plan is $19/month per agent"," (up to 25 agents, each billed at $19/month). 
Bring your own keys, pick any of 28+ model providers, and the infrastructure is completely managed. No Docker. No YAML. No 2 AM debugging.",[30,602,604],{"id":603},"what-happens-when-youre-ready-to-upgrade","What Happens When You're Ready to Upgrade",[14,606,607],{},"The free tier is a starting point, not a ceiling.",[14,609,610],{},"When you move to Pro at $19/month, you get persistent memory with hybrid vector and keyword search, real-time health monitoring, auto-pause on anomalies, and multi-channel support from a single agent. You can also swap in Claude Sonnet or GPT-4o the moment you want sharper reasoning on complex tasks.",[14,612,613],{},"The upgrade takes about 30 seconds inside the same dashboard.",[14,615,616,617,621],{},"If you've been running agents on a VPS or trying to self-host OpenClaw and the maintenance burden is getting old, take a look at ",[150,618,620],{"href":619},"/compare/self-hosted","how BetterClaw compares to self-hosting and managed alternatives",". The hidden costs of DIY infrastructure add up faster than most people realize.",[30,623,625],{"id":624},"give-it-a-try","Give It a Try",[14,627,628],{},"If you've been on the fence about whether AI agents are actually useful, this is the lowest-stakes test you can run.",[14,630,631],{},"No credit card. No infrastructure decision. No API bill waiting at the end of the month.",[14,633,634,638],{},[150,635,637],{"href":291,"rel":636},[293],"Sign up for BetterClaw's free tier",", grab a free OpenRouter key, and have a working agent in 5 minutes. If it's useful, you'll know. If it's not, you've lost nothing but 5 minutes.",[14,640,641],{},"We handle the infrastructure. 
You handle the interesting part.",[30,643,299],{"id":298},[14,645,646],{},[491,647,648],{},"What is a free OpenClaw agent and how does it work?",[14,650,651],{},"A free OpenClaw agent is a fully functional AI assistant built on OpenClaw, the open-source agent framework with 230,000+ GitHub stars, deployed on BetterClaw's free tier and powered by OpenRouter's free model tier. BetterClaw handles the hosting, security, and channel connections. OpenRouter provides access to models like Llama 3.3 70b and DeepSeek R1 at no cost. You get a working autonomous agent at $0.",[14,653,654],{},[491,655,656],{},"How does OpenRouter's free tier compare to a paid API key for OpenClaw?",[14,658,659],{},"OpenRouter's free models are slower (3 to 8 seconds per response vs. under 2 for paid) and less reliable on complex multi-step tasks. For simple daily tasks like briefings, summarization, and research lookups, the quality gap is small. For sophisticated chains with multiple tool calls, a paid model like Claude Sonnet or GPT-4o will perform significantly better. The free setup is ideal for evaluation and lightweight personal use.",[14,661,662],{},[491,663,664],{},"How long does it take to set up a free OpenClaw agent with BetterClaw?",[14,666,667],{},"About 5 minutes total. Getting a free API key from OpenRouter takes roughly 2 minutes. Signing up for BetterClaw's free tier takes another 2 minutes. Connecting your Telegram channel takes about 1 minute. The agent starts responding immediately after setup. No Docker, no YAML, no terminal required.",[14,669,670],{},[491,671,672],{},"Is BetterClaw's free tier actually free, or are there hidden costs?",[14,674,675],{},"The free tier is genuinely free. No credit card required, no trial period, no automatic upgrade. You get 1 agent slot and up to 100 tasks per month at $0. If you use OpenRouter's free models, your total monthly cost is $0. 
If you exceed the task limit or want persistent memory and multi-channel support, the Pro plan is $19/month per agent (up to 25 agents, each billed at $19/month) with bring-your-own API keys.",[14,677,678],{},[491,679,680],{},"Is it safe to run an OpenClaw agent on a free plan?",[14,682,683],{},"Yes. BetterClaw runs all agents in Docker-sandboxed execution environments with AES-256 encryption for credentials, regardless of which plan you're on. The security architecture is the same on free as on Pro. Your API keys are never stored in plaintext. Given that security researchers have found over 30,000 internet-exposed OpenClaw instances without authentication, using a managed platform with built-in sandboxing is significantly safer than a self-hosted setup, especially for someone new to agent infrastructure.",{"title":54,"searchDepth":336,"depth":336,"links":685},[686,687,688,689,690,691,692,693,694,695],{"id":414,"depth":336,"text":415},{"id":439,"depth":336,"text":440},{"id":462,"depth":336,"text":463},{"id":483,"depth":336,"text":484},{"id":526,"depth":336,"text":527},{"id":555,"depth":336,"text":556},{"id":579,"depth":336,"text":580},{"id":603,"depth":336,"text":604},{"id":624,"depth":336,"text":625},{"id":298,"depth":336,"text":299},"2026-04-25","Run a fully working OpenClaw agent for $0 using BetterClaw's free tier and OpenRouter's free models. No credit card. No Docker. 
Setup takes 5 minutes.","/img/blog/free-openclaw-agent-openrouter-setup.jpg",{},"/blog/free-openclaw-agent-openrouter-setup","7 min read",{"title":381,"description":697},"Free OpenClaw Agent Setup With OpenRouter in 5 Minutes","blog/free-openclaw-agent-openrouter-setup",[706,707,708,709,710,711,712],"free openclaw agent","openrouter free api key","openclaw free setup","betterclaw free tier","run openclaw for free","openclaw no credit card","free ai agent setup","MH1BX0hSPYHJ6IMEreHwoKBfo2S3-hR0WbT0Ys9JfSs",{"id":715,"title":716,"author":717,"body":718,"category":355,"date":356,"description":1105,"extension":358,"featured":359,"image":1106,"imageHeight":361,"imageWidth":361,"meta":1107,"navigation":363,"path":1108,"readingTime":365,"seo":1109,"seoTitle":1110,"stem":1111,"tags":1112,"updatedDate":356,"__hash__":1120},"blog/blog/hermes-agent-docker-install.md","How to Install Hermes Agent with Docker: Step-by-Step Guide (2026)",{"name":7,"role":8,"avatar":9},{"type":11,"value":719,"toc":1090},[720,723,726,729,732,738,741,745,751,754,760,763,766,772,775,778,784,787,801,805,808,814,817,820,823,830,834,847,853,856,863,866,873,880,887,891,897,900,906,912,918,925,932,936,942,945,974,979,998,1004,1008,1011,1014,1017,1020,1027,1029,1033,1048,1052,1055,1059,1071,1075,1078,1082],[14,721,722],{},"Hermes Agent ships an official Docker image. Three commands to setup, one to run 24/7. But there are two Docker modes that most guides conflate, and one data persistence mistake that wipes your skills. Here's the guide that covers both.",[14,724,725],{},"A developer on MindStudio wrote: \"You will forget which container holds which agent within two weeks.\"",[14,727,728],{},"He was running four Hermes instances. Different models. Different Telegram bots. Different skill libraries. All in separate Docker containers. All with nearly identical names. And no labeling system.",[14,730,731],{},"That's the Docker experience in one sentence. It works. 
You just have to manage it.",[14,733,734,735,73],{},"Hermes Agent (23,000+ GitHub stars, growing fast) ships an official Docker image from Nous Research. The install is genuinely simple: three commands to setup, one to run. But the Docker deployment has two distinct modes that most guides conflate, and one data persistence mistake that silently wipes your accumulated skills on the next ",[52,736,737],{},"docker pull",[14,739,740],{},"Here's the complete guide.",[30,742,744],{"id":743},"the-setup-three-commands-five-minutes","The setup (three commands, five minutes)",[14,746,747],{},[37,748],{"alt":749,"src":750},"nousresearch/hermes-agent installation flow: create data directory, run setup wizard, start gateway daemon","/img/blog/hermes-docker-install-flow.jpg",[14,752,753],{},"Step 1: Create the data directory.",[45,755,758],{"className":756,"code":757,"language":50},[48],"mkdir -p ~/.hermes\n",[52,759,757],{"__ignoreMap":54},[14,761,762],{},"This is where your config, API keys, sessions, skills, and memories live on the host machine.",[14,764,765],{},"Step 2: Run the setup wizard.",[45,767,770],{"className":768,"code":769,"language":50},[48],"docker run -it --rm -v ~/.hermes:/opt/data nousresearch/hermes-agent setup\n",[52,771,769],{"__ignoreMap":54},[14,773,774],{},"This drops you into the interactive wizard. It asks for your LLM provider (Anthropic, OpenAI, DeepSeek, OpenRouter, etc.), your API key, and which messaging channels to connect (Telegram, Discord, Slack, WhatsApp).",[14,776,777],{},"Step 3: Run the gateway in the background.",[45,779,782],{"className":780,"code":781,"language":50},[48],"docker run -d --name hermes --restart unless-stopped -v ~/.hermes:/opt/data -p 8642:8642 nousresearch/hermes-agent gateway run\n",[52,783,781],{"__ignoreMap":54},[14,785,786],{},"Your agent is now running 24/7. Port 8642 exposes the gateway's OpenAI-compatible API server and health endpoint. 
It's optional if you only use messaging platforms but required for the dashboard and external tools.",[14,788,789,790,793,794,796,797,800],{},"The critical detail: The ",[52,791,792],{},"-v ~/.hermes:/opt/data"," volume mount is what keeps your data safe. Without it, your config, skills, and memories live inside the container and vanish on the next ",[52,795,737],{},". Always mount ",[52,798,799],{},"/opt/data"," to a host directory.",[30,802,804],{"id":803},"the-two-docker-modes-this-is-where-most-guides-get-confusing","The two Docker modes (this is where most guides get confusing)",[14,806,807],{},"Here's what nobody tells you about Hermes and Docker.",[14,809,810],{},[37,811],{"alt":812,"src":813},"Hermes Docker Mode 1 (Hermes inside Docker for VPS deployment) vs Mode 2 (Docker as a sandboxed terminal backend for local development)","/img/blog/hermes-docker-mode-1-vs-mode-2.jpg",[14,815,816],{},"Mode 1: Hermes running inside Docker. This is the standard deployment. The entire agent (gateway, skills, memory, messaging) runs inside the container. You interact through Telegram, Discord, or other channels. The container is your server. This is what the setup above configures.",[14,818,819],{},"Mode 2: Docker as a terminal backend. Hermes runs on your host machine (not in Docker). But every command the agent executes runs inside a Docker sandbox container. The sandbox survives across tool calls, new sessions, and subagents. This is for developers who want the agent on their machine but want command execution isolated.",[14,821,822],{},"The confusion: Most guides mix these two modes. \"Install Hermes with Docker\" could mean either. If you want a 24/7 agent on a VPS, you want Mode 1. 
If you want safe local development with isolated execution, you want Mode 2.",[14,824,148,825,829],{},[150,826,828],{"href":827},"/blog/openclaw-security-risks","detailed comparison of Hermes features in v0.13",", our security analysis covers how both Docker modes handle credential isolation.",[30,831,833],{"id":832},"the-production-checklist-what-breaks-after-day-one","The production checklist (what breaks after day one)",[14,835,836,837,840,841,843,844,846],{},"Problem 1: Volume mount missing. You ran ",[52,838,839],{},"docker run"," without ",[52,842,792],{},". The agent works. Skills accumulate. Memory grows. Then you update with ",[52,845,737],{}," and restart. Everything is gone. The container was the only copy.",[14,848,849,850,73],{},"The fix: Always use the volume mount. Always verify with ",[52,851,852],{},"docker inspect hermes | grep Mounts",[14,854,855],{},"Problem 2: Node version mismatch inside the container. Docker users hit issues when the container's Node.js version conflicts with skills that expect a specific version. The official image pins Node, but community images vary.",[14,857,858,859,862],{},"The fix: Use only the official ",[52,860,861],{},"nousresearch/hermes-agent"," image. Community images may lag behind or use incompatible base images.",[14,864,865],{},"Problem 3: .venv permission issues. On some host configurations, the Python virtual environment inside the container has permission conflicts with the mounted volume. Skills fail to install with cryptic permission errors.",[14,867,868,869,872],{},"The fix: Ensure the container user has write permissions to the mounted directory. ",[52,870,871],{},"chown -R 1000:1000 ~/.hermes"," on the host before starting the container.",[14,874,875,876,879],{},"Problem 4: Port 8642 conflicts. If you're running multiple Hermes instances (different agents, different bots), each needs a different host port mapping. 
",[52,877,878],{},"docker run -p 8643:8642"," for the second instance, etc.",[14,881,882,883,886],{},"If managing Docker volumes, port mappings, Node version conflicts, permission issues, and multi-container orchestration for an AI agent sounds like more DevOps than agent building, ",[150,884,885],{"href":235},"BetterClaw eliminates the Docker layer entirely",". No containers. No volume mounts. No port mapping. Deploy in 60 seconds from a browser. Free tier with 1 agent and BYOK. $19/month per agent for Pro.",[30,888,890],{"id":889},"updating-the-part-that-catches-people","Updating (the part that catches people)",[14,892,893],{},[37,894],{"alt":895,"src":896},"Hermes Docker update steps: docker pull, docker stop, docker rm, docker run — data in ~/.hermes survives because it lives on the host","/img/blog/hermes-docker-update-steps.jpg",[14,898,899],{},"The update process:",[14,901,902,903],{},"Pull the new image: ",[52,904,905],{},"docker pull nousresearch/hermes-agent:latest",[14,907,908,909],{},"Stop and remove the old container: ",[52,910,911],{},"docker stop hermes && docker rm hermes",[14,913,914,915],{},"Start a new container with the same volume mount: ",[52,916,917],{},"docker run -d --name hermes --restart unless-stopped -v ~/.hermes:/opt/data -p 8642:8642 nousresearch/hermes-agent gateway run",[14,919,920,921,924],{},"Your data survives because it lives in ",[52,922,923],{},"~/.hermes"," on the host. The container is disposable. The data directory is permanent. This is the correct Docker pattern for stateful applications.",[14,926,927,928,931],{},"The mistake to avoid: Using ",[52,929,930],{},"docker compose up -d"," with a build step instead of pull. 
If your compose file builds from source instead of pulling the official image, updates require rebuilding, which takes longer and can introduce build failures.",[30,933,935],{"id":934},"docker-compose-for-the-organized","Docker Compose (for the organized)",[14,937,938],{},[37,939],{"alt":940,"src":941},"docker-compose.yml with multiple Hermes agents: hermes-work on port 8642 and hermes-personal on port 8643, with separate data directories and Telegram bots","/img/blog/hermes-docker-compose-multi-agent.jpg",[14,943,944],{},"If you prefer declarative configuration, here's the pattern:",[14,946,947,948,951,952,955,956,958,959,962,963,966,967,970,971,73],{},"Create a ",[52,949,950],{},"docker-compose.yml"," with the service name ",[52,953,954],{},"hermes",", using the official image ",[52,957,861],{},", restart policy ",[52,960,961],{},"unless-stopped",", port ",[52,964,965],{},"8642:8642",", volume ",[52,968,969],{},"~/.hermes:/opt/data",", and command ",[52,972,973],{},"gateway run",[14,975,976,977],{},"Then: ",[52,978,930],{},[14,980,981,982,985,986,989,990,993,994,997],{},"For multiple agents: Duplicate the service block with different names, ports, and data directories. ",[52,983,984],{},"hermes-work"," on port 8642 with ",[52,987,988],{},"~/.hermes-work:/opt/data",". ",[52,991,992],{},"hermes-personal"," on port 8643 with ",[52,995,996],{},"~/.hermes-personal:/opt/data",". 
Each agent has isolated skills, memory, and messaging channels.",[14,999,148,1000,1003],{},[150,1001,1002],{"href":264},"comparison between managed and self-hosted agent deployment",", our comparison covers what you manage yourself versus what a platform handles.",[30,1005,1007],{"id":1006},"the-honest-assessment-docker-for-hermes-vs-managed-alternatives","The honest assessment (Docker for Hermes vs managed alternatives)",[14,1009,1010],{},"Here's the take.",[14,1012,1013],{},"Docker is the right choice for Hermes if you want full control, you're comfortable with container management, and you're running on a VPS you already have. The official image is well-maintained. The volume mount pattern is clean. Updates are pull-stop-remove-restart.",[14,1015,1016],{},"Docker is the wrong choice if you don't want to manage containers, you're not comfortable with port mapping and volume permissions, or you need the agent running without thinking about infrastructure.",[14,1018,1019],{},"AlphaSignal's recommendation for Hermes v0.13 was \"Tenacity, not Production.\" The Docker setup is stable. The agent inside it is still maturing. Known issues: macOS Python 3.13 conflicts, .venv permissions, and /goal's judge model can complete goals prematurely.",[14,1021,1022,1023,1026],{},"If you want an always-on agent without Docker, volume mounts, port mapping, and container lifecycle management, ",[150,1024,294],{"href":291,"rel":1025},[293],". Free tier with 1 agent and BYOK. $19/month per agent for Pro. 15+ messaging channels. Persistent memory. No Docker required. The agent runs. 
The infrastructure is ours.",[30,1028,299],{"id":298},[124,1030,1032],{"id":1031},"how-do-i-install-hermes-agent-with-docker","How do I install Hermes Agent with Docker?",[14,1034,1035,1036,1039,1040,1043,1044,1047],{},"Three commands: create a data directory (",[52,1037,1038],{},"mkdir -p ~/.hermes","), run the setup wizard (",[52,1041,1042],{},"docker run -it --rm -v ~/.hermes:/opt/data nousresearch/hermes-agent setup","), then start the gateway (",[52,1045,1046],{},"docker run -d --name hermes --restart unless-stopped -v ~/.hermes:/opt/data -p 8642:8642 nousresearch/hermes-agent gateway run","). Total time: about 5 minutes. The setup wizard asks for your API provider and messaging channels.",[124,1049,1051],{"id":1050},"whats-the-difference-between-hermes-docker-mode-1-and-mode-2","What's the difference between Hermes Docker Mode 1 and Mode 2?",[14,1053,1054],{},"Mode 1 runs the entire Hermes agent inside a Docker container (standard VPS deployment). Mode 2 runs Hermes on your host but uses Docker as a sandboxed terminal backend for command execution. Mode 1 is for always-on agents. Mode 2 is for local development with isolated execution. Most VPS deployments use Mode 1.",[124,1056,1058],{"id":1057},"how-do-i-update-hermes-agent-in-docker-without-losing-data","How do I update Hermes Agent in Docker without losing data?",[14,1060,1061,1062,1064,1065,1067,1068,1070],{},"Pull the new image (",[52,1063,905],{},"), stop and remove the old container (",[52,1066,911],{},"), then start a new container with the same volume mount. Your data in ",[52,1069,923],{}," survives because it's on the host, not inside the container. The image is stateless by design.",[124,1072,1074],{"id":1073},"how-much-does-it-cost-to-run-hermes-agent-with-docker","How much does it cost to run Hermes Agent with Docker?",[14,1076,1077],{},"The Docker image and Hermes software are free (MIT license). 
You need a VPS ($5-10/month for Hetzner, DigitalOcean, or Contabo with 2+ CPU cores and 8GB RAM) plus your AI model API costs (varies by provider and usage). Total: $5-50/month depending on model choice and usage volume. BetterClaw offers managed deployment at $0 (free tier) or $19/month (Pro) without Docker.",[124,1079,1081],{"id":1080},"is-hermes-agent-stable-enough-for-production-docker-deployment","Is Hermes Agent stable enough for production Docker deployment?",[14,1083,1084,1085,1087,1088,800],{},"For testing and personal use: yes, the Docker setup is reliable. For production business use: proceed with caution. AlphaSignal assessed v0.13 as \"Tenacity, not Production.\" Known issues include Python 3.13 conflicts, .venv permission bugs in Docker, and the /goal judge model making premature completion decisions. Use the official ",[52,1086,861],{}," image and always mount ",[52,1089,799],{},{"title":54,"searchDepth":336,"depth":336,"links":1091},[1092,1093,1094,1095,1096,1097,1098],{"id":743,"depth":336,"text":744},{"id":803,"depth":336,"text":804},{"id":832,"depth":336,"text":833},{"id":889,"depth":336,"text":890},{"id":934,"depth":336,"text":935},{"id":1006,"depth":336,"text":1007},{"id":298,"depth":336,"text":299,"children":1099},[1100,1101,1102,1103,1104],{"id":1031,"depth":342,"text":1032},{"id":1050,"depth":342,"text":1051},{"id":1057,"depth":342,"text":1058},{"id":1073,"depth":342,"text":1074},{"id":1080,"depth":342,"text":1081},"Install Hermes Agent with Docker in 5 minutes. 
Official image, setup wizard, gateway mode, two Docker modes explained, and the data persistence mistake to avoid.","/img/blog/hermes-agent-docker-install.jpg",{},"/blog/hermes-agent-docker-install",{"title":716,"description":1105},"Hermes Agent Docker Install: Step-by-Step (2026)","blog/hermes-agent-docker-install",[1113,1114,1115,1116,1117,1118,1119],"Hermes Agent Docker","install Hermes Docker","Hermes Agent Docker setup","Hermes Docker guide","Hermes Agent container","Hermes Docker compose","Hermes VPS Docker","srncRXC1lvz8E0AwKv6eh-efi_eGAJqDnluR05yXwnM",{"id":1122,"title":1123,"author":1124,"body":1125,"category":355,"date":1949,"description":1950,"extension":358,"featured":359,"image":1951,"imageHeight":361,"imageWidth":361,"meta":1952,"navigation":363,"path":1953,"readingTime":1954,"seo":1955,"seoTitle":1956,"stem":1957,"tags":1958,"updatedDate":1949,"__hash__":1966},"blog/blog/openclaw-ollama-guide.md","OpenClaw + Ollama: What Works and What Doesn't (2026)",{"name":7,"role":8,"avatar":9},{"type":11,"value":1126,"toc":1934},[1127,1132,1135,1138,1141,1144,1150,1153,1157,1160,1171,1178,1184,1187,1193,1196,1239,1242,1250,1256,1260,1263,1268,1271,1276,1279,1284,1287,1292,1307,1310,1316,1320,1325,1328,1331,1336,1339,1344,1347,1352,1360,1366,1370,1373,1379,1385,1391,1397,1409,1412,1415,1555,1563,1569,1573,1576,1580,1583,1589,1624,1627,1631,1638,1654,1658,1669,1682,1688,1692,1695,1698,1701,1721,1724,1732,1735,1743,1747,1750,1755,1758,1763,1766,1771,1774,1831,1837,1845,1849,1852,1855,1858,1861,1864,1870,1872,1877,1883,1888,1891,1896,1914,1919,1922,1927,1930],[14,1128,1129],{},[388,1130,1131],{},"We tested every recommended local model. Some chat fine. None reliably call tools. Here's the full picture.",[14,1133,1134],{},"I spent a Saturday afternoon trying to get Qwen3 8B running through Ollama as my OpenClaw agent's primary model. Zero API costs. Full privacy. The dream setup.",[14,1136,1137],{},"The model loaded. The gateway started. 
I typed \"hello.\" It responded instantly. This is going to work.",[14,1139,1140],{},"Then I asked it to check my calendar. The agent generated a narrative essay about how it would check my calendar if it could, instead of actually calling the calendar tool. I asked it to search the web. Same thing. Beautiful prose about web searching. Zero actual web searches.",[14,1142,1143],{},"Three hours later, I'd tried four different models, two different API configurations, and one custom provider workaround from a GitHub issue. The chat worked perfectly every time. The tool calling failed silently every time.",[14,1145,1146,1147],{},"Here's what nobody tells you about the OpenClaw Ollama setup: ",[491,1148,1149],{},"chat and tool calling are completely different capabilities, and local models in 2026 handle the first one well and the second one poorly.",[14,1151,1152],{},"This guide covers exactly what works, exactly what doesn't, and the specific scenarios where Ollama with OpenClaw is genuinely worth the effort.",[30,1154,1156],{"id":1155},"the-fundamental-problem-streaming-breaks-tool-calling","The fundamental problem: streaming breaks tool calling",[14,1158,1159],{},"This is the root cause of most OpenClaw Ollama failures, and it's documented in GitHub Issue #5769.",[14,1161,1162,1163,1166,1167,1170],{},"OpenClaw sends ",[52,1164,1165],{},"stream: true"," on every model request. This is fine for cloud providers like Anthropic and OpenAI, whose streaming implementations properly emit tool call responses. But Ollama's streaming implementation doesn't correctly return ",[52,1168,1169],{},"tool_calls"," delta chunks.",[14,1172,1173,1174,1177],{},"What happens: your local model decides to call a tool (web_search, exec, browser). It generates the tool call in its response. But the streaming protocol drops it. OpenClaw receives empty content with ",[52,1175,1176],{},"finish_reason: \"stop\""," instead of the tool call. 
The tool never executes.",[14,1179,1180,1183],{},[491,1181,1182],{},"The result: your agent can have conversations but can't perform actions."," No file operations. No web searches. No shell commands. No skill execution. The model writes about what it would do instead of doing it.",[14,1185,1186],{},"This affects every Ollama model configured through OpenClaw. Mistral, Qwen, Llama, DeepSeek local variants. All of them.",[14,1188,1189,1192],{},[491,1190,1191],{},"OpenClaw + Ollama = chat works. Tool calling doesn't."," This isn't a config problem. It's an architectural mismatch between OpenClaw's streaming requirement and Ollama's tool call implementation.",[14,1194,1195],{},"The community has proposed a fix: a per-provider config option to disable streaming when tools are present. The suggested code is straightforward:",[45,1197,1201],{"className":1198,"code":1199,"language":1200,"meta":54,"style":54},"language-javascript shiki shiki-themes github-light","const shouldStream = !(context.tools?.length && isOllamaProvider(model));\n","javascript",[52,1202,1203],{"__ignoreMap":54},[1204,1205,1208,1212,1216,1219,1222,1226,1229,1232,1236],"span",{"class":1206,"line":1207},"line",1,[1204,1209,1211],{"class":1210},"sD7c4","const",[1204,1213,1215],{"class":1214},"sYu0t"," shouldStream",[1204,1217,1218],{"class":1210}," =",[1204,1220,1221],{"class":1210}," !",[1204,1223,1225],{"class":1224},"sgsFI","(context.tools?.",[1204,1227,1228],{"class":1214},"length",[1204,1230,1231],{"class":1210}," &&",[1204,1233,1235],{"class":1234},"s7eDp"," isOllamaProvider",[1204,1237,1238],{"class":1224},"(model));\n",[14,1240,1241],{},"As of March 2026, this hasn't been merged into a release. 
Until it is, local models through Ollama are limited to chat-only interactions.",[14,1243,1244,1245,1249],{},"For a detailed breakdown of all ",[150,1246,1248],{"href":1247},"/blog/openclaw-local-model-not-working","five ways local models fail in OpenClaw"," (including discovery timeouts, WSL2 networking, and the CLI vs API confusion), our troubleshooting guide covers each failure mode.",[14,1251,1252],{},[37,1253],{"alt":1254,"src":1255},"Diagram showing OpenClaw streaming request flow with Ollama tool call being dropped","/img/blog/openclaw-ollama-streaming-bug.jpg",[30,1257,1259],{"id":1258},"what-actually-works-with-openclaw-ollama","What actually works with OpenClaw + Ollama",[14,1261,1262],{},"The streaming bug kills tool calling. But not everything in OpenClaw requires tools. Here's what genuinely works.",[14,1264,1265],{},[491,1266,1267],{},"Basic conversation",[14,1269,1270],{},"This works perfectly. Ask questions. Get answers. Have discussions. The agent responds through whatever chat platform you've connected (Telegram, WhatsApp, Slack). If all you want is a private chatbot that runs on your hardware, Ollama delivers.",[14,1272,1273],{},[491,1274,1275],{},"Memory and context",[14,1277,1278],{},"Ollama models maintain conversation context through OpenClaw's memory system. The agent remembers previous messages, stores preferences, and builds context over time. This works the same as cloud models for conversational interactions.",[14,1280,1281],{},[491,1282,1283],{},"SOUL.md personality",[14,1285,1286],{},"Your agent's personality configuration works normally with local models. Customize tone, behavior rules, and working context. The model follows the system prompt instructions.",[14,1288,1289],{},[491,1290,1291],{},"Model switching mid-conversation",[14,1293,1294,1295,1298,1299,1302,1303,1306],{},"The ",[52,1296,1297],{},"/model"," command works with Ollama models. You can switch between local and cloud providers on the fly. 
Type ",[52,1300,1301],{},"/model ollama/qwen3:8b"," for a quick local response, then ",[52,1304,1305],{},"/model anthropic/claude-sonnet-4-6"," when you need tool execution.",[14,1308,1309],{},"This hybrid approach is actually the best use of Ollama in OpenClaw: local for chat, cloud for actions.",[14,1311,1312],{},[37,1313],{"alt":1314,"src":1315},"OpenClaw chat working correctly with Ollama local model on Telegram","/img/blog/openclaw-ollama-chat-working.jpg",[30,1317,1319],{"id":1318},"what-breaks-and-why-you-cant-config-your-way-around-it","What breaks (and why you can't config your way around it)",[14,1321,1322],{},[491,1323,1324],{},"Tool calling (the big one)",[14,1326,1327],{},"Every skill that requires the agent to call a tool fails silently. This includes: web search, file read/write, shell command execution, browser automation, email skills, calendar skills, and essentially every skill that makes an agent more than a chatbot.",[14,1329,1330],{},"The model generates the intent to call the tool. The streaming protocol loses it. OpenClaw never receives the instruction. No error message appears. The agent just produces text instead of action.",[14,1332,1333],{},[491,1334,1335],{},"Cron jobs that require actions",[14,1337,1338],{},"Scheduled tasks that involve tool use (morning briefings that check your calendar, email triage that reads your inbox) fail for the same reason. The cron fires. The model responds. But no tools execute. You get a narrative about what the agent would do, not an actual result.",[14,1340,1341],{},[491,1342,1343],{},"Sub-agent parallel processing",[14,1345,1346],{},"Sub-agents inherit the tool calling limitation. If your main agent spawns workers for parallel tasks, those workers can't execute tools either. The parallelism works. 
The execution doesn't.",[14,1348,1349],{},[491,1350,1351],{},"Browser relay",[14,1353,1354,1355,1359],{},"OpenClaw's ",[150,1356,1358],{"href":1357},"/blog/openclaw-browser-relay","browser automation"," requires precise tool calling to click elements, fill forms, and navigate pages. Local models can't generate the structured tool calls needed. Browser relay with Ollama simply doesn't function.",[14,1361,1362],{},[37,1363],{"alt":1364,"src":1365},"Terminal showing OpenClaw agent generating text about tool use instead of executing it","/img/blog/openclaw-ollama-tool-failure.jpg",[30,1367,1369],{"id":1368},"the-models-the-community-actually-recommends","The models the community actually recommends",[14,1371,1372],{},"Despite the tool calling limitation, some local models work noticeably better than others for the chat-only use case.",[14,1374,1375,1378],{},[491,1376,1377],{},"glm-4.7-flash (~25GB VRAM):"," The community favorite. Multiple users in GitHub Discussion #2936 call it \"huge bang for the buck.\" Strong reasoning and code generation. Runs on an RTX 4090, though not entirely in VRAM.",[14,1380,1381,1384],{},[491,1382,1383],{},"qwen3-coder-30b:"," Good for code-heavy conversations. Requires significant hardware (24GB+ RAM for quantized versions).",[14,1386,1387,1390],{},[491,1388,1389],{},"hermes-2-pro and mistral:7b:"," Ollama's official recommendations for tool calling. These are the models most likely to work when the streaming fix eventually lands, since they have native tool calling support in non-streaming mode.",[14,1392,1393,1396],{},[491,1394,1395],{},"Models under 8B parameters:"," Frequent failures on agent tasks even in chat-only mode. Context tracking degrades quickly. Instructions get ignored or misinterpreted. 
Not recommended for anything beyond basic Q&A.",[14,1398,1399,1400,1403,1404],{},"🎥 ",[491,1401,1402],{},"Watch: OpenClaw with Ollama Local Models Setup and Limitations","\nIf you want to see the Ollama configuration in action (including what the tool calling failure actually looks like and which models perform best for chat-only use), this community walkthrough provides an honest demonstration.\n🎬 ",[150,1405,1408],{"href":1406,"rel":1407},"https://www.youtube.com/results?search_query=openclaw+ollama+local+model+setup+2026",[293],"Watch on YouTube",[14,1410,1411],{},"For local models, plan for 30B+ parameters with at least 64K context window. Anything smaller struggles with OpenClaw's system prompts and multi-turn conversations.",[14,1413,1414],{},"Ollama's own OpenClaw integration docs recommend 64K minimum context. Many popular models default to much less. Set it explicitly in your config:",[45,1416,1420],{"className":1417,"code":1418,"language":1419,"meta":54,"style":54},"language-json shiki shiki-themes github-light","{\n  \"models\": {\n    \"providers\": {\n      \"ollama\": {\n        \"baseUrl\": \"http://127.0.0.1:11434\",\n        \"apiKey\": \"ollama-local\",\n        \"api\": \"ollama\",\n        \"models\": [{\n          \"id\": \"qwen3:8b\",\n          \"contextWindow\": 65536\n        }]\n      }\n    }\n  }\n}\n","json",[52,1421,1422,1427,1435,1442,1450,1466,1479,1492,1501,1514,1525,1531,1537,1543,1549],{"__ignoreMap":54},[1204,1423,1424],{"class":1206,"line":1207},[1204,1425,1426],{"class":1224},"{\n",[1204,1428,1429,1432],{"class":1206,"line":336},[1204,1430,1431],{"class":1214},"  \"models\"",[1204,1433,1434],{"class":1224},": {\n",[1204,1436,1437,1440],{"class":1206,"line":342},[1204,1438,1439],{"class":1214},"    \"providers\"",[1204,1441,1434],{"class":1224},[1204,1443,1445,1448],{"class":1206,"line":1444},4,[1204,1446,1447],{"class":1214},"      
\"ollama\"",[1204,1449,1434],{"class":1224},[1204,1451,1453,1456,1459,1463],{"class":1206,"line":1452},5,[1204,1454,1455],{"class":1214},"        \"baseUrl\"",[1204,1457,1458],{"class":1224},": ",[1204,1460,1462],{"class":1461},"sYBdl","\"http://127.0.0.1:11434\"",[1204,1464,1465],{"class":1224},",\n",[1204,1467,1469,1472,1474,1477],{"class":1206,"line":1468},6,[1204,1470,1471],{"class":1214},"        \"apiKey\"",[1204,1473,1458],{"class":1224},[1204,1475,1476],{"class":1461},"\"ollama-local\"",[1204,1478,1465],{"class":1224},[1204,1480,1482,1485,1487,1490],{"class":1206,"line":1481},7,[1204,1483,1484],{"class":1214},"        \"api\"",[1204,1486,1458],{"class":1224},[1204,1488,1489],{"class":1461},"\"ollama\"",[1204,1491,1465],{"class":1224},[1204,1493,1495,1498],{"class":1206,"line":1494},8,[1204,1496,1497],{"class":1214},"        \"models\"",[1204,1499,1500],{"class":1224},": [{\n",[1204,1502,1504,1507,1509,1512],{"class":1206,"line":1503},9,[1204,1505,1506],{"class":1214},"          \"id\"",[1204,1508,1458],{"class":1224},[1204,1510,1511],{"class":1461},"\"qwen3:8b\"",[1204,1513,1465],{"class":1224},[1204,1515,1517,1520,1522],{"class":1206,"line":1516},10,[1204,1518,1519],{"class":1214},"          \"contextWindow\"",[1204,1521,1458],{"class":1224},[1204,1523,1524],{"class":1214},"65536\n",[1204,1526,1528],{"class":1206,"line":1527},11,[1204,1529,1530],{"class":1224},"        }]\n",[1204,1532,1534],{"class":1206,"line":1533},12,[1204,1535,1536],{"class":1224},"      }\n",[1204,1538,1540],{"class":1206,"line":1539},13,[1204,1541,1542],{"class":1224},"    }\n",[1204,1544,1546],{"class":1206,"line":1545},14,[1204,1547,1548],{"class":1224},"  }\n",[1204,1550,1552],{"class":1206,"line":1551},15,[1204,1553,1554],{"class":1224},"}\n",[14,1556,1557,1558,1562],{},"For guidance on choosing the right model for your specific use case, our ",[150,1559,1561],{"href":1560},"/blog/openclaw-model-comparison","model comparison"," covers cost-per-task data across local and cloud 
providers.",[14,1564,1565],{},[37,1566],{"alt":1567,"src":1568},"Comparison chart of Ollama local models showing VRAM requirements and capability ratings","/img/blog/openclaw-ollama-model-comparison.jpg",[30,1570,1572],{"id":1571},"the-three-ollama-gotchas-that-waste-hours","The three Ollama gotchas that waste hours",[14,1574,1575],{},"Beyond the tool calling bug, three configuration issues eat the most time.",[124,1577,1579],{"id":1578},"gotcha-1-model-discovery-timeout","Gotcha 1: Model discovery timeout",[14,1581,1582],{},"When OpenClaw starts, it tries to auto-discover Ollama models. If Ollama is slow (common when the model isn't pre-loaded), discovery times out silently. Your gateway starts. Your model is listed. But requests fail.",[14,1584,1585,1588],{},[491,1586,1587],{},"Fix:"," Pre-load the model before starting OpenClaw:",[45,1590,1594],{"className":1591,"code":1592,"language":1593,"meta":54,"style":54},"language-bash shiki shiki-themes github-light","ollama run qwen3:8b\n# Wait for \"success,\" then Ctrl+C\nopenclaw gateway start\n","bash",[52,1595,1596,1607,1613],{"__ignoreMap":54},[1204,1597,1598,1601,1604],{"class":1206,"line":1207},[1204,1599,1600],{"class":1234},"ollama",[1204,1602,1603],{"class":1461}," run",[1204,1605,1606],{"class":1461}," qwen3:8b\n",[1204,1608,1609],{"class":1206,"line":336},[1204,1610,1612],{"class":1611},"sAwPA","# Wait for \"success,\" then Ctrl+C\n",[1204,1614,1615,1618,1621],{"class":1206,"line":342},[1204,1616,1617],{"class":1234},"openclaw",[1204,1619,1620],{"class":1461}," gateway",[1204,1622,1623],{"class":1461}," start\n",[14,1625,1626],{},"Or define models explicitly in your config to skip discovery entirely (shown above).",[124,1628,1630],{"id":1629},"gotcha-2-wsl2-networking","Gotcha 2: WSL2 networking",[14,1632,1633,1634,1637],{},"If you're running OpenClaw in WSL2 and Ollama on the Windows host (or vice versa), ",[52,1635,1636],{},"127.0.0.1"," doesn't resolve across the boundary. Your config says localhost. 
Your curl works. But OpenClaw can't reach Ollama.",[14,1639,1640,1642,1643,1646,1647,1650,1651,73],{},[491,1641,1587],{}," Use the actual WSL2 IP from ",[52,1644,1645],{},"hostname -I",". Or bind Ollama to ",[52,1648,1649],{},"0.0.0.0"," with ",[52,1652,1653],{},"OLLAMA_HOST=0.0.0.0:11434 ollama serve",[124,1655,1657],{"id":1656},"gotcha-3-the-cli-vs-api-confusion","Gotcha 3: The CLI vs API confusion",[14,1659,1660,1661,1664,1665,1668],{},"GitHub Issue #11283 documents this bizarre behavior: you configure Ollama as a remote API provider with a ",[52,1662,1663],{},"baseUrl",". OpenClaw should make HTTP API calls. Instead, it tries to execute ",[52,1666,1667],{},"ollama run"," as a shell command on your local machine. This happens when OpenClaw's model routing falls back to a cloud model that then tries to \"help\" by calling Ollama via CLI.",[14,1670,1671,1673,1674,1677,1678,1681],{},[491,1672,1587],{}," Make sure your Ollama model is explicitly defined in the ",[52,1675,1676],{},"models.providers"," section with ",[52,1679,1680],{},"api: \"ollama\""," and is listed in the models array. Don't rely on auto-discovery for remote Ollama.",[14,1683,1684],{},[37,1685],{"alt":1686,"src":1687},"Terminal showing three common Ollama configuration errors with fix commands","/img/blog/openclaw-ollama-gotchas.jpg",[30,1689,1691],{"id":1690},"the-honest-cost-comparison-ollama-vs-cheap-cloud-providers","The honest cost comparison: Ollama vs cheap cloud providers",[14,1693,1694],{},"The appeal of Ollama is zero API costs. But \"zero API costs\" and \"zero cost\" are different things.",[14,1696,1697],{},"Running Ollama on hardware you own means electricity, hardware depreciation, and your time debugging issues. A Mac Mini M4 running 24/7 consumes roughly $3-5/month in electricity. The machine itself costs $600+ and depreciates.",[14,1699,1700],{},"Meanwhile, cloud providers in 2026 are absurdly cheap:",[535,1702,1703,1709,1715],{},[538,1704,1705,1708],{},[491,1706,1707],{},"DeepSeek V3.2:"," $0.28/$0.42 per million tokens. A full month of moderate agent usage: $3-8/month.",[538,1710,1711,1714],{},[491,1712,1713],{},"Gemini 2.5 Flash free tier:"," 1,500 requests/day. $0/month for personal use.",[538,1716,1717,1720],{},[491,1718,1719],{},"Claude Haiku 4.5:"," $1/$5 per million tokens. Moderate usage: $5-10/month.",[14,1722,1723],{},"And critically: these cloud providers have working tool calling. Your agent can actually do things.",[14,1725,1726,1727,1731],{},"For the full breakdown of ",[150,1728,1730],{"href":1729},"/blog/cheapest-openclaw-ai-providers","which cloud providers cost what for OpenClaw",", our provider comparison covers five alternatives that cost 90% less than most people expect.",[14,1733,1734],{},"The cheapest model isn't the one with the lowest per-token price. It's the one that can do the job. An Ollama model that can chat but can't call tools isn't a cheaper agent. It's a more expensive chatbot.",[14,1736,1737,1738,1742],{},"If you want tool calling that works, multi-channel support, and zero Ollama debugging, ",[150,1739,1741],{"href":1740},"/","BetterClaw supports all 28+ cloud providers"," with BYOK and zero configuration. $19/month per agent. 60-second deploy. Every model routes correctly because the streaming issue doesn't exist with cloud APIs.",[30,1744,1746],{"id":1745},"when-ollama-with-openclaw-genuinely-makes-sense","When Ollama with OpenClaw genuinely makes sense",[14,1748,1749],{},"I'm not going to pretend Ollama is never the right choice. Three scenarios justify the setup.",[14,1751,1752],{},[491,1753,1754],{},"Privacy-first deployments",[14,1756,1757],{},"If your data absolutely cannot leave your network, local models are the only option. Government, healthcare, legal, defense: these environments have compliance requirements that no cloud provider can satisfy. The tool calling limitation is real, but for conversational interaction with sensitive data, Ollama delivers complete data sovereignty.",[14,1759,1760],{},[491,1761,1762],{},"Offline and air-gapped environments",[14,1764,1765],{},"No internet? No API calls. Ollama runs entirely locally. If you need an AI assistant in an environment without reliable connectivity, local models are it.",[14,1767,1768],{},[491,1769,1770],{},"Hybrid heartbeat routing",[14,1772,1773],{},"Use Ollama for heartbeats (the 48 daily status checks that cost tokens on cloud providers) and a cloud model for everything else. Heartbeats don't require tool calling. They're simple status checks. Running them locally saves $4-15/month depending on your cloud model pricing.",[45,1775,1777],{"className":1417,"code":1776,"language":1419,"meta":54,"style":54},"{\n  \"agent\": {\n    \"model\": {\n      \"primary\": \"anthropic/claude-sonnet-4-6\",\n      \"heartbeat\": \"ollama/hermes-2-pro:latest\"\n    }\n  }\n}\n",[52,1778,1779,1783,1790,1797,1809,1819,1823,1827],{"__ignoreMap":54},[1204,1780,1781],{"class":1206,"line":1207},[1204,1782,1426],{"class":1224},[1204,1784,1785,1788],{"class":1206,"line":336},[1204,1786,1787],{"class":1214},"  \"agent\"",[1204,1789,1434],{"class":1224},[1204,1791,1792,1795],{"class":1206,"line":342},[1204,1793,1794],{"class":1214},"    \"model\"",[1204,1796,1434],{"class":1224},[1204,1798,1799,1802,1804,1807],{"class":1206,"line":1444},[1204,1800,1801],{"class":1214},"      \"primary\"",[1204,1803,1458],{"class":1224},[1204,1805,1806],{"class":1461},"\"anthropic/claude-sonnet-4-6\"",[1204,1808,1465],{"class":1224},[1204,1810,1811,1814,1816],{"class":1206,"line":1452},[1204,1812,1813],{"class":1214},"      \"heartbeat\"",[1204,1815,1458],{"class":1224},[1204,1817,1818],{"class":1461},"\"ollama/hermes-2-pro:latest\"\n",[1204,1820,1821],{"class":1206,"line":1468},[1204,1822,1542],{"class":1224},[1204,1824,1825],{"class":1206,"line":1481},[1204,1826,1548],{"class":1224},[1204,1828,1829],{"class":1206,"line":1494},[1204,1830,1554],{"class":1224},[14,1832,1833],{},[37,1834],{"alt":1835,"src":1836},"Hybrid model routing diagram showing Ollama for heartbeats and Claude for tool-based tasks","/img/blog/openclaw-ollama-hybrid-routing.jpg",[14,1838,1839,1840,1844],{},"For the full model routing setup, our ",[150,1841,1843],{"href":1842},"/blog/openclaw-model-routing","intelligent provider switching guide"," covers the config patterns.",[30,1846,1848],{"id":1847},"where-this-is-heading","Where this is heading",[14,1850,1851],{},"The streaming + tool calling bug will get fixed eventually. The proposed patch is clean. The community wants it. It's a matter of when, not if.",[14,1853,1854],{},"When it lands, the best local models (glm-4.7-flash, qwen3-coder-30b) will become genuinely useful for agent tasks. Tool calling will work. Skills will execute. The gap between local and cloud will narrow significantly for the subset of tasks that don't require frontier-level reasoning.",[14,1856,1857],{},"But \"narrowing\" isn't \"closing.\" Cloud models like Claude Sonnet and GPT-4o will still outperform local models on complex multi-step reasoning, long-context accuracy, and prompt injection resistance for the foreseeable future. The hardware requirements for running competitive local models (25GB+ VRAM, 64GB+ RAM for larger models) put them out of reach for most users.",[14,1859,1860],{},"The practical future is hybrid. Cloud for the tasks that need it. Local for the tasks that don't. OpenClaw's model routing architecture already supports this. The tooling just needs to catch up.",[14,1862,1863],{},"For now, if you need an agent that can act (not just talk), cloud providers are the reliable path. If you need complete privacy for conversational AI, Ollama works today.",[14,1865,1866,1867,1869],{},"If you want an agent that works with any provider without debugging streaming protocols, ",[150,1868,294],{"href":598},". $19/month per agent, BYOK with any cloud provider or combination. 60-second deploy. The tool calling just works because we handle the model integration layer. You build workflows instead of workarounds.",[30,1871,299],{"id":298},[14,1873,1874],{},[491,1875,1876],{},"Does OpenClaw work with Ollama local models?",[14,1878,1879,1880,1882],{},"Partially. Chat and conversation work correctly with Ollama models through OpenClaw. Tool calling (web search, file operations, shell commands, browser automation, skills) does not work due to a streaming protocol bug documented in GitHub Issue #5769. OpenClaw sends ",[52,1881,1165],{}," on all requests, but Ollama's streaming implementation drops tool call responses. Until this is patched, local models are limited to chat-only interactions.",[14,1884,1885],{},[491,1886,1887],{},"How does Ollama compare to cloud providers for OpenClaw?",[14,1889,1890],{},"Ollama offers zero API costs and complete data privacy but lacks working tool calling in OpenClaw. Cloud providers (Claude Sonnet at $3/$15, DeepSeek at $0.28/$0.42, Gemini Flash free tier) have reliable tool calling, larger context windows, and better multi-step reasoning. For agent tasks that require actions (email, calendar, web search), cloud providers are significantly more capable. For private conversational AI, Ollama works well.",[14,1892,1893],{},[491,1894,1895],{},"How do I set up Ollama with OpenClaw?",[14,1897,1898,1899,1902,1903,1906,1907,1910,1911,1913],{},"Install Ollama and pull your model (",[52,1900,1901],{},"ollama pull qwen3:8b","). Pre-load the model before starting OpenClaw to avoid discovery timeouts. Configure your ",[52,1904,1905],{},"~/.openclaw/openclaw.json"," with the Ollama provider, setting ",[52,1908,1909],{},"contextWindow"," to at least 65536. Start the gateway and test. If on WSL2, use the actual network IP instead of ",[52,1912,1636],{},". Expect chat to work and tool calling to fail.",[14,1915,1916],{},[491,1917,1918],{},"Is running OpenClaw with Ollama cheaper than cloud APIs?",[14,1920,1921],{},"Not always. Ollama has zero token costs but requires dedicated hardware ($600+ Mac Mini or GPU-capable machine) and electricity ($3-5/month). DeepSeek V3.2 runs a full agent for $3-8/month via API. Gemini Flash has a free tier. When you factor in hardware cost, electricity, and the time debugging Ollama issues, cheap cloud providers often cost less overall. The exception: if you already have capable hardware and need complete data privacy.",[14,1923,1924],{},[491,1925,1926],{},"Which Ollama models work best with OpenClaw?",[14,1928,1929],{},"For chat-only use: glm-4.7-flash (best quality, needs ~25GB VRAM), qwen3-coder-30b (strong for code, needs 24GB+ RAM), and hermes-2-pro or mistral:7b (Ollama's recommended tool calling models, will be first to work when the streaming fix lands). Avoid models under 8B parameters for agent tasks. Set context window to 64K+ minimum in your config.",[1931,1932,1933],"style",{},"html pre.shiki code .sgsFI, html code.shiki .sgsFI{--shiki-default:#24292E}html pre.shiki code .sYu0t, html code.shiki .sYu0t{--shiki-default:#005CC5}html pre.shiki code .sYBdl, html code.shiki .sYBdl{--shiki-default:#032F62}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html pre.shiki code .s7eDp, html code.shiki .s7eDp{--shiki-default:#6F42C1}html pre.shiki code .sAwPA, html code.shiki .sAwPA{--shiki-default:#6A737D}html pre.shiki code .sD7c4, html code.shiki .sD7c4{--shiki-default:#D73A49}",{"title":54,"searchDepth":336,"depth":336,"links":1935},[1936,1937,1938,1939,1940,1945,1946,1947,1948],{"id":1155,"depth":336,"text":1156},{"id":1258,"depth":336,"text":1259},{"id":1318,"depth":336,"text":1319},{"id":1368,"depth":336,"text":1369},{"id":1571,"depth":336,"text":1572,"children":1941},[1942,1943,1944],{"id":1578,"depth":342,"text":1579},{"id":1629,"depth":342,"text":1630},{"id":1656,"depth":342,"text":1657},{"id":1690,"depth":336,"text":1691},{"id":1745,"depth":336,"text":1746},{"id":1847,"depth":336,"text":1848},{"id":298,"depth":336,"text":299},"2026-03-18","OpenClaw Ollama chat works fine. Tool calling breaks silently. Here's what the streaming bug means, which models perform best, and when cloud is smarter.","/img/blog/openclaw-ollama-guide.jpg",{},"/blog/openclaw-ollama-guide","14 min read",{"title":1123,"description":1950},"OpenClaw + Ollama: Local Model Setup & Tool Calling Fix (2026)","blog/openclaw-ollama-guide",[1959,1960,1961,1962,1963,1964,1965],"OpenClaw Ollama","OpenClaw local model","Ollama tool calling OpenClaw","OpenClaw Ollama setup","best Ollama model OpenClaw","OpenClaw offline","OpenClaw local vs cloud","c2fBWyl3QlcEzEyKFp6TH6gajHuEVToWYu_UGTbUobI",1778850201346]