[{"data":1,"prerenderedAt":1632},["ShallowReactive",2],{"blog-post-openclaw-anthropic-subscription-ban":3,"related-posts-openclaw-anthropic-subscription-ban":395},{"id":4,"title":5,"author":6,"body":10,"category":371,"date":372,"description":373,"extension":374,"featured":375,"image":376,"imageHeight":377,"imageWidth":377,"meta":378,"navigation":379,"path":380,"readingTime":381,"seo":382,"seoTitle":383,"stem":384,"tags":385,"updatedDate":377,"__hash__":394},"blog/blog/openclaw-anthropic-subscription-ban.md","Your Claude Subscription No Longer Works With OpenClaw. Here's What To Do.",{"name":7,"role":8,"avatar":9},"Shabnam Katoch","Growth Head","/img/avatars/shabnam-profile.jpeg",{"type":11,"value":12,"toc":359},"minimark",[13,17,20,23,26,31,43,49,67,73,76,79,82,89,93,96,102,108,111,114,117,126,132,136,142,148,159,165,171,177,181,186,191,197,203,215,220,226,230,233,239,242,250,256,260,263,266,269,273,282,288,294,297,312,316,321,324,329,335,340,343,348,351,356],[14,15,16],"p",{},"Anthropic revoked OAuth for all third-party tools on April 4, 2026. Your agent is broken. Here's the 5-minute fix and three options for what comes next.",[14,18,19],{},"If you opened your OpenClaw gateway this morning and your agent isn't responding, you're not alone. Your Claude Pro or Max subscription stopped working with OpenClaw on April 4, 2026 at 12pm PT.",[14,21,22],{},"This isn't a bug. It's permanent. Anthropic revoked OAuth authentication for every third-party application. OpenClaw, NanoClaw, OpenCode, and every other tool that used your Claude subscription credentials lost access simultaneously. The subscription still works on Claude.ai, Claude Code, and Claude Cowork. 
It doesn't work anywhere else.",[14,24,25],{},"Here's the immediate fix, followed by three options for keeping your agent running long-term.",[27,28,30],"h2",{"id":29},"the-5-minute-fix-get-your-agent-working-again-right-now","The 5-minute fix (get your agent working again right now)",[14,32,33,37,38,42],{},[34,35,36],"strong",{},"Step 1:"," Go to ",[39,40,41],"code",{},"console.anthropic.com",". Create an account if you don't have one.",[14,44,45,48],{},[34,46,47],{},"Step 2:"," Generate an API key. Copy it.",[14,50,51,54,55,58,59,62,63,66],{},[34,52,53],{},"Step 3:"," Open your OpenClaw config. Replace your subscription OAuth token with the new API key. The field is typically ",[39,56,57],{},"ANTHROPIC_API_KEY"," in your ",[39,60,61],{},".env"," file or the model configuration section of ",[39,64,65],{},"openclaw.json",".",[14,68,69,72],{},[34,70,71],{},"Step 4:"," Restart your gateway.",[14,74,75],{},"Your agent works again. Same model. Same quality. Different billing. You're now on per-token API pricing instead of your flat-rate subscription.",[14,77,78],{},"Anthropic offered a $200 credit to affected users. Check your email for the redemption link. That credit covers roughly 2 months of moderate Sonnet usage with optimization, or about 3 weeks without optimization.",[14,80,81],{},"The API key fix takes 5 minutes. The billing change takes longer to accept. 
Your $20/month flat-rate Claude access just became a variable $10-150/month depending on how you configure your agent.",[14,83,84],{},[85,86],"img",{"alt":87,"src":88},"The 5-minute fix: get your agent working again right now.","/img/blog/openclaw-anthropic-subscription-ban-fix.jpg",[27,90,92],{"id":91},"what-this-actually-costs-you-now-the-math-nobody-wants-to-hear","What this actually costs you now (the math nobody wants to hear)",[14,94,95],{},"Here's what nobody tells you about the subscription-to-API transition.",[14,97,98,101],{},[34,99,100],{},"Before the ban:"," Claude Pro at $20/month covered unlimited API-equivalent usage through your subscription. Heavy users were getting $200+ worth of API calls for $20. Anthropic was subsidizing the difference.",[14,103,104,107],{},[34,105,106],{},"After the ban:"," Claude Sonnet costs $3 per million input tokens and $15 per million output tokens. Claude Haiku costs $1/$5. Claude Opus costs $15/$75.",[14,109,110],{},"What this means for your agent:",[14,112,113],{},"A moderate-usage agent (50 messages/day on Sonnet) with default OpenClaw settings costs approximately $40-87/month in API fees. That's because OpenClaw sends the full conversation context with every request, including 48 daily heartbeats.",[14,115,116],{},"With optimization (model routing, session resets, context limits, heartbeat routing to Haiku), the same agent costs $10-20/month.",[14,118,119,120,125],{},"The difference between $87/month and $10/month is configuration, not usage. 
For the ",[121,122,124],"a",{"href":123},"/blog/cheapest-openclaw-ai-providers","complete list of cost optimization settings",", our API cost reduction guide covers how to configure each one.",[14,127,128],{},[85,129],{"alt":130,"src":131},"What this actually costs you now: the math nobody wants to hear.","/img/blog/openclaw-anthropic-subscription-ban-costs.jpg",[27,133,135],{"id":134},"option-a-stay-on-claude-with-api-billing-10-20month-optimized","Option A: Stay on Claude with API billing ($10-20/month optimized)",[14,137,138,141],{},[34,139,140],{},"Who this is for:"," Users who prefer Claude's quality and are willing to optimize their configuration to control costs.",[14,143,144,147],{},[34,145,146],{},"What to do:"," Use the API key from the 5-minute fix above. Then optimize:",[14,149,150,151,154,155,158],{},"Set your primary model to Sonnet (not Opus). Route heartbeats to Haiku ($0.29/month instead of $4.32/month on Opus). Use ",[39,152,153],{},"/new"," every 20-25 messages to reset the conversation buffer. Set ",[39,156,157],{},"maxContextTokens"," to 6,000-8,000.",[14,160,161,164],{},[34,162,163],{},"Expected cost:"," $10-20/month for moderate usage. The $200 credit covers 10-20 months at this rate.",[14,166,167,170],{},[34,168,169],{},"The trade-off:"," You're now managing per-token costs instead of paying flat-rate. You need to monitor your usage and optimize your configuration. The $200 credit buys time but eventually runs out.",[14,172,173],{},[85,174],{"alt":175,"src":176},"Option A: stay on Claude with API billing. 
Get it under $20/month with four settings.","/img/blog/openclaw-anthropic-subscription-ban-option-a.jpg",[27,178,180],{"id":179},"option-b-switch-to-a-cheaper-model-provider-0-15month","Option B: Switch to a cheaper model provider ($0-15/month)",[14,182,183,185],{},[34,184,140],{}," Users who care more about cost than sticking with Claude specifically.",[14,187,188,190],{},[34,189,146],{}," Replace your Claude API key with a different provider's key in your OpenClaw config.",[14,192,193,196],{},[34,194,195],{},"DeepSeek V3"," at $0.27/$1.10 per million tokens. Moderate usage costs $5-15/month. Community consensus: roughly 90% of Claude quality for everyday agent tasks.",[14,198,199,202],{},[34,200,201],{},"Gemini 2.5 Flash"," with 1,500 free requests/day. Personal usage costs $0/month. Quality is adequate for routine tasks but noticeably below Claude for complex reasoning.",[14,204,205,206,210,211,214],{},"For the ",[121,207,209],{"href":208},"/blog/openclaw-best-free-model","ranked comparison of free and budget model alternatives",", our ",[121,212,213],{"href":123},"cheapest providers guide"," covers five options with specific quality assessments.",[14,216,217,219],{},[34,218,169],{}," Quality varies by task. Claude remains the community favorite for complex, nuanced work. DeepSeek and Gemini are competitive for routine tasks but fall short on multi-step reasoning and instruction following.",[14,221,222],{},[85,223],{"alt":224,"src":225},"Option B: switch to a cheaper model provider. $0-15/month.","/img/blog/openclaw-anthropic-subscription-ban-option-b.jpg",[27,227,229],{"id":228},"option-c-use-the-ban-as-a-reason-to-rethink-the-whole-setup","Option C: Use the ban as a reason to rethink the whole setup",[14,231,232],{},"Here's where most people get it wrong.",[14,234,235,236,238],{},"The Anthropic ban changed the billing. It didn't change the other problems. You're still managing Docker. Still patching CVEs (138+ in 2026). 
Still vetting ClawHub skills (1,400+ malicious during ClawHavoc). Still storing credentials in plaintext ",[39,237,61],{}," files.",[14,240,241],{},"If you're already reconfiguring your agent because the billing changed, it's a natural moment to ask: do I want to keep managing the infrastructure too?",[14,243,244,245,249],{},"If managing API keys, model routing, cost optimization, and infrastructure maintenance feels like more reconfiguration than you want, ",[121,246,248],{"href":247},"/openclaw-alternative","BetterClaw handles all of it",". 28+ model providers from a dropdown (Claude API, DeepSeek, Gemini, OpenAI, and more). Smart context management reduces per-request token volume, which directly lowers your post-ban API costs. BYOK with zero inference markup. Free tier with 1 agent. $19/month per agent for Pro. 60-second deploy. The billing change is handled. The infrastructure is handled. You configure the agent, not the server.",[14,251,252],{},[85,253],{"alt":254,"src":255},"Option C: use the ban as a reason to rethink the whole setup.","/img/blog/openclaw-anthropic-subscription-ban-option-c.jpg",[27,257,259],{"id":258},"why-anthropic-did-this-the-30-second-version","Why Anthropic did this (the 30-second version)",[14,261,262],{},"Anthropic was losing money. Subscription users were getting API-equivalent workloads at flat-rate pricing. Boris Cherny, Head of Claude Code, explained that subscriptions weren't built for the usage patterns of third-party agent tools. The economics weren't sustainable.",[14,264,265],{},"Peter Steinberger (OpenClaw creator, now at OpenAI) was temporarily banned from Claude entirely on April 10. His account was reinstated hours later after his post went viral. He connected the dots publicly: Anthropic copied popular features into their own products (Claude Cowork, Claude Dispatch), then locked out the open-source tools.",[14,267,268],{},"The ban is permanent. Anthropic has no plans to reverse it. 
OAuth tokens are now restricted to Anthropic's own products. Third-party access is API-only going forward.",[27,270,272],{"id":271},"what-to-do-right-now-priority-order","What to do right now (priority order)",[14,274,275,278,279,281],{},[34,276,277],{},"Right now (5 minutes):"," Get the API key from ",[39,280,41],{},". Swap it into your config. Restart your gateway. Agent works again.",[14,283,284,287],{},[34,285,286],{},"This week:"," Redeem the $200 credit from Anthropic (check your email). Configure model routing and session management to keep costs under $20/month.",[14,289,290,293],{},[34,291,292],{},"This month:"," Decide your long-term approach. Stay on Claude API with optimization. Switch to DeepSeek or Gemini for lower costs. Or move to a managed platform that handles the model switching, cost optimization, and infrastructure for you.",[14,295,296],{},"The ban forced a decision everyone was eventually going to make anyway. The flat-rate subsidy was never going to last. The question was always when, not if. Now you know: it was April 4, 2026 at 12pm PT.",[14,298,299,300,306,307,311],{},"If you want the model decision and the infrastructure decision handled by a platform instead of by your config files, ",[121,301,305],{"href":302,"rel":303},"https://app.betterclaw.io/sign-in",[304],"nofollow","give BetterClaw a try",". Free tier with 1 agent and BYOK. $19/month per agent for Pro. For the complete migration guide, our ",[121,308,310],{"href":309},"/migrate","migration page"," covers what transfers and how. 60-second deploy. 28+ providers. The ban changed the billing. We handle the rest.",[27,313,315],{"id":314},"frequently-asked-questions","Frequently Asked Questions",[14,317,318],{},[34,319,320],{},"Why did Anthropic ban Claude subscriptions from OpenClaw?",[14,322,323],{},"Anthropic revoked OAuth authentication for all third-party tools on April 4, 2026 because subscription pricing wasn't sustainable for agent-level API workloads. 
Users paying $20/month were consuming $200+ worth of API calls. Boris Cherny (Head of Claude Code) confirmed subscriptions weren't designed for third-party tool usage patterns. The ban covers OpenClaw, NanoClaw, OpenCode, and every other external application.",[14,325,326],{},[34,327,328],{},"Can I still use Claude with OpenClaw after the ban?",[14,330,331,332,334],{},"Yes. The ban only affects subscription-based OAuth tokens. You can still use Claude by creating an API key at ",[39,333,41],{}," and adding it to your OpenClaw config. Billing changes from flat-rate subscription ($20/month) to per-token API pricing ($3/$15 per million tokens for Sonnet). Anthropic offered a $200 credit to affected users.",[14,336,337],{},[34,338,339],{},"How much does OpenClaw cost after the Anthropic ban?",[14,341,342],{},"With default settings on Sonnet: $40-87/month in API fees (context bloat from full conversation history sent with every request). With optimization (model routing, session resets, context limits, heartbeat routing to Haiku): $10-20/month. The $200 Anthropic credit covers 10-20 months of optimized usage. The key is optimization, not the model itself.",[14,344,345],{},[34,346,347],{},"What's the cheapest way to run OpenClaw after the ban?",[14,349,350],{},"Switch to DeepSeek V3 ($0.27/$1.10 per million tokens, roughly $5-15/month for moderate use) or Gemini 2.5 Flash (1,500 free requests/day). Both work with OpenClaw by swapping the API key in your config. Quality is slightly below Claude for complex tasks but adequate for routine agent work. BetterClaw's free tier ($0/month, 1 agent, BYOK with a free model) is the cheapest option with managed hosting.",[14,352,353],{},[34,354,355],{},"Is the Anthropic ban permanent?",[14,357,358],{},"All signs point to yes. Anthropic updated their terms on February 20, 2026 to explicitly prohibit OAuth in third-party tools. Enforcement followed on April 4. 
The direction is API billing for external tools, with subscription pricing reserved for Anthropic's own products (Claude.ai, Claude Code, Cowork). Steinberger was temporarily banned from Claude entirely on April 10, suggesting aggressive enforcement. No reversal has been announced or hinted at.",{"title":360,"searchDepth":361,"depth":361,"links":362},"",2,[363,364,365,366,367,368,369,370],{"id":29,"depth":361,"text":30},{"id":91,"depth":361,"text":92},{"id":134,"depth":361,"text":135},{"id":179,"depth":361,"text":180},{"id":228,"depth":361,"text":229},{"id":258,"depth":361,"text":259},{"id":271,"depth":361,"text":272},{"id":314,"depth":361,"text":315},"Troubleshooting","2026-05-01","Anthropic killed Claude Pro/Max for OpenClaw on April 4. Your agent is broken. Here's the 5-minute API key fix and 3 options for what comes next.","md",false,"/img/blog/openclaw-anthropic-subscription-ban.jpg",null,{},true,"/blog/openclaw-anthropic-subscription-ban","6 min read",{"title":5,"description":373},"Claude Subscription Banned from OpenClaw: Fix in 5 Min","blog/openclaw-anthropic-subscription-ban",[386,387,388,389,390,391,392,393],"OpenClaw Anthropic subscription ban","Claude subscription OpenClaw","OpenClaw Claude not working","OpenClaw API key fix","Anthropic ban April 2026","OpenClaw after ban","OpenClaw OAuth revoked","Claude Pro OpenClaw fix","eZhvG990h889SzM1FlA3Tbc_I1s1so6lDFpCf7zrmpA",[396,782,1181],{"id":397,"title":398,"author":399,"body":400,"category":371,"date":765,"description":766,"extension":374,"featured":375,"image":767,"imageHeight":377,"imageWidth":377,"meta":768,"navigation":379,"path":769,"readingTime":770,"seo":771,"seoTitle":772,"stem":773,"tags":774,"updatedDate":765,"__hash__":781},"blog/blog/claude-cowork-not-working-windows.md","Claude Cowork Not Working on Windows? 
Every Known Bug and the Best Workaround in 2026",{"name":7,"role":8,"avatar":9},{"type":11,"value":401,"toc":755},[402,407,410,413,416,419,422,426,429,435,441,447,453,459,465,469,472,475,478,481,484,487,494,498,501,504,507,514,521,531,539,543,546,549,552,555,558,561,567,571,574,595,605,611,625,638,646,650,653,656,659,662,665,673,676,680,683,686,689,697,700,707,709,714,717,722,728,733,739,744,747,752],[14,403,404],{},[34,405,406],{},"The Cowork tab is missing, the VM won't start, and Anthropic's docs don't mention half of it. Here's every Windows bug we've tracked and what actually fixes them.",[14,408,409],{},"\"The Claude API cannot be reached from Claude's workspace.\"",[14,411,412],{},"That was the first thing I saw after installing Claude Cowork on Windows. February 10, 2026. Day one of the Windows launch. I had Hyper-V enabled. My internet was working. Claude Chat loaded fine on the same machine.",[14,414,415],{},"But Cowork? It just stared at me and refused to connect.",[14,417,418],{},"I spent the next two hours reading GitHub issues, and I realized I wasn't alone. Not even close. The Claude Code GitHub repo has been flooded with Windows-specific Cowork bugs since launch day. Cryptic \"yukonSilver not supported\" errors. Missing Cowork tabs on fully capable machines. A VM service that installs itself and then refuses to be removed, even by administrators.",[14,420,421],{},"If Claude Cowork is not working on your Windows machine right now, this article will save you hours. We've tracked every major bug, mapped them to their actual causes, and listed what fixes them. No fluff. Just the bugs, the fixes, and an honest take on whether Cowork on Windows is ready for real work.",[27,423,425],{"id":424},"the-five-ways-cowork-breaks-on-windows","The Five Ways Cowork Breaks on Windows",[14,427,428],{},"Here's what nobody tells you about Cowork's Windows launch. The problems aren't random. 
They fall into five distinct patterns, and knowing which one you're hitting is half the battle.",[14,430,431,434],{},[34,432,433],{},"1. The Missing Tab."," You install Claude Desktop, open it, and the Cowork tab simply isn't there. Only \"Chat\" shows up. This is the \"yukonSilver not supported\" bug, tracked in GitHub issues #25136, #32004, and #32837. Claude's internal platform detection incorrectly marks your system as incompatible, even when all virtualization features are enabled.",[14,436,437,440],{},[34,438,439],{},"2. The Infinite Setup Spinner."," The Cowork tab appears, but clicking it shows \"Setting up Claude's workspace\" with a loading bar stuck at 80 to 90%. It never completes. Users have reported leaving it running for 12+ hours with no progress. No error message. Just spinning.",[14,442,443,446],{},[34,444,445],{},"3. The API Connection Failure."," The workspace starts but can't reach Claude's API. You get \"Cannot connect to Claude API from workspace\" or its Japanese equivalent. This was a day-one launch bug on Windows 11 Home and has resurfaced multiple times since.",[14,448,449,452],{},[34,450,451],{},"4. The Network Conflict."," Cowork uses a hardcoded network range (172.16.0.0/24) for its internal NAT. If your home network, corporate VPN, or another VM tool uses the same range, Cowork's VM can't reach the internet. Worse, it can break your WSL2 and Docker networking in the process.",[14,454,455,458],{},[34,456,457],{},"5. The Update Regression."," Cowork was working fine. Then Claude auto-updated to version 1.1.5749 on March 9, 2026, and it broke. 
Users report that the update introduced a regression that they can't fix without waiting for another patch from Anthropic.",[14,460,461],{},[85,462],{"alt":463,"src":464},"The five ways Claude Cowork breaks on Windows: missing tab, infinite spinner, API failure, network conflict, and update regression","/img/blog/claude-cowork-not-working-windows-five-bugs.jpg",[27,466,468],{"id":467},"the-windows-home-problem-that-anthropic-still-hasnt-documented","The Windows Home Problem That Anthropic Still Hasn't Documented",[14,470,471],{},"This is where it gets messy.",[14,473,474],{},"Claude Cowork runs inside a lightweight Hyper-V virtual machine on your Windows machine. That's how it creates its sandboxed environment for file access and code execution. The problem? Windows 11 Home doesn't include the full Hyper-V stack.",[14,476,477],{},"Home edition has Virtual Machine Platform and Windows Hypervisor Platform. But it's missing the vmms (Virtual Machine Management) service that Cowork's VM requires. Without it, the VM either fails silently or throws a cryptic \"Plan9 mount failed: bad address\" error.",[14,479,480],{},"At least seven separate GitHub issues have been filed by Windows Home users who spent hours troubleshooting before discovering that their Windows edition simply can't run Cowork. One user explicitly noted they \"subscribed to Max specifically to use this feature\" and only discovered the incompatibility after paying.",[14,482,483],{},"As of March 2026, Anthropic's official Cowork documentation does not clearly state that Windows Home edition is incompatible. The docs mention that ARM64 isn't supported, but say nothing about the Home edition limitation.",[14,485,486],{},"A documentation request (GitHub issue #27906) was filed in February asking Anthropic to add this information. The gap remains.",[14,488,489,490,493],{},"If you're on Windows Home, the quickest check is to open PowerShell and run ",[39,491,492],{},"Get-Service vmms",". 
If the service isn't found, Cowork won't work on your machine. Period.",[27,495,497],{"id":496},"the-yukonsilver-bug-and-why-your-pro-machine-still-fails","The \"yukonSilver\" Bug and Why Your Pro Machine Still Fails",[14,499,500],{},"Stay with me here, because this one is especially frustrating.",[14,502,503],{},"Even if you're running Windows 11 Pro with every virtualization feature enabled (Hyper-V, VMP, WHP, WSL2), you might still see the Cowork tab missing entirely. The logs will show \"yukonSilver not supported (status=unsupported)\" followed by the VM bundle cleanup routine running instead of the actual VM boot.",[14,505,506],{},"\"yukonSilver\" is Claude's internal codename for its VM configuration on Windows. The bug is in the platform detection logic: it incorrectly classifies fully capable x64 Windows 11 Pro systems as unsupported.",[14,508,509,510,513],{},"But that's not even the real problem. The installer also creates a Windows service called CoworkVMService, and this service sometimes becomes impossible to remove. Running ",[39,511,512],{},"sc.exe delete CoworkVMService"," as Administrator returns \"Access denied.\" The service blocks clean reinstalls and creates a circular failure where you can't fix the problem and you can't start fresh.",[14,515,516,517,520],{},"The documented workaround from community debugging: manually run ",[39,518,519],{},"Add-AppxPackage"," as the target user to install the MSIX package correctly for your account. 
It's a PowerShell command that most of Cowork's target audience (non-developers) would never discover on their own.",[14,522,523,524,530],{},"As one developer debugging the issue ",[121,525,529],{"href":526,"rel":527,"target":528},"https://blog.kamsker.at/blog/cowork-windows-broken/",[304],"_blank","put it perfectly",": \"Cowork is marketed at the people least equipped to debug it when it breaks.\"",[14,532,533,534,538],{},"If you've been running into similar infrastructure headaches with AI agents and want something that works out of the box, our ",[121,535,537],{"href":536},"/compare/self-hosted","comparison of self-hosted vs managed OpenClaw deployments"," covers why some teams are moving away from local setups entirely.",[27,540,542],{"id":541},"the-network-bug-that-breaks-docker-too","The Network Bug That Breaks Docker Too",[14,544,545],{},"Here's what nobody tells you about Cowork's networking on Windows.",[14,547,548],{},"Cowork creates its own Hyper-V virtual switch and NAT network. It's separate from WSL2's networking and separate from Docker Desktop's networking. Three different tenants sharing the same hypervisor, each with their own plumbing.",[14,550,551],{},"The specific failure: Cowork creates an HNS (Host Network Service) network called \"cowork-vm-nat\" but sometimes fails to create the corresponding WinNAT rule. The HNS network exists, but there's no NAT translation. 
The VM boots, but it has no internet access.",[14,553,554],{},"And in a particularly fun bug, Cowork's virtual network has been reported to permanently break WSL2's internet connectivity until you manually find and delete the offending network configuration using PowerShell HNS diagnostic tools.",[14,556,557],{},"The fix, discovered by community members, involves stopping all Claude processes, killing the Cowork VM via hcsdiag, removing the broken HNS network, and recreating it on a non-conflicting subnet like 172.24.0.0/24 or 10.200.0.0/24.",[14,559,560],{},"This is three PowerShell commands for someone who knows what they're doing. For someone who just wanted to organize their Downloads folder with AI, it's a wall.",[14,562,563],{},[85,564],{"alt":565,"src":566},"Cowork network conflict diagram showing Hyper-V NAT, WSL2, and Docker competing on the same subnet","/img/blog/claude-cowork-not-working-windows-network-conflict.jpg",[27,568,570],{"id":569},"what-actually-fixes-each-bug-quick-reference","What Actually Fixes Each Bug (Quick Reference)",[14,572,573],{},"Let's cut to the practical fixes for each failure mode.",[14,575,576,579,580,583,584,586,587,590,591,594],{},[34,577,578],{},"Missing Cowork Tab (yukonSilver bug):"," First, make sure you're not on Windows Home. If you're on Pro or Enterprise and still don't see the tab, uninstall Claude Desktop completely. Remove the CoworkVMService manually if possible (",[39,581,582],{},"sc.exe stop CoworkVMService"," then ",[39,585,512],{}," from an elevated prompt). Clear residual files from ",[39,588,589],{},"%APPDATA%\\Claude"," and ",[39,592,593],{},"%LOCALAPPDATA%\\Packages\\Claude_*",". Reinstall fresh from claude.ai/download.",[14,596,597,600,601,604],{},[34,598,599],{},"Infinite Setup Spinner:"," Check if your VM bundle downloaded correctly. Look in ",[39,602,603],{},"%APPDATA%\\Claude\\vm_bundles\\"," for the VM files. If the directory is empty or incomplete, your download was interrupted. 
A clean reinstall usually resolves this. If it persists on Windows Home, it's the Hyper-V incompatibility and there's no fix short of upgrading your Windows edition.",[14,606,607,610],{},[34,608,609],{},"API Connection Failure:"," Disable your VPN temporarily. Check if your network uses the 172.16.0.0/24 range. If Chat mode works but Cowork doesn't, the issue is the VM's network stack, not your internet connection. Update to the latest Claude Desktop version (v1.1.4328 or higher specifically addressed early API connection bugs).",[14,612,613,616,617,620,621,624],{},[34,614,615],{},"Network Conflict:"," Run ",[39,618,619],{},"Get-NetNat"," in PowerShell. If it returns empty but ",[39,622,623],{},"Get-HnsNetwork | Where-Object {$_.Name -eq \"cowork-vm-nat\"}"," returns a result, you're in the \"missing NAT rule\" failure mode. Remove the broken network and recreate it on a different subnet. Detailed steps in the blog post by Jonas Kamsker at kamsker.at.",[14,626,627,630,631,637],{},[34,628,629],{},"Update Regression (v1.1.5749):"," If Cowork broke after the March 9 update, there's no user-side fix. You're waiting for Anthropic to ship a patch. Check the ",[121,632,636],{"href":633,"rel":634,"target":635},"https://claude.com/download",[304],"_blank","Claude Desktop release notes"," for the latest version.",[14,639,640,641,645],{},"If all of this sounds like a lot of infrastructure debugging for a tool that's supposed to \"just work,\" that's because it is. This is exactly the kind of operational friction we built ",[121,642,644],{"href":643},"/","BetterClaw"," to eliminate. Your OpenClaw agent runs on our managed infrastructure: no local VMs, no Hyper-V dependencies, no NAT conflicts. 
$19/month, bring your own API keys, and your first deploy takes about 60 seconds.",[27,647,649],{"id":648},"why-this-matters-beyond-just-bugs","Why This Matters Beyond Just Bugs",[14,651,652],{},"Here's the honest take.",[14,654,655],{},"Cowork is a genuinely impressive product when it works. The sub-agent coordination, the sandboxed file access, the ability to produce polished documents from natural language prompts. Anthropic built something real here.",[14,657,658],{},"But the Windows launch has been rough. And the core tension is architectural: Cowork runs a full Hyper-V VM on your local machine, which means every Windows configuration quirk, every network conflict, every edition limitation becomes a potential failure point.",[14,660,661],{},"There are over 60 open GitHub issues tagged platform:windows on the Claude Code repo right now. New ones are still being filed daily, including as recently as March 24, 2026.",[14,663,664],{},"For quick desktop tasks where you're sitting at your machine and can babysit the process, Cowork is worth the troubleshooting. But if you need an AI agent that runs reliably regardless of what's happening on your local machine, the architecture needs to be different.",[14,666,667,668,672],{},"That's where ",[121,669,671],{"href":670},"/openclaw-hosting","managed OpenClaw hosting"," comes in. Your agent runs on cloud infrastructure. It connects to Slack, Discord, WhatsApp, and 15+ other channels. It doesn't care whether your laptop is running Windows Home or Pro, whether Hyper-V is enabled, or whether your VPN conflicts with a hardcoded subnet.",[14,674,675],{},"The AI agent works. Your laptop stays out of it.",[27,677,679],{"id":678},"the-real-question-you-should-be-asking","The Real Question You Should Be Asking",[14,681,682],{},"The bugs will get fixed. Anthropic is actively patching, and the March updates have already resolved some early issues. 
In six months, Cowork on Windows will probably work well for most configurations.",[14,684,685],{},"But the question isn't whether Cowork will eventually work. The question is what you need an AI agent to do.",[14,687,688],{},"If you need a desktop co-pilot for occasional file organization and document creation, Cowork is the right architecture. Be patient with the bugs. Keep your Windows updated. Check GitHub before assuming the issue is on your end.",[14,690,691,692,696],{},"If you need an always-on agent that handles tasks across messaging platforms, runs while your computer sleeps, and doesn't depend on your local VM stack, you need something different entirely. Our guide on ",[121,693,695],{"href":694},"/blog/how-does-openclaw-work","how OpenClaw works"," explains the architectural difference in detail.",[14,698,699],{},"Don't let the tool you chose dictate what you can build. Choose the tool that matches what you're building.",[14,701,702,703,706],{},"If you want an OpenClaw agent running in 60 seconds without debugging PowerShell on a Tuesday night, ",[121,704,305],{"href":302,"rel":705},[304],". It's $19/month per agent, BYOK, and we handle the infrastructure. You handle the interesting part.",[27,708,315],{"id":314},[14,710,711],{},[34,712,713],{},"Why is Claude Cowork not working on my Windows machine?",[14,715,716],{},"The most common causes are: running Windows Home edition (which lacks the full Hyper-V stack Cowork requires), the \"yukonSilver\" platform detection bug that incorrectly marks capable systems as unsupported, network conflicts with VPNs or other VM tools using the 172.16.0.0/24 range, or a corrupted CoworkVMService that blocks clean installations. 
Check your Windows edition first, then your virtualization settings, then the Claude Code GitHub issues for your specific error.",[14,718,719],{},[34,720,721],{},"Does Claude Cowork work on Windows 11 Home?",[14,723,724,725,727],{},"Officially, Anthropic has not clarified whether Windows Home is supported. In practice, Windows 11 Home lacks the vmms service (full Hyper-V) that Cowork's VM requires, and at least seven GitHub issues document Home users unable to run Cowork. Run ",[39,726,492],{}," in PowerShell. If the service isn't found, Cowork won't work on your edition without upgrading to Windows Pro or Enterprise.",[14,729,730],{},[34,731,732],{},"How do I fix the \"yukonSilver not supported\" error in Claude Cowork?",[14,734,735,736,738],{},"This is a platform detection bug on Claude's side, not a configuration problem on yours. The workaround involves a complete uninstall of Claude Desktop, manual removal of the CoworkVMService via elevated PowerShell, clearing residual files from ",[39,737,589],{},", and a fresh reinstall. If the CoworkVMService returns \"Access denied\" when you try to delete it, you may need to use the registry editor or boot into Safe Mode to remove it.",[14,740,741],{},[34,742,743],{},"Is Claude Cowork worth $100 to $200 per month if I'm on Windows?",[14,745,746],{},"If you're on Windows Pro or Enterprise with a stable network configuration, Cowork delivers real value for desktop productivity tasks. But on Windows Home, it simply won't work. And even on Pro, the current bug situation means you should expect some troubleshooting time. If you need reliable AI agent infrastructure without local dependencies, a managed OpenClaw setup at $19/month with BYOK API keys may be a better fit until the Windows experience matures.",[14,748,749],{},[34,750,751],{},"Is Claude Cowork on Windows stable enough for daily use in 2026?",[14,753,754],{},"As of late March 2026, Cowork on Windows is still labeled a \"research preview\" by Anthropic. 
Over 60 open GitHub issues are tagged for Windows, new bugs are being reported daily, and an auto-update in March 2026 introduced a regression that broke working installations. It's usable for non-critical desktop tasks if your system configuration is compatible, but it's not yet reliable enough for production workflows where downtime means lost work.",{"title":360,"searchDepth":361,"depth":361,"links":756},[757,758,759,760,761,762,763,764],{"id":424,"depth":361,"text":425},{"id":467,"depth":361,"text":468},{"id":496,"depth":361,"text":497},{"id":541,"depth":361,"text":542},{"id":569,"depth":361,"text":570},{"id":648,"depth":361,"text":649},{"id":678,"depth":361,"text":679},{"id":314,"depth":361,"text":315},"2026-03-27","Claude Cowork not working on Windows? Here's every known bug from yukonSilver errors to broken VMs, plus the actual fixes. Updated March 2026.","/img/blog/claude-cowork-not-working-windows.jpg",{},"/blog/claude-cowork-not-working-windows","14 min read",{"title":398,"description":766},"Claude Cowork Not Working on Windows? Every Bug + Fix","blog/claude-cowork-not-working-windows",[775,776,777,778,779,780],"Claude Cowork not working Windows","Cowork Windows bugs","yukonSilver error","Claude Cowork Windows fix","Cowork Hyper-V","Cowork Windows Home","o22CmMpLZTKUo_DhU26vK0fMPgVvRyAyozS6KHAI5m8",{"id":783,"title":784,"author":785,"body":786,"category":371,"date":1164,"description":1165,"extension":374,"featured":375,"image":1166,"imageHeight":377,"imageWidth":377,"meta":1167,"navigation":379,"path":1168,"readingTime":1169,"seo":1170,"seoTitle":1171,"stem":1172,"tags":1173,"updatedDate":1164,"__hash__":1180},"blog/blog/openclaw-agent-hallucinating-fix.md","OpenClaw Agent Hallucinating? 
Why It's Describing Tasks Instead of Doing Them",{"name":7,"role":8,"avatar":9},{"type":11,"value":787,"toc":1152},[788,794,797,800,803,806,810,813,816,819,823,826,829,840,851,857,861,864,867,877,883,887,890,893,903,909,913,916,919,936,942,946,953,962,970,976,982,986,989,995,1001,1007,1018,1027,1034,1038,1041,1044,1047,1053,1061,1063,1068,1074,1079,1082,1087,1090,1095,1098,1103,1112,1116],[14,789,790],{},[791,792,793],"em",{},"Your agent says \"I've searched the web for you\" but didn't actually search. Here's the specific reason and the fix for each cause.",[14,795,796],{},"I asked my OpenClaw agent to check the weather in London. It responded with a detailed forecast: 14 degrees, partly cloudy, 60% chance of rain in the afternoon.",[14,798,799],{},"The forecast was completely wrong. Not because the weather API was broken. Because the agent never called the weather API. It generated a plausible-sounding forecast from its training data and presented it as if it had just looked it up.",[14,801,802],{},"This is the most frustrating OpenClaw behavior: the agent describes doing something without actually doing it. It says \"I've searched for that\" without searching. It says \"I've checked your calendar\" without checking. It writes a confident response that looks like it came from a tool call but was entirely fabricated.",[14,804,805],{},"Here's what nobody tells you: this isn't a bug in OpenClaw. 
It's a predictable failure mode with five specific causes, each with a different fix.",[27,807,809],{"id":808},"the-difference-between-hallucinating-and-executing","The difference between hallucinating and executing",[14,811,812],{},"When your OpenClaw agent properly executes a task, the process looks like this: you send a message, the model decides which tool to call, OpenClaw executes the tool, the tool returns real data, and the model generates a response based on that real data.",[14,814,815],{},"When the agent hallucinates a task, the process looks like this: you send a message, the model skips the tool call entirely, and generates a response that looks like it used a tool but didn't. No tool was called. No real data was retrieved. The response is pure fabrication dressed up as fact.",[14,817,818],{},"The scary part is that both responses look identical to you. The agent doesn't say \"I'm guessing.\" It presents the hallucinated answer with the same confidence as a real one.",[27,820,822],{"id":821},"cause-1-your-model-doesnt-support-tool-calling","Cause 1: Your model doesn't support tool calling",[14,824,825],{},"This is the most common cause and the easiest to fix.",[14,827,828],{},"Not every AI model can call tools. Tool calling is a specific capability that models must be trained for. If your model doesn't support it, the agent has no way to execute tools. It does the next best thing: it describes what it would do if it could.",[14,830,831,832,835,836,839],{},"This especially affects Ollama users running local models. Models like ",[39,833,834],{},"phi3:mini",", ",[39,837,838],{},"qwen2.5:3b",", and other small models lack tool calling support entirely. Even models that support tool calling through Ollama have issues because of a streaming bug (GitHub Issue #5769) that drops tool call responses.",[14,841,842,845,846,850],{},[34,843,844],{},"The fix:"," Switch to a model that supports tool calling. 
For cloud providers: Claude Sonnet, GPT-4o, DeepSeek, and Gemini all support tool calling reliably. For the ",[121,847,849],{"href":848},"/blog/openclaw-model-does-not-support-tools","full breakdown of which models support tools and which don't",", our model compatibility guide covers every common model.",[14,852,853],{},[85,854],{"alt":855,"src":856},"OpenClaw model tool calling support matrix showing which cloud and local models work with tools","/img/blog/openclaw-agent-hallucinating-fix-models.jpg",[27,858,860],{"id":859},"cause-2-docker-isnt-running-so-sandboxed-execution-fails-silently","Cause 2: Docker isn't running (so sandboxed execution fails silently)",[14,862,863],{},"OpenClaw uses Docker containers for sandboxed code execution and some tool operations. If Docker Desktop isn't running (on Mac/Windows) or the Docker daemon isn't active (on Linux/VPS), tool calls that require sandboxed execution fail silently.",[14,865,866],{},"Here's the weird part. The agent doesn't always tell you Docker failed. Instead, it falls back to generating a response without the tool, making it look like it executed the task when it couldn't.",[14,868,869,871,872,876],{},[34,870,844],{}," Make sure Docker is running before starting OpenClaw. On Mac/Windows, check for the Docker Desktop whale icon in the system tray. On Linux, verify the Docker daemon is active. For the ",[121,873,875],{"href":874},"/blog/openclaw-docker-troubleshooting","complete Docker troubleshooting guide",", our guide covers the eight most common Docker errors and their fixes.",[14,878,879],{},[85,880],{"alt":881,"src":882},"OpenClaw Docker dependency diagram showing how sandboxed tools fail silently when Docker daemon is down","/img/blog/openclaw-agent-hallucinating-fix-docker.jpg",[27,884,886],{"id":885},"cause-3-the-skill-you-think-is-installed-isnt-actually-active","Cause 3: The skill you think is installed isn't actually active",[14,888,889],{},"You installed a web search skill last week. 
You ask the agent to search something. It generates a fake search result instead of actually searching.",[14,891,892],{},"The skill might have been deactivated by a recent OpenClaw update. It might have failed validation after a version change. It might be installed globally but not in the current workspace. OpenClaw doesn't always tell you when a skill goes inactive.",[14,894,895,897,898,902],{},[34,896,844],{}," Check your installed skills. Ask the agent to list its available tools. If the skill you expect isn't in the list, reinstall it. After any OpenClaw update, verify your skills are still active. For the ",[121,899,901],{"href":900},"/blog/clawhub-skills-directory","skill audit process including how to verify what's installed",", our skills guide covers the verification steps.",[14,904,905],{},[85,906],{"alt":907,"src":908},"OpenClaw skill verification flow showing how to check active skills, reinstall after updates, and confirm tool availability","/img/blog/openclaw-agent-hallucinating-fix-skills.jpg",[27,910,912],{"id":911},"cause-4-the-agent-is-stuck-in-a-reasoning-loop","Cause 4: The agent is stuck in a reasoning loop",[14,914,915],{},"Sometimes the agent enters a loop where it tries to call a tool, encounters an error, retries, encounters the same error, and eventually gives up and generates a response without the tool. From your perspective, you asked a question and got an answer. You didn't see the five failed tool attempts that happened behind the scenes.",[14,917,918],{},"The agent doesn't announce that it gave up. It just... answers. With fabricated data. As if nothing went wrong.",[14,920,921,923,924,927,928,930,931,935],{},[34,922,844],{}," Check the gateway logs for repeated tool call errors. If you see the same tool being called and failing multiple times, there's a skill error or a configuration problem causing the loop. Set ",[39,925,926],{},"maxIterations"," to 10-15 in your config to prevent infinite retries. 
Use ",[39,929,153],{}," to clear the session state. For the ",[121,932,934],{"href":933},"/blog/openclaw-agent-stuck-in-loop","complete guide to diagnosing agent loops",", our loop troubleshooting post covers the specific patterns.",[14,937,938],{},[85,939],{"alt":940,"src":941},"OpenClaw silent retry loop showing how repeated tool failures lead to fabricated responses without user-visible errors","/img/blog/openclaw-agent-hallucinating-fix-loop.jpg",[27,943,945],{"id":944},"cause-5-your-soulmd-is-conflicting-with-tool-use","Cause 5: Your SOUL.md is conflicting with tool use",[14,947,948,949,952],{},"This is the subtlest cause. If your ",[39,950,951],{},"SOUL.md"," contains instructions that discourage or limit tool use (\"answer from your knowledge first,\" \"don't use tools unless necessary,\" \"respond quickly without external lookups\"), the model may interpret these as reasons to skip tool calls and generate responses from its training data instead.",[14,954,955,956,958,959,961],{},"The model follows your ",[39,957,951],{},". If the ",[39,960,951],{}," suggests that responding quickly from knowledge is preferred over using tools, the model will do exactly that. Even when using tools would give a better answer.",[14,963,964,966,967,969],{},[34,965,844],{}," Review your ",[39,968,951],{}," for any instructions that could be interpreted as \"don't use tools.\" Remove or clarify them. If you want the agent to always use tools for certain types of queries (web search for current information, calendar checks for scheduling), add explicit instructions: \"Always use web search for questions about current events, prices, or availability. Never guess when a tool can provide the real answer.\"",[14,971,972,973,975],{},"When your agent hallucinates tool use, it's not broken. 
It's choosing not to use tools for one of five specific reasons: the model can't call tools, Docker isn't running, the skill is inactive, it's stuck in a silent retry loop, or your ",[39,974,951],{}," discourages tool use. Fix the specific cause. The hallucination stops.",[14,977,978],{},[85,979],{"alt":980,"src":981},"OpenClaw SOUL.md tool use conflicts showing instructions that accidentally discourage tool calling and how to rewrite them","/img/blog/openclaw-agent-hallucinating-fix-soulmd.jpg",[27,983,985],{"id":984},"the-quick-diagnostic-run-this-in-2-minutes","The quick diagnostic (run this in 2 minutes)",[14,987,988],{},"When your agent describes a task instead of doing it, check these five things in this order.",[14,990,991,994],{},[34,992,993],{},"First",", verify your model supports tool calling. If you're on Ollama, this is probably the issue. Switch to a cloud provider temporarily to test.",[14,996,997,1000],{},[34,998,999],{},"Second",", verify Docker is running. Check the system tray (Mac/Windows) or daemon status (Linux).",[14,1002,1003,1006],{},[34,1004,1005],{},"Third",", ask the agent to list its available tools. If the tool you expected isn't listed, reinstall the skill.",[14,1008,1009,1012,1013,1015,1016,66],{},[34,1010,1011],{},"Fourth",", check the gateway logs for repeated tool call errors. If you see retries, set ",[39,1014,926],{}," and use ",[39,1017,153],{},[14,1019,1020,1023,1024,1026],{},[34,1021,1022],{},"Fifth",", review your ",[39,1025,951],{}," for any instructions that discourage tool use.",[14,1028,1029,1030,1033],{},"If debugging tool calling failures and Docker dependencies isn't how you want to spend your afternoon, ",[121,1031,1032],{"href":670},"BetterClaw handles tool execution"," with Docker-sandboxed execution built into the platform. $19/month per agent, BYOK with 28+ providers. Every model we support has working tool calling. Skills execute in sandboxed containers. No silent failures. 
No hallucinated tool use.",[27,1035,1037],{"id":1036},"why-this-matters-more-than-most-people-realize","Why this matters more than most people realize",[14,1039,1040],{},"Here's the uncomfortable truth about agent hallucination.",[14,1042,1043],{},"When your agent hallucinates a web search and gives you wrong information, you can probably tell. When it hallucinates a calendar check and tells you your afternoon is free (when it isn't), the consequences are more serious. When it hallucinates a file operation and tells you it saved something (when it didn't), you lose data.",[14,1045,1046],{},"The Meta researcher Summer Yue incident (agent mass-deleting emails while ignoring stop commands) is the extreme case. But the everyday case is agents that claim to have done things they didn't do. Not maliciously. Just because the tool call failed and the model covered the gap with a confident-sounding response.",[14,1048,1049,1050,1052],{},"The fix isn't to distrust your agent. The fix is to ensure tool calling actually works (right model, Docker running, skills active, no loops, clear ",[39,1051,951],{},") and to verify important actions by checking the results independently until you trust the pipeline.",[14,1054,1055,1056,1060],{},"If you want an agent where tool calls execute reliably and failures surface clearly instead of being masked by hallucination, ",[121,1057,1059],{"href":302,"rel":1058},[304],"give Better Claw a try",". $19/month per agent, BYOK with 28+ providers. 60-second deploy. Health monitoring catches tool execution failures before they become hallucinated answers.",[27,1062,315],{"id":314},[14,1064,1065],{},[34,1066,1067],{},"Why does my OpenClaw agent describe tasks instead of executing them?",[14,1069,1070,1071,1073],{},"The most common cause is that your model doesn't support tool calling (especially local Ollama models under 7B parameters). 
Other causes: Docker not running (sandboxed execution fails silently), the required skill being inactive after an update, the agent stuck in a retry loop, or ",[39,1072,951],{}," instructions that discourage tool use. The agent generates a confident response from its training data instead of admitting the tool call failed.",[14,1075,1076],{},[34,1077,1078],{},"How do I know if my OpenClaw agent is hallucinating?",[14,1080,1081],{},"Check the gateway logs for tool call entries. If your agent claims to have searched the web but the logs show no web search tool call, the response was hallucinated. You can also test by asking for verifiable information (today's date, current weather, a specific fact you can check). If the answer is wrong or outdated, the agent likely generated it from training data rather than using a tool.",[14,1083,1084],{},[34,1085,1086],{},"Which models support tool calling in OpenClaw?",[14,1088,1089],{},"Cloud models with reliable tool calling: Claude Sonnet, Claude Opus, GPT-4o, DeepSeek, Gemini Pro. Local Ollama models with tool calling support (but affected by streaming bug): hermes-2-pro, mistral:7b, qwen3:8b+, llama3.1:8b+. Models without tool calling: phi3:mini, qwen2.5:3b, and most small quantized models. Cloud providers have the most reliable tool execution because their streaming implementation correctly returns tool call responses.",[14,1091,1092],{},[34,1093,1094],{},"Does Docker need to be running for OpenClaw tools to work?",[14,1096,1097],{},"For skills that use sandboxed execution (code execution, browser automation, some file operations), yes. Docker provides the container environment where these tools run safely. If Docker isn't running, these tool calls fail silently and the agent may hallucinate a response instead. Always verify Docker is running before starting your OpenClaw gateway. 
Not all tools require Docker (simple API calls, web search through external services), but many core capabilities do.",[14,1099,1100],{},[34,1101,1102],{},"How do I stop my OpenClaw agent from making up information?",[14,1104,1105,1106,1108,1109,1111],{},"Five fixes in order: ensure your model supports tool calling (switch from Ollama to a cloud provider if needed), verify Docker is running, check that required skills are installed and active, set ",[39,1107,926],{}," to 10-15 to prevent silent retry failures, and review your ",[39,1110,951],{}," for instructions that might discourage tool use. Add explicit instructions like \"Always use web search for current information. Never guess when a tool can provide the answer.\"",[27,1113,1115],{"id":1114},"related-reading","Related Reading",[1117,1118,1119,1126,1132,1139,1145],"ul",{},[1120,1121,1122,1125],"li",{},[121,1123,1124],{"href":848},"\"Model Does Not Support Tools\" Fix"," — Tool calling failures by model and provider",[1120,1127,1128,1131],{},[121,1129,1130],{"href":874},"OpenClaw Docker Troubleshooting Guide"," — Docker errors that cause silent tool failures",[1120,1133,1134,1138],{},[121,1135,1137],{"href":1136},"/blog/openclaw-skill-audit","OpenClaw Skill Audit"," — How to verify which skills are actually active",[1120,1140,1141,1144],{},[121,1142,1143],{"href":933},"OpenClaw Agent Stuck in Loop"," — Diagnose and fix the silent retry loops",[1120,1146,1147,1151],{},[121,1148,1150],{"href":1149},"/blog/openclaw-soulmd-guide","The OpenClaw SOUL.md Guide"," — Write a system prompt that doesn't discourage tool 
use",{"title":360,"searchDepth":361,"depth":361,"links":1153},[1154,1155,1156,1157,1158,1159,1160,1161,1162,1163],{"id":808,"depth":361,"text":809},{"id":821,"depth":361,"text":822},{"id":859,"depth":361,"text":860},{"id":885,"depth":361,"text":886},{"id":911,"depth":361,"text":912},{"id":944,"depth":361,"text":945},{"id":984,"depth":361,"text":985},{"id":1036,"depth":361,"text":1037},{"id":314,"depth":361,"text":315},{"id":1114,"depth":361,"text":1115},"2026-04-11","Your OpenClaw agent says it searched the web but didn't. Five causes: wrong model, Docker down, skill inactive, loop, or SOUL.md conflict. Fixes here.","/img/blog/openclaw-agent-hallucinating-fix.jpg",{},"/blog/openclaw-agent-hallucinating-fix","10 min read",{"title":784,"description":1165},"OpenClaw Agent Hallucinating? Not Executing Tasks?","blog/openclaw-agent-hallucinating-fix",[1174,1175,1176,1177,1178,1179],"OpenClaw hallucinating","OpenClaw not executing tasks","OpenClaw tool calling not working","OpenClaw agent making things up","OpenClaw fake responses","OpenClaw agent fix","5vx9y6T-F-bQ0xrEiaxhDb3gEC6jXhgM5n_4AwD7E54",{"id":1182,"title":1183,"author":1184,"body":1185,"category":371,"date":1616,"description":1617,"extension":374,"featured":375,"image":1618,"imageHeight":377,"imageWidth":377,"meta":1619,"navigation":379,"path":933,"readingTime":1620,"seo":1621,"seoTitle":1622,"stem":1623,"tags":1624,"updatedDate":1630,"__hash__":1631},"blog/blog/openclaw-agent-stuck-in-loop.md","OpenClaw Agent Stuck in Loop? 
Here's Why You're Burning $25+ in Minutes (And How to Stop It)",{"name":7,"role":8,"avatar":9},{"type":11,"value":1186,"toc":1598},[1187,1200,1205,1208,1211,1214,1217,1220,1223,1227,1230,1233,1236,1239,1242,1245,1248,1254,1258,1261,1264,1267,1270,1273,1276,1279,1282,1286,1292,1297,1300,1303,1307,1310,1318,1322,1325,1331,1335,1338,1341,1344,1352,1355,1358,1362,1365,1375,1381,1387,1397,1405,1409,1416,1419,1426,1429,1433,1436,1439,1442,1445,1448,1455,1459,1462,1473,1479,1485,1492,1496,1499,1502,1505,1508,1511,1519,1521,1526,1529,1534,1537,1542,1551,1556,1559,1564,1567,1569],[14,1188,1189],{},[34,1190,1191,1192,1195,1196,1199],{},"To stop an OpenClaw agent loop, SSH into your server and run ",[39,1193,1194],{},"docker restart openclaw",". Then prevent future loops by setting ",[39,1197,1198],{},"maxIterations: 15"," in your agent config, adding a per-task cost ceiling, and configuring cooldown periods between retries. Agent loops happen when a failed action triggers infinite retry cycles — each burning API tokens.",[14,1201,1202],{},[34,1203,1204],{},"Your agent isn't broken. It's just expensive. Here's what's actually happening when OpenClaw loops, and the fastest way to stop the bleeding.",[14,1206,1207],{},"It was 11:47 PM on a Tuesday. I'd set up an OpenClaw agent to summarize support tickets and push updates to Slack. Simple workflow. Twenty minutes, tops.",[14,1209,1210],{},"I went to bed.",[14,1212,1213],{},"I woke up to a $38 API bill from Anthropic. For one night.",[14,1215,1216],{},"The agent had gotten stuck in a retry loop. Every failed Slack post triggered another reasoning cycle. Every reasoning cycle packed more context into the prompt. Every prompt burned more tokens. 
For six hours straight, my agent was essentially arguing with itself about why a Slack webhook URL was wrong, spending real money on every single turn of that argument.",[14,1218,1219],{},"If you're running OpenClaw and you've seen your API costs spike without explanation, you're not alone. And this isn't a bug. It's a design reality of how autonomous agents work.",[14,1221,1222],{},"Here's what's actually going on.",[27,1224,1226],{"id":1225},"why-your-openclaw-agent-gets-stuck-its-not-what-you-think","Why Your OpenClaw Agent Gets Stuck (It's Not What You Think)",[14,1228,1229],{},"Most people assume a looping agent means something is misconfigured. Bad YAML. Wrong API key. Broken skill file.",[14,1231,1232],{},"Sometimes, yes. But the more common cause is subtler and more expensive.",[14,1234,1235],{},"OpenClaw agents operate on a reason-act-observe loop. The agent reads its context, decides what to do, takes an action, observes the result, and then reasons again. This is the core pattern behind every agent framework, not just OpenClaw.",[14,1237,1238],{},"The problem starts when the \"observe\" step returns ambiguous feedback.",[14,1240,1241],{},"Think about it. If a tool call returns \"request failed, please try again,\" the agent should try again. That's what it's designed to do. It's being a good agent. But without explicit limits on how many times it retries, or any awareness of how much each retry costs, it will keep trying forever.",[14,1243,1244],{},"Research from AWS shows that agents can loop hundreds of times without delivering a single useful result when tool feedback is vague. 
The agent keeps calling the same tool with slightly different parameters, convinced the next attempt will work.",[14,1246,1247],{},"And every single one of those attempts costs tokens.",[14,1249,1250],{},[85,1251],{"alt":1252,"src":1253},"OpenClaw reason-act-observe loop diagram showing how ambiguous tool feedback triggers infinite retries","/img/blog/openclaw-agent-stuck-in-loop-reason-loop.jpg",[27,1255,1257],{"id":1256},"the-math-that-should-scare-you","The Math That Should Scare You",[14,1259,1260],{},"Let's do some quick napkin math on what an OpenClaw loop actually costs.",[14,1262,1263],{},"Say your agent is running Claude Sonnet. Each reasoning cycle sends the full conversation history plus tool definitions plus the latest observation. That's easily 50,000 to 80,000 input tokens per turn once context starts growing.",[14,1265,1266],{},"At Anthropic's current pricing, that's roughly $0.15 to $0.24 per turn for input tokens alone. Add output tokens and you're looking at $0.20 to $0.35 per reasoning cycle.",[14,1268,1269],{},"Now imagine 100 cycles in an hour. That's $20 to $35 burned on a single stuck task.",[14,1271,1272],{},"Switch to a more powerful model like Claude Opus? The numbers get worse fast. And if your agent is running overnight or over a weekend with no circuit breaker, the math becomes genuinely painful.",[14,1274,1275],{},"A single runaway agent loop can consume your monthly API budget in hours. This isn't hypothetical. It happens to people building with autonomous agents every single week.",[14,1277,1278],{},"One developer recently filed a bug report showing a subagent that burned $350 in 3.5 hours after entering an infinite tool-call loop with 809 consecutive turns. The agent kept reading and re-reading the same files, never concluding its task. 
Worse, the cost dashboard showed only half the real bill due to a pricing tier mismatch.",[14,1280,1281],{},"This is the risk nobody talks about in the \"just deploy an agent\" tutorials.",[27,1283,1285],{"id":1284},"the-three-loop-patterns-that-drain-your-wallet","The Three Loop Patterns That Drain Your Wallet",[14,1287,1288,1289,1291],{},"Not all loops are created equal. In our experience running managed OpenClaw deployments at ",[121,1290,644],{"href":643},", we see three patterns over and over again.",[1293,1294,1296],"h3",{"id":1295},"_1-the-retry-storm","1. The Retry Storm",[14,1298,1299],{},"A tool call fails. The agent retries. Same error. Retries again. Each retry adds the error message to context, making the prompt longer and more expensive. The agent isn't learning from the failure. It's just paying more to fail again.",[14,1301,1302],{},"This is the most common pattern. It usually comes from external API timeouts, rate limits, or webhook misconfigurations.",[1293,1304,1306],{"id":1305},"_2-the-context-avalanche","2. The Context Avalanche",[14,1308,1309],{},"This one is sneakier. The agent successfully calls tools, but each tool returns a massive payload. Full file contents. Entire database query results. Complete API responses. The context window balloons with every turn. Eventually, the agent is spending most of its tokens just reading its own history rather than doing useful work.",[14,1311,1312,1313,1317],{},"If you've looked at ",[121,1314,1316],{"href":1315},"/blog/openclaw-api-costs","how OpenClaw handles API costs",", you know that context management is half the battle.",[1293,1319,1321],{"id":1320},"_3-the-verification-loop","3. The Verification Loop",[14,1323,1324],{},"The agent completes a task successfully but then enters an infinite verification cycle. It checks its own work, decides something might be slightly off, \"fixes\" it, checks again, fixes again. 
Round and round, perfecting something that was already done, burning tokens on what is essentially AI anxiety.",[14,1326,1327],{},[85,1328],{"alt":1329,"src":1330},"Three loop patterns compared: retry storm, context avalanche, and verification loop with cost impact","/img/blog/openclaw-agent-stuck-in-loop-patterns.jpg",[27,1332,1334],{"id":1333},"what-openclaw-doesnt-do-that-you-need-to-do-yourself","What OpenClaw Doesn't Do (That You Need to Do Yourself)",[14,1336,1337],{},"Here's what nobody tells you about self-hosting OpenClaw.",[14,1339,1340],{},"OpenClaw is a powerful agent framework. It handles task execution, skill loading, multi-channel communication, and tool calling really well. But it was designed as a framework, not a managed service. That means certain operational safeguards are left to you.",[14,1342,1343],{},"There's no built-in per-task cost cap. No automatic circuit breaker that kills a loop after N iterations. No alert that fires when token consumption spikes. No rate limiting on the agent's own behavior.",[14,1345,1346,1347,1351],{},"If you're ",[121,1348,1350],{"href":1349},"/blog/openclaw-vps-setup","self-hosting OpenClaw on a VPS",", all of this is your responsibility. You need to configure max retries, set cooldown periods, implement session budgets, and monitor token usage in real time.",[14,1353,1354],{},"The fix itself isn't complicated. A basic circuit breaker config looks something like this: set a max of 3 retries per task, add a 60-second cooldown between failures, cap total actions per session at 50, and kill the agent if it exceeds a dollar threshold per run.",[14,1356,1357],{},"Four rules. That's it. 
But most people don't add them until after the first surprise bill.",[27,1359,1361],{"id":1360},"how-to-stop-the-bleeding-right-now","How to Stop the Bleeding Right Now",[14,1363,1364],{},"If your agent is stuck in a loop right now, here's what to do.",[14,1366,1367,1370,1371,1374],{},[34,1368,1369],{},"First, kill the process."," Don't wait for it to finish gracefully. Every second it runs is money spent. If you're running in Docker, ",[39,1372,1373],{},"docker stop"," will do it. If you're on a VPS, kill the node process.",[14,1376,1377,1380],{},[34,1378,1379],{},"Second, check your API provider's dashboard."," Look at the token usage for the last few hours. Identify which model was being used and how many requests were made. This tells you the actual damage.",[14,1382,1383,1386],{},[34,1384,1385],{},"Third, look at the agent's conversation history."," Find the point where it started looping. What tool call failed? What was the response? This is your debugging starting point.",[14,1388,1389,1392,1393,1396],{},[34,1390,1391],{},"Fourth, add guardrails before restarting."," Minimum viable guardrails for any OpenClaw deployment: set ",[39,1394,1395],{},"max_retries"," in your agent config, implement a session timeout, and add a cost ceiling per task.",[14,1398,1399,1400,1404],{},"If you want to go deeper on preventing these issues before they start, our guide on ",[121,1401,1403],{"href":1402},"/blog/openclaw-best-practices","OpenClaw best practices"," covers the full configuration approach.",[27,1406,1408],{"id":1407},"the-case-for-not-managing-this-yourself","The Case for Not Managing This Yourself",[14,1410,1411,1412,1415],{},"I'll be direct here. 
We built ",[121,1413,644],{"href":1414},"/pricing"," because we got tired of being the human circuit breaker for our own agents.",[14,1417,1418],{},"Every OpenClaw deployment we managed for ourselves had the same lifecycle: set up the agent, it works great for a week, something goes sideways at 2 AM, wake up to a cost spike, spend half a day debugging, add another guardrail, repeat. The agent itself was doing its job. The infrastructure around it was the problem.",[14,1420,1421,1425],{},[121,1422,1424],{"href":302,"rel":1423},[304],"BetterClaw"," runs your OpenClaw agent on managed infrastructure with built-in cost controls, automatic monitoring, and loop detection baked in. $19/month per agent, you bring your own API keys. Your first deploy takes about 60 seconds. We handle the Docker, the uptime, the security patches, and the \"why is my agent spending $50 at 3 AM\" problem.",[14,1427,1428],{},"You handle the interesting part: building the actual workflows your agent runs.",[27,1430,1432],{"id":1431},"the-bigger-picture-why-this-problem-is-getting-worse","The Bigger Picture: Why This Problem Is Getting Worse",[14,1434,1435],{},"Here's something worth thinking about.",[14,1437,1438],{},"As models get smarter, agent loops get more expensive, not less. Newer models have larger context windows, which means a looping agent can accumulate more context before hitting limits. They're also better at generating plausible-sounding reasoning, which means they can loop longer before producing output that looks obviously wrong.",[14,1440,1441],{},"A GPT-4 era agent might loop 50 times before filling its context window. A newer model might loop 500 times in the same window, each turn more expensive than the last.",[14,1443,1444],{},"The industry is moving toward longer-running, more autonomous agents. That's exciting. But it also means the cost of a stuck agent is going up, not down.",[14,1446,1447],{},"The tools for building agents are getting better every month. 
The tools for operating agents safely are still catching up. That gap is where your API budget disappears.",[14,1449,1450,1451,1454],{},"This is why operational infrastructure matters as much as the agent framework itself. The ",[121,1452,1453],{"href":536},"difference between self-hosted and managed OpenClaw"," isn't just about convenience. It's about whether you have production-grade safeguards running by default or whether you're building them from scratch every time.",[27,1456,1458],{"id":1457},"what-id-tell-someone-just-getting-started","What I'd Tell Someone Just Getting Started",[14,1460,1461],{},"If you're setting up your first OpenClaw agent today, here's what I wish someone had told me.",[14,1463,1464,1467,1468,1472],{},[34,1465,1466],{},"Start with a cheap model for testing."," Use Claude Haiku or GPT-4o-mini while you're iterating on your skill files and task configurations. Switch to a more capable model only after you've confirmed the workflow runs without loops. Our ",[121,1469,1471],{"href":1470},"/blog/openclaw-model-comparison","model comparison guide"," breaks down when each model makes sense.",[14,1474,1475,1478],{},[34,1476,1477],{},"Set cost alerts on your API provider dashboard from day one."," Anthropic, OpenAI, and Google all let you set usage alerts. A $5 daily alert is a simple early warning system.",[14,1480,1481,1484],{},[34,1482,1483],{},"Never leave an agent running overnight without a session timeout."," Just don't. The 30 minutes it takes to add a timeout config will save you hundreds of dollars over the life of your deployment.",[14,1486,1487,1488,1491],{},"And if you'd rather skip the infrastructure headaches entirely and just focus on what your agent does, ",[121,1489,305],{"href":302,"rel":1490},[304],". It's $19/month per agent, BYOK, and your first deploy takes about 60 seconds. We handle the infrastructure. 
You handle the interesting part.",[27,1493,1495],{"id":1494},"the-real-cost-isnt-the-bill","The Real Cost Isn't the Bill",[14,1497,1498],{},"The thing that actually bothers me about runaway agent loops isn't the money. Money can be recovered.",[14,1500,1501],{},"It's the trust erosion.",[14,1503,1504],{},"Every time an agent loops and burns your budget, it chips away at your confidence in the whole approach. You start second-guessing whether autonomous agents are ready. You add more manual oversight. You reduce the agent's autonomy. And slowly, the thing that was supposed to save you time becomes another system you babysit.",[14,1506,1507],{},"The fix isn't to distrust agents. The fix is to give them proper guardrails so they can be trusted. A well-configured agent with cost caps, retry limits, and monitoring is more autonomous than one you have to watch like a hawk because it might bankrupt you at 3 AM.",[14,1509,1510],{},"Build the guardrails. Trust the agent. Ship the workflow.",[14,1512,1513,1514,1518],{},"Or ",[121,1515,1517],{"href":302,"rel":1516},[304],"let us handle the guardrails"," and skip straight to the good part.",[27,1520,315],{"id":314},[14,1522,1523],{},[34,1524,1525],{},"Why does my OpenClaw agent get stuck in a loop?",[14,1527,1528],{},"OpenClaw agents loop when tool calls return ambiguous or failed responses without clear stop conditions. The agent's reason-act-observe cycle keeps retrying because it's designed to be persistent. Without explicit max-retry limits or circuit breakers configured in your setup, the agent will keep attempting the task indefinitely, burning API tokens on every iteration.",[14,1530,1531],{},[34,1532,1533],{},"How much does an OpenClaw agent loop cost in API fees?",[14,1535,1536],{},"A single stuck loop can cost anywhere from $5 to $50+ per hour depending on your model choice and context size. With Claude Sonnet, expect roughly $0.20 to $0.35 per reasoning cycle. At 100 cycles per hour, that's $20 to $35. 
One documented case showed a subagent burning $350 in just 3.5 hours during an uncontrolled loop with over 800 consecutive turns.",[14,1538,1539],{},[34,1540,1541],{},"How do I stop an OpenClaw agent that's stuck in a loop right now?",[14,1543,1544,1545,1547,1548,1550],{},"Kill the process immediately. Use ",[39,1546,1373],{}," if running in Docker, or terminate the node process on your VPS. Then check your API provider's usage dashboard to assess the damage. Before restarting, add guardrails: set ",[39,1549,1395],{}," to 3, add a 60-second cooldown between failures, and cap total actions per session at 50.",[14,1552,1553],{},[34,1554,1555],{},"Is BetterClaw worth it compared to self-hosting OpenClaw?",[14,1557,1558],{},"If you value your time and want to avoid surprise API bills, yes. BetterClaw costs $19/month per agent with BYOK (bring your own API keys). You get built-in monitoring, loop detection, and managed infrastructure. Self-hosting is free but requires you to handle Docker maintenance, security patches, uptime monitoring, and building your own cost safeguards from scratch.",[14,1560,1561],{},[34,1562,1563],{},"Can I prevent OpenClaw agent loops without switching to a managed platform?",[14,1565,1566],{},"Absolutely. Set max-retry limits in your agent configuration, implement session timeouts, add per-task cost ceilings, configure cooldown periods between retries, and set up API usage alerts with your provider. These five steps will prevent most runaway loops. 
The trade-off is that you're responsible for maintaining and updating these safeguards yourself as OpenClaw evolves.",[27,1568,1115],{"id":1114},[1117,1570,1571,1578,1585,1592],{},[1120,1572,1573,1577],{},[121,1574,1576],{"href":1575},"/blog/openclaw-not-working","OpenClaw Not Working: Every Fix in One Guide"," — Master troubleshooting guide for all common setup issues",[1120,1579,1580,1584],{},[121,1581,1583],{"href":1582},"/blog/openclaw-oom-errors","OpenClaw OOM Errors: Complete Fix Guide"," — Memory crashes that can trigger restart loops",[1120,1586,1587,1591],{},[121,1588,1590],{"href":1589},"/blog/openclaw-memory-fix","OpenClaw Memory Fix Guide"," — Context compaction issues that cause agents to lose track mid-task",[1120,1593,1594,1597],{},[121,1595,1596],{"href":1315},"OpenClaw API Costs: What You'll Actually Pay"," — Understand the cost impact of runaway loops",{"title":360,"searchDepth":361,"depth":361,"links":1599},[1600,1601,1602,1608,1609,1610,1611,1612,1613,1614,1615],{"id":1225,"depth":361,"text":1226},{"id":1256,"depth":361,"text":1257},{"id":1284,"depth":361,"text":1285,"children":1603},[1604,1606,1607],{"id":1295,"depth":1605,"text":1296},3,{"id":1305,"depth":1605,"text":1306},{"id":1320,"depth":1605,"text":1321},{"id":1333,"depth":361,"text":1334},{"id":1360,"depth":361,"text":1361},{"id":1407,"depth":361,"text":1408},{"id":1431,"depth":361,"text":1432},{"id":1457,"depth":361,"text":1458},{"id":1494,"depth":361,"text":1495},{"id":314,"depth":361,"text":315},{"id":1114,"depth":361,"text":1115},"2026-03-26","OpenClaw agent stuck in a loop and burning API tokens? Learn why agents loop, what it costs, and how to add guardrails that stop the bleeding fast.","/img/blog/openclaw-agent-stuck-in-loop.jpg",{},"12 min read",{"title":1183,"description":1617},"OpenClaw Agent Stuck in Loop? 
Stop Burning $25+/Hr","blog/openclaw-agent-stuck-in-loop",[1625,1626,1627,1628,1629,671],"OpenClaw agent stuck in loop","OpenClaw loop fix","AI agent runaway cost","OpenClaw retry storm","OpenClaw circuit breaker","2026-04-02","r8vf1SNdzUrPLKz4gX-cSo2FdOpSo9HbMIB8z5puqEY",1777640219134]