Anthropic launched Claude Managed Agents in public beta on April 8, giving AI agents built on Claude a hosted production infrastructure. The service handles the sandboxing, state management, credential handling, and error recovery that engineering teams previously spent three to six months building before writing a single line of agent logic.
Anthropic’s @claudeai account announced the launch on April 8 at 5:14 PM ET, drawing 5.09 million views. The service is built around what Anthropic calls a “brain versus hands” design philosophy: Claude itself is the reasoning layer, while each session runs in a disposable, isolated Linux container that handles code execution, file manipulation, and tool calls. When the next Claude model ships, the infrastructure does not need to be rebuilt: the brain upgrades and the hands stay the same.
Pricing is usage-based: runtime is billed at $0.08 per hour per session, and standard Claude token pricing applies on top for model usage.
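That two-part pricing can be sketched as a quick estimate. The runtime rate below comes from the announcement; the token rates are illustrative placeholders, not official Anthropic prices.

```python
# Cost sketch for a Claude Managed Agents session: runtime hours plus token usage.
RUNTIME_RATE_PER_HOUR = 0.08    # from the announcement: $0.08 per runtime hour
INPUT_PRICE_PER_MTOK = 3.00     # placeholder $/million input tokens (assumption)
OUTPUT_PRICE_PER_MTOK = 15.00   # placeholder $/million output tokens (assumption)

def session_cost(runtime_hours: float, input_tokens: int, output_tokens: int) -> float:
    """Estimate total session cost: runtime billing plus model token billing."""
    runtime = runtime_hours * RUNTIME_RATE_PER_HOUR
    tokens = (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
           + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK
    return round(runtime + tokens, 4)

# A two-hour session consuming 500k input and 100k output tokens:
print(session_cost(2, 500_000, 100_000))
```

At these placeholder rates, the token spend dominates the runtime charge, which is consistent with the article’s framing of runtime as a modest surcharge on top of model usage.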
The deployment patterns across Notion, Asana, and Rakuten illustrate three distinct enterprise use cases. Notion integrated Claude directly into workspaces, letting engineers ship code and knowledge workers generate presentations and websites without leaving the platform, with dozens of parallel tasks running while teams collaborate on outputs. Asana built what it calls AI Teammates: agents embedded in project management workflows that pick up assigned tasks, draft deliverables, and hand back outputs for human review. Rakuten stood up agents across five business functions, each plugged into Slack and Teams, accepting task assignments and returning structured deliverables, with each function live in under a week. Sentry took a different path, pairing its existing debugging agent with a Claude-powered counterpart that writes patches and opens pull requests, running autonomously from flagged bug to completed pull request with no human intervention.
Developers define the agent by specifying the model, system prompt, tools, MCP server connections, and guardrails, then configure a cloud environment with pre-installed packages and network access rules. Anthropic’s infrastructure handles tool orchestration, context management, checkpointing, and crash recovery. Sessions persist through disconnections, a practical requirement for complex workflows. The one significant constraint is that the service runs only on Anthropic’s infrastructure and is not currently available through Amazon Bedrock or Google Vertex AI, which matters for organizations with multi-cloud strategies.
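The configuration surface described above can be sketched as a pair of config objects. Everything here is an illustrative assumption: the class names, fields, and values are invented for this sketch, since the article does not document the actual Managed Agents API.

```python
# Hypothetical sketch of the developer-facing configuration described in the article.
# All names and fields below are assumptions, not Anthropic's actual API.
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    model: str                                          # which Claude model to run
    system_prompt: str                                  # the agent's instructions
    tools: list = field(default_factory=list)           # tool definitions
    mcp_servers: list = field(default_factory=list)     # MCP server connections
    guardrails: dict = field(default_factory=dict)      # runtime limits and policies

@dataclass
class EnvironmentConfig:
    packages: list = field(default_factory=list)        # pre-installed packages
    allowed_hosts: list = field(default_factory=list)   # network access rules

agent = AgentConfig(
    model="claude-sonnet-4-5",
    system_prompt="Triage flagged bugs and draft patches for human review.",
    tools=["bash", "file_edit"],
    mcp_servers=["https://mcp.example.com/github"],
    guardrails={"max_runtime_hours": 2, "require_review_before_merge": True},
)
env = EnvironmentConfig(
    packages=["git", "python3"],
    allowed_hosts=["github.com"],
)
```

The split mirrors the article’s description: the agent definition (model, prompt, tools, MCP connections, guardrails) is separate from the cloud environment (packages, network rules), while orchestration, checkpointing, and crash recovery stay on Anthropic’s side.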
As crypto.news has reported, AI integration increasingly determines enterprise headcount decisions in 2026, and the operational overhead that Claude Managed Agents eliminates has been a significant adoption barrier for teams without specialist DevOps resources. The broader AI infrastructure buildout, of which Anthropic’s agent platform is a direct example, remains one of the primary drivers of capital allocation decisions with ripple effects across crypto-adjacent AI token markets. The multi-agent coordination feature, which lets agents spawn sub-agents for complex tasks, is in research preview, with early access available through the Claude Platform console.


