Mastering Anthropic's Managed Agents: A Guide to Dreaming, Outcomes, and Multi-Agent Orchestration
Overview
Anthropic's Managed Agents platform allows you to run AI agents on their robust infrastructure. Recent updates introduced three powerful features: Dreaming, Outcomes, and Multi-agent Orchestration. This guide will walk you through each feature, providing practical steps to leverage them for complex task automation with minimal steering.

Dreaming enables agents to review past work, find patterns, and update memory for self-improvement. Outcomes focus agents on specific quality criteria with automated grading. Multi-agent orchestration lets you break tasks into parallel sub-tasks. Together, these features create a robust self-improving system.
Prerequisites
- An active Anthropic account with access to Managed Agents (public beta or later).
- Basic understanding of AI agent workflows and prompt engineering.
- Familiarity with Anthropic's console or API for agent configuration.
- Sample tasks for testing (e.g., content generation, data extraction).
Step-by-Step Instructions
Enabling Dreaming for Self-Improvement
Dreaming is a scheduled process where your agent reviews recent sessions, identifies patterns, and updates its memory automatically (or with your approval). Follow these steps to activate it:
- Navigate to Agent Settings: In the Managed Agents console, select your agent and go to the 'Memory & Learning' section.
- Enable Dreaming: Toggle the 'Enable Dreaming' option. You'll see two modes: 'Automatic' and 'Review Required'.
- Configure Schedule: Set how often dreaming runs (e.g., every 24 hours). The agent will analyze all sessions since the last run.
- Review Changes (if using Review Required): After each dreaming cycle, you'll receive a summary of proposed memory updates. Approve or reject before they are applied.
Code Example (API):

POST /v1/managed-agents/{agent_id}/memory/dream
{
  "mode": "review_required",
  "schedule": "0 0 * * *"
}

The schedule field uses standard five-field cron syntax; "0 0 * * *" runs dreaming daily at midnight. (Note that JSON does not support comments, so the schedule explanation belongs outside the request body.)
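The request above can be sketched in Python. The endpoint path and field names come from the example; the base URL, the x-api-key header convention, and the helper names are assumptions for illustration:

```python
# Hypothetical sketch: building the Dreaming configuration request.
# Endpoint and fields are from the example above; everything else is assumed.
import json
import urllib.request

API_BASE = "https://api.anthropic.com"  # assumed base URL

def build_dream_config(mode: str, schedule: str) -> dict:
    """Validate inputs and build the Dreaming configuration payload."""
    if mode not in ("automatic", "review_required"):
        raise ValueError(f"unknown mode: {mode}")
    if len(schedule.split()) != 5:
        raise ValueError("schedule must be a five-field cron expression")
    return {"mode": mode, "schedule": schedule}

def enable_dreaming(agent_id: str, api_key: str, config: dict) -> urllib.request.Request:
    """Prepare the POST request object (not sent here)."""
    return urllib.request.Request(
        f"{API_BASE}/v1/managed-agents/{agent_id}/memory/dream",
        data=json.dumps(config).encode(),
        headers={"x-api-key": api_key, "content-type": "application/json"},
        method="POST",
    )

config = build_dream_config("review_required", "0 0 * * *")  # daily at midnight
request = enable_dreaming("agent_123", "YOUR_API_KEY", config)
```

Validating the mode and cron string client-side catches configuration typos before they reach the platform.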
Dreaming works best when your agent performs repetitive tasks. It can spot common errors and refine its approach automatically.
Defining Outcomes with a Grader Agent
Outcomes let you specify what 'good' looks like for a task. A separate grader agent evaluates the output against your criteria. Here's how to set it up:
- Define Criteria: Write clear success metrics. For example, for marketing copy: 'Tone must match brand voice, include call-to-action, under 100 words.'
- Create Grader Agent: In Managed Agents, create a new agent dedicated to grading. Give it instructions to evaluate outputs based on your criteria.
- Link Grading: In your main agent's configuration, under 'Evaluation', select the grader agent. The grader runs in its own context window, so it cannot see the main agent's reasoning, only the output it is asked to evaluate.
- Test the Loop: Run a sample task. The main agent completes it, then the grader scores it. Adjust criteria as needed.
Example Grading Prompt:

"Evaluate the following text against these criteria: [criteria]. Score each criterion from 1 to 5 and provide an overall pass/fail."

Anthropic reports up to a 10% improvement in task success when Outcomes are enabled. Use the feature for detail-oriented tasks or those with subjective quality criteria.
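The grading loop can be illustrated with a local sketch. The scoring functions below stand in for the grader agent's judgment, and the criteria names and pass threshold are assumptions, not part of the platform:

```python
# Conceptual sketch of an Outcomes grading loop: each criterion is scored
# 1-5 and the overall result is pass/fail. In production the scoring would
# be done by the grader agent, not by local check functions.
def grade(output: str, criteria: dict) -> dict:
    """criteria maps a criterion name to a function returning a 1-5 score."""
    scores = {name: check(output) for name, check in criteria.items()}
    passed = all(score >= 3 for score in scores.values())  # assumed threshold
    return {"scores": scores, "pass": passed}

# Hypothetical criteria mirroring the marketing-copy example above.
criteria = {
    "under_100_words": lambda text: 5 if len(text.split()) < 100 else 1,
    "has_call_to_action": lambda text: 5 if "sign up" in text.lower() else 2,
}

result = grade("Try it free today. Sign up now!", criteria)
```

Encoding each criterion as a separate score makes it easy to see which requirement failed when you iterate on the criteria.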
Orchestrating Multiple Agents in Parallel
Multi-agent orchestration breaks a complex task into sub-tasks and assigns them to separate agents. Follow these steps:
- Define Task Decomposition: In your main agent's configuration, enable 'Parallel Orchestration'. Specify a decomposition strategy (e.g., by section, by function).
- Create Sub-Agents: For each sub-task, create a dedicated agent with its own instructions and tools.
- Configure Aggregation: After sub-agents finish, the main agent collects results and merges them (or you can define a merge agent).
- Run a Test: Submit a complex request like 'Write a report covering financial, marketing, and operational sections'. The system will delegate each section to a sub-agent.
API Example:

POST /v1/managed-agents/orchestrate
{
  "base_agent_id": "main_agent",
  "sub_agents": ["finance_agent", "marketing_agent", "ops_agent"],
  "decomposition": "section"
}
Orchestration is ideal for tasks that benefit from specialization and parallel processing.
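The fan-out/merge pattern behind orchestration can be sketched locally. The stand-in functions below play the role of sub-agents; on the real platform each would be a Managed Agent call, and the section names are assumptions:

```python
# Conceptual sketch of parallel orchestration: sub-tasks fan out to workers
# and a merge step aggregates the results in order.
from concurrent.futures import ThreadPoolExecutor

def finance_agent(task: str) -> str: return f"[finance] {task}"
def marketing_agent(task: str) -> str: return f"[marketing] {task}"
def ops_agent(task: str) -> str: return f"[ops] {task}"

SUB_AGENTS = {
    "financial": finance_agent,
    "marketing": marketing_agent,
    "operational": ops_agent,
}

def orchestrate(task: str, sections: list) -> str:
    """Run each section's sub-agent in parallel, then merge in order."""
    with ThreadPoolExecutor() as pool:
        futures = {s: pool.submit(SUB_AGENTS[s], task) for s in sections}
        results = {s: f.result() for s, f in futures.items()}
    # Merge step: concatenate sections in the requested order.
    return "\n".join(results[s] for s in sections)

report = orchestrate("quarterly report", ["financial", "marketing", "operational"])
```

Collecting futures keyed by section keeps the merge deterministic even though the sub-tasks finish in arbitrary order.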
Common Mistakes
- Over-automating Dreaming: Using automatic mode without review can lead to incorrect memory updates, especially for complex tasks. Start with 'review required' mode.
- Vague Outcome Criteria: If criteria are too broad, the grader agent may not catch errors. Be specific and test with examples.
- Ignoring Grader Context: Ensure the grader agent's instructions are clear and that it has access to the same context as the main agent (except the answer).
- Orchestration Overhead: For simple tasks, multi-agent orchestration can add latency and unnecessary complexity. Use it only for genuinely complex tasks.
- Not Monitoring Performance: Agents can drift over time. Regularly review dreaming updates and outcome scores.
Summary
Anthropic's Managed Agents with Dreaming, Outcomes, and Multi-agent Orchestration offer a powerful framework for building self-improving, quality-focused AI workflows. By following this guide, you can configure these features to automate complex tasks effectively. Start small, test thoroughly, and iterate based on results.