
Core Concepts
All workflow patterns in MCP Agent share fundamental characteristics that enable their composability:
- AugmentedLLM Foundation: Every workflow pattern extends the AugmentedLLM class, allowing any workflow to be used anywhere an LLM is expected.
- Uniform Interface: All patterns expose the same core methods (generate, generate_str, generate_structured), so any pattern can be swapped for another.
- Agent-Based Architecture: Workflows orchestrate one or more Agents, each with specific instructions and access to MCP servers.
- Tool Usage: Patterns leverage MCP servers for extended capabilities through tool calls.
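The composability described above follows from the uniform interface: because every workflow exposes the same methods as a bare LLM, workflows can be nested wherever an LLM is expected. A minimal sketch of that idea (the class names here are illustrative stand-ins, not the library's actual classes):

```python
import asyncio

class SimpleLLM:
    """Stand-in for an AugmentedLLM: one uniform async entry point."""

    async def generate_str(self, message: str) -> str:
        return f"echo: {message}"

class UppercaseWorkflow(SimpleLLM):
    """A 'workflow' that wraps another LLM yet keeps the same interface."""

    def __init__(self, inner: SimpleLLM):
        self.inner = inner

    async def generate_str(self, message: str) -> str:
        # Delegate to the wrapped LLM, then post-process its output.
        result = await self.inner.generate_str(message)
        return result.upper()

async def main() -> None:
    # Because both classes expose generate_str, the workflow drops in
    # anywhere a plain LLM would be used.
    llm: SimpleLLM = UppercaseWorkflow(SimpleLLM())
    print(await llm.generate_str("hi"))  # → ECHO: HI

asyncio.run(main())
```

The same substitution principle is what lets, say, a Parallel workflow serve as one of a Router's handlers.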

Available Workflow Patterns
AugmentedLLM
The foundation of all workflow patterns, AugmentedLLM enhances language models with MCP server tools and a consistent interface.

Key Features:
- Base implementation for all other workflow patterns
- Provides memory for conversation history
- Implements standard methods: generate, generate_str, and generate_structured
- Supports various LLM providers through implementation classes (OpenAI, Anthropic, etc.)
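The memory feature above can be sketched as follows. This is a hypothetical toy, not the real AugmentedLLM implementation: each call appends to a conversation history that persists across calls.

```python
class MemoryLLM:
    """Toy illustration of conversation memory (not the real AugmentedLLM)."""

    def __init__(self):
        self.history: list[dict] = []  # conversation memory across calls

    def generate_str(self, message: str) -> str:
        self.history.append({"role": "user", "content": message})
        # A real implementation would send self.history to the provider;
        # here we just report how much history has accumulated.
        reply = f"seen {len([m for m in self.history if m['role'] == 'user'])} user message(s)"
        self.history.append({"role": "assistant", "content": reply})
        return reply

llm = MemoryLLM()
llm.generate_str("first")
print(llm.generate_str("second"))  # → seen 2 user message(s)
```

Because the history lives on the instance, a multi-turn exchange needs no manual prompt stitching by the caller.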
Parallel Workflow
The Parallel workflow distributes tasks to multiple sub-agents and combines their results.

Implementation: The ParallelLLM class takes a fan-in agent and a list of fan-out agents. It:
- Distributes the input to each fan-out agent
- Runs all fan-out agents concurrently
- Collects outputs from all fan-out agents
- Feeds these outputs to the fan-in agent to create a consolidated result
Example Usage:
parallel = ParallelLLM(
    fan_in_agent=grader,
    fan_out_agents=[proofreader, fact_checker, style_enforcer],
    llm_factory=OpenAIAugmentedLLM,
)
result = await parallel.generate_str(message="Student submission: ...")
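The fan-out/fan-in mechanics can be sketched with plain asyncio. The agent functions below are illustrative stand-ins, not the real ParallelLLM internals:

```python
import asyncio

async def proofread(text: str) -> str:
    # Stand-in for a proofreader sub-agent.
    return "proofreader: no typos found"

async def fact_check(text: str) -> str:
    # Stand-in for a fact-checker sub-agent.
    return "fact checker: claims verified"

async def fan_in(outputs: list[str]) -> str:
    # The fan-in agent consolidates the fan-out results into one report.
    return " | ".join(outputs)

async def parallel_generate(text: str) -> str:
    # asyncio.gather runs the fan-out agents concurrently;
    # their outputs are then fed to the fan-in agent.
    outputs = await asyncio.gather(proofread(text), fact_check(text))
    return await fan_in(list(outputs))

print(asyncio.run(parallel_generate("Student submission: ...")))
# → proofreader: no typos found | fact checker: claims verified
```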
Use Cases:
- Document analysis with multiple specialists
- Multi-faceted evaluation tasks
- Tasks requiring different expertise areas
Router Workflow
The Router pattern directs requests to the most appropriate handler based on the content.

Implementation Types:
- LLMRouter: Uses an LLM to classify and route requests
- EmbeddingRouter: Uses embedding models for faster, less expensive classification
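The embedding approach can be sketched as follows: embed the request and each category description, then pick the category with the highest cosine similarity. The toy embed() below is a bag-of-words stand-in for a real embedding model, and the routing logic is illustrative rather than the actual EmbeddingRouter implementation:

```python
import math

def embed(text: str) -> dict[str, float]:
    # Toy "embedding": word-count vector (a real router would call
    # an embedding model here).
    words = text.lower().split()
    return {w: float(words.count(w)) for w in set(words)}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def route(request: str, categories: dict[str, str]) -> str:
    # Pick the category whose description is most similar to the request.
    req = embed(request)
    return max(categories, key=lambda name: cosine(req, embed(categories[name])))

categories = {
    "finder_agent": "find read locate files contents",
    "writer_agent": "write draft compose documents",
}
print(route("find the README contents", categories))  # → finder_agent
```

Since this requires only one embedding call per request (category embeddings can be precomputed), it is cheaper and faster than asking an LLM to classify each request.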
Functions:
- route: Returns top matches across all categories
- route_to_agent: Routes specifically to matching agents
- route_to_server: Routes to appropriate MCP servers
- route_to_function: Routes to Python functions
Example Usage:
router = LLMRouter(
    llm=llm,
    agents=[finder_agent, writer_agent],
    functions=[print_hello_world],
)
results = await router.route(
    request="Find and print the README.md contents",
    top_k=1,
)
chosen_agent = results[0].result
Use Cases:
- Multi-agent systems with specialized components
- Intent-based processing
- Command routing