
Architecture
System Architecture Diagram

Core Components
Mnemo
Mnemo is the main application class that manages the global state and application lifecycle. It serves as the entry point for building MCP agent applications. Key responsibilities:
- Initialize the application context
- Manage configuration
- Provide access to the server registry and executor
- Register and execute workflows
app = Mnemo(name="mnemo")

async with app.run() as mcp_agent_app:
    # App is initialized and ready to use
    # Create agents, run workflows, etc.
    pass

# App is automatically cleaned up when exiting the context
Context
The Context class maintains the application's global state, including:
- Configuration settings
- Server registry
- Executor for running workflows
- Human input handler
- Signal notification
- Logging system
The context is initialized by the Mnemo app and is accessible throughout the application.
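A minimal sketch of reading that state during a run, reusing the Mnemo example above; the context, config, server_registry, and executor attribute names are assumptions based on the responsibilities listed here, not confirmed API:

app = Mnemo(name="mnemo")

async with app.run() as running_app:
    ctx = running_app.context        # assumed attribute exposing the Context
    print(ctx.config)                # configuration settings
    print(ctx.server_registry)       # registry of configured MCP servers
    print(ctx.executor)              # executor used to run workflows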
Connection Management Architecture

The MCPConnectionManager class manages the lifecycle of multiple MCP server connections:
- It creates and maintains connections to MCP servers
- It ensures proper initialization and cleanup of server connections
- It handles different transport mechanisms (STDIO, SSE, WebSocket)
The ServerConnection class represents a connection to a specific MCP server:
- It manages the transport context (STDIO, SSE, WebSocket)
- It creates a ClientSession for the server
- It handles server initialization and shutdown
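A rough sketch of how these two classes might fit together; the constructor argument and the get_server and session names are illustrative assumptions, not confirmed by this page:

connection_manager = MCPConnectionManager(ctx.server_registry)   # server registry from the app context (assumed)

async with connection_manager:
    # Open (or reuse) a connection to the named server; returns a ServerConnection
    fetch_conn = await connection_manager.get_server("fetch")    # assumed method name
    tools = await fetch_conn.session.list_tools()                # ClientSession created for this server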
Agent System
Agent Architecture

The Agent class extends MCPAggregator and represents an entity with a purpose and access to MCP servers:
- It has a name and instruction
- It can aggregate tools from multiple MCP servers
- It can expose Python functions as tools (sketched after the example below)
- It can request human input during execution
- It can attach an AugmentedLLM to generate responses
Example of agent creation:
agent = Agent(
    name="finder",
    instruction="You are a helpful agent that can access files and fetch URLs.",
    server_names=["fetch", "filesystem"],
)

async with agent:
    tools = await agent.list_tools()
    llm = await agent.attach_llm(OpenAIAugmentedLLM)
    result = await llm.generate_str("Show me what's in README.md")
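Because an agent can also expose plain Python functions as tools, a sketch along the following lines should be possible; the functions parameter name is an assumption, not confirmed by this page:

def read_changelog() -> str:
    # Hypothetical local helper exposed to the LLM as a tool
    with open("CHANGELOG.md") as f:
        return f.read()

agent = Agent(
    name="finder",
    instruction="You can read local project files.",
    server_names=["filesystem"],
    functions=[read_changelog],  # assumed parameter for registering Python functions as tools
)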
Augmented LLM System
The Augmented LLM system provides integration with various LLM providers and enables tool-calling capabilities.

AugmentedLLM is the base class for LLMs with tool-calling capabilities:
- It provides methods for generating text with tool calls
- It manages conversation history
- It handles structured output
- It integrates with different LLM providers (OpenAI, Anthropic, Azure, Bedrock)
The framework supports multiple LLM providers through provider-specific implementations of AugmentedLLM.
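Switching providers should therefore only change which AugmentedLLM subclass is attached, as in this sketch; the AnthropicAugmentedLLM name mirrors the OpenAIAugmentedLLM used earlier and is an assumption here:

async with agent:
    # Same agent, different provider-specific AugmentedLLM implementation
    llm = await agent.attach_llm(AnthropicAugmentedLLM)  # assumed class name, by analogy with OpenAIAugmentedLLM
    result = await llm.generate_str("Summarize README.md")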
Transport Mechanisms
The framework supports multiple transport mechanisms for connecting to MCP servers: STDIO, SSE, and WebSocket.

The transport mechanism is specified in the server configuration:
mcp:
  servers:
    fetch:
      transport: stdio
      command: "uvx"
      args: ["mcp-server-fetch"]
    remote_service:
      transport: sse
      url: "https://example.com/mcp-server"
      headers:
        Authorization: "Bearer ${MCP_API_KEY}"
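The keys under mcp.servers (here fetch and remote_service) are presumably the names an agent lists in server_names, so an agent spanning both transports might look like this sketch:

agent = Agent(
    name="multi_transport",
    instruction="You can fetch URLs via the local server or call the remote service.",
    server_names=["fetch", "remote_service"],  # keys from the mcp.servers config above
)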
Data Flow Architecture
The following diagram illustrates the data flow during agent execution:

This diagram shows how tool calls flow through the system:
- The agent lists available tools from multiple MCP servers
- The LLM processes the user's message and decides to use tools
- Tool calls are routed to the appropriate server
- Results are returned to the LLM, which incorporates them in its response
- The final response is returned to the user
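As a hedged end-to-end sketch, this whole flow is driven by a single generate_str call: the agent aggregates tools from its servers, the LLM chooses which tools to call, and the results are folded into the final answer (the user message here is purely illustrative):

async with agent:
    llm = await agent.attach_llm(OpenAIAugmentedLLM)

    # The LLM sees tools aggregated from both servers, issues tool calls,
    # and the agent routes each call to the server that provides that tool
    result = await llm.generate_str("Fetch https://example.com and summarize it alongside README.md")
    print(result)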