# Chapter 8: MCP Manager
In Chapter 7: Tool System, we explored how kiro-cli's built-in tools — file reads, shell commands, code search — are defined, validated, and executed inside the Agent Loop. Those tools live inside the binary. They ship with kiro-cli and are always available.
But what if you need a tool that doesn't exist yet? A GitHub integration, a Jira connector, a custom database query tool your team built in Python? You can't recompile kiro-cli every time someone invents a new capability.
That's where the MCP Manager comes in. It lets external processes contribute tools to the same Agent Loop, using a standard protocol.
## Motivation: Why External Tool Servers?
Imagine kiro-cli is a laptop with USB ports. The built-in tools are like the keyboard and trackpad — always there, always working. But USB ports let you plug in anything: a drawing tablet, a MIDI controller, an external drive. You plug it in, the OS discovers what it can do, and suddenly your laptop has new capabilities it was never shipped with.
MCP servers are the USB devices of kiro-cli. Each one is a separate process that speaks a standard protocol. Plug one in (add it to your agent config), and the MCP Manager discovers its tools, makes them available to the LLM, and proxies every call. Unplug it (remove the config), and the tools disappear. The LLM never knows the difference between a built-in tool and an MCP tool — they all show up in the same tool list.
This design gives you three things:
- Cross-language plugins — Write your tool server in Python, Go, TypeScript, Rust — anything that can speak JSON-RPC over stdio or HTTP
- Isolation — A buggy MCP server crashes its own process, not kiro-cli
- Community ecosystem — Anyone can publish an MCP server; users add it with one config entry
## Use Case: Adding a GitHub MCP Server
Let's say you want the LLM to create pull requests. You find a community MCP server called github-mcp and add it to your agent config:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@anthropic/github-mcp-server"],
      "env": { "GITHUB_TOKEN": "$GITHUB_TOKEN" }
    }
  }
}
```
When kiro-cli starts a session with this agent config, here's what happens:
- The MCP Manager reads the `mcpServers` section and sees `"github"`
- It spawns `npx -y @anthropic/github-mcp-server` as a child process
- It performs a JSON-RPC handshake — exchanging capabilities
- It calls `tools/list` and discovers tools like `create_pull_request`, `list_issues`, `get_file_contents`
- Those tools are merged into the Agent Loop's tool list alongside built-ins
- When the LLM calls `create_pull_request`, the MCP Manager proxies the call to the running server and returns the result
You didn't write any Rust. You didn't recompile anything. You added a few lines of JSON and got new capabilities.
## Key Concepts

### The MCP Protocol
MCP (Model Context Protocol) is a standard for tool servers. Communication uses JSON-RPC 2.0 — the same request/response format used by LSP (Language Server Protocol). Every message has a method, optional params, and returns a result or error.
The handshake looks like this:
| Step | Client (kiro-cli) | Server (MCP process) |
|---|---|---|
| 1 | `initialize` with client capabilities | Responds with server capabilities |
| 2 | `initialized` notification | — |
| 3 | `tools/list` | Returns array of tool definitions |
After this, the server is ready. The client can call `tools/call` with a tool name and arguments, and the server returns the result.
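On the wire, a `tools/call` exchange is ordinary JSON-RPC 2.0. Here is a hedged sketch of one request and its response — the tool name, arguments, and result text are illustrative, while the `content` array shape follows the MCP spec:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "create_pull_request",
    "arguments": { "title": "Fix bug", "base": "main" }
  }
}
```

The server answers with a matching `id`:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      { "type": "text", "text": "Created pull request #42" }
    ]
  }
}
```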
### Two Transports: Stdio and HTTP
MCP servers connect over one of two transports:
Stdio (local) — The MCP Manager spawns the server as a child process. JSON-RPC messages flow over stdin/stdout. This is the most common pattern for local tools.
```json
{
  "command": "npx",
  "args": ["-y", "@anthropic/github-mcp-server"],
  "env": { "GITHUB_TOKEN": "$GITHUB_TOKEN" }
}
```
HTTP (remote) — The server runs elsewhere (a cloud endpoint, a team service). JSON-RPC messages flow over HTTP. This is used for shared or authenticated services.
```json
{
  "url": "https://mcp.internal.example.com/tools",
  "headers": { "Authorization": "Bearer $TOKEN" }
}
```
The transport is inferred from the config: if `command` is present, it's stdio; if `url` is present, it's HTTP.
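That inference rule is a small match over the two optional fields. A hypothetical stand-alone sketch (the real `McpServerConfig` enum in `definitions.rs` encodes the variants directly; names here are illustrative):

```rust
// Hypothetical sketch of transport inference from an MCP server config.
#[derive(Debug, PartialEq)]
enum Transport {
    Stdio { command: String }, // local child process, JSON-RPC over stdin/stdout
    Http { url: String },      // remote endpoint, JSON-RPC over HTTP
}

fn infer_transport(command: Option<&str>, url: Option<&str>) -> Result<Transport, String> {
    match (command, url) {
        (Some(cmd), None) => Ok(Transport::Stdio { command: cmd.to_string() }),
        (None, Some(u)) => Ok(Transport::Http { url: u.to_string() }),
        (Some(_), Some(_)) => Err("config sets both `command` and `url`".to_string()),
        (None, None) => Err("config sets neither `command` nor `url`".to_string()),
    }
}

fn main() {
    assert_eq!(
        infer_transport(Some("npx"), None),
        Ok(Transport::Stdio { command: "npx".to_string() })
    );
    assert!(infer_transport(Some("npx"), Some("https://example.com")).is_err());
}
```

Rejecting ambiguous configs (both fields set) at parse time keeps every later layer free of transport guesswork.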
### OAuth for Authenticated Servers
Remote MCP servers often require authentication. kiro-cli supports a full OAuth 2.0 flow:
- The MCP Manager sends an initial probe request to the server
- If the server responds with `401 Unauthorized` and a `WWW-Authenticate` header, kiro-cli discovers the OAuth metadata endpoint
- It opens a browser for the user to authorize, runs a local callback server, and exchanges the code for tokens
- Tokens are cached at `~/.kiro/mcp/credentials/` and refreshed automatically on expiry
The user sees a one-time browser popup. After that, the MCP server works seamlessly.
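The "refreshed automatically on expiry" step boils down to a timestamp check before each request. A hypothetical sketch — the struct and field names, and the 30-second safety margin, are illustrative, not the actual `oauth_util.rs` code:

```rust
use std::time::{Duration, SystemTime};

// Hypothetical cached credential, standing in for whatever is stored
// under ~/.kiro/mcp/credentials/.
struct CachedToken {
    access_token: String,
    expires_at: SystemTime,
}

impl CachedToken {
    // Treat the token as expired slightly early, so it can't lapse
    // in the middle of an in-flight request.
    fn needs_refresh(&self, now: SystemTime) -> bool {
        let margin = Duration::from_secs(30);
        now + margin >= self.expires_at
    }
}

fn main() {
    let now = SystemTime::now();
    let fresh = CachedToken {
        access_token: "example".to_string(),
        expires_at: now + Duration::from_secs(3600),
    };
    assert!(!fresh.needs_refresh(now));
    assert!(fresh.needs_refresh(now + Duration::from_secs(3600)));
}
```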
### Tool Discovery and Namespacing
When the MCP Manager discovers tools from a server, it namespaces them. A tool called `create_pr` from the `github` server becomes `@github.create_pr` in the Agent Loop's tool list. This prevents collisions — two servers can both define a `search` tool without conflict.
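The scheme is simple enough to sketch in a few lines (hypothetical helper names, not the actual functions in the MCP Manager):

```rust
// Prefix a tool with its server name: ("github", "create_pr") -> "@github.create_pr".
fn namespace_tool(server: &str, tool: &str) -> String {
    format!("@{server}.{tool}")
}

// Recover (server, tool) from a namespaced name; None means "not an MCP tool".
fn split_namespaced(name: &str) -> Option<(&str, &str)> {
    name.strip_prefix('@')?.split_once('.')
}

fn main() {
    assert_eq!(namespace_tool("github", "create_pr"), "@github.create_pr");
    assert_eq!(split_namespaced("@github.create_pr"), Some(("github", "create_pr")));
    assert_eq!(split_namespaced("fs_read"), None); // built-in tools pass through
}
```

Because the `@` prefix never appears on built-in tool names, the same parse also tells the Agent Loop which calls to route through the MCP Manager.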
### Lifecycle: Spawn → Handshake → Serve → Shutdown
Each MCP server goes through a clear lifecycle:
| State | Description |
|---|---|
| Not Launched | Config exists but no process spawned yet |
| Initializing | Process spawned, handshake in progress, fetching tool list |
| Initialized | Ready — tools are cached, calls can be proxied |
| Error | Initialization failed (bad command, timeout, auth failure) |
The MCP Manager tracks each server's state and reports events (initializing, initialized, error, OAuth request) to the UI so the user sees real-time feedback.
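The table above can be read as a small state machine. A hedged sketch — the real actor tracks more detail, and these event names are illustrative:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum ServerState {
    NotLaunched,
    Initializing,
    Initialized,
    Error,
}

// Apply a lifecycle event; anything out of order lands in Error.
fn transition(state: ServerState, event: &str) -> ServerState {
    use ServerState::*;
    match (state, event) {
        (NotLaunched, "spawn") => Initializing,
        (Initializing, "handshake_ok") => Initialized,
        (Initializing, "handshake_failed") => Error,
        (Initializing, "timeout") => Error,
        _ => Error,
    }
}

fn main() {
    let s = transition(ServerState::NotLaunched, "spawn");
    let s = transition(s, "handshake_ok");
    assert_eq!(s, ServerState::Initialized);
    // A handshake result without a prior spawn is invalid.
    assert_eq!(transition(ServerState::NotLaunched, "handshake_ok"), ServerState::Error);
}
```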
## How It All Fits Together
Here's the full flow from agent config to tool execution:
```mermaid
sequenceDiagram
    participant Config as Agent Config
    participant MM as MCP Manager
    participant Server as MCP Server Process
    participant Loop as Agent Loop
    participant LLM as LLM

    Config->>MM: mcpServers: { "github": { command: "npx ..." } }
    MM->>Server: spawn process + initialize handshake
    Server-->>MM: capabilities + tools/list → [create_pr, list_issues]
    MM-->>Loop: register @github.create_pr, @github.list_issues
    LLM->>Loop: tool_call: @github.create_pr({title: "Fix bug"})
    Loop->>MM: execute_tool("github", "create_pr", {title: "Fix bug"})
    MM->>Server: tools/call → create_pr
    Server-->>MM: { "pr_url": "https://..." }
    MM-->>Loop: CallToolResult
    Loop-->>LLM: tool result
```
The LLM never talks to the MCP server directly. The MCP Manager is the intermediary — it spawns, discovers, proxies, and cleans up.
## Internal Implementation
Under the hood, the MCP Manager uses a three-layer actor architecture:
### Layer 1: `McpManager` (the coordinator)
The McpManager runs in its own async task. It holds maps of all servers (initializing and ready) and routes requests to the right one.
```rust
// Simplified from crates/agent/src/agent/mcp/mod.rs
pub struct McpManager {
    initializing_servers: HashMap<String, McpServerActorHandle>,
    servers: HashMap<String, McpServerActorHandle>,
    failed_servers: HashSet<String>,
}
```
External code never touches McpManager directly. Instead, it uses a cloneable McpManagerHandle that communicates via async channels:
```rust
// The handle is what the Agent Loop holds
let mut handle: McpManagerHandle = mcp_manager.spawn();

// Launch a server from config
handle.launch_server("github".into(), config).await?;

// Later, execute a tool
handle.execute_tool("github".into(), "create_pr".into(), args).await?;
```
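The handle-and-actor pattern can be sketched in miniature with plain threads and channels — a hypothetical, synchronous stand-in for the real async tasks, with all names illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Requests the handle can send to the manager task.
enum Request {
    ExecuteTool { server: String, tool: String, reply: mpsc::Sender<String> },
}

// Cloneable handle: cloning it just clones the channel sender.
#[derive(Clone)]
struct ManagerHandle {
    tx: mpsc::Sender<Request>,
}

impl ManagerHandle {
    fn execute_tool(&self, server: &str, tool: &str) -> String {
        let (reply_tx, reply_rx) = mpsc::channel();
        self.tx
            .send(Request::ExecuteTool {
                server: server.to_string(),
                tool: tool.to_string(),
                reply: reply_tx,
            })
            .expect("manager task is gone");
        reply_rx.recv().expect("manager dropped the reply sender")
    }
}

fn spawn_manager() -> ManagerHandle {
    let (tx, rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        for req in rx {
            match req {
                Request::ExecuteTool { server, tool, reply } => {
                    // The real manager would proxy this to a live MCP server.
                    let _ = reply.send(format!("result from @{server}.{tool}"));
                }
            }
        }
    });
    ManagerHandle { tx }
}

fn main() {
    let handle = spawn_manager();
    assert_eq!(handle.execute_tool("github", "create_pr"), "result from @github.create_pr");
}
```

Because only the manager task owns the server connections, there is no shared mutable state to lock; every caller goes through the channel, and per-call reply channels play the role of the `oneshot::Receiver` in the real code.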
### Layer 2: `McpServerActor` (per-server)
Each MCP server gets its own McpServerActor — an async task that manages one server's lifecycle. It handles initialization, caches the tool list, and forwards execution requests.
```rust
// Simplified from crates/agent/src/agent/mcp/actor.rs
pub struct McpServerActor {
    server_name: String,
    tools: Vec<ToolSpec>,              // cached from tools/list
    service_handle: RunningMcpService, // the live connection
}
```
When a tool call arrives, the actor delegates to its RunningMcpService, which speaks the actual MCP protocol.
### Layer 3: `McpService` (the protocol bridge)
The McpService implements the Service trait from rmcp, the Rust MCP library that handles JSON-RPC serialization, transport negotiation, and the initialize/initialized handshake.
For stdio servers, it spawns a child process and wraps stdin/stdout:
```rust
// Simplified from crates/agent/src/agent/mcp/service.rs
let process = TokioChildProcess::new(command).spawn()?;
let service = mcp_service.serve(process).await?;
```
For HTTP servers, it creates a streamable HTTP client with optional OAuth:
```rust
// Simplified — remote server with auth
let transport = StreamableHttpClientTransport::new(url, config);
let service = mcp_service.serve(transport).await?;
```
### Event Broadcasting
The MCP Manager broadcasts lifecycle events so the UI can show real-time status:
```rust
pub enum McpServerEvent {
    Initializing { server_name: String },
    Initialized { server_name: String, serve_duration: Duration, .. },
    InitializeError { server_name: String, error: String },
    OauthRequest { server_name: String, oauth_url: String },
    ToolListChanged { server_name: String },
}
```
When a server finishes initializing, the event propagates from McpServerActor → McpManager → McpManagerHandle → UI. The TUI can then show "✓ github (3 tools)" or "✗ slack (connection refused)".
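Rendering those events into status lines is a straightforward match. A hypothetical sketch with a trimmed-down event type (the real TUI rendering is richer):

```rust
// Trimmed-down, hypothetical mirror of McpServerEvent for display purposes.
enum Event {
    Initialized { server_name: String, tool_count: usize },
    InitializeError { server_name: String, error: String },
}

fn status_line(event: &Event) -> String {
    match event {
        Event::Initialized { server_name, tool_count } => {
            format!("✓ {server_name} ({tool_count} tools)")
        }
        Event::InitializeError { server_name, error } => {
            format!("✗ {server_name} ({error})")
        }
    }
}

fn main() {
    let ok = Event::Initialized { server_name: "github".to_string(), tool_count: 3 };
    assert_eq!(status_line(&ok), "✓ github (3 tools)");
}
```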
### How the Agent Loop Calls MCP Tools
When the LLM emits a tool call for an MCP tool, the Agent Loop identifies it by the @server.tool naming convention and routes it through the handle:
```rust
// From crates/agent/src/agent/mod.rs (simplified)
ToolKind::Mcp(t) => {
    let rx = self.mcp_manager_handle
        .execute_tool(t.server_name, t.tool_name, t.params)
        .await?;
    let result = rx.await?; // wait for the server's response
}
```
The execute_tool call is non-blocking — it returns a oneshot::Receiver so the Agent Loop can await the result without holding up other work.
## Key Source Files
| File | Purpose |
|---|---|
| `crates/agent/src/agent/mcp/mod.rs` | `McpManager` + `McpManagerHandle` — coordinator actor |
| `crates/agent/src/agent/mcp/actor.rs` | `McpServerActor` — per-server lifecycle management |
| `crates/agent/src/agent/mcp/service.rs` | `McpService` + `RunningMcpService` — protocol bridge (stdio/HTTP) |
| `crates/agent/src/agent/mcp/oauth_util.rs` | OAuth 2.0 flow, token caching, credential storage |
| `crates/agent/src/agent/mcp/types.rs` | Shared types (prompts, tool specs) |
| `crates/agent/src/agent/agent_config/definitions.rs` | `McpServerConfig` enum (Local / Remote / Registry) |
| `crates/mock-mcp-server/` | Test harness — configurable mock MCP server (stdio + HTTP) |
## Recap
The MCP Manager is the plugin system of kiro-cli. It turns a closed binary into an open platform:
- Agent config declares which MCP servers to launch (stdio or HTTP)
- `McpManager` spawns each server, performs the JSON-RPC handshake, and discovers tools
- `McpServerActor` manages each server's lifecycle independently
- `McpService` bridges the protocol — stdio for local processes, HTTP for remote endpoints
- OAuth handles authentication for remote servers transparently
- The Agent Loop sees MCP tools alongside built-ins — no special handling needed
The LLM doesn't know or care whether a tool is built-in Rust code or a Python process running over stdio. It just calls the tool and gets a result. That's the power of a standard protocol.
## What's Next
One of the most important MCP servers isn't external at all — it's the code-agent-sdk, a server that ships alongside kiro-cli and provides deep code understanding through Tree-sitter and LSP. In Chapter 9: Code Intelligence, we'll look inside that server and see how it turns "find all callers of foo" from a naive grep into a precise semantic query.