Introduction
SACP (Symposium's extensions to ACP) is an SDK for building composable AI agent systems using the Agent-Client Protocol.
What is SACP?
SACP extends the Agent-Client Protocol (ACP) to enable composable agent architectures through proxy chains. Instead of building monolithic AI tools, SACP allows you to create modular components that can intercept and transform messages flowing between editors and agents.
Think of it like middleware for AI agents: you can add logging, inject context, provide additional tools, or modify behavior - all without changing the editor or base agent.
Key Capabilities
- Proxy Chains: Chain together multiple components, each adding specific capabilities
- Message Transformation: Intercept and modify requests and responses
- Tool Injection: Add MCP servers and tools to any agent
- Editor Agnostic: Works with any ACP-compatible editor (Zed, Claude Code, etc.)
- Agent Agnostic: Works with any ACP-compatible agent
Example Use Case: Sparkle Integration
Consider integrating Sparkle (a collaborative AI framework) into a coding session. Sparkle needs initialization and provides MCP tools.
Without SACP: Manual initialization each session, or agent-specific modifications.
With SACP: A Sparkle proxy component automatically:
- Injects Sparkle's MCP server during initialization
- Prepends the embodiment sequence to the first prompt
- Passes everything else through transparently
The editor sees a normal ACP agent. The base agent has Sparkle tools available. No code changes needed on either side.
Architecture Overview
flowchart LR
Editor[ACP Editor]
subgraph Conductor[Conductor Process]
P1[Proxy 1]
P2[Proxy 2]
Agent[Base Agent]
P1 -->|messages| P2
P2 -->|messages| Agent
end
Editor <-->|ACP| Conductor
SACP introduces three types of components:
- Conductor: Orchestrates the proxy chain, appears as a normal ACP agent to editors
- Proxies: Intercept and transform messages, built using the sacp-proxy framework
- Agents: Provide base AI model behavior using standard ACP
The conductor manages message routing, making the proxy chain transparent to editors.
Who Should Use SACP?
- Proxy Developers: Build reusable components that add capabilities to any agent
- Agent Developers: Create specialized agents that work with any ACP editor
- Client Developers: Build ACP-compatible editors and tools
- Integration Developers: Connect AI agents with existing systems and workflows
Repository Structure
This repository provides three core crates:
- sacp: Core protocol types and traits for building clients and agents
- sacp-proxy: Framework for building proxy components
- sacp-conductor: Binary that orchestrates proxy chains
Getting Started
- Read the Architecture Overview to understand how SACP works
- See the Protocol Reference for technical details
- Follow Building a Proxy to create your first component
- Check out the SACP RFD for the complete specification
Relationship to ACP
SACP is an extension to ACP, not a fork. SACP components communicate using ACP's extension protocol (_meta fields and custom methods). Standard ACP editors and agents work with SACP without modification - they simply see a normal ACP agent when talking to the conductor.
Architecture Overview
SACP enables composable agent systems through a proxy chain architecture. This chapter explains how the components work together.
Core Concepts
Proxy Chain
A proxy chain is a sequence of components where each can intercept and transform messages.
Conceptual Flow
Conceptually, the chain looks like a sequence where messages flow through each component:
flowchart LR
Editor[Editor] -->|prompt| P1[Proxy 1]
P1 -->|modified| P2[Proxy 2]
P2 -->|modified| A[Agent]
A -->|response| P2
P2 -->|modified| P1
P1 -->|response| Editor
This is the mental model: Editor → Proxy 1 → Proxy 2 → Agent, with responses flowing back.
Actual Flow
In reality, the conductor sits between every component. Each component only talks to the conductor:
flowchart TB
Editor[Editor]
C[Conductor]
P1[Proxy 1]
P2[Proxy 2]
A[Agent]
Editor <-->|ACP| C
C <-->|ACP| P1
C <-->|ACP| P2
C <-->|ACP| A
The conductor uses _proxy/successor/request bidirectionally with different meanings:
- Proxy → Conductor: Proxy sends _proxy/successor/request to send a message TO its successor (downstream)
- Conductor → Proxy: Conductor sends _proxy/successor/request to deliver a message FROM the successor (upstream)
Important: The conductor maintains message ordering by routing all forwarding decisions through a central event loop, preventing responses from overtaking notifications even though they use different transport paths.
Downstream flow (Proxy 1 sending to Proxy 2):
- Conductor sends a normal ACP request to Proxy 1
- Proxy 1 sends _proxy/successor/request to the conductor (meaning "send this TO my successor")
- Conductor unwraps it and forwards the inner content to Proxy 2 as a normal ACP request
Upstream flow (Proxy 2 sending an agent-to-client message back to Proxy 1):
- Proxy 2 sends a normal ACP request/notification to the conductor (agent-to-client direction)
- Conductor wraps it in _proxy/successor/request and sends it to Proxy 1 (meaning "this is FROM your successor")
- Proxy 1 receives the message, processes it, and can forward it further upstream to the conductor
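To make the wrapping concrete, here is a minimal sketch of the idea using serde_json (the flattened field layout follows the Protocol Reference below; the helper names are hypothetical and not part of the sacp API):

use serde_json::{json, Value};

// Hypothetical helper: a proxy wraps an ACP request before handing it to the conductor.
fn wrap_for_successor(method: &str, params: Value) -> Value {
    json!({
        "method": "_proxy/successor/request",
        "params": {
            // The inner ACP request is flattened into the params
            "method": method,
            "params": params,
        }
    })
}

// Hypothetical helper: the conductor unwraps it before forwarding downstream.
fn unwrap_from_proxy(wrapped: &Value) -> Option<(&str, &Value)> {
    let inner = wrapped.get("params")?;
    Some((inner.get("method")?.as_str()?, inner.get("params")?))
}

The conductor applies the inverse operation in the upstream direction, wrapping traffic from the successor before delivering it to the proxy.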
Each proxy can:
- Modify messages before forwarding to successor
- Generate responses without forwarding
- Add side effects (logging, metrics, etc.)
- Pass messages through transparently
The Conductor
The conductor is the orchestrator that manages the proxy chain. From the editor's perspective, it appears as a normal ACP agent.
Responsibilities:
- Process Management: Spawns and manages component processes
- Message Routing: Routes messages through the proxy chain, preserving send order
- Capability Adaptation: Bridges between different component capabilities
- Message Ordering: Ensures all messages maintain send order when forwarded through proxy chains
Usage:
# Start a proxy chain
conductor agent sparkle-proxy claude-code-acp
# The editor just sees "conductor" as a normal ACP agent
The conductor:
- Spawns each component as a subprocess
- Performs the proxy capability handshake
- Routes messages using the _proxy/successor/* protocol
- Handles component failures gracefully
Proxy Components
Proxies are built using the sacp-proxy framework. They communicate with the conductor using special extension methods.
Proxy Lifecycle:
- Initialization: Conductor offers proxy capability, component accepts
- Message Handling: Component receives ACP messages from upstream (editor direction)
- Forwarding: Component sends transformed messages downstream using _proxy/successor/*
- Responses: Conductor delivers responses/notifications from downstream
Transparent Proxy Pattern:
The simplest proxy just forwards everything:
match message {
    // Forward requests from editor to successor
    AcpRequest(req) => send_to_successor_request(req),
    // Forward from successor back to editor
    SuccessorReceiveRequest(msg) => respond_to_editor(msg),
}
Message Transformation:
A proxy can transform messages before forwarding:
match message {
    AcpRequest::Prompt(mut prompt) => {
        // Inject context into the prompt
        prompt.messages.insert(0, embodiment_message);
        send_to_successor_request(prompt);
    }
    // ... handle other messages
}
Agent Components
The last component in the chain is the agent - a standard ACP agent that provides the base AI model behavior.
Agents:
- Don't need SACP awareness
- Receive normal ACP messages
- Don't know they're in a proxy chain
- Can be any ACP-compatible agent (Claude Code, other implementations)
Message Flow
Request Flow (Editor → Agent)
Let's trace a prompt request through the chain: Editor → Proxy 1 → Proxy 2 → Agent
- Editor → Conductor: Editor sends a normal ACP prompt request to the conductor via stdio
- Conductor → Proxy 1: Conductor forwards it as a normal ACP prompt message to Proxy 1
- Proxy 1 processing: Proxy 1 receives the request, modifies it, and decides to forward
- Proxy 1 → Conductor: Proxy 1 sends _proxy/successor/request containing the modified prompt back to the conductor
- Conductor → Proxy 2: Conductor unwraps the _proxy/successor/request and forwards the inner prompt as normal ACP to Proxy 2
- Proxy 2 → Conductor: Proxy 2 sends _proxy/successor/request with its modified prompt
- Conductor → Agent: Conductor unwraps it and forwards the prompt as normal ACP to the agent (no proxy capability offered to the agent)
- Agent processing: Agent processes the request and generates a response
Response Flow (Agent → Editor)
Responses flow back via standard JSON-RPC response mechanism:
- Agent → Conductor: Agent sends JSON-RPC response (with matching message ID) via stdio
- Conductor → Proxy 2: Conductor routes the response back to Proxy 2 (which sent the original _proxy/successor/request)
- Proxy 2 processing: Proxy 2 receives the response and can modify it
- Proxy 2 → Conductor: Proxy 2 sends JSON-RPC response (with matching ID) back
- Conductor → Proxy 1: Conductor routes to Proxy 1
- Proxy 1 → Conductor: Proxy 1 sends response
- Conductor → Editor: Conductor forwards final response to editor
Key insight: Responses don't use _proxy/successor/* wrappers. They use standard JSON-RPC response IDs to route back through the chain.
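One simplified way to picture this response routing (an illustrative sketch, not the conductor's actual data structures): the router remembers which component is waiting on each outstanding request ID and consults that map when the matching response arrives:

use std::collections::HashMap;

// Hypothetical identifiers, for illustration only.
type RequestId = String;
type ComponentId = usize;

// Simplified sketch of ID-based response routing.
struct ResponseRouter {
    // request ID -> component that is waiting for the response
    pending: HashMap<RequestId, ComponentId>,
}

impl ResponseRouter {
    fn on_request_forwarded(&mut self, id: RequestId, origin: ComponentId) {
        self.pending.insert(id, origin);
    }

    fn on_response(&mut self, id: &RequestId) -> Option<ComponentId> {
        // The response goes back to whoever sent the original request.
        self.pending.remove(id)
    }
}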
Proxy Mode
The conductor can itself be initialized as a proxy component. When the conductor receives an initialize request with the proxy capability:
- Conductor runs in proxy mode: All managed components (including the last one) are offered proxy capability
- Final component forwards: When the last component sends _proxy/successor/request, the conductor forwards that message to its own successor using _proxy/successor/request
- Tree structures: This enables hierarchical proxy chains where a conductor manages a sub-chain within a larger chain
Example tree:
client → proxy1 → conductor (proxy mode) → final-agent
↓ manages
p1 → p2 → p3
When p3 sends _proxy/successor/request, the conductor forwards it to final-agent (the conductor's successor).
Key Properties
- Conductor is always the intermediary: No component talks directly to another
- Transparency: Editor only sees conductor, agent only sees normal ACP
- Composability: Proxies don't need to know about each other
- Flexibility: Can add/remove/reorder proxies without code changes
- Compatibility: Works with any ACP editor and agent
- Hierarchical: Conductors can nest via proxy mode
Capability Handshake
The conductor uses a two-way capability handshake to ensure components can fulfill their role.
Normal Mode (Conductor as Root)
When the conductor is the root of the chain (not offered proxy capability):
For Proxy Components (all except last):
- Conductor sends InitializeRequest with "_meta": { "proxy": true }
- Component must respond with InitializeResponse with "_meta": { "proxy": true }
- If the component doesn't accept, the conductor fails with an error
For Agent (last component):
- Conductor sends a normal InitializeRequest (no proxy capability)
- Agent responds normally
- Agent doesn't need SACP awareness
Proxy Mode (Conductor as Proxy)
When the conductor receives initialize with proxy capability, it runs in proxy mode:
For All Components (including last):
- Conductor sends InitializeRequest with "_meta": { "proxy": true } to all components
- Each component must respond with InitializeResponse with "_meta": { "proxy": true }
- The final component can now send _proxy/successor/request to the conductor
- Conductor forwards these requests to its own successor using _proxy/successor/request
This enables tree-structured proxy chains where conductors manage sub-chains.
MCP Bridge
SACP includes a bridge that allows proxy components to provide MCP servers that communicate over ACP messages.
Problem: MCP servers traditionally use stdio. Components want to provide tools without requiring stdio connections.
Solution: MCP-over-ACP protocol using _mcp/* extension methods.
How it works:
- Component declares an MCP server with ACP transport: "url": "acp:UUID"
- If the agent supports mcp_acp_transport, the conductor passes it through
- If not, the conductor spawns conductor mcp PORT bridge processes
- The bridge converts between stdio (MCP) and ACP messages
- The agent thinks it's talking to a normal MCP server over stdio
See Protocol Reference for detailed message formats.
Component Types Summary
| Component | Role | SACP Awareness | Communication |
|---|---|---|---|
| Editor | User interface | No | Standard ACP with conductor |
| Conductor (normal mode) | Orchestrator | Yes | Routes messages between components, sits in the middle of all connections |
| Conductor (proxy mode) | Proxy + Orchestrator | Yes | Routes messages for sub-chain AND forwards final component's messages to its own successor |
| Proxy | Transform messages | Yes | Receives ACP, sends _proxy/successor/* to conductor |
| Agent | AI model | No | Standard ACP |
Benefits of This Architecture
- Modularity: Build focused components that do one thing well
- Reusability: Same proxy works with any editor and agent
- Testability: Test proxies in isolation
- Compatibility: No changes needed to editors or agents
- Composition: Combine components in different ways for different use cases
- Evolution: Add new capabilities without modifying existing components
Next Steps
- See Protocol Reference for technical details of the _proxy/successor/* protocol
- Read Building a Proxy to create your first component
- Check Building an Agent to understand agent development
Protocol Reference
This chapter documents the SACP protocol extensions to ACP. These extensions use ACP's extensibility mechanism through custom methods and _meta fields.
Overview
SACP defines two main protocol extensions:
- _proxy/successor/* - For proxy-to-successor communication
- _mcp/* - For MCP-over-ACP bridging
The _proxy/successor/* Protocol
Proxies communicate with their downstream component (next proxy or agent) through the conductor using these extension methods.
_proxy/successor/request
Send a request to the successor component.
Request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "_proxy/successor/request",
"params": {
// The actual ACP request to forward, flattened
"method": "prompt",
"params": {
"messages": [...]
}
}
}
Response: The response is the successor's response to the forwarded request:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
// The successor's response
}
}
Usage: When a proxy receives an ACP request from upstream and wants to forward it (possibly transformed) to the downstream component, it sends _proxy/successor/request to the conductor. The conductor routes it to the next component.
_proxy/successor/notification
Send a notification to the successor component.
Notification:
{
"jsonrpc": "2.0",
"method": "_proxy/successor/notification",
"params": {
// The actual ACP notification to forward, flattened
"method": "cancelled",
"params": {}
}
}
Usage: When a proxy receives a notification from upstream and wants to forward it downstream.
Message Flow Examples
Example: Transforming a prompt
- Editor sends a prompt request to the conductor
- Conductor forwards it as a normal ACP prompt to Proxy A
- Proxy A modifies the prompt and sends: { "method": "_proxy/successor/request", "params": { "method": "prompt", "params": { /* modified prompt */ } } }
- Conductor routes it to Proxy B as a normal prompt
- The response flows back through the chain
Example: Pass-through proxy
A minimal proxy that just forwards everything:
use sacp_proxy::{JsonRpcCxExt, AcpProxyExt};

// Forward request downstream
cx.send_request_to_successor(request)
    .await_when_result_received(|result| {
        cx.respond_with_result(result)
    })

// Forward notification downstream
cx.send_notification_to_successor(notification)
Capability Handshake
The Proxy Capability
The conductor uses a two-way capability handshake to verify components can act as proxies.
InitializeRequest from conductor to proxy:
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "0.7.0",
"capabilities": {},
"_meta": {
"proxy": true
}
}
}
InitializeResponse from proxy to conductor:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "0.7.0",
"serverInfo": {},
"capabilities": {},
"_meta": {
"proxy": true
}
}
}
Why a two-way handshake?
The proxy capability is an active protocol - it requires the component to handle _proxy/successor/* messages and route communications. If a component doesn't respond with the proxy capability, the conductor fails initialization with an error.
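A sketch of that check, assuming the InitializeResponse _meta is available as a serde_json value (a real implementation would more likely use the typed ACP structs from the sacp crate):

use serde_json::Value;

// Returns an error if a component that was offered the proxy capability
// did not echo `"_meta": { "proxy": true }` in its InitializeResponse.
fn verify_proxy_accepted(component: &str, init_result: &Value) -> Result<(), String> {
    let accepted = init_result
        .pointer("/_meta/proxy")
        .and_then(Value::as_bool)
        .unwrap_or(false);
    if accepted {
        Ok(())
    } else {
        Err(format!("component {component} is not a proxy"))
    }
}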
Agent initialization:
The last component (agent) is NOT offered the proxy capability:
{
"method": "initialize",
"params": {
"protocolVersion": "0.7.0",
"capabilities": {},
"_meta": {} // No proxy capability
}
}
Agents don't need SACP awareness.
The _mcp/* Protocol
SACP enables components to provide MCP servers that communicate over ACP messages instead of stdio.
MCP Server Declaration
Components declare MCP servers with ACP transport using a special URL scheme:
{
"tools": {
"mcpServers": {
"sparkle": {
"transport": "http",
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"headers": {}
}
}
}
}
The acp:UUID URL signals ACP transport. The component generates a unique UUID to identify which component handles calls to this MCP server.
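For illustration, a component might build such a declaration like this (a sketch assuming the uuid and serde_json crates; registering the handler for the UUID is done through the sacp-proxy API shown in Building a Proxy):

use serde_json::{json, Value};
use uuid::Uuid;

// Generate a fresh UUID and declare an MCP server that uses ACP transport.
fn declare_acp_mcp_server() -> (Uuid, Value) {
    let id = Uuid::new_v4();
    let declaration = json!({
        "sparkle": {
            "transport": "http",
            "url": format!("acp:{id}"),
            "headers": {}
        }
    });
    (id, declaration)
}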
_mcp/connect
Create a new MCP connection (equivalent to "running the command").
Request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "_mcp/connect",
"params": {
"acp_url": "acp:550e8400-e29b-41d4-a716-446655440000"
}
}
Response:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"connection_id": "conn-123"
}
}
The connection_id is used in subsequent MCP messages to identify which connection.
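A simplified sketch of how the receiving side might track these connections (hypothetical types, independent of the sacp and sacp-proxy APIs):

use std::collections::HashMap;

// Hypothetical state for one MCP-over-ACP connection.
struct McpConnection {
    acp_url: String,
}

// Registry keyed by the connection_id returned from _mcp/connect.
struct McpConnections {
    next_id: u64,
    connections: HashMap<String, McpConnection>,
}

impl McpConnections {
    // Handle _mcp/connect: allocate a connection_id for the given acp: URL.
    fn connect(&mut self, acp_url: &str) -> String {
        self.next_id += 1;
        let id = format!("conn-{}", self.next_id);
        self.connections.insert(id.clone(), McpConnection { acp_url: acp_url.to_string() });
        id
    }

    // Handle _mcp/disconnect: drop the connection state.
    fn disconnect(&mut self, connection_id: &str) {
        self.connections.remove(connection_id);
    }
}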
_mcp/disconnect
Disconnect an MCP connection.
Notification:
{
"jsonrpc": "2.0",
"method": "_mcp/disconnect",
"params": {
"connection_id": "conn-123"
}
}
_mcp/request
Send an MCP request over the ACP connection. This is bidirectional:
- Agent→Component: MCP client calling MCP server (tool calls, resource reads, etc.)
- Component→Agent: MCP server calling MCP client (sampling/createMessage, etc.)
Request:
{
"jsonrpc": "2.0",
"id": 2,
"method": "_mcp/request",
"params": {
"connection_id": "conn-123",
// The actual MCP request, flattened
"method": "tools/call",
"params": {
"name": "embody_sparkle",
"arguments": {}
}
}
}
Response:
{
"jsonrpc": "2.0",
"id": 2,
"result": {
// The MCP response
"content": [
{"type": "text", "text": "Embodiment complete"}
]
}
}
_mcp/notification
Send an MCP notification over the ACP connection. Bidirectional like _mcp/request.
Notification:
{
"jsonrpc": "2.0",
"method": "_mcp/notification",
"params": {
"connection_id": "conn-123",
// The actual MCP notification, flattened
"method": "notifications/progress",
"params": {
"progressToken": "token-1",
"progress": 50,
"total": 100
}
}
}
Agent Capability: mcp_acp_transport
Agents that natively support MCP-over-ACP declare this capability:
{
"_meta": {
"mcp_acp_transport": true
}
}
Conductor behavior:
- If the agent has mcp_acp_transport: true, the conductor passes MCP server declarations through unchanged
- If the agent lacks this capability, the conductor performs bridging adaptation:
  - Binds a TCP port (e.g., localhost:54321)
  - Transforms the MCP server to use the conductor mcp PORT command with stdio transport
  - Spawns a bridge process that converts between stdio (MCP) and ACP messages
  - The agent thinks it's talking to a normal MCP server over stdio
Bridging transformation example:
Original (from component):
{
"sparkle": {
"transport": "http",
"url": "acp:550e8400-e29b-41d4-a716-446655440000"
}
}
Transformed (for agent without native support):
{
"sparkle": {
"command": "conductor",
"args": ["mcp", "54321"],
"transport": "stdio"
}
}
The conductor mcp PORT process bridges between stdio and the conductor's ACP message routing.
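Conceptually the bridge is just a byte pump between stdio and the TCP connection back to the main conductor. A minimal sketch of that idea, assuming tokio (not the conductor's actual implementation):

use tokio::io::{self, AsyncWriteExt};
use tokio::net::TcpStream;

// Pump bytes between stdio (the MCP side) and a TCP connection back to the
// main conductor. Framing is newline-delimited JSON, so a raw byte copy in
// each direction is enough for the sketch.
async fn run_bridge(port: u16) -> io::Result<()> {
    let stream = TcpStream::connect(("127.0.0.1", port)).await?;
    let (mut tcp_read, mut tcp_write) = stream.into_split();

    let stdin_to_tcp = async {
        let mut stdin = io::stdin();
        io::copy(&mut stdin, &mut tcp_write).await?;
        tcp_write.shutdown().await
    };
    let tcp_to_stdout = async {
        let mut stdout = io::stdout();
        io::copy(&mut tcp_read, &mut stdout).await
    };

    let (a, b) = tokio::join!(stdin_to_tcp, tcp_to_stdout);
    a?;
    b?;
    Ok(())
}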
Message Direction Summary
| Message | Direction | Purpose |
|---|---|---|
| _proxy/successor/request | Proxy→Conductor | Forward request downstream |
| _proxy/successor/notification | Proxy→Conductor | Forward notification downstream |
| _mcp/connect | Agent↔Component | Establish MCP connection |
| _mcp/disconnect | Agent↔Component | Close MCP connection |
| _mcp/request | Agent↔Component | Bidirectional MCP requests |
| _mcp/notification | Agent↔Component | Bidirectional MCP notifications |
Building on SACP
When building proxies, you use the sacp-proxy crate which provides:
- AcpProxyExt trait for handling successor messages
- JsonRpcCxExt trait for sending to the successor
- ProxyHandler for automatic proxy capability handshake
See Building a Proxy for implementation guide.
Building a Proxy
This chapter explains how to build a proxy component using the sacp-proxy crate.
Overview
A proxy component intercepts messages between editors and agents, transforming them or adding side effects. Proxies are built using the sacp-proxy framework.
Basic Structure
use sacp_proxy::{AcpProxyExt, JsonRpcCxExt, ProxyHandler};
use sacp::{JsonRpcConnection, JsonRpcHandler};

// Your proxy's main handler
struct MyProxy {
    // State fields
}

impl JsonRpcHandler for MyProxy {
    async fn handle_message(&mut self, message: MessageAndCx) -> Result<Handled> {
        match message {
            // Handle messages from upstream (editor direction)
            MessageAndCx::Request(req, cx) => {
                match req {
                    // Transform and forward
                    AcpRequest::Prompt(mut prompt) => {
                        // Modify the prompt
                        prompt.messages.insert(0, my_context);

                        // Forward to successor
                        cx.send_request_to_successor(prompt)
                            .await_when_result_received(|result| {
                                cx.respond_with_result(result)
                            })
                    }
                    // Other message types...
                }
            }
            MessageAndCx::Notification(notif, cx) => {
                // Handle notifications
            }
        }
    }
}
Key Traits
AcpProxyExt
Provides methods for handling messages from the successor:
use sacp_proxy::AcpProxyExt;

connection
    .on_receive_request_from_successor(|req, cx| async move {
        // Handle request from downstream component
    })
    .on_receive_notification_from_successor(|notif, cx| async move {
        // Handle notification from downstream
    })
    .proxy() // Enable automatic proxy capability handshake
JsonRpcCxExt
Provides methods for sending to successor:
use sacp_proxy::JsonRpcCxExt;

// Send request and handle response
cx.send_request_to_successor(request)
    .await_when_result_received(|result| {
        cx.respond_with_result(result)
    })

// Send notification (fire and forget)
cx.send_notification_to_successor(notification)
Proxy Patterns
Pass-through Proxy
The simplest proxy forwards everything unchanged:
impl JsonRpcHandler for PassThrough {
    async fn handle_message(&mut self, message: MessageAndCx) -> Result<Handled> {
        match message {
            MessageAndCx::Request(req, cx) => {
                cx.send_request_to_successor(req)
                    .await_when_result_received(|r| cx.respond_with_result(r))
            }
            MessageAndCx::Notification(notif, cx) => {
                cx.send_notification_to_successor(notif)
            }
        }
    }
}
Initialization Injection
Inject context or configuration during initialization:
MessageAndCx::Request(AcpRequest::Initialize(mut init), cx) => {
    // Add your capabilities
    init.capabilities.my_feature = true;

    cx.send_request_to_successor(init)
        .await_when_result_received(|result| {
            cx.respond_with_result(result)
        })
}
Prompt Transformation
Modify prompts before they reach the agent:
MessageAndCx::Request(AcpRequest::Prompt(mut prompt), cx) => {
    // Prepend system context
    let context_message = Message {
        role: Role::User,
        content: vec![Content::Text { text: context }],
    };
    prompt.messages.insert(0, context_message);

    cx.send_request_to_successor(prompt)
        .await_when_result_received(|result| {
            cx.respond_with_result(result)
        })
}
MCP Server Provider
Provide MCP servers to the agent:
use sacp_proxy::AcpProxyExt;

connection
    .provide_mcp(my_mcp_server_uuid, my_mcp_handler)
    .proxy()
See the Protocol Reference for details on the MCP-over-ACP protocol.
Complete Example
For a complete example of a production proxy, see the sparkle-acp-proxy implementation.
Next Steps
- See Protocol Reference for message format details
- Read the sacp-proxy crate documentation for API details
- Study the sparkle-acp-proxy implementation for patterns
Building an Agent
This chapter explains how to build an ACP agent using the sacp crate.
Overview
An agent is the final component in a SACP proxy chain. It provides the base AI model behavior and doesn't need awareness of SACP - it's just a standard ACP agent.
However, the sacp crate provides useful types and utilities for building ACP agents.
Core Types
The sacp crate provides Rust types for ACP protocol messages:
use sacp::{
    InitializeRequest, InitializeResponse,
    PromptRequest, PromptResponse,
    // ... other ACP types
};
These types handle:
- Serialization/deserialization
- Protocol validation
- Type safety for message handling
JSON-RPC Foundation
The sacp crate includes a JSON-RPC layer that handles:
- Message framing over stdio or other transports
- Request/response correlation
- Notification handling
- Error propagation
use sacp::{JsonRpcConnection, JsonRpcHandler};

// Create a connection over stdio
let connection = JsonRpcConnection::new(stdin(), stdout(), my_handler);

// Run the message loop
connection.run().await?;
Handler Pattern
Implement JsonRpcHandler to process ACP messages:
use sacp::{JsonRpcHandler, MessageAndCx, Handled};

struct MyAgent {
    // Agent state
}

impl JsonRpcHandler for MyAgent {
    async fn handle_message(&mut self, message: MessageAndCx) -> Result<Handled> {
        match message {
            MessageAndCx::Request(req, cx) => {
                match req {
                    AcpRequest::Initialize(init) => {
                        // Handle initialization
                        let response = InitializeResponse {
                            protocolVersion: "0.7.0",
                            serverInfo: ServerInfo { /* ... */ },
                            capabilities: Capabilities { /* ... */ },
                        };
                        cx.respond(response)?;
                        Ok(Handled::FullyHandled)
                    }
                    AcpRequest::Prompt(prompt) => {
                        // Call your AI model
                        let response = self.generate_response(prompt).await?;
                        cx.respond(response)?;
                        Ok(Handled::FullyHandled)
                    }
                    // ... other message types
                }
            }
            MessageAndCx::Notification(notif, cx) => {
                // Handle notifications
            }
        }
    }
}
Working with Proxies
Your agent doesn't need to know about SACP proxies. However, there are some optional capabilities that improve proxy integration:
MCP-over-ACP Support
If your agent can handle MCP servers declared with acp:UUID URLs, advertise the capability:
InitializeResponse {
    // ...
    _meta: json!({
        "mcp_acp_transport": true
    }),
}
This allows the conductor to skip bridging and pass MCP declarations through directly.
Without this capability, the conductor will automatically bridge MCP-over-ACP to stdio for you.
Testing
The sacp crate provides test utilities:
#[cfg(test)]
mod tests {
    use sacp::testing::*;

    #[test]
    fn test_prompt_handling() {
        let agent = MyAgent::new();
        let response = agent.handle_prompt(test_prompt()).await?;
        assert_eq!(response.role, Role::Assistant);
    }
}
Standard ACP Implementation
Remember: An agent built with sacp is a standard ACP agent. It will work:
- Directly with ACP editors (Zed, Claude Code, etc.)
- As the final component in a SACP proxy chain
- With any ACP-compatible tooling
The sacp crate just provides convenient Rust types and infrastructure.
Next Steps
- See Protocol Reference for message format details
- Read the sacp crate documentation for API details
- Check the ACP specification for protocol details
Composable Agents via P/ACP (Proxying ACP)
Elevator pitch
What are you proposing to change?
We propose to prototype P/ACP (Proxying ACP), an extension to Zed's Agent Client Protocol (ACP) that enables composable agent architectures through proxy chains. Instead of building monolithic AI tools, P/ACP allows developers to create modular components that can intercept and transform messages flowing between editors and agents.
This RFD builds on the concepts introduced in SymmACP: extending Zed's ACP to support Composable Agents, with the protocol renamed to P/ACP for this implementation.
Key changes:
- Define a proxy chain architecture where components can transform ACP messages
- Create an orchestrator (Conductor) that manages the proxy chain and presents as a normal ACP agent to editors
- Establish the _proxy/successor/* protocol for proxies to communicate with downstream components
- Enable composition without requiring editors to understand P/ACP internals
Status quo
How do things work today and what problems does this cause? Why would we change things?
Today's AI agent ecosystem is dominated by monolithic agents. We want people to be able to combine independent components to build custom agents targeting their specific needs. We want them to be able to use these with whatever editors and tooling they have. This is aligned with Symposium's core values of openness, interoperability, and extensibility.
Motivating Example: Sparkle Integration
Consider integrating Sparkle (a collaborative AI framework) into a coding session with Zed and Claude. Sparkle provides an MCP server with tools, but requires an initialization sequence to load patterns and set up collaborative context.
Without P/ACP:
- Users must manually run the initialization sequence each session
- Or use agent-specific hooks (Claude Code has them, but not standardized across agents)
- Or modify the agent to handle initialization automatically
- Result: Manual intervention required, agent-specific configuration, no generic solution
With P/ACP:
flowchart LR
Editor[Editor<br/>Zed]
subgraph Conductor[Conductor Orchestrator]
Sparkle[Sparkle Component]
Agent[Base Agent]
MCP[Sparkle MCP Server]
Sparkle -->|proxy chain| Agent
Sparkle -.->|provides tools| MCP
end
Editor <-->|ACP| Conductor
The Sparkle component:
- Injects the Sparkle MCP server into the agent's tool list during initialize
- Intercepts the first prompt and prepends the Sparkle embodiment sequence
- Passes all other messages through transparently
From the editor's perspective, it talks to a normal ACP agent. From the base agent's perspective, it has Sparkle tools available. No code changes required on either side.
This demonstrates P/ACP's core value: adding capabilities through composition rather than modification.
What we propose to do about it
What are you proposing to improve the situation?
We will develop an extension to ACP called P/ACP (Proxying ACP).
The heart of P/ACP is a proxy chain where each component adds specific capabilities:
flowchart LR
Editor[ACP Editor]
subgraph Orchestrator[P/ACP Orchestrator]
O[Orchestrator Process]
end
subgraph ProxyChain[Proxy Chain - managed by orchestrator]
P1[Proxy 1]
P2[Proxy 2]
Agent[ACP Agent]
P1 -->|_proxy/successor/*| P2
P2 -->|_proxy/successor/*| Agent
end
Editor <-->|ACP| O
O <-->|routes messages| ProxyChain
P/ACP defines the following actors:
- Editors spawn the orchestrator and communicate via standard ACP
- Orchestrator manages the proxy chain, appears as a normal ACP agent to editors
- Proxies intercept and transform messages, communicating with downstream components via the _proxy/successor/* protocol
- Agents provide base AI model behavior using standard ACP
The orchestrator handles message routing, making the proxy chain transparent to editors. Proxies can transform requests, responses, or add side-effects without editors or agents needing P/ACP awareness.
The Orchestrator: Conductor
P/ACP's orchestrator is called the Conductor (binary name: conductor). The conductor has three core responsibilities:
- Process Management - Creates and manages component processes based on command-line configuration
- Message Routing - Routes messages between editor, components, and agent through the proxy chain
- Capability Adaptation - Observes component capabilities and adapts between them
Key adaptation: MCP Bridge
- If the agent supports mcp_acp_transport, the conductor passes MCP servers with ACP transport through unchanged
- If not, the conductor spawns conductor mcp $port processes to bridge between stdio (MCP) and ACP messages
- Components can provide MCP servers without requiring agent modifications
- See "MCP Bridge" section in Implementation Details for full protocol
Other adaptations include session pre-population, streaming support, content types, and tool formats.
From the editor's perspective, it spawns one conductor process and communicates using normal ACP over stdio. The editor doesn't know about the proxy chain.
Command-line usage:
# Agent mode - manages proxy chain
conductor agent sparkle-acp claude-code-acp
# MCP mode - bridges stdio to TCP for MCP-over-ACP
conductor mcp 54321
To editors, the conductor is a normal ACP agent - no special capabilities are advertised upstream.
Proxy Capability Handshake:
The conductor uses a two-way capability handshake to verify that proxy components can fulfill their role:
- Conductor offers proxy capability: When initializing non-last components (proxies), the conductor includes "proxy": true in the _meta field of the InitializeRequest
- Component accepts proxy capability: The component must respond with "proxy": true in the _meta field of its InitializeResponse
- Last component (agent): The final component is treated as a standard ACP agent and does NOT receive the proxy capability offer
Why a two-way handshake? The proxy capability is an active protocol - it requires the component to handle _proxy/successor/* messages and route communications appropriately. Unlike passive capabilities (like "http" or "sse") which are just declarations, proxy components must actively participate in message routing. If a component doesn't respond with the proxy capability, the conductor fails initialization with an error like "component X is not a proxy", since that component cannot fulfill its required role in the chain.
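As a sketch of both sides of the handshake in code (using serde_json for brevity; the nested symposium block matches the capability format defined later in this RFD):

use serde_json::{json, Value};

// Build the _meta block the conductor attaches to the InitializeRequest
// it sends to every non-last component (the proxy capability offer).
fn proxy_offer_meta() -> Value {
    json!({
        "symposium": {
            "version": "1.0",
            "proxy": true
        }
    })
}

// Check whether a component's InitializeResponse _meta accepted the offer.
fn accepted_proxy(meta: &Value) -> bool {
    meta.pointer("/symposium/proxy")
        .and_then(Value::as_bool)
        .unwrap_or(false)
}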
Shiny future
How will things play out once this feature exists?
Composable Agent Ecosystems
P/ACP enables a marketplace of reusable proxy components. Developers can:
- Compose custom agent pipelines from independently-developed proxies
- Share proxies across different editors and agents
- Test and debug proxies in isolation
- Mix community-developed and custom proxies
Simplified Agent Development
Agent developers can focus on core model behavior without implementing cross-cutting concerns:
- Logging, metrics, and observability become proxy responsibilities
- Rate limiting and caching handled externally
- Content filtering and safety policies applied consistently
Editor Simplicity
Editors gain enhanced functionality without custom integrations:
- Add sophisticated agent behaviors by changing proxy chain configuration
- Support new agent features without editor updates
- Maintain compatibility with any ACP agent
Standardization Path
As the ecosystem matures, successful patterns may be:
- Standardized in ACP specification itself
- Adopted by other agent protocols
- Used as reference implementations for proxy architectures
Implemented Extensions
MCP Bridge - ✅ Implemented via the _mcp/* protocol (see "Implementation details and plan" section). Components can provide MCP servers using ACP transport, enabling tool provision without agents needing P/ACP awareness. The conductor bridges between agents lacking native support and components.
Future Protocol Extensions
Extensions under consideration for future development:
Agent-Initiated Messages - Allow components to send messages after the agent has sent end-turn, outside the normal request-response cycle. Use cases include background task completion notifications, time-based reminders, or autonomous checkpoint creation.
Session Pre-Population - Create sessions with existing conversation history. Conductor adapts based on agent capabilities: uses native support if available, otherwise synthesizes a dummy prompt containing the history, intercepts the response, and starts the real session.
Rich Content Types - Extend content types beyond text to include HTML panels, interactive GUI components, or other structured formats. Components can transform between content types based on what downstream agents support.
Implementation details and plan
Tell me more about your implementation. What is your detailed implementation plan?
The implementation focuses on building the Conductor and demonstrating the Sparkle integration use case.
P/ACP protocol
Definition: Editor vs Agent of a proxy
For a P/ACP proxy, the "editor" is defined as the upstream connection and the "agent" is the downstream connection.
flowchart LR
Editor --> Proxy --> Agent
P/ACP editor capabilities
A P/ACP-aware editor provides the following capability during ACP initialization:
/// Including the symposium section *at all* means that the editor
/// supports symposium proxy initialization.
"_meta": {
"symposium": {
"version": "1.0",
"html_panel": true, // or false, if this is the ToEditor proxy
"file_comment": true, // or false, if this is the ToEditor proxy
}
}
P/ACP proxies forward the capabilities they receive from their editor.
P/ACP component capabilities
P/ACP uses capabilities in the _meta field for the proxy handshake:
Proxy capability (two-way handshake):
The conductor offers the proxy capability to non-last components in InitializeRequest:
// InitializeRequest from conductor to proxy component
"_meta": {
"symposium": {
"version": "1.0",
"proxy": true
}
}
The component must accept by responding with the proxy capability in InitializeResponse:
// InitializeResponse from proxy component to conductor
"_meta": {
"symposium": {
"version": "1.0",
"proxy": true
}
}
If a component that was offered the proxy capability does not respond with it, the conductor fails initialization.
Agent capability: The last component in the chain (the agent) is NOT offered the proxy capability and does not need to respond with it. Agents are just normal ACP agents with no P/ACP awareness required.
The _proxy/successor/{send,receive} protocol
Proxies communicate with their downstream component (next proxy or agent) through special extension messages handled by the orchestrator:
_proxy/successor/send/request - Proxy wants to send a request downstream:
{
"method": "_proxy/successor/send/request",
"params": {
"message": <ACP_REQUEST>
}
}
_proxy/successor/send/notification - Proxy wants to send a notification downstream:
{
"method": "_proxy/successor/send/notification",
"params": {
"message": <ACP_NOTIFICATION>
}
}
_proxy/successor/receive/request - Orchestrator delivers a request from downstream:
{
"method": "_proxy/successor/receive/request",
"params": {
"message": <ACP_REQUEST>
}
}
_proxy/successor/receive/notification - Orchestrator delivers a notification from downstream:
{
"method": "_proxy/successor/receive/notification",
"params": {
"message": <ACP_NOTIFICATION>
}
}
Message flow example:
- Editor sends an ACP prompt request to the orchestrator
- Orchestrator forwards it to Proxy1 as a normal ACP message
- Proxy1 transforms it and sends _proxy/successor/send/request { message: <modified_prompt> }
- Orchestrator routes that to Proxy2 as a normal ACP prompt
- It eventually reaches the agent, and the response flows back through the chain
- Orchestrator wraps responses going upstream appropriately
Transparent proxy pattern: A pass-through proxy is trivial - just forward everything:
match message {
    // Forward requests from editor to successor
    AcpRequest(req) => send_to_successor_request(req),
    // Forward notifications from editor to successor
    AcpNotification(notif) => send_to_successor_notification(notif),
    // Forward from successor back to editor
    ExtRequest("_proxy/successor/receive/request", msg) => respond_to_editor(msg),
    ExtNotification("_proxy/successor/receive/notification", msg) => forward_to_editor(msg),
}
The MCP Bridge: _mcp/* Protocol
P/ACP enables components to provide MCP servers that communicate over ACP messages rather than traditional stdio. This allows components to handle MCP tool calls without agents needing special P/ACP awareness.
MCP Server Declaration with ACP Transport
Components declare MCP servers with ACP transport by using the HTTP MCP server format with a special URL scheme:
{
"tools": {
"mcpServers": {
"sparkle": {
"transport": "http",
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"headers": {}
}
}
}
}
The acp:$UUID URL signals ACP transport. The component generates the UUID to identify which component handles calls to this MCP server.
Agent Capability: mcp_acp_transport
Agents that natively support MCP-over-ACP declare this capability:
{
"_meta": {
"mcp_acp_transport": true
}
}
Conductor behavior:
- If the final agent has mcp_acp_transport: true, the conductor passes MCP server declarations through unchanged
- If the final agent lacks this capability, the conductor performs bridging adaptation:
  - Binds a fresh TCP port (e.g., localhost:54321)
  - Transforms the MCP server declaration to use conductor mcp $port as the command
  - Spawns conductor mcp $port, which connects back via TCP and bridges to ACP messages
- Always advertises mcp_acp_transport: true to intermediate components
Bridging Transformation Example
Original MCP server spec (from component):
{
"sparkle": {
"transport": "http",
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"headers": {}
}
}
Transformed spec (passed to agent without mcp_acp_transport):
{
"sparkle": {
"command": "conductor",
"args": ["mcp", "54321"],
"transport": "stdio"
}
}
The agent thinks it's talking to a normal MCP server over stdio. The conductor mcp process bridges between stdio (MCP JSON-RPC) and TCP (connection to main conductor), which then translates to ACP _mcp/* messages.
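A sketch of that spec rewrite as a helper function (hypothetical; the real conductor also records the UUID → TCP port → bridge-process mapping described in the implementation checklist below):

use serde_json::{json, Value};

// Rewrite one MCP server entry from ACP transport to a stdio command
// that runs the conductor's bridge mode on the chosen port.
fn bridge_mcp_server(port: u16) -> Value {
    json!({
        "command": "conductor",
        "args": ["mcp", port.to_string()],
        "transport": "stdio"
    })
}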
MCP Message Flow Protocol
When MCP tool calls occur, they flow as ACP extension messages:
_mcp/client_to_server/request - Agent calling an MCP tool (flows backward up chain):
{
"jsonrpc": "2.0",
"id": "T1",
"method": "_mcp/client_to_server/request",
"params": {
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"message": {
"jsonrpc": "2.0",
"id": "mcp-123",
"method": "tools/call",
"params": {
"name": "embody_sparkle",
"arguments": {}
}
}
}
}
Response:
{
"jsonrpc": "2.0",
"id": "T1",
"result": {
"message": {
"jsonrpc": "2.0",
"id": "mcp-123",
"result": {
"content": [
{"type": "text", "text": "Embodiment complete"}
]
}
}
}
}
_mcp/client_to_server/notification - Agent sending notification to MCP server:
{
"jsonrpc": "2.0",
"method": "_mcp/client_to_server/notification",
"params": {
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"message": {
"jsonrpc": "2.0",
"method": "notifications/cancelled",
"params": {}
}
}
}
_mcp/server_to_client/request - MCP server calling back to agent (flows forward down chain):
{
"jsonrpc": "2.0",
"id": "S1",
"method": "_mcp/server_to_client/request",
"params": {
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"message": {
"jsonrpc": "2.0",
"id": "mcp-456",
"method": "sampling/createMessage",
"params": {
"messages": [...],
"modelPreferences": {...}
}
}
}
}
_mcp/server_to_client/notification - MCP server sending notification to agent:
{
"jsonrpc": "2.0",
"method": "_mcp/server_to_client/notification",
"params": {
"url": "acp:550e8400-e29b-41d4-a716-446655440000",
"message": {
"jsonrpc": "2.0",
"method": "notifications/progress",
"params": {
"progressToken": "token-1",
"progress": 50,
"total": 100
}
}
}
}
Message Routing
Client→Server messages (agent calling MCP tools):
- Flow backward up the proxy chain (agent → conductor → components)
- Component matches on params.url to identify which MCP server
- Component extracts params.message, handles the MCP call, and responds
Server→Client messages (MCP server callbacks):
- Flow forward down the proxy chain (component → conductor → agent)
- Component initiates when its MCP server needs to call back (sampling, logging, progress)
- Conductor routes to agent (or via bridge if needed)
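On the component side, the dispatch described above can be sketched as a simple URL match (hypothetical helper, independent of the sacp-proxy API):

use serde_json::Value;

// A component that registered one or more acp:UUID MCP servers decides,
// for each incoming _mcp/client_to_server/request, whether it is the target.
fn handles_mcp_request(my_urls: &[String], params: &Value) -> bool {
    params
        .get("url")
        .and_then(Value::as_str)
        .map(|url| my_urls.iter().any(|mine| mine == url))
        .unwrap_or(false)
}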
Conductor MCP Mode
The conductor binary has two modes:
- Agent mode: conductor agent [proxies...] agent
  - Manages the P/ACP proxy chain
  - Routes ACP messages
- MCP mode: conductor mcp $port
  - Acts as an MCP server over stdio
  - Connects to localhost:$port via TCP
  - Bridges MCP JSON-RPC (stdio) ↔ raw JSON-RPC (TCP to main conductor)
When bridging is needed, the main conductor spawns conductor mcp $port as the child process that the agent communicates with via stdio.
Additional Extension Messages
Proxies can define their own extension messages beyond _proxy/successor/* to provide specific capabilities. Examples might include:
- Logging/observability: _proxy/log messages for structured logging
- Metrics: _proxy/metric messages for tracking usage
- Configuration: _proxy/config messages for dynamic reconfiguration
The orchestrator can handle routing these messages appropriately, or they can be handled by specific proxies in the chain.
These extensions are beyond the scope of this initial RFD and will be defined as needed by specific proxy implementations.
Implementation progress
What is the current status of implementation and what are the next steps?
Current Status: Implementation Phase
Completed:
- ✅ P/ACP protocol design with Conductor orchestrator architecture
- ✅ _proxy/successor/{send,receive} message protocol defined
- ✅ scp Rust crate with JSON-RPC layer and ACP message types
- ✅ Comprehensive JSON-RPC test suite (21 tests)
- ✅ Proxy message type definitions (ToSuccessorRequest, etc.)
In Progress:
- Conductor orchestrator implementation
- Sparkle P/ACP component
- MCP Bridge implementation (see checklist below)
MCP Bridge Implementation Checklist
Phase 1: Conductor MCP Mode (COMPLETE ✅)
- Implement conductor mcp $port CLI parsing
- TCP connection to localhost:$port
- Stdio → TCP bridging (read from stdin, send via TCP)
- TCP → Stdio bridging (read from TCP, write to stdout)
- Newline-delimited JSON framing
- Error handling (connection failures, parse errors, reconnection logic)
- Unit tests for message bridging
- Integration test: standalone MCP bridge with mock MCP client/server
Phase 2: Conductor Agent Mode - MCP Detection & Bridging
- Detect "transport": "http", "url": "acp:$UUID" MCP servers in initialization
- Check the final agent for the mcp_acp_transport capability
- Bind ephemeral TCP ports when bridging is needed
- Transform MCP server specs to use conductor mcp $port
- Spawn a conductor mcp $port subprocess per ACP-transport MCP server
- Store mapping: UUID → TCP port → bridge process
- Always advertise mcp_acp_transport: true to intermediate components
- Integration test: full chain with MCP bridging
Phase 3: _mcp/* Message Routing
- Route _mcp/client_to_server/request (TCP → ACP, backward up the chain)
- Route _mcp/client_to_server/notification (TCP → ACP, backward)
- Route _mcp/server_to_client/request (ACP → TCP, forward down the chain)
- Route _mcp/server_to_client/notification (ACP → TCP, forward)
- URL matching for component routing (params.url matches UUID)
- Response routing back through the bridge
- Integration test: full _mcp/* message flow
Phase 4: Bridge Lifecycle Management
- Clean up bridge processes on session end
- Handle bridge process crashes
- Handle component crashes (clean up associated bridges)
- TCP connection cleanup on errors
- Port cleanup and reuse
Phase 5: Component-Side MCP Integration
- Sparkle component declares ACP-transport MCP server
- Sparkle handles _mcp/client_to_server/* messages
- Sparkle initiates _mcp/server_to_client/* callbacks
- End-to-end test: Sparkle embodiment via MCP bridge
Phase 1: Minimal Sparkle Demo
Goal: Demonstrate Sparkle integration through P/ACP composition.
Components:
- Conductor orchestrator - Process management, message routing, capability adaptation
- Sparkle P/ACP component - Injects Sparkle MCP server, handles embodiment sequence
- Integration test - Validates end-to-end flow with mock editor/agent
Demo flow:
Zed → Conductor → Sparkle Component → Claude
↓
Sparkle MCP Server
Success criteria:
- Sparkle MCP server appears in agent's tool list
- First prompt triggers Sparkle embodiment sequence
- Subsequent prompts work normally
- All other ACP messages pass through unchanged
Detailed MVP Walkthrough
This section shows the exact message flows for the minimal Sparkle demo.
Understanding UUIDs in the flow:
There are two distinct types of UUIDs in these sequences:
- Message IDs (JSON-RPC request IDs): These identify individual JSON-RPC requests and must be tracked to route responses correctly. When a component forwards a message using _proxy/successor/request, it creates a fresh message ID for the downstream request and remembers the mapping to route the response back.
- Session IDs (ACP session identifiers): These identify ACP sessions and flow through the chain unchanged. The agent creates a session ID, and all components pass it back unmodified.
- Message from Editor → Forward "as is" to first component (same message ID)
_proxy/successor/requestfrom component → Unwrap payload and send to next component (using message ID from the wrapper)- Response from downstream → Send back to whoever made the
_proxyrequest - First component's response → Send back to Editor
Components don't talk directly to each other - all communication flows through Conductor via the _proxy protocol.
Scenario 1: Initialization and Session Creation
The editor spawns Conductor with component names, Conductor spawns the components, and initialization flows through the chain.
sequenceDiagram
participant Editor as Editor<br/>(Zed)
participant Conductor as Conductor<br/>Orchestrator
participant Sparkle as Sparkle<br/>Component
participant Agent as Base<br/>Agent
Note over Editor: Spawns Conductor with args:<br/>"sparkle-acp agent-acp"
Editor->>Conductor: spawn process
activate Conductor
Note over Conductor: Spawns both components
Conductor->>Sparkle: spawn "sparkle-acp"
activate Sparkle
Conductor->>Agent: spawn "agent-acp"
activate Agent
Note over Editor,Agent: === Initialization Phase ===
Editor->>Conductor: initialize (id: I0)
Conductor->>Sparkle: initialize (id: I0)<br/>(offers PROXY capability)
Note over Sparkle: Sees proxy capability offer,<br/>initializes successor
Sparkle->>Conductor: _proxy/successor/request (id: I1)<br/>payload: initialize
Conductor->>Agent: initialize (id: I1)<br/>(NO proxy capability - agent is last)
Agent-->>Conductor: initialize response (id: I1)
Conductor-->>Sparkle: _proxy/successor response (id: I1)
Note over Sparkle: Sees Agent capabilities,<br/>prepares response
Sparkle-->>Conductor: initialize response (id: I0)<br/>(accepts PROXY capability)
Note over Conductor: Verifies Sparkle accepted proxy.<br/>If not, would fail with error.
Conductor-->>Editor: initialize response (id: I0)
Note over Editor,Agent: === Session Creation ===
Editor->>Conductor: session/new (id: U0, tools: M0)
Conductor->>Sparkle: session/new (id: U0, tools: M0)
Note over Sparkle: Wants to inject Sparkle MCP server
Sparkle->>Conductor: _proxy/successor/request (id: U1)<br/>payload: session/new with tools (M0, sparkle-mcp)
Conductor->>Agent: session/new (id: U1, tools: M0 + sparkle-mcp)
Agent-->>Conductor: response (id: U1, sessionId: S1)
Conductor-->>Sparkle: response to _proxy request (id: U1, sessionId: S1)
Note over Sparkle: Remembers mapping U0 → U1
Sparkle-->>Conductor: response (id: U0, sessionId: S1)
Conductor-->>Editor: response (id: U0, sessionId: S1)
Note over Editor,Agent: Session S1 created,<br/>Sparkle MCP server available to agent
Key messages:
-
Editor → Conductor: initialize (id: I0)
{ "jsonrpc": "2.0", "id": "I0", "method": "initialize", "params": { "protocolVersion": "0.1.0", "capabilities": {}, "clientInfo": {"name": "Zed", "version": "0.1.0"} } } -
Conductor → Sparkle: initialize (id: I0, with PROXY capability)
{ "jsonrpc": "2.0", "id": "I0", "method": "initialize", "params": { "protocolVersion": "0.1.0", "capabilities": { "_meta": { "symposium": { "version": "1.0", "proxy": true } } }, "clientInfo": {"name": "Conductor", "version": "0.1.0"} } } -
Sparkle → Conductor: _proxy/successor/request (id: I1, wrapping initialize)
{ "jsonrpc": "2.0", "id": "I1", "method": "_proxy/successor/request", "params": { "message": { "method": "initialize", "params": { "protocolVersion": "0.1.0", "capabilities": {}, "clientInfo": {"name": "Sparkle", "version": "0.1.0"} } } } } -
Conductor → Agent: initialize (id: I1, unwrapped, without PROXY capability)
{ "jsonrpc": "2.0", "id": "I1", "method": "initialize", "params": { "protocolVersion": "0.1.0", "capabilities": {}, "clientInfo": {"name": "Sparkle", "version": "0.1.0"} } } -
Agent → Conductor: initialize response (id: I1)
{ "jsonrpc": "2.0", "id": "I1", "result": { "protocolVersion": "0.1.0", "capabilities": {}, "serverInfo": {"name": "claude-code-acp", "version": "0.1.0"} } } -
Conductor → Sparkle: _proxy/successor response (id: I1, wrapping Agent's response)
{ "jsonrpc": "2.0", "id": "I1", "result": { "protocolVersion": "0.1.0", "capabilities": {}, "serverInfo": {"name": "claude-code-acp", "version": "0.1.0"} } } -
Sparkle → Conductor: initialize response (id: I0, accepting proxy capability)
{ "jsonrpc": "2.0", "id": "I0", "result": { "protocolVersion": "0.1.0", "capabilities": { "_meta": { "symposium": { "version": "1.0", "proxy": true } } }, "serverInfo": {"name": "Sparkle + claude-code-acp", "version": "0.1.0"} } }Note: Sparkle MUST include
"proxy": truein its response since it was offered the proxy capability. If this field is missing, Conductor will fail initialization with an error. -
Editor → Conductor: session/new (id: U0)
{ "jsonrpc": "2.0", "id": "U0", "method": "session/new", "params": { "tools": { "mcpServers": { "filesystem": {"command": "mcp-filesystem", "args": []} } } } } -
Conductor → Sparkle: session/new (id: U0, forwarded as-is)
{ "jsonrpc": "2.0", "id": "U0", "method": "session/new", "params": { "tools": { "mcpServers": { "filesystem": {"command": "mcp-filesystem", "args": []} } } } } -
Sparkle → Conductor: _proxy/successor/request (id: U1, with injected Sparkle MCP)
{
"jsonrpc": "2.0",
"id": "U1",
"method": "_proxy/successor/request",
"params": {
"message": {
"method": "session/new",
"params": {
"tools": {
"mcpServers": {
"filesystem": {"command": "mcp-filesystem", "args": []},
"sparkle": {"command": "sparkle-mcp", "args": []}
}
}
}
}
}
}
- Conductor → Agent: session/new (id: U1, unwrapped from _proxy message)
{
"jsonrpc": "2.0",
"id": "U1",
"method": "session/new",
"params": {
"tools": {
"mcpServers": {
"filesystem": {"command": "mcp-filesystem", "args": []},
"sparkle": {"command": "sparkle-mcp", "args": []}
}
}
}
}
- Agent → Conductor: response (id: U1, with new session S1)
{
"jsonrpc": "2.0",
"id": "U1",
"result": {
"sessionId": "S1",
"serverInfo": {"name": "claude-code-acp", "version": "0.1.0"}
}
}
- Conductor → Sparkle: _proxy/successor response (id: U1)
{
"jsonrpc": "2.0",
"id": "U1",
"result": {
"sessionId": "S1",
"serverInfo": {"name": "claude-code-acp", "version": "0.1.0"}
}
}
- Sparkle → Conductor: response (id: U0, with session S1)
{
"jsonrpc": "2.0",
"id": "U0",
"result": {
"sessionId": "S1",
"serverInfo": {"name": "Conductor + Sparkle", "version": "0.1.0"}
}
}
Scenario 2: First Prompt (Sparkle Embodiment)
When the first prompt arrives, Sparkle intercepts it and runs the embodiment sequence before forwarding the actual user prompt.
sequenceDiagram
participant Editor as Editor<br/>(Zed)
participant Conductor as Conductor<br/>Orchestrator
participant Sparkle as Sparkle<br/>Component
participant Agent as Base<br/>Agent
Note over Editor,Agent: === First Prompt Flow ===
Editor->>Conductor: session/prompt (id: P0, sessionId: S1)
Conductor->>Sparkle: session/prompt (id: P0, sessionId: S1)
Note over Sparkle: First prompt detected!<br/>Run embodiment sequence first
Sparkle->>Conductor: _proxy/successor/request (id: P1)<br/>payload: session/prompt (embodiment)
Conductor->>Agent: session/prompt (id: P1, embodiment)
Agent-->>Conductor: response (id: P1, tool_use: embody_sparkle)
Conductor-->>Sparkle: response to _proxy request (id: P1)
Note over Sparkle: Embodiment complete,<br/>now send real prompt
Sparkle->>Conductor: _proxy/successor/request (id: P2)<br/>payload: session/prompt (user message)
Conductor->>Agent: session/prompt (id: P2, user message)
Agent-->>Conductor: response (id: P2, actual answer)
Conductor-->>Sparkle: response to _proxy request (id: P2)
Note over Sparkle: Maps P2 → P0
Sparkle-->>Conductor: response (id: P0, actual answer)
Conductor-->>Editor: response (id: P0, actual answer)
Note over Editor,Agent: User sees response,<br/>Sparkle initialized
Key messages:
- Editor → Conductor: session/prompt (id: P0, user's first message)
{
  "jsonrpc": "2.0",
  "id": "P0",
  "method": "session/prompt",
  "params": {
    "sessionId": "S1",
    "messages": [
      {"role": "user", "content": "Hello! Can you help me with my code?"}
    ]
  }
}
- Conductor → Sparkle: session/prompt (id: P0, forwarded as-is)
{
  "jsonrpc": "2.0",
  "id": "P0",
  "method": "session/prompt",
  "params": {
    "sessionId": "S1",
    "messages": [
      {"role": "user", "content": "Hello! Can you help me with my code?"}
    ]
  }
}
- Sparkle → Conductor: _proxy/successor/request (id: P1, embodiment sequence)
{
  "jsonrpc": "2.0",
  "id": "P1",
  "method": "_proxy/successor/request",
  "params": {
    "message": {
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Please use the embody_sparkle tool to load your collaborative patterns."}
        ]
      }
    }
  }
}
- Conductor → Agent: session/prompt (id: P1, unwrapped embodiment)
{
  "jsonrpc": "2.0",
  "id": "P1",
  "method": "session/prompt",
  "params": {
    "sessionId": "S1",
    "messages": [
      {"role": "user", "content": "Please use the embody_sparkle tool to load your collaborative patterns."}
    ]
  }
}
- Agent → Conductor: response (id: P1, embodiment tool call)
{
  "jsonrpc": "2.0",
  "id": "P1",
  "result": {
    "role": "assistant",
    "content": [
      {"type": "tool_use", "id": "tool-1", "name": "embody_sparkle", "input": {}}
    ]
  }
}
- Sparkle → Conductor: _proxy/successor/request (id: P2, actual user prompt)
{
  "jsonrpc": "2.0",
  "id": "P2",
  "method": "_proxy/successor/request",
  "params": {
    "message": {
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Hello! Can you help me with my code?"}
        ]
      }
    }
  }
}
- Conductor → Agent: session/prompt (id: P2, unwrapped user prompt)
{
  "jsonrpc": "2.0",
  "id": "P2",
  "method": "session/prompt",
  "params": {
    "sessionId": "S1",
    "messages": [
      {"role": "user", "content": "Hello! Can you help me with my code?"}
    ]
  }
}
- Sparkle → Conductor: response (id: P0, forwarded to editor)
{
  "jsonrpc": "2.0",
  "id": "P0",
  "result": {
    "role": "assistant",
    "content": "I'd be happy to help you with your code! What would you like to work on?"
  }
}
Scenario 3: Subsequent Prompts (Pass-Through)
After embodiment, Sparkle passes all messages through transparently.
sequenceDiagram
participant Editor as Editor<br/>(Zed)
participant Conductor as Conductor<br/>Orchestrator
participant Sparkle as Sparkle<br/>Component
participant Agent as Base<br/>Agent
Note over Editor,Agent: === Subsequent Prompt Flow ===
Editor->>Conductor: session/prompt (id: P3, sessionId: S1)
Conductor->>Sparkle: session/prompt (id: P3, sessionId: S1)
Note over Sparkle: Already embodied,<br/>pass through unchanged
Sparkle->>Conductor: _proxy/successor/request (id: P4)<br/>payload: session/prompt (unchanged)
Conductor->>Agent: session/prompt (id: P4, unchanged)
Agent-->>Conductor: response (id: P4)
Conductor-->>Sparkle: response to _proxy request (id: P4)
Note over Sparkle: Maps P4 → P3
Sparkle-->>Conductor: response (id: P3)
Conductor-->>Editor: response (id: P3)
Note over Editor,Agent: Normal ACP flow,<br/>Sparkle and Conductor transparent
Key messages:
- Editor → Conductor: session/prompt (id: P3)
{
  "jsonrpc": "2.0",
  "id": "P3",
  "method": "session/prompt",
  "params": {
    "sessionId": "S1",
    "messages": [
      {"role": "user", "content": "Can you refactor the authenticate function?"}
    ]
  }
}
- Sparkle → Conductor: _proxy/successor/request (id: P4, message unchanged)
{
  "jsonrpc": "2.0",
  "id": "P4",
  "method": "_proxy/successor/request",
  "params": {
    "message": {
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Can you refactor the authenticate function?"}
        ]
      }
    }
  }
}
- Conductor → Agent: session/prompt (id: P4, unwrapped)
{
  "jsonrpc": "2.0",
  "id": "P4",
  "method": "session/prompt",
  "params": {
    "sessionId": "S1",
    "messages": [
      {"role": "user", "content": "Can you refactor the authenticate function?"}
    ]
  }
}
- Sparkle → Conductor: response (id: P3, forwarded to editor)
{
  "jsonrpc": "2.0",
  "id": "P3",
  "result": {
    "role": "assistant",
    "content": "I'll help you refactor the authenticate function..."
  }
}
Note that even though Sparkle is passing messages through "transparently", it still uses the _proxy/successor/request protocol. This maintains the consistent routing pattern where all downstream communication flows through Conductor.
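To make that wrapping rule concrete, here is a minimal sketch of the transformation a proxy applies before handing any message to the conductor. The helper name and the use of serde_json are illustrative, not part of the sacp-proxy API:

use serde_json::{json, Value};

/// Wrap an arbitrary ACP message so the conductor delivers it to this
/// proxy's successor. `inner` is the original request (method + params);
/// `next_id` is a fresh JSON-RPC id chosen by the proxy (e.g. "P4").
fn wrap_for_successor(inner: Value, next_id: &str) -> Value {
    json!({
        "jsonrpc": "2.0",
        "id": next_id,
        "method": "_proxy/successor/request",
        "params": { "message": inner }
    })
}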
Implementation Note on Embodiment Responses:
For the MVP, when Sparkle runs the embodiment sequence before the user's actual prompt, it will buffer both responses and concatenate them before sending back to the editor. This makes the embodiment transparent but loses some structure. A future RFD will explore richer content types (like subconversation) that would allow editors to distinguish between nested exchanges and main responses.
Phase 2: Tool Interception (FUTURE)
Goal: Route MCP tool calls through the proxy chain.
Conductor registers as a dummy MCP server. When Claude calls a Sparkle tool, the call routes back through the proxy chain to the Sparkle component for handling. This enables richer component interactions without requiring agents to understand P/ACP.
Phase 3: Additional Components (FUTURE)
Build additional P/ACP components that demonstrate different use cases:
- Session history/context management
- Logging and observability
- Rate limiting
- Content filtering
These will validate the protocol design and inform refinements.
Testing Strategy
Unit tests:
- Test message serialization/deserialization
- Test process spawning logic
- Test stdio communication
Integration tests:
- Spawn real proxy chains
- Use actual ACP agents for end-to-end validation
- Test error handling and cleanup
Manual testing:
- Use with VSCode + ACP-aware agents
- Verify with different proxy configurations
- Test process management under various failure modes
Frequently asked questions
What questions have arisen over the course of authoring this document or during subsequent discussions?
What alternative approaches did you consider, and why did you settle on this one?
We considered extending MCP directly, but MCP is focused on tool provision rather than conversation flow control. We also looked at building everything as VSCode extensions, but that would lock us into a single editor ecosystem.
P/ACP's proxy chain approach provides the right balance of modularity and compatibility - components can be developed independently while still working together.
How does this relate to other agent protocols like Google's A2A?
P/ACP is complementary to protocols like A2A. While A2A focuses on agent-to-agent communication for remote services, P/ACP focuses on composing the user-facing development experience. You could imagine P/ACP components that use A2A internally to coordinate with remote agents.
What about security concerns with arbitrary proxy chains?
Users are responsible for the proxies they choose to run, similar to how they're responsible for the software they install. Proxies can intercept and modify all communication, so trust is essential. For future versions, we're considering approaches like Microsoft's Wassette (WASM-based capability restrictions) to provide sandboxed execution environments.
What about the chat GUI interface?
We currently have a minimal chat GUI working in VSCode that can exchange basic messages with ACP agents. However, a richer chat interface with features like message history, streaming support, context providers, and interactive elements remains TBD.
Continue.dev has solved many of the hard problems for production-quality chat interfaces in VS Code extensions. Their GUI is specifically designed to be reusable - they use the exact same codebase for both VS Code and JetBrains IDEs by implementing different adapter layers.
Their architecture proves that message-passing protocols can cleanly separate GUI concerns from backend logic, which aligns perfectly with P/ACP's composable design. When we're ready to enhance the chat interface, we can evaluate whether to build on Continue.dev's foundation or develop our own approach based on what we learn from the P/ACP proxy framework.
The Apache 2.0 license makes this legally straightforward, and their well-documented message protocols provide a clear integration path.
Why not just use hooks or plugins?
Hooks are fundamentally limited to what the host application anticipated. P/ACP proxies can intercept and modify the entire conversation flow, enabling innovations that the original tool designer never envisioned. This is the difference between customization and true composability.
What about performance implications of the proxy chain?
The proxy chain does add some latency as messages pass through multiple hops. However, we don't expect this to be noticeable for typical development workflows. Most interactions are human-paced rather than high-frequency, and the benefits of composability outweigh the minimal latency cost.
How will users discover and configure proxy chains?
This will be determined over time as the ecosystem develops. We expect solutions to emerge organically, potentially including registries, configuration files, or marketplace-style discovery mechanisms.
What about resource management with multiple proxy processes?
Each proxy manages the lifecycle of processes it starts. When a proxy terminates, it cleans up its downstream processes. This creates a natural cleanup chain that prevents resource leaks.
Revision history
Initial draft based on architectural discussions.
P/ACP Components
{{#rfd: proxying-acp}}
This section documents the components that implement the P/ACP (Proxying ACP) protocol for composable agent architectures.
Overview
P/ACP enables building modular agent systems by chaining components together. Each component can intercept and transform ACP messages flowing between editors and agents.
The key components are:
- Conductor: ACP Orchestrator - The orchestrator that manages the proxy chain and presents as a normal ACP agent to editors
- ProxyingAcpServer Trait (planned) - The trait/interface that makes writing proxy components easy
- Sparkle Component (planned) - Example component that injects Sparkle collaborative patterns
Architecture
flowchart LR
Editor[ACP Editor]
subgraph Conductor[Conductor Process]
F[Orchestrator]
end
subgraph Chain[Component Chain]
C1[Proxy Component 1]
C2[Proxy Component 2]
Agent[ACP Agent]
C1 -->|_proxy/successor/*| C2
C2 -->|_proxy/successor/*| Agent
end
Editor <-->|ACP| F
F <-->|manages| Chain
Key principles:
- Editor transparency: Editors see Conductor as a normal ACP agent—no special protocol awareness needed
- Component composition: Proxies can be mixed and matched without knowing about each other
- Capability negotiation: Each component controls what capabilities it advertises to its predecessor
- Simple forwarding: Default behavior is to forward messages unchanged; components only override what they need
Component Lifecycle
- Initialization: Editor sends acp/initialize to Conductor
- Chain setup: Conductor spawns the first component, which initializes its successor, etc.
- Capability negotiation: Capabilities flow back up the chain, each component adding its own
- Message routing: Messages flow down the chain, responses flow back up
- Shutdown: If any component exits, the entire chain shuts down
Related Documentation
- P/ACP RFD - Full protocol specification and motivation
Conductor: P/ACP Orchestrator
{{#rfd: proxying-acp}}
The Conductor (binary name: conductor) is the orchestrator for P/ACP proxy chains. It coordinates the flow of ACP messages through a chain of proxy components.
Overview
The conductor orchestrates proxy chains by sitting between every component. It spawns component processes and routes all messages, presenting itself as a normal ACP agent to the editor.
flowchart TB
Editor[Editor]
C[Conductor]
P1[Component 1]
P2[Component 2]
Editor <-->|ACP via stdio| C
C <-->|stdio| P1
C <-->|stdio| P2
Key insight: Components never talk directly to each other. The conductor routes ALL messages using the _proxy/successor/* protocol.
From the editor's perspective: Conductor is a normal ACP agent communicating over stdio.
From each component's perspective:
- Receives normal ACP messages from the conductor
- Sends _proxy/successor/request to the conductor to forward messages TO its successor
- Receives _proxy/successor/request from the conductor for messages FROM its successor
See Architecture Overview for detailed conceptual and actual message flows.
Responsibilities
The conductor has four core responsibilities:
1. Process Management
- Spawns component processes based on command-line arguments
- Manages component lifecycle (startup, shutdown, error handling)
- For MVP: If any component crashes, shut down the entire chain
Command-line interface:
# Agent mode - manages proxy chain
conductor agent sparkle-acp claude-code-acp
# MCP mode - bridges stdio to TCP for MCP-over-ACP
conductor mcp 54321
Agent mode creates a chain: Editor → Conductor → sparkle-acp → claude-code-acp
MCP mode bridges MCP JSON-RPC (stdio) to raw JSON-RPC (TCP connection to main conductor)
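As a sketch of this command-line surface, a clap-based parser might look like the following; the actual sacp-conductor argument handling may differ:

use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "conductor")]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Agent mode: spawn the listed components; all but the last are proxies,
    /// the last is the base ACP agent.
    Agent { components: Vec<String> },
    /// MCP mode: bridge this process's stdio to a TCP port on the main conductor.
    Mcp { port: u16 },
}

fn main() {
    match Cli::parse().command {
        Command::Agent { components } => {
            // e.g. ["sparkle-acp", "claude-code-acp"]
            println!("would spawn chain: {components:?}");
        }
        Command::Mcp { port } => {
            println!("would bridge stdio to 127.0.0.1:{port}");
        }
    }
}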
2. Message Routing
The conductor routes ALL messages between components. No component talks directly to another.
Message ordering: The conductor preserves message send order by routing all forwarding decisions through a central event loop, preventing responses from overtaking notifications.
Message flow types:
- Editor → First Component: Conductor forwards normal ACP messages
- Component → Successor: Component sends _proxy/successor/request to the conductor, which unwraps and forwards it to the next component
- Successor → Component: Conductor wraps messages in _proxy/successor/request when sending FROM the successor
- Responses: Flow back via standard JSON-RPC response IDs
See Architecture Overview for detailed request/response flow diagrams.
3. Capability Management
The conductor manages proxy capability handshakes during initialization:
Normal Mode (conductor as root):
- Offers proxy: true to all components EXCEPT the last
- Verifies each proxy component accepts the capability
- Last component (agent) receives standard ACP initialization
Proxy Mode (conductor as proxy):
- When the conductor itself receives proxy: true during initialization
- Offers proxy: true to ALL components (including the last)
- Enables tree-structured proxy chains
See Architecture Overview for detailed handshake flows and Proxy Mode below for hierarchical chain details.
4. MCP Bridge Adaptation
When components provide MCP servers with ACP transport ("url": "acp:$UUID"):
If agent has mcp_acp_transport capability:
- Pass through MCP server declarations unchanged
- Agent handles _mcp/* messages natively
If agent lacks mcp_acp_transport capability:
- Bind TCP port for each ACP-transport MCP server
- Transform MCP server spec to use conductor mcp $port
- Spawn conductor mcp $port bridge processes
- Route MCP tool calls:
  - Agent → stdio → bridge → TCP → conductor → _mcp/* messages backward up the chain
  - Component responses flow back: component → conductor → TCP → bridge → stdio → agent
See MCP Bridge for full implementation details.
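For illustration, a single server entry is rewritten roughly like this (server name, URL, and port are taken from the bridge walkthrough later in this chapter; the port is whatever ephemeral port the conductor bound):

Before (from proxy): { "name": "research-tools", "url": "acp:uuid-123" }
After (to agent):    { "name": "research-tools", "command": "conductor", "args": ["mcp", "54321"] }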
Proxy Mode
The conductor can itself operate as a proxy component within a larger chain, enabling tree-structured proxy architectures.
How Proxy Mode Works
When the conductor receives an initialize request with the proxy capability:
- Detection: Conductor detects it's being used as a proxy component
- All components become proxies: Offers proxy: true to ALL managed components (including the last)
- Successor forwarding: When the final component sends _proxy/successor/request, the conductor forwards it to its own successor
Example: Hierarchical Chain
client → proxy1 → conductor (proxy mode) → final-agent
                      ↓ manages
                  p1 → p2 → p3
Message flow when p3 forwards to successor:
- p3 sends _proxy/successor/request to the conductor
- Conductor recognizes it's in proxy mode
- Conductor sends _proxy/successor/request to proxy1 (its predecessor)
- proxy1 routes to final-agent
Use Cases
- Modular sub-chains: Group related proxies into a conductor-managed sub-chain that can be inserted anywhere
- Conditional routing: A proxy can route to conductor-based sub-chains based on request type
- Isolated environments: Each conductor manages its own component lifecycle while participating in larger chains
Implementation Notes
- Proxy mode is detected during initialization by checking for proxy: true in the incoming initialize request
- In normal mode: the last component is the agent (no proxy capability)
- In proxy mode: all components are proxies (all receive proxy capability)
- The conductor's own successor is determined by whoever initialized it
See Architecture Overview for conceptual diagrams.
Initialization Flow
sequenceDiagram
participant Editor
participant Conductor
participant Sparkle as Component1<br/>(Sparkle)
participant Agent as Component2<br/>(Agent)
Note over Conductor: Spawns both components at startup<br/>from CLI args
Editor->>Conductor: acp/initialize [I0]
Conductor->>Sparkle: acp/initialize (offers proxy capability) [I0]
Note over Sparkle: Sees proxy capability offer,<br/>knows it has a successor
Sparkle->>Conductor: _proxy/successor/request(acp/initialize) [I1]
Note over Conductor: Unwraps request,<br/>knows Agent is last in chain
Conductor->>Agent: acp/initialize (NO proxy capability - agent is last) [I1]
Agent-->>Conductor: initialize response (capabilities) [I1]
Conductor-->>Sparkle: _proxy/successor response [I1]
Note over Sparkle: Sees Agent's capabilities,<br/>prepares response
Sparkle-->>Conductor: initialize response (accepts proxy capability) [I0]
Note over Conductor: Verifies Sparkle accepted proxy.<br/>If not, would fail with error.
Conductor-->>Editor: initialize response [I0]
Key points:
- Conductor spawns ALL components at startup based on command-line args
- Sequential initialization: Conductor → Component1 → Component2 → ... → Agent
- Proxy capability handshake:
  - Conductor offers proxy: true to non-last components (in the InitializeRequest _meta)
  - Components must accept by responding with proxy: true (in the InitializeResponse _meta)
  - Last component (agent) is NOT offered the proxy capability
  - Conductor verifies acceptance and fails initialization if it is missing
- Components use _proxy/successor/request to initialize their successors
- Capabilities flow back up the chain: Each component sees its successor's capabilities before responding
- Message IDs: Preserved from editor (I0), new IDs for proxy messages (I1, I2, ...)
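As an illustration of the handshake offer described above, the conductor's initialize request to a non-last component might carry the capability like this; the exact key placement is a sketch, not normative wire format:

{
  "jsonrpc": "2.0",
  "id": "I0",
  "method": "acp/initialize",
  "params": {
    "_meta": { "proxy": true }
  }
}

and the component signals acceptance by echoing proxy: true in its InitializeResponse _meta.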
Implementation Architecture
The conductor uses an actor-based architecture with message passing via channels.
Core Components
- Main connection: Handles editor stdio and spawns the event loop
- Component connections: Each component has a bidirectional JSON-RPC connection
- Message router: Central actor that receives ConductorMessage enums and routes them appropriately
- MCP bridge actors: Manage MCP-over-ACP connections
Message Ordering Invariant
Critical invariant: All messages (requests, responses, notifications) between any two endpoints must maintain their send order.
The conductor ensures this invariant by routing all message forwarding through its central message queue (ConductorMessage channel). This prevents faster message types (responses) from overtaking slower ones (notifications).
Why This Matters
Without ordering preservation, a race condition can occur:
- Agent sends a session/update notification
- Agent responds to a session/prompt request
- Response takes a fast path (reply_actor with oneshot channels)
- Notification takes slower path (handler pipeline)
- Response arrives before notification → client loses notification data
Implementation
The conductor uses extension traits to route all forwarding through the central queue:
- JrConnectionCxExt::send_proxied_message_via - Routes both requests and notifications
- JrRequestCxExt::respond_via - Routes responses through the queue
- JrResponseExt::forward_response_via - Ensures response forwarding maintains order
All message forwarding in both directions (client-to-agent and agent-to-client) flows through the conductor's central event loop, which processes ConductorMessage enums sequentially. This serialization ensures messages arrive in the same order they were sent.
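A rough sketch of this pattern follows; the type and variant names are illustrative, not the actual sacp-conductor definitions:

use tokio::sync::mpsc;

/// Illustrative stand-in for the conductor's central message enum.
enum ConductorMessage {
    /// Forward any JSON-RPC payload (request, notification, or response)
    /// to the component at index `target`.
    Forward { target: usize, payload: serde_json::Value },
}

async fn conductor_loop(
    mut rx: mpsc::UnboundedReceiver<ConductorMessage>,
    outboxes: Vec<mpsc::UnboundedSender<serde_json::Value>>,
) {
    // One message is handled at a time, in arrival order, so a response
    // enqueued after a notification can never reach its destination first.
    while let Some(ConductorMessage::Forward { target, payload }) = rx.recv().await {
        let _ = outboxes[target].send(payload);
    }
}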
Message Routing Implementation
The conductor uses a recursive spawning pattern:
- Recursive chain building: Each component spawns the next, establishing connections
- Actor-based routing: All messages flow through a central conductor actor via channels
- Response routing: Uses JSON-RPC response IDs and request contexts to route back
- No explicit ID tracking: Context passing eliminates need for manual ID management
Key routing decisions:
- Normal mode: Last component gets normal ACP (no proxy capability)
- Proxy mode: All components get proxy capability, final component can forward to conductor's successor
- Bidirectional _proxy/successor/*: Used both for messages TO the successor (unwrap and forward) and FROM the successor (wrap and deliver)
Concurrency Model
Built on Tokio async runtime:
- Async I/O: All stdio operations are non-blocking
- Message passing: Components communicate via mpsc channels
- Spawned tasks: Each connection handler runs as separate task
- Error propagation: Tasks send errors back to main actor via channels
See source code in src/sacp-conductor/src/conductor.rs for implementation details.
Error Handling
Component Crashes
If any component process exits or crashes:
- Log error to stderr
- Shut down entire Conductor process
- Exit with non-zero status
The editor will see the ACP connection close and can handle appropriately.
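A minimal sketch of this policy for one spawned component, assuming tokio's process API (the real conductor also wires up stdio pipes and coordinates shutdown of the other components):

use tokio::process::Command;

async fn watch_component(name: &str, mut cmd: Command) -> std::io::Result<()> {
    let mut child = cmd.spawn()?;
    let status = child.wait().await?;
    if !status.success() {
        // Log to stderr and take the whole chain down with a non-zero exit.
        eprintln!("component {name} exited with {status}; shutting down chain");
        std::process::exit(1);
    }
    Ok(())
}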
Invalid Messages
If Conductor receives malformed JSON-RPC:
- Log to stderr
- Continue processing (don't crash the chain)
- May result in downstream errors
Initialization Failures
If component fails to initialize:
- Log error
- Return error response to editor
- Shut down
Implementation Phases
Phase 1: Basic Routing (MVP)
- Design documented
- Parse command-line arguments (component list)
- Spawn components recursively (alternative to "spawn all at startup")
- Set up stdio pipes for all components
- Message routing logic:
  - Editor → Component1 forwarding
  - _proxy/successor/request unwrapping and forwarding
  - Response routing via context passing (alternative to explicit ID tracking)
  - Component → Editor message routing
- Actor-based message passing architecture with ConductorMessage enum
- Error reporting from spawned tasks to conductor
- PUNCH LIST - Remaining MVP items:
  - Fix typo: ComnponentToItsClientMessage → ComponentToItsClientMessage
  - Proxy capability handshake during initialization:
    - Offer proxy: true in _meta to non-last components during acp/initialize
    - Do NOT offer proxy to the last component (agent)
    - Verify the component accepts by checking for proxy: true in the InitializeResponse _meta
    - Fail initialization with error "component X is not a proxy" if the handshake fails
  - Add documentation/comments explaining recursive chain building
  - Add logging (message routing, component startup, errors)
  - Write tests (proxy capability handshake, basic routing, initialization, error handling)
  - Component crash detection and chain shutdown
Phase 2: Robust Error Handling
- Basic error reporting from async tasks
- Graceful component shutdown
- Retry logic for transient failures
- Health checks
- Timeout handling for hung requests
Phase 3: Observability
- Structured logging/tracing
- Performance metrics
- Debug mode with message inspection
Phase 4: Advanced Features
- Dynamic component loading
- Hot reload of components
- Multiple parallel chains
Testing Strategy
Unit Tests
- Message parsing and forwarding logic
- Capability modification
- Error handling paths
Integration Tests
- Full chain initialization
- Message flow through real components
- Component crash scenarios
- Malformed message handling
End-to-End Tests
- Real editor + Conductor + test components
- Sparkle + Claude Code integration
- Performance benchmarks
Open Questions
- Component discovery: How do we find component binaries? PATH? Configuration file?
- Configuration: Should Conductor support a config file for default chains?
- Logging: Structured logging format? Integration with existing Symposium logging?
- Metrics: Should Conductor expose metrics (message counts, latency)?
- Security: Do we need to validate/sandbox component processes?
Related Documentation
- P/ACP RFD - Full protocol specification
- Proxying ACP Server Trait - Component implementation guide
- Sparkle Component - Example P/ACP component
MCP Bridge: Proxying MCP over ACP
The MCP Bridge enables agents without native MCP-over-ACP support to work with proxy components that provide MCP servers using ACP transport (acp:$UUID).
Problem Statement
Proxy components may want to expose MCP servers to agents using ACP as the transport layer. This allows:
- Dynamic MCP server registration during session creation
- Proxies to correlate MCP tool calls with specific ACP sessions
- Unified protocol handling (everything flows through ACP messages)
However, many agents only support traditional MCP transports (stdio, SSE). The conductor bridges this gap by:
- Accepting acp:$UUID URLs in session/new requests
- Transforming them into stdio-based MCP servers the agent can connect to
- Routing MCP messages between the agent (stdio) and proxies (ACP _mcp/* messages)
High-Level Architecture
flowchart LR
Proxy[Proxy Component]
Conductor[Conductor]
Agent[Agent Process]
Bridge[MCP Bridge Process]
Proxy -->|session/new with acp: URL| Conductor
Conductor -->|session/new with stdio| Agent
Agent <-->|stdio| Bridge
Bridge <-->|TCP| Conductor
Conductor <-->|_mcp/* messages| Proxy
Key decision: Use stdio + TCP bridge instead of direct stdio to agent, because:
- Preserves agent isolation (agent only sees stdio)
- Enables connection multiplexing (multiple bridges to one conductor)
- Simplifies lifecycle management (bridge exits when agent closes stdio)
Session Initialization Flow
The conductor transforms MCP servers during session creation and correlates them with session IDs.
sequenceDiagram
participant Proxy
participant Conductor
participant Listener as MCP Bridge<br/>Listener Actor
participant Agent
Note over Proxy: Wants to provide MCP server<br/>with session-specific context
Proxy->>Conductor: session/new {<br/> mcp_servers: [{<br/> name: "research-tools",<br/> url: "acp:uuid-123"<br/> }]<br/>}
Note over Conductor: Detects acp: transport<br/>Agent lacks mcp_acp_transport capability
Conductor->>Conductor: Bind TCP listener on port 54321
Conductor->>Listener: Spawn MCP Bridge Listener<br/>(acp_url: "acp:uuid-123")
Note over Listener: Listening on TCP port 54321<br/>Waiting for session_id
Conductor->>Agent: session/new {<br/> mcp_servers: [{<br/> name: "research-tools",<br/> command: "conductor",<br/> args: ["mcp", "54321"]<br/> }]<br/>}
Agent-->>Conductor: session/new response {<br/> session_id: "sess-abc"<br/>}
Note over Conductor: Extract session_id from response
Conductor->>Listener: Send session_id: "sess-abc"<br/>(via oneshot channel)
Note over Listener: Now has session_id<br/>Ready to accept connections
Conductor-->>Proxy: session/new response {<br/> session_id: "sess-abc"<br/>}
Note over Agent: Later: spawns MCP bridge process
Agent->>Bridge: spawn process:<br/>conductor mcp 54321
Bridge->>Listener: TCP connect to localhost:54321
Note over Listener: Connection arrives<br/>Session ID already known
Listener->>Conductor: McpConnectionReceived {<br/> acp_url: "acp:uuid-123",<br/> session_id: "sess-abc"<br/>}
Conductor->>Proxy: _mcp/connect {<br/> acp_url: "acp:uuid-123",<br/> session_id: "sess-abc"<br/>}
Proxy-->>Conductor: _mcp/connect response {<br/> connection_id: "conn-xyz"<br/>}
Note over Proxy: Can correlate connection<br/>with session context
Conductor-->>Listener: connection_id: "conn-xyz"
Note over Listener,Bridge: Bridge now active<br/>Routes MCP <-> ACP messages
Key Decisions
Why spawn TCP listener before getting session_id (during request, not response)?
- Agent may spawn the bridge process immediately after receiving the session/new response
- If the listener doesn't exist yet, the bridge connection fails with "connection refused"
- Spawning during request ensures TCP port is ready before agent receives response
- Session_id delivered asynchronously via oneshot channel once response arrives
Why send session_id to listener before forwarding response?
- Ensures session_id is available before agent spawns bridge process
- Eliminates race condition where TCP connection arrives before session_id known
- Listener blocks on receiving session_id, guaranteeing it's available when needed
Why include session_id in _mcp/connect?
- Proxies need to correlate MCP connections with ACP sessions
- Example: Research proxy remembers session context (current task, preferences)
- Without session_id, proxy has no way to associate connection with session state
Why use oneshot channel for session_id delivery?
- Listener spawned during request handling (before response available)
- Response comes asynchronously from agent
- Oneshot channel delivers session_id exactly once when response arrives
- Clean separation: listener setup (during request) vs session_id delivery (during response)
Connection Lifecycle
Once the MCP connection is established, the bridge routes messages bidirectionally:
sequenceDiagram
participant Agent
participant Bridge as MCP Bridge<br/>Process
participant Listener as Bridge Listener<br/>Actor
participant Conductor
participant Proxy
Note over Agent,Proxy: Connection established (connection_id: "conn-xyz")
Agent->>Bridge: MCP tools/list request<br/>(stdio JSON-RPC)
Bridge->>Listener: JSON-RPC over TCP
Listener->>Conductor: Raw JSON-RPC message
Conductor->>Proxy: _mcp/request {<br/> connection_id: "conn-xyz",<br/> method: "tools/list",<br/> params: {...}<br/>}
Note over Proxy: Has connection_id -> session_id mapping<br/>Can use session context
Proxy-->>Conductor: _mcp/request response {<br/> tools: [...]<br/>}
Conductor-->>Listener: JSON-RPC response
Listener-->>Bridge: JSON-RPC over TCP
Bridge-->>Agent: MCP response<br/>(stdio JSON-RPC)
Note over Agent: Agent disconnects
Agent->>Bridge: Close stdio
Bridge->>Listener: Close TCP connection
Listener->>Conductor: McpConnectionDisconnected {<br/> connection_id: "conn-xyz"<br/>}
Conductor->>Proxy: _mcp/disconnect {<br/> connection_id: "conn-xyz"<br/>}
Note over Proxy: Clean up session state
Key Decisions
Why route through conductor instead of direct bridge-to-proxy?
- Maintains consistent message ordering through central conductor queue
- Preserves conductor's role as sole message router
- Simplifies error handling and lifecycle management
Why use connection_id instead of session_id in _mcp/* messages?
- One session can have multiple MCP connections (multiple servers)
- Connection_id uniquely identifies the bridge instance
- Proxies maintain a connection_id -> session_id mapping internally
Why send disconnect notification?
- Allows proxies to clean up session-specific state
- Enables resource cleanup (close files, release locks, etc.)
- Provides explicit lifecycle boundary
Race Condition Handling
The session_id delivery mechanism prevents a race condition:
sequenceDiagram
participant Conductor
participant Listener as MCP Bridge Listener
participant Agent
participant Bridge as MCP Bridge Process
Note over Conductor: Without oneshot channel coordination
Conductor->>Listener: Spawn (no session_id yet)
Conductor->>Agent: session/new request
Note over Agent: Fast agent responds immediately
Agent->>Bridge: Spawn bridge process
Bridge->>Listener: TCP connect
Note over Listener: ❌ No session_id available yet!<br/>Can't send McpConnectionReceived
Agent-->>Conductor: session/new response {<br/> session_id: "sess-abc"<br/>}
Note over Conductor: Too late - connection already waiting
rect rgb(200, 50, 50)
Note over Listener,Bridge: Race condition:<br/>Connection arrived before session_id
end
Solution: Listener blocks on oneshot channel:
sequenceDiagram
participant Conductor
participant Listener as MCP Bridge Listener
participant Agent
participant Bridge as MCP Bridge Process
Conductor->>Listener: Spawn with oneshot receiver
Note over Listener: Listening on TCP<br/>Waiting for session_id via oneshot
Conductor->>Agent: session/new request
Agent->>Bridge: Spawn bridge process
Bridge->>Listener: TCP connect
Note over Listener: Connection accepted<br/>⏸️ BLOCKS waiting for session_id
Agent-->>Conductor: session/new response {<br/> session_id: "sess-abc"<br/>}
Conductor->>Listener: Send "sess-abc" via oneshot
Note over Listener: ✅ Session ID received<br/>Unblocks with connection + session_id
Listener->>Conductor: McpConnectionReceived {<br/> acp_url: "acp:uuid-123",<br/> session_id: "sess-abc"<br/>}
rect rgb(50, 200, 50)
Note over Listener,Bridge: No race condition:<br/>session_id always available
end
Key decision: Block connection acceptance on session_id availability
- Listener accepts TCP connection immediately (agent won't wait)
- But blocks sending McpConnectionReceived until the session_id arrives
- Guarantees the session_id is always available when creating the _mcp/connect request
- Simple implementation: oneshot_rx.await? before sending the message
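A sketch of that coordination, with illustrative names (the real listener actor also loops over multiple connections and reports each one to the conductor):

use tokio::{net::TcpListener, sync::oneshot};

async fn accept_bridge_connection(
    listener: TcpListener,
    session_id_rx: oneshot::Receiver<String>,
) -> Result<(), Box<dyn std::error::Error>> {
    // Accept the agent's bridge connection right away so it never sees
    // "connection refused"...
    let (stream, _addr) = listener.accept().await?;
    // ...but block here until the conductor delivers the session id taken
    // from the agent's session/new response.
    let session_id = session_id_rx.await?;
    // Only now report the connection, so _mcp/connect always carries a
    // valid session_id alongside the accepted stream.
    let _ = (stream, session_id);
    Ok(())
}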
Multiple MCP Servers
A single session can register multiple MCP servers:
flowchart TB
Proxy[Proxy]
Conductor[Conductor]
Agent[Agent]
Listener1[Listener: acp:uuid-1<br/>Port 54321]
Listener2[Listener: acp:uuid-2<br/>Port 54322]
Bridge1[Bridge: conductor mcp 54321]
Bridge2[Bridge: conductor mcp 54322]
Proxy -->|session/new with 2 servers| Conductor
Conductor -->|session_id: sess-abc| Listener1
Conductor -->|session_id: sess-abc| Listener2
Conductor -->|session/new response| Proxy
Agent <-->|stdio| Bridge1
Agent <-->|stdio| Bridge2
Bridge1 <-->|TCP| Listener1
Bridge2 <-->|TCP| Listener2
Listener1 -->|_mcp/* messages<br/>conn-1| Conductor
Listener2 -->|_mcp/* messages<br/>conn-2| Conductor
Conductor <-->|Both connections| Proxy
style Listener1 fill:#e1f5ff
style Listener2 fill:#e1f5ff
Key decisions:
- Each acp: URL gets its own TCP port and listener
- All listeners for a session receive the same session_id
- Each connection gets a unique connection_id
- Proxy maintains a map: connection_id -> (session_id, acp_url)
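For example, the proxy-side bookkeeping can be as simple as the following (type names hypothetical):

use std::collections::HashMap;

/// What a proxy needs to remember per MCP connection.
struct McpConnectionInfo {
    session_id: String,
    acp_url: String,
}

/// connection_id -> (session_id, acp_url)
type ConnectionMap = HashMap<String, McpConnectionInfo>;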
Implementation Components
McpBridgeListeners
- Purpose: Manages TCP listeners for all acp: URLs
- Lifecycle: Created with the conductor, lives for the entire conductor lifetime
- Responsibilities:
- Detect acp: URLs during session/new
- Spawn TCP listeners on ephemeral ports
- Transform MCP server specs to stdio transport
- Deliver session_id to listeners via oneshot channels
McpBridgeListener Actor
- Purpose: Accepts TCP connections for a specific acp: URL
- Lifecycle: Spawned during session/new, lives until the conductor exits
- Responsibilities:
- Listen on the TCP port
- Block on the oneshot channel to receive the session_id
- Accept connections and send McpConnectionReceived with the session_id
- Spawn connection actors
McpBridgeConnectionActor
- Purpose: Routes messages for a single MCP connection
- Lifecycle: Spawned when agent connects, exits when agent disconnects
- Responsibilities:
- Read JSON-RPC from TCP, forward to conductor
- Receive messages from conductor, write to TCP
- Send McpConnectionDisconnected on close
MCP Bridge Process (conductor mcp $PORT)
- Purpose: Bridges agent's stdio to conductor's TCP
- Lifecycle: Spawned by agent, exits when stdio closes
- Responsibilities:
- Connect to TCP port on startup
- Bidirectional stdio ↔ TCP forwarding
- No protocol awareness (just bytes)
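A minimal sketch of what conductor mcp $PORT has to do; error handling and the actual sacp-conductor implementation will differ, and the localhost address is an assumption:

use tokio::{io, net::TcpStream};

#[tokio::main]
async fn main() -> io::Result<()> {
    // Port of the main conductor's listener, e.g. `conductor mcp 54321`.
    let port: u16 = std::env::args()
        .nth(1)
        .expect("port argument")
        .parse()
        .expect("numeric port");
    let mut tcp = TcpStream::connect(("127.0.0.1", port)).await?;
    let (mut tcp_read, mut tcp_write) = tcp.split();
    let mut stdin = io::stdin();
    let mut stdout = io::stdout();

    // Pure byte forwarding in both directions; the bridge never parses JSON-RPC.
    // Exit as soon as either side closes (e.g. the agent closes our stdin).
    tokio::select! {
        r = io::copy(&mut stdin, &mut tcp_write) => { r?; }
        r = io::copy(&mut tcp_read, &mut stdout) => { r?; }
    }
    Ok(())
}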
Error Handling
Agent Disconnects During Session Creation
If agent closes connection before sending session/new response:
- Oneshot channel sender drops
- Listener receives Err from the oneshot
- Listener exits gracefully
- TCP port cleaned up
Bridge Process Crashes
If bridge process exits unexpectedly:
- TCP connection closes
- Listener detects disconnect
- Sends McpConnectionDisconnected
- Proxy cleans up state
Multiple Connections to Same Listener
Decision: Allow multiple connections per listener (for future flexibility)
- Each connection gets unique connection_id
- All connections share same session_id
- Proxy can correlate all connections to session
Related Documentation
- Conductor Implementation - Conductor architecture
- Protocol Reference - ACP message formats
- Building a Proxy - Implementing MCP-aware proxies
Transport Architecture
This chapter explains how JrConnection separates protocol semantics from transport mechanisms, enabling flexible deployment patterns including in-process message passing.
Overview
JrConnection provides the core JSON-RPC connection abstraction used by all SACP components. Originally designed around byte streams, it has been refactored to support pluggable transports that work with different I/O mechanisms while maintaining consistent protocol semantics.
Design Principles
Separation of Concerns
The architecture separates two distinct responsibilities:
-
Protocol Layer: JSON-RPC semantics
- Request ID assignment
- Request/response correlation
- Method dispatch to handlers
- Error handling
-
Transport Layer: Message movement
- Reading/writing from I/O sources
- Serialization/deserialization
- Connection management
This separation enables:
- In-process efficiency: Components in the same process can skip serialization
- Transport flexibility: Easy to add new transport types (WebSockets, named pipes, etc.)
- Testability: Mock transports for unit testing
- Clarity: Clear boundaries between protocol and I/O concerns
The jsonrpcmsg::Message Boundary
The key insight is that jsonrpcmsg::Message provides a natural, transport-neutral boundary:
enum jsonrpcmsg::Message {
    Request { method, params, id },
    Response { result, error, id },
}
This type sits between the protocol and transport layers:
- Above: Protocol layer works with application types (OutgoingMessage, UntypedMessage)
- Below: Transport layer works with jsonrpcmsg::Message
- Boundary: Clean, well-defined interface
Actor Architecture
Protocol Actors (Core JrConnection)
These actors live in JrConnection and understand JSON-RPC semantics:
Outgoing Protocol Actor
Input: mpsc::UnboundedReceiver<OutgoingMessage>
Output: mpsc::UnboundedSender<jsonrpcmsg::Message>
Responsibilities:
- Assign unique IDs to outgoing requests
- Subscribe to reply_actor for response correlation
- Convert application-level OutgoingMessage to protocol-level jsonrpcmsg::Message
Incoming Protocol Actor
Input: mpsc::UnboundedReceiver<jsonrpcmsg::Message>
Output: Routes to reply_actor or handler chain
Responsibilities:
- Route responses to reply_actor (matches by ID)
- Route requests/notifications to handler chain
- Convert jsonrpcmsg::Request to UntypedMessage for handlers
Reply Actor
Manages request/response correlation:
- Maintains map from request ID to response channel
- When response arrives, delivers to waiting request
- Unchanged from original design
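A minimal sketch of that correlation pattern (illustrative types, not the actual sacp internals):

use std::collections::HashMap;
use tokio::sync::{mpsc, oneshot};

enum ReplyMsg {
    /// Register interest in a response before the request is written out.
    Subscribe { id: u64, tx: oneshot::Sender<serde_json::Value> },
    /// A response arrived from the transport; deliver it to the waiter.
    Response { id: u64, payload: serde_json::Value },
}

async fn reply_actor(mut rx: mpsc::UnboundedReceiver<ReplyMsg>) {
    let mut waiting: HashMap<u64, oneshot::Sender<serde_json::Value>> = HashMap::new();
    while let Some(msg) = rx.recv().await {
        match msg {
            ReplyMsg::Subscribe { id, tx } => {
                waiting.insert(id, tx);
            }
            ReplyMsg::Response { id, payload } => {
                if let Some(tx) = waiting.remove(&id) {
                    let _ = tx.send(payload);
                }
            }
        }
    }
}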
Task Actor
Runs user-spawned concurrent tasks via cx.spawn(). Unchanged from original design.
Transport Actors (Provided by Trait)
These actors are spawned by IntoJrConnectionTransport implementations and have zero knowledge of protocol semantics:
Transport Outgoing Actor
Input: mpsc::UnboundedReceiver<jsonrpcmsg::Message>
Output: Writes to I/O (byte stream, channel, socket, etc.)
For byte streams:
- Serialize jsonrpcmsg::Message to JSON
- Write newline-delimited JSON to the stream
For in-process channels:
- Directly forward jsonrpcmsg::Message to the channel
Transport Incoming Actor
Input: Reads from I/O (byte stream, channel, socket, etc.)
Output: mpsc::UnboundedSender<jsonrpcmsg::Message>
For byte streams:
- Read newline-delimited JSON from stream
- Parse to jsonrpcmsg::Message
- Send to the incoming protocol actor
For in-process channels:
- Directly forward jsonrpcmsg::Message from the channel
Message Flow
Outgoing Message Flow
User Handler
|
| OutgoingMessage (request/notification/response)
v
Outgoing Protocol Actor
| - Assign ID (for requests)
| - Subscribe to replies
| - Convert to jsonrpcmsg::Message
v
| jsonrpcmsg::Message
|
Transport Outgoing Actor
| - Serialize (byte streams)
| - Or forward directly (channels)
v
I/O Destination
Incoming Message Flow
I/O Source
|
Transport Incoming Actor
| - Parse (byte streams)
| - Or forward directly (channels)
v
| jsonrpcmsg::Message
|
Incoming Protocol Actor
| - Route responses → reply_actor
| - Route requests → handler chain
v
Handler or Reply Actor
Message Ordering in the Conductor
When the conductor forwards messages between components, it must preserve send order to prevent race conditions. The conductor achieves this by routing all message forwarding through a central message queue.
Key insight: While the transport actors operate independently, the conductor's routing logic serializes all forwarding decisions through a central event loop. This ensures that even though responses use a "fast path" (reply_actor with oneshot channels) at the transport level, the decision to forward them is serialized with notification forwarding at the protocol level.
Without this serialization, responses could overtake notifications when both are forwarded through proxy chains, causing the client to receive messages out of order. See Conductor Implementation for details.
Transport Trait
The IntoJrConnectionTransport trait defines how to bridge internal channels with I/O:
pub trait IntoJrConnectionTransport {
    fn setup_transport(
        self,
        cx: &JrConnectionCx,
        outgoing_rx: mpsc::UnboundedReceiver<jsonrpcmsg::Message>,
        incoming_tx: mpsc::UnboundedSender<jsonrpcmsg::Message>,
    ) -> Result<(), Error>;
}
Key points:
- Consumed (self): Implementations move owned resources into spawned actors
- Spawns via cx.spawn(): Uses the connection context to spawn transport actors
- Channels only: No knowledge of OutgoingMessage or response correlation
- Returns quickly: Just spawns actors, doesn't block
Transport Implementations
Byte Stream Transport
The default implementation works with any AsyncRead + AsyncWrite pair:
impl<OB: AsyncWrite, IB: AsyncRead> IntoJrConnectionTransport for (OB, IB) {
    fn setup_transport(self, cx, outgoing_rx, incoming_tx) -> Result<(), Error> {
        let (outgoing_bytes, incoming_bytes) = self;

        // Spawn incoming: read bytes → parse JSON → send Message
        cx.spawn(async move {
            let mut lines = BufReader::new(incoming_bytes).lines();
            while let Some(line) = lines.next().await {
                let message: jsonrpcmsg::Message = serde_json::from_str(&line?)?;
                incoming_tx.unbounded_send(message)?;
            }
            Ok(())
        });

        // Spawn outgoing: receive Message → serialize → write bytes
        cx.spawn(async move {
            while let Some(message) = outgoing_rx.next().await {
                let json = serde_json::to_vec(&message)?;
                outgoing_bytes.write_all(&json).await?;
                outgoing_bytes.write_all(b"\n").await?;
            }
            Ok(())
        });

        Ok(())
    }
}
Use cases:
- Stdio connections to subprocess agents
- TCP socket connections
- Unix domain sockets
- Any stream-based I/O
In-Process Channel Transport
For components in the same process, skip serialization entirely:
pub struct ChannelTransport {
    outgoing: mpsc::UnboundedSender<jsonrpcmsg::Message>,
    incoming: mpsc::UnboundedReceiver<jsonrpcmsg::Message>,
}

impl IntoJrConnectionTransport for ChannelTransport {
    fn setup_transport(self, cx, outgoing_rx, incoming_tx) -> Result<(), Error> {
        // Just forward messages, no serialization.
        // Destructure so each half can move into its own spawned task.
        let ChannelTransport { outgoing, mut incoming } = self;

        cx.spawn(async move {
            while let Some(message) = incoming.next().await {
                incoming_tx.unbounded_send(message)?;
            }
            Ok(())
        });

        cx.spawn(async move {
            while let Some(message) = outgoing_rx.next().await {
                outgoing.unbounded_send(message)?;
            }
            Ok(())
        });

        Ok(())
    }
}
Benefits:
- Zero serialization overhead: Messages passed by value
- Same-process efficiency: Ideal for conductor with in-process proxies
- Full type safety: No parsing errors possible
Construction API
Flexible Construction
The refactored API separates handler setup from transport selection:
// Build handler chain
let connection = JrConnection::new()
    .name("my-component")
    .on_receive_request(|req: InitializeRequest, cx| {
        cx.respond(InitializeResponse::make())
    })
    .on_receive_notification(|notif: SessionNotification, _cx| {
        Ok(())
    });

// Provide transport at the end
connection.serve_with(transport).await?;
Byte Stream Convenience
For the common case of byte streams, use the convenience constructor:
JrConnection::from_streams(stdout, stdin)
    .on_receive_request(...)
    .serve()
    .await?;
This is equivalent to:
JrConnection::new()
    .on_receive_request(...)
    .serve_with((stdout, stdin))
    .await?;
Use Cases
1. Standard Agent (Stdio)
Traditional subprocess agent with stdio communication:
JrConnection::from_streams(
    tokio::io::stdout().compat_write(),
    tokio::io::stdin().compat(),
)
.name("my-agent")
.on_receive_request(handle_prompt)
.serve()
.await?;
2. In-Process Proxy Chain
Conductor with proxies in the same process for maximum efficiency:
// Create paired channel transports
let (transport_a, transport_b) = create_paired_transports();

// Spawn proxy in background
tokio::spawn(async move {
    JrConnection::new()
        .on_receive_message(proxy_handler)
        .serve_with(transport_a)
        .await
});

// Connect to proxy
JrConnection::new()
    .on_receive_request(agent_handler)
    .serve_with(transport_b)
    .await?;
No serialization overhead between components!
3. Network-Based Components
TCP socket connections between components:
let stream = TcpStream::connect("localhost:8080").await?;
let (read, write) = stream.split();

JrConnection::new()
    .on_receive_request(handler)
    .serve_with((write.compat_write(), read.compat()))
    .await?;
4. Testing with Mock Transport
Unit tests without real I/O:
let (transport, mock) = create_mock_transport();

tokio::spawn(async move {
    JrConnection::new()
        .on_receive_request(my_handler)
        .serve_with(transport)
        .await
});

// Test by sending messages directly
mock.send_request("initialize", params).await?;
let response = mock.receive_response().await?;
assert_eq!(response.method, "initialized");
Benefits
Performance
- In-process optimization: Skip serialization when components are co-located
- Zero-copy potential: Direct message passing for channels
- Flexible trade-offs: Choose appropriate transport for deployment
Flexibility
- Transport-agnostic handlers: Write handler logic once, use anywhere
- Easy experimentation: Try different transports without code changes
- Future-proof: Add new transports (WebSockets, gRPC, etc.) without refactoring
Testing
- Mock transports: Unit test handlers without I/O
- Deterministic tests: Control message timing precisely
- Isolated testing: Test protocol logic separate from I/O
Clarity
- Clear boundaries: Protocol semantics vs transport mechanics
- Focused implementations: Each layer has single responsibility
- Maintainability: Changes to transport don't affect protocol logic
Implementation Status
- ✅ Phase 1: Documentation complete
- 🚧 Phase 2: Actor splitting in progress
- 📋 Phase 3: Trait introduction planned
- 📋 Phase 4: In-process transport planned
- 📋 Phase 5: Conductor integration planned
See src/sacp/PLAN.md for detailed implementation tracking.
Related Documentation
- Architecture Overview - High-level SACP concepts
- Building a Proxy - Using JrConnection in proxies
- Conductor Implementation - How the conductor uses transports