
Symposium ACP

This repository contains Symposium’s implementation of the Agent-Client Protocol (ACP).

For Users

If you want to build something with these crates, see the rustdoc.

The sacp crate includes a concepts module that explains how connections, sessions, callbacks, and message ordering work.

For Maintainers and Agents

This book documents the design and architecture for people working on the codebase itself.

Repository Structure

src/
├── sacp/              # Core protocol SDK
├── sacp-tokio/        # Tokio utilities (process spawning)
├── sacp-rmcp/         # Integration with rmcp crate
├── sacp-cookbook/     # Usage patterns (rendered as rustdoc)
├── sacp-derive/       # Proc macros
├── sacp-conductor/    # Conductor binary and library
├── sacp-test/         # Test utilities and fixtures
├── sacp-trace-viewer/ # Trace visualization tool
├── elizacp/           # Example agent implementation
└── yopo/              # "You Only Prompt Once" example client

Crate Relationships

graph TD
    sacp[sacp<br/>Core SDK]
    tokio[sacp-tokio<br/>Process spawning]
    rmcp[sacp-rmcp<br/>rmcp integration]
    conductor[sacp-conductor<br/>Proxy orchestration]
    cookbook[sacp-cookbook<br/>Usage patterns]

    tokio --> sacp
    rmcp --> sacp
    conductor --> sacp
    conductor --> tokio
    cookbook --> sacp
    cookbook --> rmcp
    cookbook --> conductor

Key Design Documents

Core Library Design

This document describes the design of the sacp crate and its companion crates (sacp-tokio, sacp-rmcp).

For API usage, see the rustdoc and cookbook.

Crate Organization

sacp

The core SDK. Provides:

  • Role types (Client, Agent, Proxy, Conductor) - the identities in ACP
  • Connection builders (builder(), connect_to(), connect_with())
  • Message handling (on_receive_request, on_receive_notification, on_receive_dispatch)
  • Protocol types (sacp::schema::*) - all ACP message types
  • MCP server builder - for adding tools to proxies

sacp-tokio

Tokio-specific utilities:

  • Process spawning - spawn agent/proxy processes and connect via stdio
  • Transport helpers - convert tokio streams to sacp transports

sacp-rmcp

Integration with the rmcp crate:

  • McpServer::from_rmcp() - wrap an rmcp server as an sacp MCP server

Role System

The type system is built around roles - the logical identity of an endpoint.

graph LR
    Client -->|connects to| Agent
    Agent -->|connects to| Client
    Proxy -->|connects to| Conductor
    Conductor -->|connects to| Proxy

Counterpart Relationship

Each role has exactly one counterpart - who it connects to:

Role        Counterpart
Client      Agent
Agent       Client
Proxy       Conductor
Conductor   Proxy

This is encoded in the type system: impl ConnectTo<Client> for MyAgent means “MyAgent can connect to a client” (i.e., MyAgent plays the Agent role).
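As a rough illustration (plain Python data, not the sacp type machinery), the counterpart relation from the table above can be written out and checked for symmetry:

```python
# Illustrative sketch only: the counterpart relation as plain data,
# rather than Rust marker types and trait bounds.
COUNTERPART = {
    "Client": "Agent",
    "Agent": "Client",
    "Proxy": "Conductor",
    "Conductor": "Proxy",
}

# The relation is an involution: each role is its counterpart's counterpart.
assert all(COUNTERPART[other] == role for role, other in COUNTERPART.items())
```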

Peer Relationship

Some roles can communicate with multiple peers. The Proxy role is the key example:

graph TB
    subgraph "Proxy's view"
        Proxy
        Client[Client peer]
        Agent[Agent peer]
        Conductor[Conductor counterpart]
    end

    Proxy -.->|"send_to(Client, ...)"| Client
    Proxy -.->|"send_to(Agent, ...)"| Agent
    Proxy -->|"connect_to(conductor)"| Conductor

The distinction:

  • Counterpart (Conductor) - who the proxy connects to (transport layer)
  • Peers (Client, Agent) - who the proxy exchanges logical messages with

Message Flow

Dispatch Loop

Each connection runs a dispatch loop that processes incoming messages:

sequenceDiagram
    participant Transport
    participant DispatchLoop
    participant Handlers
    participant UserCode

    Transport->>DispatchLoop: incoming JSON-RPC message
    DispatchLoop->>Handlers: try handlers in order

    alt Handler matches
        Handlers->>UserCode: invoke callback
        UserCode-->>Handlers: return result
    else No handler matches
        Handlers->>DispatchLoop: default handler
    end

Handler Chain

Handlers are tried in registration order. The first matching handler wins:

graph TD
    Message[Incoming Message]
    H1[Handler 1: InitializeRequest]
    H2[Handler 2: PromptRequest]
    H3[Handler 3: catch-all]

    Message --> H1
    H1 -->|not InitializeRequest| H2
    H2 -->|not PromptRequest| H3
    H3 --> Done[Handle or error]

    H1 -->|matches| Process1[Process Initialize]
    H2 -->|matches| Process2[Process Prompt]
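The first-match-wins chain can be sketched in plain Python (the `matches`/`handle` pairs are illustrative, not the sacp API):

```python
def dispatch(message, handlers, default):
    """Try handlers in registration order; the first match wins."""
    for matches, handle in handlers:
        if matches(message):
            return handle(message)
    return default(message)

# Registration order mirrors the diagram above.
handlers = [
    (lambda m: m["method"] == "initialize", lambda m: "initialize handled"),
    (lambda m: m["method"] == "prompt", lambda m: "prompt handled"),
]

result = dispatch({"method": "prompt"}, handlers, default=lambda m: "unhandled")
```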

Ordering Guarantees

The dispatch loop provides sequential processing:

  1. Messages are processed one at a time
  2. A handler runs to completion before the next message is processed
  3. Spawned tasks (connection.spawn()) run concurrently with the dispatch loop

Important: Don’t block the dispatch loop. Use spawn() for long-running work.

Connection Lifecycle

stateDiagram-v2
    [*] --> Building: builder()
    Building --> Building: on_receive_*()
    Building --> Connected: connect_to(transport)
    Building --> Connected: connect_with(transport, closure)
    Connected --> Running: dispatch loop starts
    Running --> [*]: connection closes

Two Connection Modes

Reactive mode (connect_to): The connection runs handlers until the transport closes. Used for agents and proxies.

Active mode (connect_with): Runs a closure with access to the connection, then closes. Used for clients that drive the interaction.

Key Source Files

File                          Purpose
src/sacp/src/role.rs          Role trait and type definitions
src/sacp/src/role/acp.rs      Client, Agent, Proxy, Conductor roles
src/sacp/src/component.rs     ConnectTo and Builder traits
src/sacp/src/builder.rs       Connection builder implementation
src/sacp/src/dispatch.rs      Dispatch type and handler matching
src/sacp/src/mcp_server/      MCP server builder
src/sacp/src/concepts/        Rustdoc concept explanations

Design Decisions

Why Roles Instead of Links?

Earlier versions used “link types” that encoded both sides (e.g., ClientToAgent). Roles are simpler:

  • One concept instead of two (role vs link)
  • Role types double as peer selectors (send_to(Agent, ...))
  • Clearer mental model: “I am X, connecting to Y”

Why Witness Macros?

The on_receive_request!() macros work around Rust’s lack of return-type notation. They capture the return type of closures at the call site, enabling type inference to work.

Why Not Traits for Handlers?

Handler closures are more ergonomic than trait implementations for most use cases. The HandleDispatchFrom trait exists for advanced cases (reusable handler components).

Protocol Reference

This chapter documents the SACP protocol extensions to ACP. These extensions use ACP’s extensibility mechanism through custom methods and _meta fields.

Overview

SACP defines two main protocol extensions:

  1. _proxy/successor/* - For proxy-to-successor communication
  2. _mcp/* - For MCP-over-ACP bridging

The _proxy/successor/* Protocol

Proxies communicate with their downstream component (next proxy or agent) through the conductor using these extension methods.

_proxy/successor/request

Send a request to the successor component.

Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "_proxy/successor/request",
  "params": {
    // The actual ACP request to forward, flattened
    "method": "prompt",
    "params": {
      "messages": [...]
    }
  }
}

Response: The response is the successor’s response to the forwarded request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    // The successor's response
  }
}

Usage: When a proxy receives an ACP request from upstream and wants to forward it (possibly transformed) to the downstream component, it sends _proxy/successor/request to the conductor. The conductor routes it to the next component.
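A minimal sketch of building the envelope (field names come from the example above; the helper function itself is hypothetical):

```python
def wrap_successor_request(request_id, method, params):
    """Wrap an ACP request for forwarding to the successor (sketch).
    The forwarded request is flattened into the params object."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "_proxy/successor/request",
        "params": {"method": method, "params": params},
    }

envelope = wrap_successor_request(1, "prompt", {"messages": []})
```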

_proxy/successor/notification

Send a notification to the successor component.

Notification:

{
  "jsonrpc": "2.0",
  "method": "_proxy/successor/notification",
  "params": {
    // The actual ACP notification to forward, flattened
    "method": "cancelled",
    "params": {}
  }
}

Usage: When a proxy receives a notification from upstream and wants to forward it downstream.

Message Flow Examples

Example: Transforming a prompt

  1. Editor sends prompt request to conductor
  2. Conductor forwards as normal ACP prompt to Proxy A
  3. Proxy A modifies the prompt and sends:
    {
      "method": "_proxy/successor/request",
      "params": {
        "method": "prompt",
        "params": { /* modified prompt */ }
      }
    }
    
  4. Conductor routes to Proxy B as normal prompt
  5. Response flows back through the chain

Pass-through proxies: A proxy that doesn’t register any handlers passes all messages through unchanged - the conductor automatically routes unhandled messages.

Capability Handshake

The Proxy Capability

The conductor uses a two-way capability handshake to verify components can act as proxies.

InitializeRequest from conductor to proxy:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "0.7.0",
    "capabilities": {},
    "_meta": {
      "proxy": true
    }
  }
}

InitializeResponse from proxy to conductor:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "0.7.0",
    "serverInfo": {},
    "capabilities": {},
    "_meta": {
      "proxy": true
    }
  }
}

Why a two-way handshake?

The proxy capability is an active protocol - it requires the component to handle _proxy/successor/* messages and route communications. If a component doesn’t respond with the proxy capability, the conductor fails initialization with an error.
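The conductor's verification step can be sketched as follows (a hypothetical helper, not the actual implementation):

```python
def verify_proxy_acceptance(initialize_response):
    """Fail initialization unless the component echoed proxy: true
    in its InitializeResponse _meta (sketch of the conductor's check)."""
    meta = initialize_response.get("result", {}).get("_meta", {})
    if meta.get("proxy") is not True:
        raise RuntimeError("component is not a proxy")

# A component that accepts the capability passes the check silently.
verify_proxy_acceptance({"result": {"protocolVersion": "0.7.0",
                                    "_meta": {"proxy": True}}})
```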

Agent initialization:

The last component (agent) is NOT offered the proxy capability:

{
  "method": "initialize",
  "params": {
    "protocolVersion": "0.7.0",
    "capabilities": {},
    "_meta": {}  // No proxy capability
  }
}

Agents don’t need SACP awareness.

The _mcp/* Protocol

SACP enables components to provide MCP servers that communicate over ACP messages instead of stdio.

MCP Server Declaration

Components declare MCP servers with ACP transport using a special URL scheme:

{
  "tools": {
    "mcpServers": {
      "sparkle": {
        "transport": "http",
        "url": "acp:550e8400-e29b-41d4-a716-446655440000",
        "headers": {}
      }
    }
  }
}

The acp:UUID URL signals ACP transport. The component generates a unique UUID to identify which component handles calls to this MCP server.
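Recognizing the scheme is a simple prefix-plus-UUID check; a sketch (the helper name is ours):

```python
import uuid

def is_acp_transport(url):
    """True if the URL has the form acp:<UUID>, signalling ACP transport."""
    prefix = "acp:"
    if not url.startswith(prefix):
        return False
    try:
        uuid.UUID(url[len(prefix):])  # must parse as a UUID
    except ValueError:
        return False
    return True

assert is_acp_transport("acp:550e8400-e29b-41d4-a716-446655440000")
assert not is_acp_transport("https://example.com/mcp")
```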

_mcp/connect

Create a new MCP connection (equivalent to “running the command”).

Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "_mcp/connect",
  "params": {
    "acp_url": "acp:550e8400-e29b-41d4-a716-446655440000"
  }
}

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "connection_id": "conn-123"
  }
}

The connection_id is used in subsequent MCP messages to identify which connection they belong to.

_mcp/disconnect

Disconnect an MCP connection.

Notification:

{
  "jsonrpc": "2.0",
  "method": "_mcp/disconnect",
  "params": {
    "connection_id": "conn-123"
  }
}

_mcp/request

Send an MCP request over the ACP connection. This is bidirectional:

  • Agent→Component: MCP client calling MCP server (tool calls, resource reads, etc.)
  • Component→Agent: MCP server calling MCP client (sampling/createMessage, etc.)

Request:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "_mcp/request",
  "params": {
    "connection_id": "conn-123",
    // The actual MCP request, flattened
    "method": "tools/call",
    "params": {
      "name": "embody_sparkle",
      "arguments": {}
    }
  }
}

Response:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    // The MCP response
    "content": [
      {"type": "text", "text": "Embodiment complete"}
    ]
  }
}
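The envelope mirrors _proxy/successor/request, with a connection_id added; a sketch (hypothetical helper):

```python
def wrap_mcp_request(request_id, connection_id, method, params):
    """Wrap an MCP request for transport over ACP (sketch).
    The actual MCP request is flattened alongside the connection_id."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "_mcp/request",
        "params": {"connection_id": connection_id,
                   "method": method,
                   "params": params},
    }

msg = wrap_mcp_request(2, "conn-123", "tools/call",
                       {"name": "embody_sparkle", "arguments": {}})
```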

_mcp/notification

Send an MCP notification over the ACP connection. Bidirectional like _mcp/request.

Notification:

{
  "jsonrpc": "2.0",
  "method": "_mcp/notification",
  "params": {
    "connection_id": "conn-123",
    // The actual MCP notification, flattened
    "method": "notifications/progress",
    "params": {
      "progressToken": "token-1",
      "progress": 50,
      "total": 100
    }
  }
}

Agent Capability: mcp_acp_transport

Agents that natively support MCP-over-ACP declare this capability:

{
  "_meta": {
    "mcp_acp_transport": true
  }
}

Conductor behavior:

  • If the agent has mcp_acp_transport: true, conductor passes MCP server declarations through unchanged
  • If the agent lacks this capability, conductor performs bridging adaptation:
    1. Binds a TCP port (e.g., localhost:54321)
    2. Transforms MCP server to use conductor mcp PORT command with stdio transport
    3. Spawns bridge process that converts between stdio (MCP) and ACP messages
    4. Agent thinks it’s talking to normal MCP server over stdio

Bridging transformation example:

Original (from component):

{
  "sparkle": {
    "transport": "http",
    "url": "acp:550e8400-e29b-41d4-a716-446655440000"
  }
}

Transformed (for agent without native support):

{
  "sparkle": {
    "command": "conductor",
    "args": ["mcp", "54321"],
    "transport": "stdio"
  }
}

The conductor mcp PORT process bridges between stdio and the conductor’s ACP message routing.
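The transformation can be sketched as follows (this assumes the TCP port has already been bound; the helper name is ours):

```python
def adapt_for_stdio_agent(server_spec, port):
    """Rewrite an acp:-transport MCP server spec into a stdio spec that
    launches the conductor's bridge subcommand (illustrative sketch)."""
    if not server_spec.get("url", "").startswith("acp:"):
        return server_spec  # already agent-compatible; pass through
    return {
        "command": "conductor",
        "args": ["mcp", str(port)],
        "transport": "stdio",
    }

original = {"transport": "http",
            "url": "acp:550e8400-e29b-41d4-a716-446655440000"}
adapted = adapt_for_stdio_agent(original, 54321)
```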

Message Direction Summary

Message                        Direction        Purpose
_proxy/successor/request       Proxy→Conductor  Forward request downstream
_proxy/successor/notification  Proxy→Conductor  Forward notification downstream
_mcp/connect                   Agent↔Component  Establish MCP connection
_mcp/disconnect                Agent↔Component  Close MCP connection
_mcp/request                   Agent↔Component  Bidirectional MCP requests
_mcp/notification              Agent↔Component  Bidirectional MCP notifications

Conductor Design

{{#rfd: proxying-acp}}

The Conductor (binary name: sacp-conductor) orchestrates P/ACP proxy chains. It coordinates the flow of ACP messages through a chain of proxy components.

For API usage, see the sacp-conductor rustdoc.

Overview

The conductor orchestrates proxy chains by sitting between every component. It spawns component processes and routes all messages, presenting itself as a normal ACP agent to the editor.

flowchart TB
    Editor[Editor]
    C[Conductor]
    P1[Component 1]
    P2[Component 2]
    
    Editor <-->|ACP via stdio| C
    C <-->|stdio| P1
    C <-->|stdio| P2

Key insight: Components never talk directly to each other. The conductor routes ALL messages using the _proxy/successor/* protocol.

From the editor’s perspective: Conductor is a normal ACP agent communicating over stdio.

From each component’s perspective:

  • Receives normal ACP messages from the conductor
  • Sends _proxy/successor/request to conductor to forward messages TO successor
  • Receives _proxy/successor/request from conductor for messages FROM successor

See Protocol Reference for detailed message formats.

Responsibilities

The conductor has four core responsibilities:

1. Process Management

  • Spawns component processes based on command-line arguments
  • Manages component lifecycle (startup, shutdown, error handling)
  • For MVP: If any component crashes, shut down the entire chain

Command-line interface:

# Agent mode - manages proxy chain
conductor agent sparkle-acp claude-code-acp

# MCP mode - bridges stdio to TCP for MCP-over-ACP
conductor mcp 54321

Agent mode creates a chain: Editor → Conductor → sparkle-acp → claude-code-acp.

MCP mode bridges MCP JSON-RPC (stdio) to raw JSON-RPC over a TCP connection to the main conductor.

2. Message Routing

The conductor routes ALL messages between components. No component talks directly to another.

Message ordering: The conductor preserves message send order by routing all forwarding decisions through a central event loop, preventing responses from overtaking notifications.

Message flow types:

  1. Editor → First Component: Conductor forwards normal ACP messages
  2. Component → Successor: Component sends _proxy/successor/request to conductor, which unwraps and forwards to next component
  3. Successor → Component: Conductor wraps messages in _proxy/successor/request when sending FROM successor
  4. Responses: Flow back via standard JSON-RPC response IDs

See Protocol Reference for detailed request/response flow diagrams.

3. Capability Management

The conductor manages proxy capability handshakes during initialization:

Normal Mode (conductor as root):

  • Offers proxy: true to all components EXCEPT the last
  • Verifies each proxy component accepts the capability
  • Last component (agent) receives standard ACP initialization

Proxy Mode (conductor as proxy):

  • When conductor itself receives proxy: true during initialization
  • Offers proxy: true to ALL components (including the last)
  • Enables tree-structured proxy chains

See Proxy Mode below for hierarchical chain details.
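The offering rule in both modes reduces to a single condition; a sketch:

```python
def offer_proxy_capability(index, chain_len, conductor_in_proxy_mode):
    """Whether the component at `index` (0-based) is offered proxy: true.
    Normal mode: every component except the last; proxy mode: all of them."""
    return conductor_in_proxy_mode or index < chain_len - 1

# Normal mode, chain of three: the two proxies get the offer, the agent does not.
offers = [offer_proxy_capability(i, 3, False) for i in range(3)]
```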

4. MCP Bridge Adaptation

When components provide MCP servers with ACP transport ("url": "acp:$UUID"):

If agent has mcp_acp_transport capability:

  • Pass through MCP server declarations unchanged
  • Agent handles _mcp/* messages natively

If agent lacks mcp_acp_transport capability:

  • Bind TCP port for each ACP-transport MCP server
  • Transform MCP server spec to use conductor mcp $port
  • Spawn conductor mcp $port bridge processes
  • Route MCP tool calls:
    • Agent → stdio → bridge → TCP → conductor → _mcp/* messages backward up chain
    • Component responses flow back: component → conductor → TCP → bridge → stdio → agent

See MCP Bridge for full implementation details.

Proxy Mode

The conductor can itself operate as a proxy component within a larger chain, enabling tree-structured proxy architectures.

How Proxy Mode Works

When the conductor receives an initialize request with the proxy capability:

  1. Detection: Conductor detects it’s being used as a proxy component
  2. All components become proxies: Offers proxy: true to ALL managed components (including the last)
  3. Successor forwarding: When the final component sends _proxy/successor/request, conductor forwards to its own successor

Example: Hierarchical Chain

client → proxy1 → conductor (proxy mode) → final-agent
                      ↓ manages
                  p1 → p2 → p3

Message flow when p3 forwards to successor:

  1. p3 sends _proxy/successor/request to conductor
  2. Conductor recognizes it’s in proxy mode
  3. Conductor sends _proxy/successor/request to proxy1 (its predecessor)
  4. proxy1 routes to final-agent

Use Cases

Modular sub-chains: Group related proxies into a conductor-managed sub-chain that can be inserted anywhere

Conditional routing: A proxy can route to conductor-based sub-chains based on request type

Isolated environments: Each conductor manages its own component lifecycle while participating in larger chains

Implementation Notes

  • Proxy mode is detected during initialization by checking for proxy: true in incoming initialize request
  • In normal mode: last component is agent (no proxy capability)
  • In proxy mode: all components are proxies (all receive proxy capability)
  • The conductor’s own successor is determined by whoever initialized it

Initialization Flow

sequenceDiagram
    participant Editor
    participant Conductor
    participant Sparkle as Component1<br/>(Sparkle)
    participant Agent as Component2<br/>(Agent)

    Note over Conductor: Spawns both components at startup<br/>from CLI args
    
    Editor->>Conductor: acp/initialize [I0]
    Conductor->>Sparkle: acp/initialize (offers proxy capability) [I0]
    
    Note over Sparkle: Sees proxy capability offer,<br/>knows it has a successor
    
    Sparkle->>Conductor: _proxy/successor/request(acp/initialize) [I1]
    
    Note over Conductor: Unwraps request,<br/>knows Agent is last in chain
    
    Conductor->>Agent: acp/initialize (NO proxy capability - agent is last) [I1]
    Agent-->>Conductor: initialize response (capabilities) [I1]
    Conductor-->>Sparkle: _proxy/successor response [I1]
    
    Note over Sparkle: Sees Agent's capabilities,<br/>prepares response
    
    Sparkle-->>Conductor: initialize response (accepts proxy capability) [I0]
    
    Note over Conductor: Verifies Sparkle accepted proxy.<br/>If not, would fail with error.
    
    Conductor-->>Editor: initialize response [I0]

Key points:

  1. Conductor spawns ALL components at startup based on command-line args
  2. Sequential initialization: Conductor → Component1 → Component2 → … → Agent
  3. Proxy capability handshake:
    • Conductor offers proxy: true to non-last components (in InitializeRequest _meta)
    • Components must accept by responding with proxy: true (in InitializeResponse _meta)
    • Last component (agent) is NOT offered proxy capability
    • Conductor verifies acceptance and fails initialization if missing
  4. Components use _proxy/successor/request to initialize their successors
  5. Capabilities flow back up the chain: Each component sees successor’s capabilities before responding
  6. Message IDs: Preserved from editor (I0), new IDs for proxy messages (I1, I2, …)

Implementation Architecture

The conductor uses an actor-based architecture with message passing via channels.

Core Components

  • Main connection: Handles editor stdio and spawns the event loop
  • Component connections: Each component has a bidirectional JSON-RPC connection
  • Message router: Central actor that receives ConductorMessage enums and routes appropriately
  • MCP bridge actors: Manage MCP-over-ACP connections

Message Ordering Invariant

Critical invariant: All messages (requests, responses, notifications) between any two endpoints must maintain their send order.

The conductor ensures this invariant by routing all message forwarding through its central message queue (ConductorMessage channel). This prevents faster message types (responses) from overtaking slower ones (notifications).

Why This Matters

Without ordering preservation, a race condition can occur:

  1. Agent sends session/update notification
  2. Agent responds to session/prompt request
  3. Response takes a fast path (reply_actor with oneshot channels)
  4. Notification takes slower path (handler pipeline)
  5. Response arrives before notification → client loses notification data

Implementation

The conductor uses extension traits to route all forwarding through the central queue:

  • JrConnectionCxExt::send_proxied_message_via - Routes both requests and notifications
  • JrRequestCxExt::respond_via - Routes responses through the queue
  • JrResponseExt::forward_response_via - Ensures response forwarding maintains order

All message forwarding in both directions (client-to-agent and agent-to-client) flows through the conductor’s central event loop, which processes ConductorMessage enums sequentially. This serialization ensures messages arrive in the same order they were sent.
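The serialization idea in miniature (plain Python; not the conductor's actual channels or extension traits):

```python
from collections import deque

class CentralQueue:
    """All forwarding decisions pass through one FIFO, so a response
    enqueued after a notification can never be delivered before it."""
    def __init__(self):
        self._queue = deque()
        self.delivered = []

    def route(self, message):
        self._queue.append(message)

    def run(self):
        # Sequential processing: messages leave in the order they arrived,
        # regardless of message type.
        while self._queue:
            self.delivered.append(self._queue.popleft())

q = CentralQueue()
q.route({"kind": "notification", "method": "session/update"})
q.route({"kind": "response", "id": 1})  # would "win the race" on a fast path
q.run()
```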

Message Routing Implementation

The conductor uses a recursive spawning pattern:

  1. Recursive chain building: Each component spawns the next, establishing connections
  2. Actor-based routing: All messages flow through a central conductor actor via channels
  3. Response routing: Uses JSON-RPC response IDs and request contexts to route back
  4. No explicit ID tracking: Context passing eliminates need for manual ID management

Key routing decisions:

  • Normal mode: Last component gets normal ACP (no proxy capability)
  • Proxy mode: All components get proxy capability, final component can forward to conductor’s successor
  • Bidirectional _proxy/successor/*: Used for both TO successor (unwrap and forward) and FROM successor (wrap and deliver)

Concurrency Model

Built on Tokio async runtime:

  • Async I/O: All stdio operations are non-blocking
  • Message passing: Components communicate via mpsc channels
  • Spawned tasks: Each connection handler runs as separate task
  • Error propagation: Tasks send errors back to main actor via channels

See source code in src/sacp-conductor/src/conductor.rs for implementation details.

Error Handling

Component Crashes

If any component process exits or crashes:

  1. Log error to stderr
  2. Shut down entire Conductor process
  3. Exit with non-zero status

The editor will see the ACP connection close and can handle appropriately.

Invalid Messages

If Conductor receives malformed JSON-RPC:

  • Log to stderr
  • Continue processing (don’t crash the chain)
  • May result in downstream errors

Initialization Failures

If component fails to initialize:

  1. Log error
  2. Return error response to editor
  3. Shut down

Implementation Phases

Phase 1: Basic Routing (MVP)

  • Design documented
  • Parse command-line arguments (component list)
  • Spawn components recursively (alternative to “spawn all at startup”)
  • Set up stdio pipes for all components
  • Message routing logic:
    • Editor → Component1 forwarding
    • _proxy/successor/request unwrapping and forwarding
    • Response routing via context passing (alternative to explicit ID tracking)
    • Component → Editor message routing
  • Actor-based message passing architecture with ConductorMessage enum
  • Error reporting from spawned tasks to conductor
  • PUNCH LIST - Remaining MVP items:
    • Fix typo: ComnponentToItsClientMessage → ComponentToItsClientMessage
    • Proxy capability handshake during initialization:
      • Offer proxy: true in _meta to non-last components during acp/initialize
      • Do NOT offer proxy to last component (agent)
      • Verify component accepts by checking for proxy: true in InitializeResponse _meta
      • Fail initialization with error “component X is not a proxy” if handshake fails
    • Add documentation/comments explaining recursive chain building
    • Add logging (message routing, component startup, errors)
    • Write tests (proxy capability handshake, basic routing, initialization, error handling)
    • Component crash detection and chain shutdown

Phase 2: Robust Error Handling

  • Basic error reporting from async tasks
  • Graceful component shutdown
  • Retry logic for transient failures
  • Health checks
  • Timeout handling for hung requests

Phase 3: Observability

  • Structured logging/tracing
  • Performance metrics
  • Debug mode with message inspection

Phase 4: Advanced Features

  • Dynamic component loading
  • Hot reload of components
  • Multiple parallel chains

Testing Strategy

Unit Tests

  • Message parsing and forwarding logic
  • Capability modification
  • Error handling paths

Integration Tests

  • Full chain initialization
  • Message flow through real components
  • Component crash scenarios
  • Malformed message handling

End-to-End Tests

  • Real editor + Conductor + test components
  • Sparkle + Claude Code integration
  • Performance benchmarks

Open Questions

  1. Component discovery: How do we find component binaries? PATH? Configuration file?
  2. Configuration: Should Conductor support a config file for default chains?
  3. Logging: Structured logging format? Integration with existing Symposium logging?
  4. Metrics: Should Conductor expose metrics (message counts, latency)?
  5. Security: Do we need to validate/sandbox component processes?

Key Source Files

File                                            Purpose
src/sacp-conductor/src/conductor.rs             Main conductor implementation
src/sacp-conductor/src/conductor/mcp_bridge.rs  MCP bridge actor
src/sacp-conductor/src/main.rs                  CLI entry point

MCP Bridge: Proxying MCP over ACP

The MCP Bridge enables agents without native MCP-over-ACP support to work with proxy components that provide MCP servers using ACP transport (acp:$UUID).

Problem Statement

Proxy components may want to expose MCP servers to agents using ACP as the transport layer. This allows:

  • Dynamic MCP server registration during session creation
  • Proxies to correlate MCP tool calls with specific ACP sessions
  • Unified protocol handling (everything flows through ACP messages)

However, many agents only support traditional MCP transports (stdio, SSE). The conductor bridges this gap by:

  1. Accepting acp:$UUID URLs in session/new requests
  2. Transforming them into stdio-based MCP servers the agent can connect to
  3. Routing MCP messages between the agent (stdio) and proxies (ACP _mcp/* messages)

High-Level Architecture

flowchart LR
    Proxy[Proxy Component]
    Conductor[Conductor]
    Agent[Agent Process]
    Bridge[MCP Bridge Process]
    
    Proxy -->|session/new with acp: URL| Conductor
    Conductor -->|session/new with stdio| Agent
    Agent <-->|stdio| Bridge
    Bridge <-->|TCP| Conductor
    Conductor <-->|_mcp/* messages| Proxy

Key decision: Use stdio + TCP bridge instead of direct stdio to agent, because:

  • Preserves agent isolation (agent only sees stdio)
  • Enables connection multiplexing (multiple bridges to one conductor)
  • Simplifies lifecycle management (bridge exits when agent closes stdio)

Session Initialization Flow

The conductor transforms MCP servers during session creation and correlates them with session IDs.

sequenceDiagram
    participant Proxy
    participant Conductor
    participant Listener as MCP Bridge<br/>Listener Actor
    participant Agent

    Note over Proxy: Wants to provide MCP server<br/>with session-specific context
    
    Proxy->>Conductor: session/new {<br/>  mcp_servers: [{<br/>    name: "research-tools",<br/>    url: "acp:uuid-123"<br/>  }]<br/>}
    
    Note over Conductor: Detects acp: transport<br/>Agent lacks mcp_acp_transport capability
    
    Conductor->>Conductor: Bind TCP listener on port 54321
    
    Conductor->>Listener: Spawn MCP Bridge Listener<br/>(acp_url: "acp:uuid-123")
    
    Note over Listener: Listening on TCP port 54321<br/>Waiting for session_id
    
    Conductor->>Agent: session/new {<br/>  mcp_servers: [{<br/>    name: "research-tools",<br/>    command: "conductor",<br/>    args: ["mcp", "54321"]<br/>  }]<br/>}
    
    Agent-->>Conductor: session/new response {<br/>  session_id: "sess-abc"<br/>}
    
    Note over Conductor: Extract session_id from response
    
    Conductor->>Listener: Send session_id: "sess-abc"<br/>(via oneshot channel)
    
    Note over Listener: Now has session_id<br/>Ready to accept connections
    
    Conductor-->>Proxy: session/new response {<br/>  session_id: "sess-abc"<br/>}
    
    Note over Agent: Later: spawns MCP bridge process
    
    Agent->>Bridge: spawn process:<br/>conductor mcp 54321
    
    Bridge->>Listener: TCP connect to localhost:54321
    
    Note over Listener: Connection arrives<br/>Session ID already known
    
    Listener->>Conductor: McpConnectionReceived {<br/>  acp_url: "acp:uuid-123",<br/>  session_id: "sess-abc"<br/>}
    
    Conductor->>Proxy: _mcp/connect {<br/>  acp_url: "acp:uuid-123",<br/>  session_id: "sess-abc"<br/>}
    
    Proxy-->>Conductor: _mcp/connect response {<br/>  connection_id: "conn-xyz"<br/>}
    
    Note over Proxy: Can correlate connection<br/>with session context
    
    Conductor-->>Listener: connection_id: "conn-xyz"
    
    Note over Listener,Bridge: Bridge now active<br/>Routes MCP <-> ACP messages

Key Decisions

Why spawn TCP listener before getting session_id (during request, not response)?

  • Agent may spawn bridge process immediately after receiving session/new response
  • If listener doesn’t exist yet, bridge connection fails with “connection refused”
  • Spawning during request ensures TCP port is ready before agent receives response
  • The session_id is delivered asynchronously via a oneshot channel once the response arrives

Why send session_id to listener before forwarding response?

  • Ensures session_id is available before agent spawns bridge process
  • Eliminates race condition where TCP connection arrives before session_id known
  • Listener blocks on receiving session_id, guaranteeing it’s available when needed

Why include session_id in _mcp/connect?

  • Proxies need to correlate MCP connections with ACP sessions
  • Example: Research proxy remembers session context (current task, preferences)
  • Without session_id, proxy has no way to associate connection with session state

Why use oneshot channel for session_id delivery?

  • Listener spawned during request handling (before response available)
  • Response comes asynchronously from agent
  • Oneshot channel delivers session_id exactly once when response arrives
  • Clean separation: listener setup (during request) vs session_id delivery (during response)

Connection Lifecycle

Once the MCP connection is established, the bridge routes messages bidirectionally:

sequenceDiagram
    participant Agent
    participant Bridge as MCP Bridge<br/>Process
    participant Listener as Bridge Listener<br/>Actor
    participant Conductor
    participant Proxy

    Note over Agent,Proxy: Connection established (connection_id: "conn-xyz")

    Agent->>Bridge: MCP tools/list request<br/>(stdio JSON-RPC)
    
    Bridge->>Listener: JSON-RPC over TCP
    
    Listener->>Conductor: Raw JSON-RPC message
    
    Conductor->>Proxy: _mcp/request {<br/>  connection_id: "conn-xyz",<br/>  method: "tools/list",<br/>  params: {...}<br/>}
    
    Note over Proxy: Has connection_id -> session_id mapping<br/>Can use session context
    
    Proxy-->>Conductor: _mcp/request response {<br/>  tools: [...]<br/>}
    
    Conductor-->>Listener: JSON-RPC response
    
    Listener-->>Bridge: JSON-RPC over TCP
    
    Bridge-->>Agent: MCP response<br/>(stdio JSON-RPC)
    
    Note over Agent: Agent disconnects
    
    Agent->>Bridge: Close stdio
    
    Bridge->>Listener: Close TCP connection
    
    Listener->>Conductor: McpConnectionDisconnected {<br/>  connection_id: "conn-xyz"<br/>}
    
    Conductor->>Proxy: _mcp/disconnect {<br/>  connection_id: "conn-xyz"<br/>}
    
    Note over Proxy: Clean up session state

Key Decisions

Why route through conductor instead of direct bridge-to-proxy?

  • Maintains consistent message ordering through central conductor queue
  • Preserves conductor as sole message router
  • Simplifies error handling and lifecycle management

Why use connection_id instead of session_id in _mcp/* messages?

  • One session can have multiple MCP connections (multiple servers)
  • The connection_id uniquely identifies the bridge instance
  • Proxies maintain connection_id -> session_id mapping internally

Why send disconnect notification?

  • Allows proxies to clean up session-specific state
  • Enables resource cleanup (close files, release locks, etc.)
  • Provides explicit lifecycle boundary

Race Condition Handling

The session_id delivery mechanism prevents a race condition:

sequenceDiagram
    participant Conductor
    participant Listener as MCP Bridge Listener
    participant Agent
    participant Bridge as MCP Bridge Process

    Note over Conductor: Without oneshot channel coordination

    Conductor->>Listener: Spawn (no session_id yet)
    
    Conductor->>Agent: session/new request
    
    Note over Agent: Fast agent responds immediately
    
    Agent->>Bridge: Spawn bridge process
    
    Bridge->>Listener: TCP connect
    
    Note over Listener: ❌ No session_id available yet!<br/>Can't send McpConnectionReceived
    
    Agent-->>Conductor: session/new response {<br/>  session_id: "sess-abc"<br/>}
    
    Note over Conductor: Too late - connection already waiting

    rect rgb(200, 50, 50)
        Note over Listener,Bridge: Race condition:<br/>Connection arrived before session_id
    end

Solution: Listener blocks on oneshot channel:

sequenceDiagram
    participant Conductor
    participant Listener as MCP Bridge Listener
    participant Agent
    participant Bridge as MCP Bridge Process

    Conductor->>Listener: Spawn with oneshot receiver
    
    Note over Listener: Listening on TCP<br/>Waiting for session_id via oneshot
    
    Conductor->>Agent: session/new request
    
    Agent->>Bridge: Spawn bridge process
    
    Bridge->>Listener: TCP connect
    
    Note over Listener: Connection accepted<br/>⏸️ BLOCKS waiting for session_id
    
    Agent-->>Conductor: session/new response {<br/>  session_id: "sess-abc"<br/>}
    
    Conductor->>Listener: Send "sess-abc" via oneshot
    
    Note over Listener: ✅ Session ID received<br/>Unblocks with connection + session_id
    
    Listener->>Conductor: McpConnectionReceived {<br/>  acp_url: "acp:uuid-123",<br/>  session_id: "sess-abc"<br/>}

    rect rgb(50, 200, 50)
        Note over Listener,Bridge: No race condition:<br/>session_id always available
    end

Key decision: Block McpConnectionReceived on session_id availability

  • Listener accepts TCP connection immediately (agent won’t wait)
  • But blocks sending McpConnectionReceived until session_id arrives
  • Guarantees session_id is always available when creating _mcp/connect request
  • Simple implementation: oneshot_rx.await? before sending message

Multiple MCP Servers

A single session can register multiple MCP servers:

flowchart TB
    Proxy[Proxy]
    Conductor[Conductor]
    Agent[Agent]
    
    Listener1[Listener: acp:uuid-1<br/>Port 54321]
    Listener2[Listener: acp:uuid-2<br/>Port 54322]
    
    Bridge1[Bridge: conductor mcp 54321]
    Bridge2[Bridge: conductor mcp 54322]
    
    Proxy -->|session/new with 2 servers| Conductor
    Conductor -->|session_id: sess-abc| Listener1
    Conductor -->|session_id: sess-abc| Listener2
    Conductor -->|session/new response| Proxy
    
    Agent <-->|stdio| Bridge1
    Agent <-->|stdio| Bridge2
    
    Bridge1 <-->|TCP| Listener1
    Bridge2 <-->|TCP| Listener2
    
    Listener1 -->|_mcp/* messages<br/>conn-1| Conductor
    Listener2 -->|_mcp/* messages<br/>conn-2| Conductor
    
    Conductor <-->|Both connections| Proxy
    
    style Listener1 fill:#e1f5ff
    style Listener2 fill:#e1f5ff

Key decisions:

  • Each acp: URL gets its own TCP port and listener
  • All listeners for a session receive the same session_id
  • Each connection gets unique connection_id
  • Proxy maintains map: connection_id -> (session_id, acp_url)
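That per-proxy bookkeeping might look like the following sketch. The type and method names here are illustrative, not the actual sacp API:

```rust
use std::collections::HashMap;

/// Illustrative per-proxy bookkeeping: each MCP connection maps back to
/// the session it belongs to and the acp: URL it was declared under.
#[derive(Default)]
struct ConnectionTable {
    // connection_id -> (session_id, acp_url)
    connections: HashMap<String, (String, String)>,
}

impl ConnectionTable {
    /// Record a new connection when _mcp/connect arrives.
    fn on_connect(&mut self, connection_id: &str, session_id: &str, acp_url: &str) {
        self.connections.insert(
            connection_id.to_owned(),
            (session_id.to_owned(), acp_url.to_owned()),
        );
    }

    /// Look up the session for an _mcp/request, keyed by connection_id.
    fn session_for(&self, connection_id: &str) -> Option<&str> {
        self.connections.get(connection_id).map(|(sid, _)| sid.as_str())
    }

    /// Drop state when _mcp/disconnect arrives.
    fn on_disconnect(&mut self, connection_id: &str) {
        self.connections.remove(connection_id);
    }
}

fn main() {
    let mut table = ConnectionTable::default();
    // Two servers in one session: unique connection_ids, shared session_id.
    table.on_connect("conn-1", "sess-abc", "acp:uuid-1");
    table.on_connect("conn-2", "sess-abc", "acp:uuid-2");
    assert_eq!(table.session_for("conn-1"), Some("sess-abc"));
    table.on_disconnect("conn-1");
    assert_eq!(table.session_for("conn-1"), None);
    assert_eq!(table.session_for("conn-2"), Some("sess-abc"));
}
```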

Implementation Components

McpBridgeListeners

  • Purpose: Manages TCP listeners for all acp: URLs
  • Lifecycle: Created with conductor, lives for entire conductor lifetime
  • Responsibilities:
    • Detect acp: URLs during session/new
    • Spawn TCP listeners on ephemeral ports
    • Transform MCP server specs to stdio transport
    • Deliver session_id to listeners via oneshot channels

McpBridgeListener Actor

  • Purpose: Accepts TCP connections for a specific acp: URL
  • Lifecycle: Spawned during session/new, lives until conductor exits
  • Responsibilities:
    • Listen on TCP port
    • Block on oneshot channel to receive session_id
    • Accept connections and send McpConnectionReceived with session_id
    • Spawn connection actors

McpBridgeConnectionActor

  • Purpose: Routes messages for a single MCP connection
  • Lifecycle: Spawned when agent connects, exits when agent disconnects
  • Responsibilities:
    • Read JSON-RPC from TCP, forward to conductor
    • Receive messages from conductor, write to TCP
    • Send McpConnectionDisconnected on close

MCP Bridge Process (conductor mcp $PORT)

  • Purpose: Bridges agent’s stdio to conductor’s TCP
  • Lifecycle: Spawned by agent, exits when stdio closes
  • Responsibilities:
    • Connect to TCP port on startup
    • Bidirectional stdio ↔ TCP forwarding
    • No protocol awareness (just bytes)

Error Handling

Agent Disconnects During Session Creation

If agent closes connection before sending session/new response:

  • Oneshot channel sender drops
  • Listener receives Err from oneshot
  • Listener exits gracefully
  • TCP port cleaned up

Bridge Process Crashes

If bridge process exits unexpectedly:

  • TCP connection closes
  • Listener detects disconnect
  • Sends McpConnectionDisconnected
  • Proxy cleans up state

Multiple Connections to Same Listener

Decision: Allow multiple connections per listener (for future flexibility)

  • Each connection gets unique connection_id
  • All connections share same session_id
  • Proxy can correlate all connections to session

Upgrading from sacp v10 to v11

This guide covers the breaking changes in sacp v11 and provides patterns for upgrading your code.

Overview

v11 introduces a simpler Role-based API that replaces the previous Link/Peer system. The main goals were:

  • Eliminate the Jr prefix from type names
  • Replace Link types (which encode both sides of a connection) with Role types (which encode one side)
  • Simplify the builder/connection API with clearer method names
  • Unify peer types with role types

Quick Reference

Type Renames

v10 → v11

  • Component<L> → ConnectTo<R>
  • DynComponent<L> → DynConnectTo<R>
  • ClientToAgent → Client
  • AgentToClient → Agent
  • ProxyToConductor → Proxy
  • ConductorToProxy → Conductor
  • AgentPeer → Agent
  • ClientPeer → Client
  • ConductorPeer → Conductor
  • JrConnectionCx<L> → ConnectionTo<R>
  • JrRequestCx<T> → Responder<T>
  • JrResponseCx<T> → ResponseRouter<T>
  • JrResponse<T> → SentRequest<T>
  • JrResponsePayload → JsonRpcResponse
  • JrRequest → JsonRpcRequest
  • JrNotification → JsonRpcNotification
  • JrMessage → JsonRpcMessage
  • JrMessageHandler → HandleDispatchFrom
  • handle_message (trait method) → handle_dispatch_from
  • JrResponder → RunWithConnectionTo (or just Run alias)
  • MessageCx → Dispatch
  • MatchMessage → MatchDispatch
  • MatchMessageFrom → MatchDispatchFrom
  • Conductor (struct) → ConductorImpl

Method Renames

v10 → v11

  • ClientToAgent::builder() → Client.builder()
  • AgentToClient::builder() → Agent.builder()
  • ProxyToConductor::builder() → Proxy.builder()
  • .serve(transport) → .connect_to(transport)
  • .run_until(transport, closure) → .connect_with(transport, closure)
  • .connect_to(transport)?.serve() → .connect_to(transport)
  • .into_server() → .into_channel_and_future()
  • on_receive_message(...) → on_receive_dispatch(...)
  • on_receive_message!() → on_receive_dispatch!()
  • .forward_to_request_cx(cx) → .forward_response_to(responder)

Callback Parameter Renames

In request handlers, the context parameters have been renamed:

v10 → v11

  • request_cx → responder
  • response_cx → router
  • cx (generic connection) → connection
  • agent_cx → connection_to_agent
  • client_cx → connection_to_client
  • editor_cx → connection_to_editor
  • conductor_cx → connection_to_conductor

Common Upgrade Patterns

Pattern 1: Client connecting to an agent

v10:

#![allow(unused)]
fn main() {
use sacp::{ClientToAgent, Component};

ClientToAgent::builder()
    .name("my-client")
    .run_until(transport, async |cx| {
        let response = cx.send_request(MyRequest {}).block_task().await?;
        Ok(())
    })
    .await
}

v11:

#![allow(unused)]
fn main() {
use sacp::{Client, ConnectTo};

Client.builder()
    .name("my-client")
    .connect_with(transport, async |connection| {
        let response = connection.send_request(MyRequest {}).block_task().await?;
        Ok(())
    })
    .await
}

Pattern 2: Building an agent (reactive handler)

v10:

#![allow(unused)]
fn main() {
use sacp::{AgentToClient, Component};

AgentToClient::builder()
    .name("my-agent")
    .on_receive_request(async |req: InitializeRequest, request_cx, _cx| {
        request_cx.respond(InitializeResponse::new(req.protocol_version))
    }, sacp::on_receive_request!())
    .serve(transport)
    .await
}

v11:

#![allow(unused)]
fn main() {
use sacp::{Agent, ConnectTo};

Agent.builder()
    .name("my-agent")
    .on_receive_request(async |req: InitializeRequest, responder, _connection| {
        responder.respond(InitializeResponse::new(req.protocol_version))
    }, sacp::on_receive_request!())
    .connect_to(transport)
    .await
}

Pattern 3: Implementing Component trait (now ConnectTo)

v10:

#![allow(unused)]
fn main() {
use sacp::{Component, AgentToClient};
use sacp::link::ClientToAgent;

impl Component<AgentToClient> for MyAgent {
    async fn serve(self, client: impl Component<ClientToAgent>) -> Result<(), sacp::Error> {
        AgentToClient::builder()
            .name("my-agent")
            // handlers...
            .serve(client)
            .await
    }
}
}

v11:

#![allow(unused)]
fn main() {
use sacp::{ConnectTo, Agent, Client};

impl ConnectTo<Client> for MyAgent {
    async fn connect_to(self, client: impl ConnectTo<Agent>) -> Result<(), sacp::Error> {
        Agent.builder()
            .name("my-agent")
            // handlers...
            .connect_to(client)
            .await
    }
}
}

Pattern 4: Building a proxy

v10:

#![allow(unused)]
fn main() {
use sacp::{ProxyToConductor, ClientPeer, AgentPeer, Component};
use sacp::link::ConductorToProxy;

impl Component<ProxyToConductor> for MyProxy {
    async fn serve(self, client: impl Component<ConductorToProxy>) -> Result<(), sacp::Error> {
        ProxyToConductor::builder()
            .name("my-proxy")
            .on_receive_request_from(ClientPeer, async |req: MyRequest, request_cx, cx| {
                cx.send_request_to(AgentPeer, req)
                    .forward_to_request_cx(request_cx)
            }, sacp::on_receive_request!())
            .serve(client)
            .await
    }
}
}

v11:

#![allow(unused)]
fn main() {
use sacp::{Proxy, Client, Agent, Conductor, ConnectTo};

impl ConnectTo<Conductor> for MyProxy {
    async fn connect_to(self, client: impl ConnectTo<Proxy>) -> Result<(), sacp::Error> {
        Proxy.builder()
            .name("my-proxy")
            .on_receive_request_from(Client, async |req: MyRequest, responder, cx| {
                cx.send_request_to(Agent, req)
                    .forward_response_to(responder)
            }, sacp::on_receive_request!())
            .connect_to(client)
            .await
    }
}
}

Pattern 5: Creating dynamic component collections

v10:

#![allow(unused)]
fn main() {
use sacp::{DynComponent, Component};
use sacp::link::{AgentToClient, ProxyToConductor};

let proxies: Vec<DynComponent<ProxyToConductor>> = vec![
    DynComponent::new(Proxy1),
    DynComponent::new(Proxy2),
];

let agent: DynComponent<AgentToClient> = DynComponent::new(MyAgent);
}

v11:

#![allow(unused)]
fn main() {
use sacp::{DynConnectTo, ConnectTo, Client, Conductor};

let proxies: Vec<DynConnectTo<Conductor>> = vec![
    DynConnectTo::new(Proxy1),
    DynConnectTo::new(Proxy2),
];

let agent: DynConnectTo<Client> = DynConnectTo::new(MyAgent);
}

Pattern 6: Response handling

v10:

#![allow(unused)]
fn main() {
use sacp::JrResponse;

async fn recv<T: sacp::JrResponsePayload + Send>(
    response: sacp::JrResponse<T>,
) -> Result<T, sacp::Error> {
    // ...
}
}

v11:

#![allow(unused)]
fn main() {
use sacp::SentRequest;

async fn recv<T: sacp::JsonRpcResponse + Send>(
    response: sacp::SentRequest<T>,
) -> Result<T, sacp::Error> {
    // ...
}
}

Pattern 7: Custom message handlers

v10:

#![allow(unused)]
fn main() {
use sacp::{JrMessageHandler, MessageCx, Handled, JrConnectionCx};
use sacp::util::MatchMessage;

impl JrMessageHandler for MyHandler {
    type Link = sacp::link::UntypedLink;

    async fn handle_message(
        &mut self,
        message: MessageCx,
        cx: JrConnectionCx<Self::Link>,
    ) -> Result<Handled<MessageCx>, sacp::Error> {
        MatchMessage::new(message)
            .if_request(async |req: MyRequest, request_cx| {
                request_cx.respond(MyResponse {})
            })
            .done()
    }
}
}

v11:

#![allow(unused)]
fn main() {
use sacp::{HandleDispatchFrom, Dispatch, Handled, ConnectionTo};
use sacp::util::MatchDispatch;

impl HandleDispatchFrom for MyHandler {
    type Role = sacp::UntypedRole;

    async fn handle_dispatch_from(
        &mut self,
        message: Dispatch,
        cx: ConnectionTo<Self::Role>,
    ) -> Result<Handled<Dispatch>, sacp::Error> {
        MatchDispatch::new(message)
            .if_request(async |req: MyRequest, responder| {
                responder.respond(MyResponse {})
            })
            .done()
    }
}
}

Pattern 8: on_receive_message → on_receive_dispatch

v10:

#![allow(unused)]
fn main() {
.on_receive_message(async |message: MessageCx, cx| {
    message.respond_with_error(sacp::Error::method_not_found(), cx)
}, sacp::on_receive_message!())
}

v11:

#![allow(unused)]
fn main() {
.on_receive_dispatch(async |message: Dispatch, cx| {
    message.respond_with_error(sacp::Error::method_not_found(), cx)
}, sacp::on_receive_dispatch!())
}

Pattern 9: Using the conductor

v10:

#![allow(unused)]
fn main() {
use sacp_conductor::{Conductor, ProxiesAndAgent};

Conductor::new_agent("conductor", ProxiesAndAgent::new(agent), Default::default())
    .run(transport)
    .await
}

v11:

#![allow(unused)]
fn main() {
use sacp_conductor::{ConductorImpl, ProxiesAndAgent};

ConductorImpl::new_agent("conductor", ProxiesAndAgent::new(agent), Default::default())
    .run(transport)
    .await
}

Pattern 10: MCP server type parameters

v10:

#![allow(unused)]
fn main() {
use sacp::mcp_server::McpServer;

// For proxies
let server = McpServer::<ProxyToConductor, _>::builder("tools").build();

// For sessions (client-side)
let server = McpServer::<ClientToAgent, _>::builder("tools").build();
}

v11:

#![allow(unused)]
fn main() {
use sacp::mcp_server::McpServer;

// For proxies
let server = McpServer::<Conductor, _>::builder("tools").build();

// For sessions (client-side)
let server = McpServer::<Agent, _>::builder("tools").build();
}

Import Changes

v10:

#![allow(unused)]
fn main() {
use sacp::{
    Component, DynComponent,
    ClientToAgent, AgentToClient, ProxyToConductor,
    AgentPeer, ClientPeer, ConductorPeer,
    JrConnectionCx, JrRequestCx, JrResponse, JrResponsePayload,
    JrMessageHandler, MessageCx,
};
use sacp::link::{JrLink, ConductorToProxy, UntypedLink};
}

v11:

#![allow(unused)]
fn main() {
use sacp::{
    ConnectTo, DynConnectTo,
    Client, Agent, Proxy, Conductor,
    ConnectionTo, Responder, SentRequest, JsonRpcResponse,
    HandleDispatchFrom, Dispatch,
    UntypedRole,
};
}

Conceptual Changes

In v10, Link types encoded both sides of a connection:

  • ClientToAgent = “I am a client talking to an agent”
  • AgentToClient = “I am an agent talking to a client”

In v11, Role types encode the counterpart you connect to:

  • impl ConnectTo<Client> = “I can connect to a client” (i.e., I am an agent)
  • impl ConnectTo<Agent> = “I can connect to an agent” (i.e., I am a client)

Unified Peer/Role Types

In v10, peers were separate types (AgentPeer, ClientPeer) from links.

In v11, Agent, Client, Proxy, and Conductor are both:

  • Role types (used as type parameters)
  • Peer selectors (used in send_request_to(Agent, ...))
  • Builder starters (Agent.builder())

Builder Method Naming

The new method names better describe what they do:

  • builder() - “Start building a connection from this role”
  • connect_to(transport) - “Connect to this transport (reactive mode)”
  • connect_with(transport, closure) - “Connect with this transport and run closure (active mode)”

Migration Tips

  1. Start with imports: Update your use statements first to get the new types in scope.

  2. Search and replace: Many renames are mechanical:

    • ClientToAgent::builder() → Client.builder()
    • AgentToClient::builder() → Agent.builder()
    • ProxyToConductor::builder() → Proxy.builder()
    • .serve( → .connect_to(
    • .run_until( → .connect_with(
    • request_cx → responder
    • MessageCx → Dispatch
  3. Component trait: Change impl Component<AgentToClient> to impl ConnectTo<Client> - remember the role is your counterpart, not yourself.

  4. Peer types in handlers: Change ClientPeer to Client, AgentPeer to Agent, etc.

  5. The conductor: Rename Conductor:: to ConductorImpl:: in conductor creation calls.

  6. Test helpers: Update JrResponse<T> to SentRequest<T> and JrResponsePayload to JsonRpcResponse.

Composable Agents via P/ACP (Proxying ACP)

Elevator pitch

What are you proposing to change?

We propose to prototype P/ACP (Proxying ACP), an extension to Zed’s Agent Client Protocol (ACP) that enables composable agent architectures through proxy chains. Instead of building monolithic AI tools, P/ACP allows developers to create modular components that can intercept and transform messages flowing between editors and agents.

This RFD builds on the concepts introduced in SymmACP: extending Zed’s ACP to support Composable Agents, with the protocol renamed to P/ACP for this implementation.

Key changes:

  • Define a proxy chain architecture where components can transform ACP messages
  • Create an orchestrator (Conductor) that manages the proxy chain and presents as a normal ACP agent to editors
  • Establish the _proxy/successor/* protocol for proxies to communicate with downstream components
  • Enable composition without requiring editors to understand P/ACP internals

Status quo

How do things work today and what problems does this cause? Why would we change things?

Today’s AI agent ecosystem is dominated by monolithic agents. We want people to be able to combine independent components to build custom agents targeting their specific needs. We want them to be able to use these with whatever editors and tooling they have. This is aligned with Symposium’s core values of openness, interoperability, and extensibility.

Motivating Example: Sparkle Integration

Consider integrating Sparkle (a collaborative AI framework) into a coding session with Zed and Claude. Sparkle provides an MCP server with tools, but requires an initialization sequence to load patterns and set up collaborative context.

Without P/ACP:

  • Users must manually run the initialization sequence each session
  • Or use agent-specific hooks (Claude Code has them, but not standardized across agents)
  • Or modify the agent to handle initialization automatically
  • Result: Manual intervention required, agent-specific configuration, no generic solution

With P/ACP:

flowchart LR
    Editor[Editor<br/>Zed]

    subgraph Conductor[Conductor Orchestrator]
        Sparkle[Sparkle Component]
        Agent[Base Agent]
        MCP[Sparkle MCP Server]

        Sparkle -->|proxy chain| Agent
        Sparkle -.->|provides tools| MCP
    end

    Editor <-->|ACP| Conductor

The Sparkle component:

  1. Injects Sparkle MCP server into the agent’s tool list during initialize
  2. Intercepts the first prompt and prepends Sparkle embodiment sequence
  3. Passes all other messages through transparently

From the editor’s perspective, it talks to a normal ACP agent. From the base agent’s perspective, it has Sparkle tools available. No code changes required on either side.

This demonstrates P/ACP’s core value: adding capabilities through composition rather than modification.

What we propose to do about it

What are you proposing to improve the situation?

We will develop an extension to ACP called P/ACP (Proxying ACP).

The heart of P/ACP is a proxy chain where each component adds specific capabilities:

flowchart LR
    Editor[ACP Editor]

    subgraph Orchestrator[P/ACP Orchestrator]
        O[Orchestrator Process]
    end

    subgraph ProxyChain[Proxy Chain - managed by orchestrator]
        P1[Proxy 1]
        P2[Proxy 2]
        Agent[ACP Agent]

        P1 -->|_proxy/successor/*| P2
        P2 -->|_proxy/successor/*| Agent
    end

    Editor <-->|ACP| O
    O <-->|routes messages| ProxyChain

P/ACP defines four kinds of actors:

  • Editors spawn the orchestrator and communicate via standard ACP
  • Orchestrator manages the proxy chain, appears as a normal ACP agent to editors
  • Proxies intercept and transform messages, communicate with downstream via _proxy/successor/* protocol
  • Agents provide base AI model behavior using standard ACP

The orchestrator handles message routing, making the proxy chain transparent to editors. Proxies can transform requests, responses, or add side-effects without editors or agents needing P/ACP awareness.

The Orchestrator: Conductor

P/ACP’s orchestrator is called the Conductor (binary name: conductor). The conductor has three core responsibilities:

  1. Process Management - Creates and manages component processes based on command-line configuration
  2. Message Routing - Routes messages between editor, components, and agent through the proxy chain
  3. Capability Adaptation - Observes component capabilities and adapts between them

Key adaptation: MCP Bridge

  • If the agent supports mcp_acp_transport, conductor passes MCP servers with ACP transport through unchanged
  • If not, conductor spawns conductor mcp $port processes to bridge between stdio (MCP) and ACP messages
  • Components can provide MCP servers without requiring agent modifications
  • See “MCP Bridge” section in Implementation Details for full protocol

Other adaptations include session pre-population, streaming support, content types, and tool formats.

From the editor’s perspective, it spawns one conductor process and communicates using normal ACP over stdio. The editor doesn’t know about the proxy chain.

Command-line usage:

# Agent mode - manages proxy chain
conductor agent sparkle-acp claude-code-acp

# MCP mode - bridges stdio to TCP for MCP-over-ACP
conductor mcp 54321

To editors, the conductor is a normal ACP agent - no special capabilities are advertised upstream.

Proxy Capability Handshake:

The conductor uses a two-way capability handshake to verify that proxy components can fulfill their responsibilities:

  1. Conductor offers proxy capability - When initializing non-last components (proxies), the conductor includes "proxy": true in the _meta field of the InitializeRequest
  2. Component accepts proxy capability - The component must respond with "proxy": true in the _meta field of its InitializeResponse
  3. Last component (agent) - The final component is treated as a standard ACP agent and does NOT receive the proxy capability offer

Why a two-way handshake? The proxy capability is an active protocol - it requires the component to handle _proxy/successor/* messages and route communications appropriately. Unlike passive capabilities (like “http” or “sse”) which are just declarations, proxy components must actively participate in message routing. If a component doesn’t respond with the proxy capability, the conductor fails initialization with an error like “component X is not a proxy”, since that component cannot fulfill its required function in the chain.

Shiny future

How will things play out once this feature exists?

Composable Agent Ecosystems

P/ACP enables a marketplace of reusable proxy components. Developers can:

  • Compose custom agent pipelines from independently-developed proxies
  • Share proxies across different editors and agents
  • Test and debug proxies in isolation
  • Mix community-developed and custom proxies

Simplified Agent Development

Agent developers can focus on core model behavior without implementing cross-cutting concerns:

  • Logging, metrics, and observability become proxy responsibilities
  • Rate limiting and caching handled externally
  • Content filtering and safety policies applied consistently

Editor Simplicity

Editors gain enhanced functionality without custom integrations:

  • Add sophisticated agent behaviors by changing proxy chain configuration
  • Support new agent features without editor updates
  • Maintain compatibility with any ACP agent

Standardization Path

As the ecosystem matures, successful patterns may be:

  • Standardized in ACP specification itself
  • Adopted by other agent protocols
  • Used as reference implementations for proxy architectures

Implemented Extensions

MCP Bridge - ✅ Implemented via the _mcp/* protocol (see “Implementation details and plan” section). Components can provide MCP servers using ACP transport, enabling tool provision without agents needing P/ACP awareness. The conductor bridges between agents lacking native support and components.

Future Protocol Extensions

Extensions under consideration for future development:

Agent-Initiated Messages - Allow components to send messages after the agent has sent end-turn, outside the normal request-response cycle. Use cases include background task completion notifications, time-based reminders, or autonomous checkpoint creation.

Session Pre-Population - Create sessions with existing conversation history. Conductor adapts based on agent capabilities: uses native support if available, otherwise synthesizes a dummy prompt containing the history, intercepts the response, and starts the real session.

Rich Content Types - Extend content types beyond text to include HTML panels, interactive GUI components, or other structured formats. Components can transform between content types based on what downstream agents support.

Implementation details and plan

Tell me more about your implementation. What is your detailed implementation plan?

The implementation focuses on building the Conductor and demonstrating the Sparkle integration use case.

P/ACP protocol

Definition: Editor vs Agent of a proxy

For a P/ACP proxy, the “editor” is defined as the upstream connection and the “agent” as the downstream connection.

flowchart LR
    Editor --> Proxy --> Agent

P/ACP editor capabilities

A P/ACP-aware editor provides the following capability during ACP initialization:

/// Including the symposium section *at all* means that the editor
/// supports symposium proxy initialization.
"_meta": {
    "symposium": {
        "version": "1.0",
        "html_panel": true,      // or false, if this is the ToEditor proxy
        "file_comment": true     // or false, if this is the ToEditor proxy
    }
}

P/ACP proxies forward the capabilities they receive from their editor.

P/ACP component capabilities

P/ACP uses capabilities in the _meta field for the proxy handshake:

Proxy capability (two-way handshake):

The conductor offers the proxy capability to non-last components in InitializeRequest:

// InitializeRequest from conductor to proxy component
"_meta": {
    "symposium": {
        "version": "1.0",
        "proxy": true
    }
}

The component must accept by responding with the proxy capability in InitializeResponse:

// InitializeResponse from proxy component to conductor
"_meta": {
    "symposium": {
        "version": "1.0",
        "proxy": true
    }
}

If a component that was offered the proxy capability does not respond with it, the conductor fails initialization.

Agent capability: The last component in the chain (the agent) is NOT offered the proxy capability and does not need to respond with it. Agents are just normal ACP agents with no P/ACP awareness required.

The _proxy/successor/{send,receive} protocol

Proxies communicate with their downstream component (next proxy or agent) through special extension messages handled by the orchestrator:

_proxy/successor/send/request - Proxy wants to send a request downstream:

{
  "method": "_proxy/successor/send/request",
  "params": {
    "message": <ACP_REQUEST>
  }
}

_proxy/successor/send/notification - Proxy wants to send a notification downstream:

{
  "method": "_proxy/successor/send/notification",
  "params": {
    "message": <ACP_NOTIFICATION>
  }
}

_proxy/successor/receive/request - Orchestrator delivers a request from downstream:

{
  "method": "_proxy/successor/receive/request",
  "params": {
    "message": <ACP_REQUEST>
  }
}

_proxy/successor/receive/notification - Orchestrator delivers a notification from downstream:

{
  "method": "_proxy/successor/receive/notification",
  "params": {
    "message": <ACP_NOTIFICATION>
  }
}

Message flow example:

  1. Editor sends ACP prompt request to orchestrator
  2. Orchestrator forwards to Proxy1 as normal ACP message
  3. Proxy1 transforms and sends _proxy/successor/send/request { message: <modified_prompt> }
  4. Orchestrator routes that to Proxy2 as normal ACP prompt
  5. Eventually reaches agent, response flows back through chain
  6. Orchestrator wraps responses going upstream appropriately

Transparent proxy pattern: A pass-through proxy is trivial - just forward everything:

#![allow(unused)]
fn main() {
match message {
    // Forward requests from editor to successor
    AcpRequest(req) => send_to_successor_request(req),

    // Forward notifications from editor to successor
    AcpNotification(notif) => send_to_successor_notification(notif),

    // Forward from successor back to editor
    ExtRequest("_proxy/successor/receive/request", msg) => respond_to_editor(msg),
    ExtNotification("_proxy/successor/receive/notification", msg) => forward_to_editor(msg),
}
}

The MCP Bridge: _mcp/* Protocol

P/ACP enables components to provide MCP servers that communicate over ACP messages rather than traditional stdio. This allows components to handle MCP tool calls without agents needing special P/ACP awareness.

MCP Server Declaration with ACP Transport

Components declare MCP servers with ACP transport by using the HTTP MCP server format with a special URL scheme:

{
  "tools": {
    "mcpServers": {
      "sparkle": {
        "transport": "http",
        "url": "acp:550e8400-e29b-41d4-a716-446655440000",
        "headers": {}
      }
    }
  }
}

The acp:$UUID URL signals ACP transport. The component generates the UUID to identify which component handles calls to this MCP server.

Agent Capability: mcp_acp_transport

Agents that natively support MCP-over-ACP declare this capability:

{
  "_meta": {
    "mcp_acp_transport": true
  }
}

Conductor behavior:

  • If the final agent has mcp_acp_transport: true, conductor passes MCP server declarations through unchanged
  • If the final agent lacks this capability, conductor performs bridging adaptation:
    1. Binds a fresh TCP port (e.g., localhost:54321)
    2. Transforms the MCP server declaration to use conductor mcp $port as the command
    3. Spawns conductor mcp $port which connects back via TCP and bridges to ACP messages
    4. Always advertises mcp_acp_transport: true to intermediate components

Bridging Transformation Example

Original MCP server spec (from component):

{
  "sparkle": {
    "transport": "http",
    "url": "acp:550e8400-e29b-41d4-a716-446655440000",
    "headers": {}
  }
}

Transformed spec (passed to agent without mcp_acp_transport):

{
  "sparkle": {
    "command": "conductor",
    "args": ["mcp", "54321"],
    "transport": "stdio"
  }
}

The agent thinks it’s talking to a normal MCP server over stdio. The conductor mcp process bridges between stdio (MCP JSON-RPC) and TCP (connection to main conductor), which then translates to ACP _mcp/* messages.

MCP Message Flow Protocol

When MCP tool calls occur, they flow as ACP extension messages:

_mcp/client_to_server/request - Agent calling an MCP tool (flows backward up chain):

{
  "jsonrpc": "2.0",
  "id": "T1",
  "method": "_mcp/client_to_server/request",
  "params": {
    "url": "acp:550e8400-e29b-41d4-a716-446655440000",
    "message": {
      "jsonrpc": "2.0",
      "id": "mcp-123",
      "method": "tools/call",
      "params": {
        "name": "embody_sparkle",
        "arguments": {}
      }
    }
  }
}

Response:

{
  "jsonrpc": "2.0",
  "id": "T1",
  "result": {
    "message": {
      "jsonrpc": "2.0",
      "id": "mcp-123",
      "result": {
        "content": [
          {"type": "text", "text": "Embodiment complete"}
        ]
      }
    }
  }
}

_mcp/client_to_server/notification - Agent sending notification to MCP server:

{
  "jsonrpc": "2.0",
  "method": "_mcp/client_to_server/notification",
  "params": {
    "url": "acp:550e8400-e29b-41d4-a716-446655440000",
    "message": {
      "jsonrpc": "2.0",
      "method": "notifications/cancelled",
      "params": {}
    }
  }
}

_mcp/server_to_client/request - MCP server calling back to agent (flows forward down chain):

{
  "jsonrpc": "2.0",
  "id": "S1",
  "method": "_mcp/server_to_client/request",
  "params": {
    "url": "acp:550e8400-e29b-41d4-a716-446655440000",
    "message": {
      "jsonrpc": "2.0",
      "id": "mcp-456",
      "method": "sampling/createMessage",
      "params": {
        "messages": [...],
        "modelPreferences": {...}
      }
    }
  }
}

_mcp/server_to_client/notification - MCP server sending notification to agent:

{
  "jsonrpc": "2.0",
  "method": "_mcp/server_to_client/notification",
  "params": {
    "url": "acp:550e8400-e29b-41d4-a716-446655440000",
    "message": {
      "jsonrpc": "2.0",
      "method": "notifications/progress",
      "params": {
        "progressToken": "token-1",
        "progress": 50,
        "total": 100
      }
    }
  }
}

Message Routing

Client→Server messages (agent calling MCP tools):

  • Flow backward up the proxy chain (agent → conductor → components)
  • Component matches on params.url to identify which MCP server
  • Component extracts params.message, handles the MCP call, responds

Server→Client messages (MCP server callbacks):

  • Flow forward down the proxy chain (component → conductor → agent)
  • Component initiates when its MCP server needs to call back (sampling, logging, progress)
  • Conductor routes to agent (or via bridge if needed)

Conductor MCP Mode

The conductor binary has two modes:

  1. Agent mode: conductor agent [proxies...] agent

    • Manages P/ACP proxy chain
    • Routes ACP messages
  2. MCP mode: conductor mcp $port

    • Acts as MCP server over stdio
    • Connects to localhost:$port via TCP
    • Bridges MCP JSON-RPC (stdio) ↔ raw JSON-RPC (TCP to main conductor)

When bridging is needed, the main conductor spawns conductor mcp $port as the child process that the agent communicates with via stdio.

Additional Extension Messages

Proxies can define their own extension messages beyond _proxy/successor/* to provide specific capabilities. Examples might include:

  • Logging/observability: _proxy/log messages for structured logging
  • Metrics: _proxy/metric messages for tracking usage
  • Configuration: _proxy/config messages for dynamic reconfiguration

The orchestrator can handle routing these messages appropriately, or they can be handled by specific proxies in the chain.

These extensions are beyond the scope of this initial RFD and will be defined as needed by specific proxy implementations.

Implementation progress

What is the current status of implementation and what are the next steps?

Current Status: Implementation Phase

Completed:

  • ✅ P/ACP protocol design with Conductor orchestrator architecture
  • ✅ _proxy/successor/{send,receive} message protocol defined
  • ✅ sacp Rust crate with JSON-RPC layer and ACP message types
  • ✅ Comprehensive JSON-RPC test suite (21 tests)
  • ✅ Proxy message type definitions (ToSuccessorRequest, etc.)

In Progress:

  • Conductor orchestrator implementation
  • Sparkle P/ACP component
  • MCP Bridge implementation (see checklist below)

MCP Bridge Implementation Checklist

Phase 1: Conductor MCP Mode (COMPLETE ✅)

  • Implement conductor mcp $port CLI parsing
  • TCP connection to localhost:$port
  • Stdio → TCP bridging (read from stdin, send via TCP)
  • TCP → Stdio bridging (read from TCP, write to stdout)
  • Newline-delimited JSON framing
  • Error handling (connection failures, parse errors, reconnection logic)
  • Unit tests for message bridging
  • Integration test: standalone MCP bridge with mock MCP client/server

Phase 2: Conductor Agent Mode - MCP Detection & Bridging

  • Detect "transport": "http", "url": "acp:$UUID" MCP servers in initialization
  • Check final agent for mcp_acp_transport capability
  • Bind ephemeral TCP ports when bridging needed
  • Transform MCP server specs to use conductor mcp $port
  • Spawn conductor mcp $port subprocess per ACP-transport MCP server
  • Store mapping: UUID → TCP port → bridge process
  • Always advertise mcp_acp_transport: true to intermediate components
  • Integration test: full chain with MCP bridging

Phase 3: _mcp/* Message Routing

  • Route _mcp/client_to_server/request (TCP → ACP, backward up chain)
  • Route _mcp/client_to_server/notification (TCP → ACP, backward)
  • Route _mcp/server_to_client/request (ACP → TCP, forward down chain)
  • Route _mcp/server_to_client/notification (ACP → TCP, forward)
  • URL matching for component routing (params.url matches UUID)
  • Response routing back through bridge
  • Integration test: full _mcp/* message flow

Phase 4: Bridge Lifecycle Management

  • Clean up bridge processes on session end
  • Handle bridge process crashes
  • Handle component crashes (clean up associated bridges)
  • TCP connection cleanup on errors
  • Port cleanup and reuse

Phase 5: Component-Side MCP Integration

  • Sparkle component declares ACP-transport MCP server
  • Sparkle handles _mcp/client_to_server/* messages
  • Sparkle initiates _mcp/server_to_client/* callbacks
  • End-to-end test: Sparkle embodiment via MCP bridge

Phase 1: Minimal Sparkle Demo

Goal: Demonstrate Sparkle integration through P/ACP composition.

Components:

  1. Conductor orchestrator - Process management, message routing, capability adaptation
  2. Sparkle P/ACP component - Injects Sparkle MCP server, handles embodiment sequence
  3. Integration test - Validates end-to-end flow with mock editor/agent

Demo flow:

Zed → Conductor → Sparkle Component → Claude
                ↓
           Sparkle MCP Server

Success criteria:

  • Sparkle MCP server appears in agent’s tool list
  • First prompt triggers Sparkle embodiment sequence
  • Subsequent prompts work normally
  • All other ACP messages pass through unchanged

Detailed MVP Walkthrough

This section shows the exact message flows for the minimal Sparkle demo.

Understanding UUIDs in the flow:

There are two distinct types of UUIDs in these sequences:

  1. Message IDs (JSON-RPC request IDs): These identify individual JSON-RPC requests and must be tracked to route responses correctly. When a component forwards a message using _proxy/successor/request, it creates a fresh message ID for the downstream request and remembers the mapping to route the response back.

  2. Session IDs (ACP session identifiers): These identify ACP sessions and flow through the chain unchanged. The agent creates a session ID, and all components pass it back unmodified.

Conductor’s routing rules:

  1. Message from Editor → Forward “as is” to first component (same message ID)
  2. _proxy/successor/request from component → Unwrap payload and send to next component (using message ID from the wrapper)
  3. Response from downstream → Send back to whoever made the _proxy request
  4. First component’s response → Send back to Editor

Components don’t talk directly to each other - all communication flows through Conductor via the _proxy protocol.

Scenario 1: Initialization and Session Creation

The editor spawns Conductor with component names, Conductor spawns the components, and initialization flows through the chain.

sequenceDiagram
    participant Editor as Editor<br/>(Zed)
    participant Conductor as Conductor<br/>Orchestrator
    participant Sparkle as Sparkle<br/>Component
    participant Agent as Base<br/>Agent

    Note over Editor: Spawns Conductor with args:<br/>"sparkle-acp agent-acp"
    Editor->>Conductor: spawn process
    activate Conductor
    
    Note over Conductor: Spawns both components
    Conductor->>Sparkle: spawn "sparkle-acp"
    activate Sparkle
    Conductor->>Agent: spawn "agent-acp"
    activate Agent
    
    Note over Editor,Agent: === Initialization Phase ===
    
    Editor->>Conductor: initialize (id: I0)
    Conductor->>Sparkle: initialize (id: I0)<br/>(offers PROXY capability)
    
    Note over Sparkle: Sees proxy capability offer,<br/>initializes successor
    
    Sparkle->>Conductor: _proxy/successor/request (id: I1)<br/>payload: initialize
    Conductor->>Agent: initialize (id: I1)<br/>(NO proxy capability - agent is last)
    Agent-->>Conductor: initialize response (id: I1)
    Conductor-->>Sparkle: _proxy/successor response (id: I1)
    
    Note over Sparkle: Sees Agent capabilities,<br/>prepares response
    
    Sparkle-->>Conductor: initialize response (id: I0)<br/>(accepts PROXY capability)
    
    Note over Conductor: Verifies Sparkle accepted proxy.<br/>If not, would fail with error.
    
    Conductor-->>Editor: initialize response (id: I0)
    
    Note over Editor,Agent: === Session Creation ===
    
    Editor->>Conductor: session/new (id: U0, tools: M0)
    Conductor->>Sparkle: session/new (id: U0, tools: M0)
    
    Note over Sparkle: Wants to inject Sparkle MCP server
    
    Sparkle->>Conductor: _proxy/successor/request (id: U1)<br/>payload: session/new with tools (M0, sparkle-mcp)
    Conductor->>Agent: session/new (id: U1, tools: M0 + sparkle-mcp)
    
    Agent-->>Conductor: response (id: U1, sessionId: S1)
    Conductor-->>Sparkle: response to _proxy request (id: U1, sessionId: S1)
    
    Note over Sparkle: Remembers mapping U0 → U1
    
    Sparkle-->>Conductor: response (id: U0, sessionId: S1)
    Conductor-->>Editor: response (id: U0, sessionId: S1)
    
    Note over Editor,Agent: Session S1 created,<br/>Sparkle MCP server available to agent

Key messages:

  1. Editor → Conductor: initialize (id: I0)

    {
      "jsonrpc": "2.0",
      "id": "I0",
      "method": "initialize",
      "params": {
        "protocolVersion": "0.1.0",
        "capabilities": {},
        "clientInfo": {"name": "Zed", "version": "0.1.0"}
      }
    }
    
  2. Conductor → Sparkle: initialize (id: I0, with PROXY capability)

    {
      "jsonrpc": "2.0",
      "id": "I0",
      "method": "initialize",
      "params": {
        "protocolVersion": "0.1.0",
        "capabilities": {
          "_meta": {
            "symposium": {
              "version": "1.0",
              "proxy": true
            }
          }
        },
        "clientInfo": {"name": "Conductor", "version": "0.1.0"}
      }
    }
    
  3. Sparkle → Conductor: _proxy/successor/request (id: I1, wrapping initialize)

    {
      "jsonrpc": "2.0",
      "id": "I1",
      "method": "_proxy/successor/request",
      "params": {
        "message": {
          "method": "initialize",
          "params": {
            "protocolVersion": "0.1.0",
            "capabilities": {},
            "clientInfo": {"name": "Sparkle", "version": "0.1.0"}
          }
        }
      }
    }
    
  4. Conductor → Agent: initialize (id: I1, unwrapped, without PROXY capability)

    {
      "jsonrpc": "2.0",
      "id": "I1",
      "method": "initialize",
      "params": {
        "protocolVersion": "0.1.0",
        "capabilities": {},
        "clientInfo": {"name": "Sparkle", "version": "0.1.0"}
      }
    }
    
  5. Agent → Conductor: initialize response (id: I1)

    {
      "jsonrpc": "2.0",
      "id": "I1",
      "result": {
        "protocolVersion": "0.1.0",
        "capabilities": {},
        "serverInfo": {"name": "claude-code-acp", "version": "0.1.0"}
      }
    }
    
  6. Conductor → Sparkle: _proxy/successor response (id: I1, wrapping Agent’s response)

    {
      "jsonrpc": "2.0",
      "id": "I1",
      "result": {
        "protocolVersion": "0.1.0",
        "capabilities": {},
        "serverInfo": {"name": "claude-code-acp", "version": "0.1.0"}
      }
    }
    
  7. Sparkle → Conductor: initialize response (id: I0, accepting proxy capability)

    {
      "jsonrpc": "2.0",
      "id": "I0",
      "result": {
        "protocolVersion": "0.1.0",
        "capabilities": {
          "_meta": {
            "symposium": {
              "version": "1.0",
              "proxy": true
            }
          }
        },
        "serverInfo": {"name": "Sparkle + claude-code-acp", "version": "0.1.0"}
      }
    }
    

    Note: Sparkle MUST include "proxy": true in its response since it was offered the proxy capability. If this field is missing, Conductor will fail initialization with an error.

  8. Editor → Conductor: session/new (id: U0)

    {
      "jsonrpc": "2.0",
      "id": "U0",
      "method": "session/new",
      "params": {
        "tools": {
          "mcpServers": {
            "filesystem": {"command": "mcp-filesystem", "args": []}
          }
        }
      }
    }
    
  9. Conductor → Sparkle: session/new (id: U0, forwarded as-is)

    {
      "jsonrpc": "2.0",
      "id": "U0",
      "method": "session/new",
      "params": {
        "tools": {
          "mcpServers": {
            "filesystem": {"command": "mcp-filesystem", "args": []}
          }
        }
      }
    }
    
  10. Sparkle → Conductor: _proxy/successor/request (id: U1, with injected Sparkle MCP)

{
  "jsonrpc": "2.0",
  "id": "U1",
  "method": "_proxy/successor/request",
  "params": {
    "message": {
      "method": "session/new",
      "params": {
        "tools": {
          "mcpServers": {
            "filesystem": {"command": "mcp-filesystem", "args": []},
            "sparkle": {"command": "sparkle-mcp", "args": []}
          }
        }
      }
    }
  }
}
  11. Conductor → Agent: session/new (id: U1, unwrapped from _proxy message)
{
  "jsonrpc": "2.0",
  "id": "U1",
  "method": "session/new",
  "params": {
    "tools": {
      "mcpServers": {
        "filesystem": {"command": "mcp-filesystem", "args": []},
        "sparkle": {"command": "sparkle-mcp", "args": []}
      }
    }
  }
}
  12. Agent → Conductor: response (id: U1, with new session S1)
{
  "jsonrpc": "2.0",
  "id": "U1",
  "result": {
    "sessionId": "S1",
    "serverInfo": {"name": "claude-code-acp", "version": "0.1.0"}
  }
}
  13. Conductor → Sparkle: _proxy/successor response (id: U1)
{
  "jsonrpc": "2.0",
  "id": "U1",
  "result": {
    "sessionId": "S1",
    "serverInfo": {"name": "claude-code-acp", "version": "0.1.0"}
  }
}
  14. Sparkle → Conductor: response (id: U0, with session S1)
{
  "jsonrpc": "2.0",
  "id": "U0",
  "result": {
    "sessionId": "S1",
    "serverInfo": {"name": "Conductor + Sparkle", "version": "0.1.0"}
  }
}

Scenario 2: First Prompt (Sparkle Embodiment)

When the first prompt arrives, Sparkle intercepts it and runs the embodiment sequence before forwarding the actual user prompt.

sequenceDiagram
    participant Editor as Editor<br/>(Zed)
    participant Conductor as Conductor<br/>Orchestrator
    participant Sparkle as Sparkle<br/>Component
    participant Agent as Base<br/>Agent

    Note over Editor,Agent: === First Prompt Flow ===
    
    Editor->>Conductor: session/prompt (id: P0, sessionId: S1)
    Conductor->>Sparkle: session/prompt (id: P0, sessionId: S1)
    
    Note over Sparkle: First prompt detected!<br/>Run embodiment sequence first
    
    Sparkle->>Conductor: _proxy/successor/request (id: P1)<br/>payload: session/prompt (embodiment)
    Conductor->>Agent: session/prompt (id: P1, embodiment)
    
    Agent-->>Conductor: response (id: P1, tool_use: embody_sparkle)
    Conductor-->>Sparkle: response to _proxy request (id: P1)
    
    Note over Sparkle: Embodiment complete,<br/>now send real prompt
    
    Sparkle->>Conductor: _proxy/successor/request (id: P2)<br/>payload: session/prompt (user message)
    Conductor->>Agent: session/prompt (id: P2, user message)
    
    Agent-->>Conductor: response (id: P2, actual answer)
    Conductor-->>Sparkle: response to _proxy request (id: P2)
    
    Note over Sparkle: Maps P2 → P0
    
    Sparkle-->>Conductor: response (id: P0, actual answer)
    Conductor-->>Editor: response (id: P0, actual answer)
    
    Note over Editor,Agent: User sees response,<br/>Sparkle initialized

Key messages:

  1. Editor → Conductor: session/prompt (id: P0, user’s first message)

    {
      "jsonrpc": "2.0",
      "id": "P0",
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Hello! Can you help me with my code?"}
        ]
      }
    }
    
  2. Conductor → Sparkle: session/prompt (id: P0, forwarded as-is)

    {
      "jsonrpc": "2.0",
      "id": "P0",
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Hello! Can you help me with my code?"}
        ]
      }
    }
    
  3. Sparkle → Conductor: _proxy/successor/request (id: P1, embodiment sequence)

    {
      "jsonrpc": "2.0",
      "id": "P1",
      "method": "_proxy/successor/request",
      "params": {
        "message": {
          "method": "session/prompt",
          "params": {
            "sessionId": "S1",
            "messages": [
              {
                "role": "user",
                "content": "Please use the embody_sparkle tool to load your collaborative patterns."
              }
            ]
          }
        }
      }
    }
    
  4. Conductor → Agent: session/prompt (id: P1, unwrapped embodiment)

    {
      "jsonrpc": "2.0",
      "id": "P1",
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {
            "role": "user",
            "content": "Please use the embody_sparkle tool to load your collaborative patterns."
          }
        ]
      }
    }
    
  5. Agent → Conductor: response (id: P1, embodiment tool call)

    {
      "jsonrpc": "2.0",
      "id": "P1",
      "result": {
        "role": "assistant",
        "content": [
          {
            "type": "tool_use",
            "id": "tool-1",
            "name": "embody_sparkle",
            "input": {}
          }
        ]
      }
    }
    
  6. Sparkle → Conductor: _proxy/successor/request (id: P2, actual user prompt)

    {
      "jsonrpc": "2.0",
      "id": "P2",
      "method": "_proxy/successor/request",
      "params": {
        "message": {
          "method": "session/prompt",
          "params": {
            "sessionId": "S1",
            "messages": [
              {"role": "user", "content": "Hello! Can you help me with my code?"}
            ]
          }
        }
      }
    }
    
  7. Conductor → Agent: session/prompt (id: P2, unwrapped user prompt)

    {
      "jsonrpc": "2.0",
      "id": "P2",
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Hello! Can you help me with my code?"}
        ]
      }
    }
    
  8. Sparkle → Conductor: response (id: P0, forwarded to editor)

    {
      "jsonrpc": "2.0",
      "id": "P0",
      "result": {
        "role": "assistant",
        "content": "I'd be happy to help you with your code! What would you like to work on?"
      }
    }
    

Scenario 3: Subsequent Prompts (Pass-Through)

After embodiment, Sparkle passes all messages through transparently.

sequenceDiagram
    participant Editor as Editor<br/>(Zed)
    participant Conductor as Conductor<br/>Orchestrator
    participant Sparkle as Sparkle<br/>Component
    participant Agent as Base<br/>Agent

    Note over Editor,Agent: === Subsequent Prompt Flow ===
    
    Editor->>Conductor: session/prompt (id: P3, sessionId: S1)
    Conductor->>Sparkle: session/prompt (id: P3, sessionId: S1)
    
    Note over Sparkle: Already embodied,<br/>pass through unchanged
    
    Sparkle->>Conductor: _proxy/successor/request (id: P4)<br/>payload: session/prompt (unchanged)
    Conductor->>Agent: session/prompt (id: P4, unchanged)
    
    Agent-->>Conductor: response (id: P4)
    Conductor-->>Sparkle: response to _proxy request (id: P4)
    
    Note over Sparkle: Maps P4 → P3
    
    Sparkle-->>Conductor: response (id: P3)
    Conductor-->>Editor: response (id: P3)
    
    Note over Editor,Agent: Normal ACP flow,<br/>Sparkle and Conductor transparent

Key messages:

  1. Editor → Conductor: session/prompt (id: P3)

    {
      "jsonrpc": "2.0",
      "id": "P3",
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Can you refactor the authenticate function?"}
        ]
      }
    }
    
  2. Sparkle → Conductor: _proxy/successor/request (id: P4, message unchanged)

    {
      "jsonrpc": "2.0",
      "id": "P4",
      "method": "_proxy/successor/request",
      "params": {
        "message": {
          "method": "session/prompt",
          "params": {
            "sessionId": "S1",
            "messages": [
              {"role": "user", "content": "Can you refactor the authenticate function?"}
            ]
          }
        }
      }
    }
    
  3. Conductor → Agent: session/prompt (id: P4, unwrapped)

    {
      "jsonrpc": "2.0",
      "id": "P4",
      "method": "session/prompt",
      "params": {
        "sessionId": "S1",
        "messages": [
          {"role": "user", "content": "Can you refactor the authenticate function?"}
        ]
      }
    }
    
  4. Sparkle → Conductor: response (id: P3, forwarded to editor)

    {
      "jsonrpc": "2.0",
      "id": "P3",
      "result": {
        "role": "assistant",
        "content": "I'll help you refactor the authenticate function..."
      }
    }
    

Note that even though Sparkle is passing messages through “transparently”, it still uses the _proxy/successor/request protocol. This maintains the consistent routing pattern where all downstream communication flows through Conductor.

Implementation Note on Embodiment Responses:

For the MVP, when Sparkle runs the embodiment sequence before the user’s actual prompt, it will buffer both responses and concatenate them before sending back to the editor. This makes the embodiment transparent but loses some structure. A future RFD will explore richer content types (like subconversation) that would allow editors to distinguish between nested exchanges and main responses.

Phase 2: Tool Interception (FUTURE)

Goal: Route MCP tool calls through the proxy chain.

Conductor registers as a dummy MCP server. When Claude calls a Sparkle tool, the call routes back through the proxy chain to the Sparkle component for handling. This enables richer component interactions without requiring agents to understand P/ACP.

Phase 3: Additional Components (FUTURE)

Build additional P/ACP components that demonstrate different use cases:

  • Session history/context management
  • Logging and observability
  • Rate limiting
  • Content filtering

These will validate the protocol design and inform refinements.

Testing Strategy

Unit tests:

  • Test message serialization/deserialization
  • Test process spawning logic
  • Test stdio communication

Integration tests:

  • Spawn real proxy chains
  • Use actual ACP agents for end-to-end validation
  • Test error handling and cleanup

Manual testing:

  • Use with VSCode + ACP-aware agents
  • Verify with different proxy configurations
  • Test process management under various failure modes

Frequently asked questions

What questions have arisen over the course of authoring this document or during subsequent discussions?

What alternative approaches did you consider, and why did you settle on this one?

We considered extending MCP directly, but MCP is focused on tool provision rather than conversation flow control. We also looked at building everything as VSCode extensions, but that would lock us into a single editor ecosystem.

P/ACP’s proxy chain approach provides the right balance of modularity and compatibility - components can be developed independently while still working together.

How does this relate to other agent protocols like Google’s A2A?

P/ACP is complementary to protocols like A2A. While A2A focuses on agent-to-agent communication for remote services, P/ACP focuses on composing the user-facing development experience. You could imagine P/ACP components that use A2A internally to coordinate with remote agents.

What about security concerns with arbitrary proxy chains?

Users are responsible for the proxies they choose to run, similar to how they’re responsible for the software they install. Proxies can intercept and modify all communication, so trust is essential. For future versions, we’re considering approaches like Microsoft’s Wassette (WASM-based capability restrictions) to provide sandboxed execution environments.

What about the chat GUI interface?

We currently have a minimal chat GUI working in VSCode that can exchange basic messages with ACP agents. However, a richer chat interface with features like message history, streaming support, context providers, and interactive elements remains TBD.

Continue.dev has solved many of the hard problems for production-quality chat interfaces in VS Code extensions. Their GUI is specifically designed to be reusable - they use the exact same codebase for both VS Code and JetBrains IDEs by implementing different adapter layers.

Their architecture proves that message-passing protocols can cleanly separate GUI concerns from backend logic, which aligns perfectly with P/ACP’s composable design. When we’re ready to enhance the chat interface, we can evaluate whether to build on Continue.dev’s foundation or develop our own approach based on what we learn from the P/ACP proxy framework.

The Apache 2.0 license makes this legally straightforward, and their well-documented message protocols provide a clear integration path.

Why not just use hooks or plugins?

Hooks are fundamentally limited to what the host application anticipated. P/ACP proxies can intercept and modify the entire conversation flow, enabling innovations that the original tool designer never envisioned. This is the difference between customization and true composability.

What about performance implications of the proxy chain?

The proxy chain does add some latency as messages pass through multiple hops. However, we don’t expect this to be noticeable for typical development workflows. Most interactions are human-paced rather than high-frequency, and the benefits of composability outweigh the minimal latency cost.

How will users discover and configure proxy chains?

This will be determined over time as the ecosystem develops. We expect solutions to emerge organically, potentially including registries, configuration files, or marketplace-style discovery mechanisms.

What about resource management with multiple proxy processes?

Each proxy manages the lifecycle of processes it starts. When a proxy terminates, it cleans up its downstream processes. This creates a natural cleanup chain that prevents resource leaks.

Revision history

Initial draft based on architectural discussions.

Transport Architecture

Note: This document describes internal architecture and uses older terminology (e.g., JrConnection instead of the current API). For the user-facing API, see Building an Agent and Building a Proxy.

This chapter explains how the connection layer separates protocol semantics from transport mechanisms, enabling flexible deployment patterns including in-process message passing.

Overview

JrConnection provides the core JSON-RPC connection abstraction used by all SACP components. Originally designed around byte streams, it has been refactored to support pluggable transports that work with different I/O mechanisms while maintaining consistent protocol semantics.

Design Principles

Separation of Concerns

The architecture separates two distinct responsibilities:

  1. Protocol Layer: JSON-RPC semantics

    • Request ID assignment
    • Request/response correlation
    • Method dispatch to handlers
    • Error handling
  2. Transport Layer: Message movement

    • Reading/writing from I/O sources
    • Serialization/deserialization
    • Connection management

This separation enables:

  • In-process efficiency: Components in the same process can skip serialization
  • Transport flexibility: Easy to add new transport types (WebSockets, named pipes, etc.)
  • Testability: Mock transports for unit testing
  • Clarity: Clear boundaries between protocol and I/O concerns

The jsonrpcmsg::Message Boundary

The key insight is that jsonrpcmsg::Message provides a natural, transport-neutral boundary:

// Defined in the jsonrpcmsg module (field types elided):
enum Message {
    Request { method, params, id },
    Response { result, error, id },
}

This type sits between the protocol and transport layers:

  • Above: Protocol layer works with application types (OutgoingMessage, UntypedMessage)
  • Below: Transport layer works with jsonrpcmsg::Message
  • Boundary: Clean, well-defined interface

Actor Architecture

Protocol Actors (Core JrConnection)

These actors live in JrConnection and understand JSON-RPC semantics:

Outgoing Protocol Actor

Input:  mpsc::UnboundedReceiver<OutgoingMessage>
Output: mpsc::UnboundedSender<jsonrpcmsg::Message>

Responsibilities:

  • Assign unique IDs to outgoing requests
  • Subscribe to reply_actor for response correlation
  • Convert application-level OutgoingMessage to protocol-level jsonrpcmsg::Message

Incoming Protocol Actor

Input:  mpsc::UnboundedReceiver<jsonrpcmsg::Message>
Output: Routes to reply_actor or registered handlers

Responsibilities:

  • Route responses to reply_actor (matches by ID)
  • Route requests/notifications to registered handlers
  • Convert jsonrpcmsg::Request to UntypedMessage for handlers

Reply Actor

Manages request/response correlation:

  • Maintains map from request ID to response channel
  • When response arrives, delivers to waiting request
  • Unchanged from original design

Task Actor

Runs user-spawned concurrent tasks via cx.spawn(). Unchanged from original design.

Transport Actors (Provided by Trait)

These actors are spawned by IntoJrConnectionTransport implementations and have zero knowledge of protocol semantics:

Transport Outgoing Actor

Input:  mpsc::UnboundedReceiver<jsonrpcmsg::Message>
Output: Writes to I/O (byte stream, channel, socket, etc.)

For byte streams:

  • Serialize jsonrpcmsg::Message to JSON
  • Write newline-delimited JSON to stream

For in-process channels:

  • Directly forward jsonrpcmsg::Message to channel

Transport Incoming Actor

Input:  Reads from I/O (byte stream, channel, socket, etc.)
Output: mpsc::UnboundedSender<jsonrpcmsg::Message>

For byte streams:

  • Read newline-delimited JSON from stream
  • Parse to jsonrpcmsg::Message
  • Send to incoming protocol actor

For in-process channels:

  • Directly forward jsonrpcmsg::Message from channel

Message Flow

Outgoing Message Flow

User Handler
    |
    | OutgoingMessage (request/notification/response)
    v
Outgoing Protocol Actor
    | - Assign ID (for requests)
    | - Subscribe to replies
    | - Convert to jsonrpcmsg::Message
    v
    | jsonrpcmsg::Message
    |
Transport Outgoing Actor
    | - Serialize (byte streams)
    | - Or forward directly (channels)
    v
I/O Destination

Incoming Message Flow

I/O Source
    |
Transport Incoming Actor
    | - Parse (byte streams)
    | - Or forward directly (channels)
    v
    | jsonrpcmsg::Message
    |
Incoming Protocol Actor
    | - Route responses → reply_actor
    | - Route requests → registered handlers
    v
Handler or Reply Actor

Message Ordering in the Conductor

When the conductor forwards messages between components, it must preserve send order to prevent race conditions. The conductor achieves this by routing all message forwarding through a central message queue.

Key insight: While the transport actors operate independently, the conductor’s routing logic serializes all forwarding decisions through a central event loop. This ensures that even though responses use a “fast path” (reply_actor with oneshot channels) at the transport level, the decision to forward them is serialized with notification forwarding at the protocol level.

Without this serialization, responses could overtake notifications when both are forwarded through proxy chains, causing the client to receive messages out of order. See Conductor Implementation for details.

Transport Trait

The IntoJrConnectionTransport trait defines how to bridge internal channels with I/O:

pub trait IntoJrConnectionTransport {
    fn setup_transport(
        self,
        cx: &JrConnectionCx,
        outgoing_rx: mpsc::UnboundedReceiver<jsonrpcmsg::Message>,
        incoming_tx: mpsc::UnboundedSender<jsonrpcmsg::Message>,
    ) -> Result<(), Error>;
}

Key points:

  • Consumed (self): Implementations move owned resources into spawned actors
  • Spawns via cx.spawn(): Uses connection context to spawn transport actors
  • Channels only: No knowledge of OutgoingMessage or response correlation
  • Returns quickly: Just spawns actors, doesn’t block

Transport Implementations

Byte Stream Transport

The default implementation works with any AsyncRead + AsyncWrite pair:

// Illustrative sketch: parameter types, `Unpin`/`Send` bounds, and some
// `mut` bindings are elided for readability.
impl<OB: AsyncWrite, IB: AsyncRead> IntoJrConnectionTransport for (OB, IB) {
    fn setup_transport(self, cx, outgoing_rx, incoming_tx) -> Result<(), Error> {
        let (mut outgoing_bytes, incoming_bytes) = self;

        // Spawn incoming: read bytes → parse JSON → send Message
        cx.spawn(async move {
            let mut lines = BufReader::new(incoming_bytes).lines();
            while let Some(line) = lines.next().await {
                let message: jsonrpcmsg::Message = serde_json::from_str(&line?)?;
                incoming_tx.unbounded_send(message)?;
            }
            Ok(())
        });

        // Spawn outgoing: receive Message → serialize → write bytes
        cx.spawn(async move {
            while let Some(message) = outgoing_rx.next().await {
                let json = serde_json::to_vec(&message)?;
                outgoing_bytes.write_all(&json).await?;
                outgoing_bytes.write_all(b"\n").await?;
            }
            Ok(())
        });

        Ok(())
    }
}

Use cases:

  • Stdio connections to subprocess agents
  • TCP socket connections
  • Unix domain sockets
  • Any stream-based I/O

In-Process Channel Transport

For components in the same process, skip serialization entirely:

pub struct ChannelTransport {
    outgoing: mpsc::UnboundedSender<jsonrpcmsg::Message>,
    incoming: mpsc::UnboundedReceiver<jsonrpcmsg::Message>,
}

impl IntoJrConnectionTransport for ChannelTransport {
    fn setup_transport(self, cx, outgoing_rx, incoming_tx) -> Result<(), Error> {
        // Destructure so each half can move into its own task
        let ChannelTransport { outgoing, mut incoming } = self;

        // Just forward messages, no serialization
        cx.spawn(async move {
            while let Some(message) = incoming.next().await {
                incoming_tx.unbounded_send(message)?;
            }
            Ok(())
        });

        cx.spawn(async move {
            while let Some(message) = outgoing_rx.next().await {
                outgoing.unbounded_send(message)?;
            }
            Ok(())
        });

        Ok(())
    }
}

Benefits:

  • Zero serialization overhead: Messages passed by value
  • Same-process efficiency: Ideal for conductor with in-process proxies
  • Full type safety: No parsing errors possible

Construction API

Flexible Construction

The refactored API separates handler setup from transport selection:

// Build connection with handlers
let connection = JrConnection::new()
    .name("my-component")
    .on_receive_request(|req: InitializeRequest, cx| {
        cx.respond(InitializeResponse::make())
    })
    .on_receive_notification(|notif: SessionNotification, _cx| {
        Ok(())
    });

// Provide transport at the end
connection.serve_with(transport).await?;

Byte Stream Convenience

For the common case of byte streams, use the convenience constructor:

JrConnection::from_streams(stdout, stdin)
    .on_receive_request(...)
    .serve()
    .await?;

This is equivalent to:

JrConnection::new()
    .on_receive_request(...)
    .serve_with((stdout, stdin))
    .await?;

Use Cases

1. Standard Agent (Stdio)

Traditional subprocess agent with stdio communication:

JrConnection::from_streams(
    tokio::io::stdout().compat_write(),
    tokio::io::stdin().compat()
)
    .name("my-agent")
    .on_receive_request(handle_prompt)
    .serve()
    .await?;

2. In-Process Proxy Chain

Conductor with proxies in the same process for maximum efficiency:

// Create paired channel transports
let (transport_a, transport_b) = create_paired_transports();

// Spawn proxy in background
tokio::spawn(async move {
    JrConnection::new()
        .on_receive_message(proxy_handler)
        .serve_with(transport_a)
        .await
});

// Connect to proxy
JrConnection::new()
    .on_receive_request(agent_handler)
    .serve_with(transport_b)
    .await?;

No serialization overhead between components!

3. Network-Based Components

TCP socket connections between components:

let stream = TcpStream::connect("localhost:8080").await?;
let (read, write) = stream.split();

JrConnection::new()
    .on_receive_request(handler)
    .serve_with((write.compat_write(), read.compat()))
    .await?;

4. Testing with Mock Transport

Unit tests without real I/O:

let (transport, mock) = create_mock_transport();

tokio::spawn(async move {
    JrConnection::new()
        .on_receive_request(my_handler)
        .serve_with(transport)
        .await
});

// Test by sending messages directly
mock.send_request("initialize", params).await?;
let response = mock.receive_response().await?;
// Responses carry a result (or error), not a method name
assert!(response.result.is_some());

Benefits

Performance

  • In-process optimization: Skip serialization when components are co-located
  • Zero-copy potential: Direct message passing for channels
  • Flexible trade-offs: Choose appropriate transport for deployment

Flexibility

  • Transport-agnostic handlers: Write handler logic once, use anywhere
  • Easy experimentation: Try different transports without code changes
  • Future-proof: Add new transports (WebSockets, gRPC, etc.) without refactoring

Testing

  • Mock transports: Unit test handlers without I/O
  • Deterministic tests: Control message timing precisely
  • Isolated testing: Test protocol logic separate from I/O

Clarity

  • Clear boundaries: Protocol semantics vs transport mechanics
  • Focused implementations: Each layer has single responsibility
  • Maintainability: Changes to transport don’t affect protocol logic

Implementation Status

  • ✅ Phase 1: Documentation complete
  • 🚧 Phase 2: Actor splitting in progress
  • 📋 Phase 3: Trait introduction planned
  • 📋 Phase 4: In-process transport planned
  • 📋 Phase 5: Conductor integration planned

See src/sacp/PLAN.md for detailed implementation tracking.