This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Muxio RPC Framework
Relevant source files
- Cargo.lock
- experiments/simd-r-drive-muxio-service-definition/Cargo.toml
- experiments/simd-r-drive-ws-client/Cargo.toml
- experiments/simd-r-drive-ws-server/Cargo.toml
Purpose and Scope
This document describes the Muxio RPC (Remote Procedure Call) framework as implemented in SIMD R Drive for remote storage access over WebSocket connections. The framework provides a type-safe, multiplexed communication protocol using bitcode serialization for efficient binary data transfer.
For information about the WebSocket server implementation, see WebSocket Server. For the native Rust client implementation, see Native Rust Client. For Python client integration, see Python WebSocket Client API.
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:1-23 experiments/simd-r-drive-ws-client/Cargo.toml:1-22 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:1-17
Architecture Overview
The Muxio RPC framework consists of multiple layers that work together to provide remote procedure calls over WebSocket connections:
Muxio RPC Framework Layer Architecture
graph TB
subgraph "Client Application Layer"
App["Application Code"]
end
subgraph "Client RPC Stack"
Caller["muxio-rpc-service-caller\nMethod Invocation"]
ClientRuntime["muxio-tokio-rpc-client\nWebSocket Client Runtime"]
end
subgraph "Shared Contract"
ServiceDef["simd-r-drive-muxio-service-definition\nService Interface Contract\nMethod Signatures"]
Bitcode["bitcode\nBinary Serialization"]
end
subgraph "Server RPC Stack"
ServerRuntime["muxio-tokio-rpc-server\nWebSocket Server Runtime"]
Endpoint["muxio-rpc-service-endpoint\nRequest Router"]
end
subgraph "Server Application Layer"
Impl["DataStore Implementation"]
end
subgraph "Core Framework"
Core["muxio-rpc-service\nBase RPC Traits & Types"]
end
App --> Caller
Caller --> ClientRuntime
ClientRuntime --> ServiceDef
ClientRuntime --> Bitcode
ClientRuntime --> Core
ServiceDef --> Bitcode
ServiceDef --> Core
ServerRuntime --> ServiceDef
ServerRuntime --> Bitcode
ServerRuntime --> Core
ServerRuntime --> Endpoint
Endpoint --> Impl
style ServiceDef fill:#f9f9f9,stroke:#333,stroke-width:2px
The framework is organized into distinct layers:
| Layer | Crates | Responsibility |
|---|---|---|
| Core Framework | muxio-rpc-service | Base traits, types, and RPC protocol definitions |
| Service Definition | simd-r-drive-muxio-service-definition | Shared interface contract between client and server |
| Serialization | bitcode | Efficient binary encoding/decoding of messages |
| Client Runtime | muxio-tokio-rpc-client, muxio-rpc-service-caller | WebSocket client, method invocation, request management |
| Server Runtime | muxio-tokio-rpc-server, muxio-rpc-service-endpoint | WebSocket server, request routing, response handling |
Sources: Cargo.lock:1250-1336 experiments/simd-r-drive-ws-server/Cargo.toml:14-17 experiments/simd-r-drive-ws-client/Cargo.toml:14-21
Core Framework Components
muxio-rpc-service
The muxio-rpc-service crate provides the foundational abstractions for the RPC system. This crate defines the core traits and types that both client and server components build upon.
Core RPC Framework Message Structure and Dependencies
graph TB
subgraph "muxio-rpc-service Crate"
RpcService["#[async_trait]\nRpcService Trait"]
Request["RpcRequest\nStruct"]
Response["RpcResponse\nStruct"]
ServiceDef["Service Definition\nInfrastructure"]
end
subgraph "RpcRequest Fields"
ReqID["request_id: u64\n(unique per call)"]
MethodID["method_id: u64\n(xxhash-rust XXH3)"]
Payload["payload: Vec<u8>\n(bitcode serialized)"]
end
subgraph "RpcResponse Fields"
RespID["request_id: u64\n(matches request)"]
Result["result: Result<Vec<u8>, Error>\n(bitcode serialized)"]
end
subgraph "Dependencies"
AsyncTrait["async-trait"]
Futures["futures"]
NumEnum["num_enum"]
XXHash["xxhash-rust"]
end
RpcService -->|defines| ServiceDef
Request -->|contains| ReqID
Request -->|contains| MethodID
Request -->|contains| Payload
Response -->|contains| RespID
Response -->|contains| Result
RpcService -.uses.- AsyncTrait
MethodID -.hashed with.- XXHash
The muxio-rpc-service crate provides:
| Component | Type | Purpose |
|---|---|---|
RpcService | #[async_trait] trait | Defines async service interface with method dispatch |
RpcRequest | Struct | Contains request_id, method_id (XXH3 hash from xxhash-rust), and bitcode payload |
RpcResponse | Struct | Contains request_id and Result<Vec<u8>, Error> variant |
| Method ID hashing | xxhash-rust XXH3 | Generates stable 64-bit method identifiers |
| Enum conversion | num_enum | Converts between numeric and enum representations |
The framework uses async-trait to enable async methods in traits, and XXH3 hashing (via xxhash-rust) for method identification, allowing fast O(1) method dispatch without string comparisons.
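The exact definitions live in the muxio-rpc-service crate; as a minimal sketch assuming only the field layout from the table above (the real structs may carry additional framing fields, and the error type is simplified to a `String` here):

```rust
/// Sketch of the logical request message: a unique call identifier,
/// a 64-bit method hash, and a serialized argument payload.
#[derive(Debug, Clone, PartialEq)]
pub struct RpcRequest {
    pub request_id: u64,  // unique per in-flight call
    pub method_id: u64,   // XXH3 hash of the method signature
    pub payload: Vec<u8>, // bitcode-encoded arguments
}

/// Sketch of the logical response message: the matching request_id
/// plus either a serialized result or an error description.
#[derive(Debug, Clone, PartialEq)]
pub struct RpcResponse {
    pub request_id: u64,
    pub result: Result<Vec<u8>, String>, // error type simplified here
}

impl RpcResponse {
    /// A response is matched back to its request purely by request_id.
    pub fn matches(&self, req: &RpcRequest) -> bool {
        self.request_id == req.request_id
    }
}
```

Because correlation is by `request_id` alone, many requests can be in flight on one connection and responses may arrive in any order.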
Sources: Cargo.lock:1261-1272 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:15
Service Definition Layer
simd-r-drive-muxio-service-definition
The simd-r-drive-muxio-service-definition crate serves as the shared RPC contract between clients and servers. This crate is compiled into both client and server binaries, ensuring type-safe method signatures on both sides.
Service Definition Compilation Model
graph TB
subgraph "simd-r-drive-muxio-service-definition"
Contract["RPC Service Contract"]
Methods["Method Signatures"]
Types["Shared Types"]
end
subgraph "Client Binary"
ClientStub["Generated Client Stubs"]
end
subgraph "Server Binary"
ServerImpl["Generated Server Handlers"]
end
Contract --> Methods
Contract --> Types
Methods -->|compiled into| ClientStub
Methods -->|compiled into| ServerImpl
Types -->|used by| ClientStub
Types -->|used by| ServerImpl
ClientStub -->|invokes via| WS["WebSocket"]
WS -->|routes to| ServerImpl
The service definition provides the RPC interface contract. Both client and server depend on this crate, which defines:
| Component | Description | Implementation |
|---|---|---|
| Method signatures | DataStore operations (write, read, delete, etc.) | Uses muxio-rpc-service traits |
| Request types | Bitcode-serializable structs for each method | Implements bitcode::Encode |
| Response types | Bitcode-serializable result types | Implements bitcode::Decode |
| Error types | Shared error definitions | Serializable across RPC boundary |
Method ID Generation
Each RPC method is identified by a stable method_id computed as the XXH3 hash of its signature string. This enables O(1) method routing:
Method ID Computation and Routing with Code Entities
flowchart LR
Sig["Method Signature\n'write(key: &[u8], value: &[u8])\n-> Result<u64>'"]
XXH3["xxhash_rust::xxh3\nxxh3_64(sig.as_bytes())"]
ID["method_id: u64\ne.g., 0x1a2b3c4d5e6f7890"]
HashMap["HashMap<u64,\nBox<dyn Fn>>\nin RpcServiceEndpoint"]
Lookup["HashMap::get\n(&method_id)"]
Handler["async fn handler\n(decoded args)"]
Sig -->|hash at compile time| XXH3
XXH3 --> ID
ID -->|stored in| HashMap
HashMap -->|"O(1) lookup"| Lookup
Lookup --> Handler
The XXH3 hash (via xxhash-rust crate) ensures:
| Property | Implementation | Benefit |
|---|---|---|
| Deterministic routing | xxh3_64(signature.as_bytes()) | Same signature → same ID |
| Fast dispatch | HashMap::get(&method_id) | O(1) integer key lookup |
| Version compatibility | Different signatures → different IDs | Breaking changes detected |
| Collision resistance | 64-bit hash space (2^64 values) | Negligible collision probability |
| Compile-time computation | const or build-time hashing | No runtime overhead |
The xxhash-rust dependency provides the xxh3_64 function used by muxio-rpc-service for method ID generation. The server’s RpcServiceEndpoint struct maintains the HashMap<u64, Box<dyn Fn>> dispatcher.
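A minimal sketch of this routing scheme, using FNV-1a as a std-only stand-in for XXH3 (the real framework uses xxhash-rust's `xxh3_64`); the signature strings and registration API here are illustrative, not the crate's actual interface:

```rust
use std::collections::HashMap;

/// Stand-in for xxh3_64: FNV-1a over the signature string.
/// Only the idea matters for routing: a stable string always
/// hashes to the same stable u64.
fn method_id(signature: &str) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325; // FNV offset basis
    for byte in signature.as_bytes() {
        hash ^= *byte as u64;
        hash = hash.wrapping_mul(0x100000001b3); // FNV prime
    }
    hash
}

/// Dispatcher: method_id -> boxed handler, giving O(1) integer lookup
/// instead of string comparison on every call.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

struct Dispatcher {
    handlers: HashMap<u64, Handler>,
}

impl Dispatcher {
    fn new() -> Self {
        Self { handlers: HashMap::new() }
    }

    fn register(&mut self, signature: &str, handler: Handler) {
        self.handlers.insert(method_id(signature), handler);
    }

    /// Returns None for an unknown method_id, which the server would
    /// surface as a "method not found" error.
    fn dispatch(&self, id: u64, payload: &[u8]) -> Option<Vec<u8>> {
        self.handlers.get(&id).map(|h| h(payload))
    }
}
```

Note how a changed signature string yields a different `method_id`, so a client compiled against an old contract simply fails to find the method rather than silently calling the wrong one.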
Sources: Cargo.lock:1261-1272 Cargo.lock:1905-1915 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:1-17
Bitcode Serialization
The framework uses the bitcode crate (version 0.6.6) for efficient binary serialization with the following characteristics:
graph LR
subgraph "Bitcode Serialization Pipeline"
RustType["Rust Type\n#[derive(Encode, Decode)]"]
Encode["bitcode::encode\n<T>(&value)"]
Binary["Vec<u8>\nCompact Binary"]
Decode["bitcode::decode\n<T>(&bytes)"]
RustType2["Rust Type\nReconstructed"]
end
subgraph "bitcode Dependencies"
BitcodeDerve["bitcode_derive\nproc macros"]
Bytemuck["bytemuck\nzero-copy casts"]
Arrayvec["arrayvec\nstack arrays"]
Glam["glam\nSIMD vectors"]
end
RustType -->|serialize| Encode
Encode --> Binary
Binary -->|deserialize| Decode
Decode --> RustType2
Encode -.uses.- BitcodeDerve
Encode -.uses.- Bytemuck
Decode -.uses.- BitcodeDerve
Decode -.uses.- Bytemuck
Bitcode Encoding/Decoding Pipeline with Dependencies
Serialization Features
| Feature | Implementation | Benefit |
|---|---|---|
| Zero-copy deserialization | bytemuck for Pod types | Minimal overhead for aligned data |
| Compact encoding | Variable-length integers, bit packing | Smaller than bincode/MessagePack |
| Type safety | #[derive(Encode, Decode)] proc macros | Compile-time serialization code |
| Performance | ~50ns per small struct | Lower CPU than JSON/CBOR |
| SIMD support | glam integration | Efficient vector serialization |
Integration with RPC
The serialization is integrated at multiple points:
| Integration Point | Operation | Code Path |
|---|---|---|
| Request serialization | bitcode::encode(&args) → Vec<u8> | Client RpcServiceCaller::call |
| Wire transfer | Vec<u8> in RpcRequest.payload | WebSocket binary message |
| Request deserialization | bitcode::decode::<Args>(&payload) | Server RpcServiceEndpoint::dispatch |
| Response serialization | bitcode::encode(&result) → Vec<u8> | Server after method execution |
| Response deserialization | bitcode::decode::<Result>(&payload) | Client response handler |
The use of #[derive(Encode, Decode)] on request/response types ensures compile-time validation of serialization compatibility.
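In the real code, `#[derive(Encode, Decode)]` and `bitcode::encode`/`bitcode::decode` generate all of this machinery. As a self-contained illustration of the round-trip contract only — this is a hand-rolled length-prefixed encoding, not bitcode's bit-packed wire format, and `WriteArgs` is a hypothetical request type:

```rust
/// Hypothetical argument struct for a write call.
#[derive(Debug, PartialEq)]
struct WriteArgs {
    key: Vec<u8>,
    value: Vec<u8>,
}

/// Hand-rolled encoding: [key_len u32 LE][key bytes][value bytes].
/// bitcode produces a far more compact, bit-packed representation.
fn encode(args: &WriteArgs) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&(args.key.len() as u32).to_le_bytes());
    out.extend_from_slice(&args.key);
    out.extend_from_slice(&args.value);
    out
}

/// Decoding must reconstruct exactly what was encoded, or fail
/// cleanly on truncated/corrupt input instead of panicking.
fn decode(bytes: &[u8]) -> Option<WriteArgs> {
    let len_bytes: [u8; 4] = bytes.get(..4)?.try_into().ok()?;
    let key_len = u32::from_le_bytes(len_bytes) as usize;
    let key = bytes.get(4..4 + key_len)?.to_vec();
    let value = bytes.get(4 + key_len..)?.to_vec();
    Some(WriteArgs { key, value })
}
```

The same invariant — `decode(encode(x)) == x` on both sides of the wire — is what compiling the shared service-definition crate into both binaries guarantees.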
Sources: Cargo.lock:392-414 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:14
Client-Side Components
flowchart TB
subgraph "Client Call Flow"
ClientApp["Client Application"]
Caller["RpcServiceCaller\nStruct"]
GenID["Generate request_id\n(AtomicU64::fetch_add)"]
Request["Create RpcRequest\nStruct"]
Serialize["bitcode::encode\n(method args)"]
Send["Send via\ntokio::sync::mpsc"]
Await["tokio::sync::oneshot\nawait response"]
Deserialize["bitcode::decode\n(response payload)"]
Return["Return Result\nto caller"]
end
ClientApp -->|async fn call| Caller
Caller --> GenID
GenID --> Request
Request --> Serialize
Serialize --> Send
Send --> Await
Await --> Deserialize
Deserialize --> Return
Return --> ClientApp
Client Method Invocation Flow with tokio Primitives
muxio-rpc-service-caller
The muxio-rpc-service-caller crate provides the client-side method invocation interface:
Key responsibilities and implementation:
| Responsibility | Implementation | Purpose |
|---|---|---|
| Method call marshalling | RpcServiceCaller struct | Provides typed interface to remote methods |
| Request ID generation | AtomicU64::fetch_add(1, Ordering::Relaxed) | Unique, monotonic request identifiers |
| Response awaiting | tokio::sync::oneshot::Receiver | Single-use channel for response delivery |
| Request queuing | tokio::sync::mpsc::Sender | Sends requests to send loop |
| Error propagation | Result<T, RpcError> return types | Type-safe error handling |
The caller uses tokio’s async primitives to coordinate between the application thread and the WebSocket send/receive loops.
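The request-id step can be sketched with std atomics exactly as the table describes; the struct and method names below are illustrative, not the crate's actual API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative caller state: every call draws a fresh request_id
/// from a shared atomic counter.
struct CallerState {
    next_id: AtomicU64,
}

impl CallerState {
    fn new() -> Self {
        Self { next_id: AtomicU64::new(0) }
    }

    /// Relaxed ordering suffices here: the ids only need to be
    /// unique and monotonic on the counter itself, not ordered
    /// relative to other memory operations.
    fn next_request_id(&self) -> u64 {
        self.next_id.fetch_add(1, Ordering::Relaxed)
    }
}
```

`fetch_add` makes the id allocation safe even when many application tasks issue calls concurrently over the same client.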
Sources: Cargo.lock:1274-1285 experiments/simd-r-drive-ws-client/Cargo.toml:18
graph TB
subgraph "muxio-tokio-rpc-client Crate"
Client["RpcClient\nStruct"]
SendLoop["send_loop\ntokio::task::spawn"]
RecvLoop["recv_loop\ntokio::task::spawn"]
PendingMap["Arc<DashMap<u64,\noneshot::Sender<Result>>>\nShared state"]
ReqChan["mpsc::Receiver\n<RpcRequest>"]
end
subgraph "tokio-tungstenite Integration"
WS["WebSocketStream\n<MaybeTlsStream>"]
Split["ws.split()"]
WSRead["SplitStream\n(read half)"]
WSWrite["SplitSink\n(write half)"]
end
subgraph "Application Layer"
AppCall["async fn call()"]
Future["impl Future\n<Output=Result>"]
end
AppCall -->|1. create oneshot| Client
Client -->|2. insert into| PendingMap
Client -->|3. mpsc::send| ReqChan
ReqChan -->|4. recv request| SendLoop
SendLoop -->|5. bitcode::encode| SendLoop
SendLoop -->|6. send binary| WSWrite
WSRead -->|7. next binary| RecvLoop
RecvLoop -->|8. bitcode::decode| RecvLoop
RecvLoop -->|9. lookup by id| PendingMap
PendingMap -->|10. oneshot::send| Future
Future -->|11. return| AppCall
WS --> Split
Split --> WSRead
Split --> WSWrite
Client Runtime Request Multiplexing with tokio and tungstenite
muxio-tokio-rpc-client
The muxio-tokio-rpc-client crate implements the WebSocket client runtime with request multiplexing and response routing:
Implementation details:
| Component | Type | Purpose |
|---|---|---|
RpcClient | Struct | Main client interface, owns WebSocket and spawns tasks |
send_loop | tokio::task | Receives from mpsc, serializes, writes to SplitSink |
recv_loop | tokio::task | Reads from SplitStream, deserializes, routes via DashMap |
| Pending requests | Arc<DashMap<u64, oneshot::Sender>> | Thread-safe map for response routing |
| Request channel | mpsc::Sender/Receiver<RpcRequest> | Queue for outbound requests |
| WebSocket | tokio_tungstenite::WebSocketStream | Binary WebSocket with TLS support |
| Split streams | futures::stream::SplitStream/SplitSink | Separate read/write halves |
The multiplexing architecture uses DashMap for low-contention concurrent access to pending requests (DashMap shards its locks rather than serializing all callers behind one global lock). The WebSocket stream is split into read and write halves, allowing the send_loop and recv_loop tasks to operate independently. Each request carries a unique request_id, and the recv_loop task matches responses back to waiting callers via oneshot channels.
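The correlation step can be sketched with std types — a `Mutex<HashMap>` stands in for `DashMap` and `std::sync::mpsc` for tokio's oneshot channels; in the real client all of this is async:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::sync::Mutex;

/// Pending-request table: request_id -> channel back to the waiting
/// caller. (The real client keeps Arc<DashMap<u64, oneshot::Sender>>.)
struct PendingRequests {
    waiting: Mutex<HashMap<u64, mpsc::Sender<Result<Vec<u8>, String>>>>,
}

impl PendingRequests {
    fn new() -> Self {
        Self { waiting: Mutex::new(HashMap::new()) }
    }

    /// Caller side: register interest in a response *before* sending
    /// the request, so the reply can never race past us.
    fn register(&self, request_id: u64) -> mpsc::Receiver<Result<Vec<u8>, String>> {
        let (tx, rx) = mpsc::channel();
        self.waiting.lock().unwrap().insert(request_id, tx);
        rx
    }

    /// recv_loop side: route a decoded response to whoever is waiting.
    /// Returns false if nothing is registered under that id.
    fn complete(&self, request_id: u64, result: Result<Vec<u8>, String>) -> bool {
        match self.waiting.lock().unwrap().remove(&request_id) {
            Some(tx) => tx.send(result).is_ok(),
            None => false,
        }
    }
}
```

Removing the entry on completion is what makes each channel single-use, mirroring the oneshot semantics in the real client.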
Sources: Cargo.lock:1302-1318 experiments/simd-r-drive-ws-client/Cargo.toml:16 Cargo.lock:681-693
Server-Side Components
graph TB
subgraph "muxio-tokio-rpc-server Crate"
Server["RpcServer\nStruct"]
AxumApp["axum::Router\nwith WebSocket route"]
AcceptLoop["tokio::spawn\n(per connection)"]
ConnHandler["handle_connection\nasync fn"]
Dispatcher["RpcServiceEndpoint\n<ServiceImpl>"]
end
subgraph "axum WebSocket Integration"
Route["GET /ws\nWebSocket upgrade"]
WSUpgrade["axum::extract::ws\nWebSocketUpgrade"]
WSStream["axum::extract::ws\nWebSocket"]
end
subgraph "Service Implementation"
ServiceImpl["Arc<ServiceImpl>\n(e.g., DataStore)"]
Methods["#[async_trait]\nRpcService methods"]
end
subgraph "Method Dispatch"
MethodMap["HashMap<u64,\nBox<dyn Fn>>\n(method_id → handler)"]
end
AxumApp -->|upgrade| WSUpgrade
WSUpgrade -->|on_upgrade| WSStream
WSStream -->|tokio::spawn| AcceptLoop
AcceptLoop --> ConnHandler
ConnHandler -->|recv Message::Binary| ConnHandler
ConnHandler -->|bitcode::decode| ConnHandler
ConnHandler -->|dispatch by id| MethodMap
MethodMap -->|invoke| Methods
Methods -.implemented by.- ServiceImpl
Methods -->|return Result| ConnHandler
ConnHandler -->|bitcode::encode| ConnHandler
ConnHandler -->|send Message::Binary| WSStream
Dispatcher -->|owns| MethodMap
Dispatcher -->|holds Arc| ServiceImpl
Server Runtime with axum WebSocket Integration
muxio-tokio-rpc-server
The muxio-tokio-rpc-server crate implements the WebSocket server runtime with connection management and request dispatching:
The server runtime architecture:
| Component | Type | Purpose |
|---|---|---|
RpcServer | Struct | Main server, creates axum::Router with WebSocket route |
axum::Router | HTTP router | Handles WebSocket upgrade at /ws endpoint |
WebSocketUpgrade | axum::extract | Performs HTTP → WebSocket protocol upgrade |
| Connection handler | async fn per client | Spawned via tokio::spawn for each connection |
RpcServiceEndpoint | Generic struct | Routes method_id to service methods via HashMap |
| Method dispatcher | HashMap<u64, Box<dyn Fn>> | O(1) lookup and async invocation of methods |
| Service implementation | Arc<ServiceImpl> | Shared DataStore instance across connections |
Request Processing Pipeline
Each incoming request follows this pipeline:
Server Request Processing Pipeline with Code Entities
The dispatcher performs an O(1) lookup of the method_id hash in the HashMap, then invokes the corresponding service implementation. All service methods use #[async_trait], allowing concurrent request handling. The use of Arc<ServiceImpl> enables safe sharing of the DataStore across multiple client connections.
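The Arc-sharing pattern can be sketched with std types; `KvStore` here is a hypothetical stand-in for the real DataStore, and threads stand in for the per-connection tokio tasks:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

/// Hypothetical service implementation shared by every connection task.
struct KvStore {
    data: Mutex<HashMap<Vec<u8>, Vec<u8>>>,
}

impl KvStore {
    fn write(&self, key: Vec<u8>, value: Vec<u8>) {
        self.data.lock().unwrap().insert(key, value);
    }
    fn read(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.data.lock().unwrap().get(key).cloned()
    }
}

/// Each "connection" gets a clone of the Arc, not of the store itself,
/// mirroring how the server hands Arc<ServiceImpl> to every handler.
fn spawn_connections(store: Arc<KvStore>, n: u8) {
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let store = Arc::clone(&store);
            thread::spawn(move || store.write(vec![i], vec![i * 2]))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```

Cloning the `Arc` only bumps a reference count; all connections observe the same underlying store.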
Sources: Cargo.lock:1320-1336 experiments/simd-r-drive-ws-server/Cargo.toml:16 Cargo.lock:305-340
Request/Response Flow
Complete RPC Call Sequence
End-to-End RPC Call Flow
Message Format
The Muxio RPC wire protocol uses WebSocket binary frames with bitcode-encoded messages. The exact frame structure is managed by the muxio framework, but the logical message structure is:
| Component | Encoding | Description |
|---|---|---|
| Request message | bitcode | Contains request_id, method_id, and method arguments |
| Response message | bitcode | Contains request_id and result (success/error) |
| WebSocket frame | Binary | Single frame per request/response for small messages |
| Fragmentation | Automatic | Large payloads may use multiple frames |
The use of WebSocket binary frames and bitcode serialization provides:
- Compact encoding : Smaller than JSON or MessagePack
- Zero-copy potential : bitcode can deserialize without copying
- Type safety : Compile-time verification of message structure
Sources: Cargo.lock:133-143 Cargo.lock:648-656 Cargo.lock:1213-1222
Error Handling
The framework provides comprehensive error handling across the RPC boundary:
RPC Error Classification and Propagation
Error Categories
| Category | Origin | Handling |
|---|---|---|
| Serialization errors | Bitcode encoding/decoding failure | Logged and returned as RpcError |
| Network errors | WebSocket connection issues | Automatic reconnect or error propagation |
| Application errors | DataStore operation failures | Serialized and returned to client |
| Timeout errors | Request took too long | Client-side timeout with error result |
Error Recovery
The framework implements several recovery strategies:
- Connection loss : Client automatically attempts reconnection
- Request timeout : Client cancels pending request after configured duration
- Serialization failure : Error logged and generic error returned
- Invalid method ID : Server returns “method not found” error
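One plausible shape for a unified error type covering these categories — the actual RpcError variants in muxio are not reproduced here; this is an assumption for illustration:

```rust
use std::fmt;

/// Illustrative error type mirroring the four categories above.
#[derive(Debug, PartialEq)]
enum RpcError {
    Serialization(String),   // bitcode encode/decode failure
    Network(String),         // WebSocket connection issue
    Application(String),     // DataStore failure, serialized back to client
    Timeout { millis: u64 }, // client-side deadline exceeded
}

impl fmt::Display for RpcError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RpcError::Serialization(m) => write!(f, "serialization error: {m}"),
            RpcError::Network(m) => write!(f, "network error: {m}"),
            RpcError::Application(m) => write!(f, "application error: {m}"),
            RpcError::Timeout { millis } => write!(f, "timed out after {millis} ms"),
        }
    }
}
```

Keeping the categories as distinct variants lets callers match on them, e.g. retrying `Network` errors while surfacing `Application` errors directly.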
Sources: Cargo.lock:1261-1336
Performance Characteristics
The Muxio RPC framework is optimized for high-performance remote storage access:
| Metric | Characteristic | Impact |
|---|---|---|
| Serialization overhead | ~50-100 ns for typical payloads | Minimal CPU impact |
| Request multiplexing | Thousands of concurrent requests | High throughput |
| Binary protocol | Compact wire format | Reduced bandwidth usage |
| Zero-copy deserialization | Direct memory references | Lower latency for large payloads |
The use of bitcode serialization and WebSocket binary frames minimizes overhead compared to text-based protocols like JSON over HTTP. The multiplexed architecture allows clients to issue multiple concurrent requests without blocking, essential for high-performance batch operations.
Sources: Cargo.lock:392-414 Cargo.lock:1250-1336