This documentation is part of the "Projects with Books" initiative at zenOSmosis.

The source code for this project is available on GitHub.

Muxio RPC Framework

Purpose and Scope

This document describes the Muxio RPC (Remote Procedure Call) framework as implemented in SIMD R Drive for remote storage access over WebSocket connections. The framework provides a type-safe, multiplexed communication protocol using bitcode serialization for efficient binary data transfer.

For information about the WebSocket server implementation, see WebSocket Server. For the native Rust client implementation, see Native Rust Client. For Python client integration, see Python WebSocket Client API.

Sources: experiments/simd-r-drive-ws-server/Cargo.toml:1-23 experiments/simd-r-drive-ws-client/Cargo.toml:1-22 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:1-17


Architecture Overview

The Muxio RPC framework consists of multiple layers that work together to provide remote procedure calls over WebSocket connections:

Muxio RPC Framework Layer Architecture

```mermaid
graph TB
    subgraph "Client Application Layer"
        App["Application Code"]
    end

    subgraph "Client RPC Stack"
        Caller["muxio-rpc-service-caller\nMethod Invocation"]
        ClientRuntime["muxio-tokio-rpc-client\nWebSocket Client Runtime"]
    end

    subgraph "Shared Contract"
        ServiceDef["simd-r-drive-muxio-service-definition\nService Interface Contract\nMethod Signatures"]
        Bitcode["bitcode\nBinary Serialization"]
    end

    subgraph "Server RPC Stack"
        ServerRuntime["muxio-tokio-rpc-server\nWebSocket Server Runtime"]
        Endpoint["muxio-rpc-service-endpoint\nRequest Router"]
    end

    subgraph "Server Application Layer"
        Impl["DataStore Implementation"]
    end

    subgraph "Core Framework"
        Core["muxio-rpc-service\nBase RPC Traits & Types"]
    end

    App --> Caller
    Caller --> ClientRuntime
    ClientRuntime --> ServiceDef
    ClientRuntime --> Bitcode
    ClientRuntime --> Core
    ServiceDef --> Bitcode
    ServiceDef --> Core
    ServerRuntime --> ServiceDef
    ServerRuntime --> Bitcode
    ServerRuntime --> Core
    ServerRuntime --> Endpoint
    Endpoint --> Impl

    style ServiceDef fill:#f9f9f9,stroke:#333,stroke-width:2px
```

The framework is organized into distinct layers:

| Layer | Crates | Responsibility |
|---|---|---|
| Core Framework | muxio-rpc-service | Base traits, types, and RPC protocol definitions |
| Service Definition | simd-r-drive-muxio-service-definition | Shared interface contract between client and server |
| Serialization | bitcode | Efficient binary encoding/decoding of messages |
| Client Runtime | muxio-tokio-rpc-client, muxio-rpc-service-caller | WebSocket client, method invocation, request management |
| Server Runtime | muxio-tokio-rpc-server, muxio-rpc-service-endpoint | WebSocket server, request routing, response handling |

Sources: Cargo.lock:1250-1336 experiments/simd-r-drive-ws-server/Cargo.toml:14-17 experiments/simd-r-drive-ws-client/Cargo.toml:14-21


Core Framework Components

muxio-rpc-service

The muxio-rpc-service crate provides the foundational abstractions for the RPC system. This crate defines the core traits and types that both client and server components build upon.
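A minimal sketch of the request/response envelopes, using the field names shown in the diagram and tables on this page. The real definitions live in muxio-rpc-service; the error type is simplified here to a `String` so the example is self-contained.

```rust
// Sketch of the RPC envelope types this section describes. Field names
// follow this page's tables; the actual muxio-rpc-service structs may
// differ, and the real error type is richer than String.

#[derive(Debug, Clone, PartialEq)]
struct RpcRequest {
    request_id: u64,  // unique per call
    method_id: u64,   // XXH3 hash of the method signature
    payload: Vec<u8>, // bitcode-serialized arguments
}

#[derive(Debug, Clone, PartialEq)]
struct RpcResponse {
    request_id: u64,                 // matches the originating request
    result: Result<Vec<u8>, String>, // serialized result or error
}

// A successful response echoes the request_id so the client can route it.
fn respond_ok(req: &RpcRequest, body: Vec<u8>) -> RpcResponse {
    RpcResponse { request_id: req.request_id, result: Ok(body) }
}

fn main() {
    let req = RpcRequest { request_id: 1, method_id: 0xABCD, payload: vec![1, 2, 3] };
    let resp = respond_ok(&req, vec![4, 5]);
    assert_eq!(resp.request_id, req.request_id);
    println!("response for request {}", resp.request_id);
}
```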

Core RPC Framework Message Structure and Dependencies

```mermaid
graph TB
    subgraph "muxio-rpc-service Crate"
        RpcService["#[async_trait]\nRpcService Trait"]
        Request["RpcRequest\nStruct"]
        Response["RpcResponse\nStruct"]
        ServiceDef["Service Definition\nInfrastructure"]
    end

    subgraph "RpcRequest Fields"
        ReqID["request_id: u64\n(unique per call)"]
        MethodID["method_id: u64\n(xxhash-rust XXH3)"]
        Payload["payload: Vec<u8>\n(bitcode serialized)"]
    end

    subgraph "RpcResponse Fields"
        RespID["request_id: u64\n(matches request)"]
        Result["result: Result<Vec<u8>, Error>\n(bitcode serialized)"]
    end

    subgraph "Dependencies"
        AsyncTrait["async-trait"]
        Futures["futures"]
        NumEnum["num_enum"]
        XXHash["xxhash-rust"]
    end

    RpcService -->|defines| ServiceDef
    Request -->|contains| ReqID
    Request -->|contains| MethodID
    Request -->|contains| Payload
    Response -->|contains| RespID
    Response -->|contains| Result

    RpcService -.uses.- AsyncTrait
    MethodID -.hashed with.- XXHash
```

The muxio-rpc-service crate provides:

| Component | Type | Purpose |
|---|---|---|
| RpcService | #[async_trait] trait | Defines async service interface with method dispatch |
| RpcRequest | Struct | Contains request_id, method_id (XXH3 hash from xxhash-rust), and bitcode payload |
| RpcResponse | Struct | Contains request_id and Result<Vec<u8>, Error> variant |
| Method ID hashing | xxhash-rust XXH3 | Generates stable 64-bit method identifiers |
| Enum conversion | num_enum | Converts between numeric and enum representations |

The framework uses async-trait to enable async methods in traits, and XXH3 hashing (via xxhash-rust) for method identification, allowing fast O(1) method dispatch without string comparisons.

Sources: Cargo.lock:1261-1272 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:15


Service Definition Layer

simd-r-drive-muxio-service-definition

The simd-r-drive-muxio-service-definition crate serves as the shared RPC contract between clients and servers. This crate is compiled into both client and server binaries, ensuring type-safe method signatures on both sides.

Service Definition Compilation Model

```mermaid
graph TB
    subgraph "simd-r-drive-muxio-service-definition"
        Contract["RPC Service Contract"]
        Methods["Method Signatures"]
        Types["Shared Types"]
    end

    subgraph "Client Binary"
        ClientStub["Generated Client Stubs"]
    end

    subgraph "Server Binary"
        ServerImpl["Generated Server Handlers"]
    end

    Contract --> Methods
    Contract --> Types
    Methods -->|compiled into| ClientStub
    Methods -->|compiled into| ServerImpl
    Types -->|used by| ClientStub
    Types -->|used by| ServerImpl

    ClientStub -->|invokes via| WS["WebSocket"]
    WS -->|routes to| ServerImpl
```

The service definition provides the RPC interface contract. Both client and server depend on this crate, which defines:

| Component | Description | Implementation |
|---|---|---|
| Method signatures | DataStore operations (write, read, delete, etc.) | Uses muxio-rpc-service traits |
| Request types | Bitcode-serializable structs for each method | Implements bitcode::Encode |
| Response types | Bitcode-serializable result types | Implements bitcode::Decode |
| Error types | Shared error definitions | Serializable across RPC boundary |

Method ID Generation

Each RPC method is identified by a stable method_id computed as the XXH3 hash of its signature string. This enables O(1) method routing:

Method ID Computation and Routing with Code Entities

```mermaid
flowchart LR
    Sig["Method Signature\n'write(key: &[u8], value: &[u8])\n-> Result&lt;u64&gt;'"]
    XXH3["xxhash_rust::xxh3\nxxh3_64(sig.as_bytes())"]
    ID["method_id: u64\ne.g., 0x1a2b3c4d5e6f7890"]
    HashMap["HashMap&lt;u64,\nBox&lt;dyn Fn&gt;&gt;\nin RpcServiceEndpoint"]
    Lookup["HashMap::get\n(&method_id)"]
    Handler["async fn handler\n(decoded args)"]

    Sig -->|"hash at compile time"| XXH3
    XXH3 --> ID
    ID -->|"stored in"| HashMap
    HashMap -->|"O(1) lookup"| Lookup
    Lookup --> Handler
```

The XXH3 hash (via the xxhash-rust crate) ensures:

| Property | Implementation | Benefit |
|---|---|---|
| Deterministic routing | xxh3_64(signature.as_bytes()) | Same signature → same ID |
| Fast dispatch | HashMap::get(&method_id) | O(1) integer key lookup |
| Version compatibility | Different signatures → different IDs | Breaking changes detected |
| Collision resistance | 64-bit hash space (2^64 values) | Negligible collision probability |
| Compile-time computation | const or build-time hashing | No runtime overhead |

The xxhash-rust dependency provides the xxh3_64 function used by muxio-rpc-service for method ID generation. The server’s RpcServiceEndpoint struct maintains the HashMap<u64, Box<dyn Fn>> dispatcher.
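The routing scheme can be illustrated with std types alone. Note the substitutions: the real framework hashes signatures with `xxh3_64` from xxhash-rust and dispatches to async handlers; std's `DefaultHasher` and a synchronous closure are used here only so the sketch compiles without external crates.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Stand-in for xxhash_rust::xxh3::xxh3_64: any stable 64-bit hash of the
// signature string demonstrates the routing idea. The real framework uses
// XXH3; DefaultHasher is substituted to keep this sketch std-only.
fn method_id(signature: &str) -> u64 {
    let mut h = DefaultHasher::new();
    signature.hash(&mut h);
    h.finish()
}

// Synchronous stand-in for the endpoint's boxed async handlers.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

fn main() {
    // Register handlers under hashed method IDs, mirroring the
    // HashMap<u64, Box<dyn Fn>> held by RpcServiceEndpoint.
    let mut dispatch: HashMap<u64, Handler> = HashMap::new();
    dispatch.insert(
        method_id("write(key: &[u8], value: &[u8]) -> Result<u64>"),
        Box::new(|payload| payload.to_vec()), // echo stands in for a real method
    );

    // Dispatch is a single integer-keyed lookup, no string comparisons.
    let id = method_id("write(key: &[u8], value: &[u8]) -> Result<u64>");
    let handler = dispatch.get(&id).expect("method not found");
    assert_eq!(handler(&[1, 2, 3]), vec![1, 2, 3]);
}
```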

Sources: Cargo.lock:1261-1272 Cargo.lock:1905-1915 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:1-17


Bitcode Serialization

The framework uses the bitcode crate (version 0.6.6) for efficient binary serialization with the following characteristics:

Bitcode Encoding/Decoding Pipeline with Dependencies

```mermaid
graph LR
    subgraph "Bitcode Serialization Pipeline"
        RustType["Rust Type\n#[derive(Encode, Decode)]"]
        Encode["bitcode::encode\n&lt;T&gt;(&value)"]
        Binary["Vec&lt;u8&gt;\nCompact Binary"]
        Decode["bitcode::decode\n&lt;T&gt;(&bytes)"]
        RustType2["Rust Type\nReconstructed"]
    end

    subgraph "bitcode Dependencies"
        BitcodeDerive["bitcode_derive\nproc macros"]
        Bytemuck["bytemuck\nzero-copy casts"]
        Arrayvec["arrayvec\nstack arrays"]
        Glam["glam\nSIMD vectors"]
    end

    RustType -->|serialize| Encode
    Encode --> Binary
    Binary -->|deserialize| Decode
    Decode --> RustType2

    Encode -.uses.- BitcodeDerive
    Encode -.uses.- Bytemuck
    Decode -.uses.- BitcodeDerive
    Decode -.uses.- Bytemuck
```

Serialization Features

| Feature | Implementation | Benefit |
|---|---|---|
| Zero-copy deserialization | bytemuck for Pod types | Minimal overhead for aligned data |
| Compact encoding | Variable-length integers, bit packing | Smaller than bincode/MessagePack |
| Type safety | #[derive(Encode, Decode)] proc macros | Compile-time serialization code |
| Performance | ~50 ns per small struct | Lower CPU than JSON/CBOR |
| SIMD support | glam integration | Efficient vector serialization |

Integration with RPC

The serialization is integrated at multiple points:

| Integration Point | Operation | Code Path |
|---|---|---|
| Request serialization | bitcode::encode(&args) → Vec<u8> | Client RpcServiceCaller::call |
| Wire transfer | Vec<u8> in RpcRequest.payload | WebSocket binary message |
| Request deserialization | bitcode::decode::<Args>(&payload) | Server RpcServiceEndpoint::dispatch |
| Response serialization | bitcode::encode(&result) → Vec<u8> | Server after method execution |
| Response deserialization | bitcode::decode::<Result>(&payload) | Client response handler |

The use of #[derive(Encode, Decode)] on request/response types ensures compile-time validation of serialization compatibility.
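These integration points can be traced with a hand-rolled stand-in for the wire encoding. bitcode's actual format is far more compact (bit packing, variable-length integers); the fixed little-endian layout below is only meant to show where encoding and decoding sit in the request path.

```rust
// Illustrative stand-in for the serialize/deserialize steps. This is NOT
// the bitcode format: a fixed 16-byte header plus raw payload simply
// makes the round trip visible with std alone.

fn encode_request(request_id: u64, method_id: u64, payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(16 + payload.len());
    buf.extend_from_slice(&request_id.to_le_bytes()); // client side: serialize
    buf.extend_from_slice(&method_id.to_le_bytes());
    buf.extend_from_slice(payload);
    buf // sent as one WebSocket binary frame
}

fn decode_request(bytes: &[u8]) -> Option<(u64, u64, Vec<u8>)> {
    if bytes.len() < 16 {
        return None; // malformed frame
    }
    let request_id = u64::from_le_bytes(bytes[0..8].try_into().ok()?);
    let method_id = u64::from_le_bytes(bytes[8..16].try_into().ok()?);
    Some((request_id, method_id, bytes[16..].to_vec()))
}

fn main() {
    // Client encodes; server decodes, then routes by method_id.
    let frame = encode_request(42, 0x1a2b, b"args");
    let (rid, mid, payload) = decode_request(&frame).unwrap();
    assert_eq!(rid, 42);
    assert_eq!(mid, 0x1a2b);
    assert_eq!(payload, b"args".to_vec());
}
```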

Sources: Cargo.lock:392-414 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:14


Client-Side Components

muxio-rpc-service-caller

The muxio-rpc-service-caller crate provides the client-side method invocation interface:

Client Method Invocation Flow with tokio Primitives

```mermaid
flowchart TB
    subgraph "Client Call Flow"
        ClientApp["Client Application"]
        Caller["RpcServiceCaller\nStruct"]
        GenID["Generate request_id\n(AtomicU64::fetch_add)"]
        Request["Create RpcRequest\nStruct"]
        Serialize["bitcode::encode\n(method args)"]
        Send["Send via\ntokio::sync::mpsc"]
        Await["tokio::sync::oneshot\nawait response"]
        Deserialize["bitcode::decode\n(response payload)"]
        Return["Return Result\nto caller"]
    end

    ClientApp -->|async fn call| Caller
    Caller --> GenID
    GenID --> Request
    Request --> Serialize
    Serialize --> Send
    Send --> Await
    Await --> Deserialize
    Deserialize --> Return
    Return --> ClientApp
```

Key responsibilities and implementation:

| Responsibility | Implementation | Purpose |
|---|---|---|
| Method call marshalling | RpcServiceCaller struct | Provides typed interface to remote methods |
| Request ID generation | AtomicU64::fetch_add(1, Ordering::Relaxed) | Unique, monotonic request identifiers |
| Response awaiting | tokio::sync::oneshot::Receiver | Single-use channel for response delivery |
| Request queuing | tokio::sync::mpsc::Sender | Sends requests to send loop |
| Error propagation | Result<T, RpcError> return types | Type-safe error handling |

The caller uses tokio’s async primitives to coordinate between the application thread and the WebSocket send/receive loops.
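A std-only sketch of those two coordination primitives: `AtomicU64::fetch_add` for unique request IDs, and a single-use channel standing in for `tokio::sync::oneshot`. The real caller is async; a thread is used here purely for illustration.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::mpsc;
use std::thread;

// Monotonic request-id counter, as described in the table above.
static NEXT_REQUEST_ID: AtomicU64 = AtomicU64::new(1);

fn next_request_id() -> u64 {
    NEXT_REQUEST_ID.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    let id = next_request_id();

    // oneshot stand-in: exactly one response is delivered per request.
    let (tx, rx) = mpsc::channel::<Vec<u8>>();
    let responder = thread::spawn(move || {
        // pretend this is the recv loop routing the matching response back
        tx.send(vec![id as u8]).unwrap();
    });

    let response = rx.recv().unwrap(); // caller awaits its response
    responder.join().unwrap();
    assert_eq!(response, vec![id as u8]);
}
```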

Sources: Cargo.lock:1274-1285 experiments/simd-r-drive-ws-client/Cargo.toml:18

muxio-tokio-rpc-client

The muxio-tokio-rpc-client crate implements the WebSocket client runtime with request multiplexing and response routing:

Client Runtime Request Multiplexing with tokio and tungstenite

```mermaid
graph TB
    subgraph "muxio-tokio-rpc-client Crate"
        Client["RpcClient\nStruct"]
        SendLoop["send_loop\ntokio::task::spawn"]
        RecvLoop["recv_loop\ntokio::task::spawn"]
        PendingMap["Arc&lt;DashMap&lt;u64,\noneshot::Sender&lt;Result&gt;&gt;&gt;\nShared state"]
        ReqChan["mpsc::Receiver\n&lt;RpcRequest&gt;"]
    end

    subgraph "tokio-tungstenite Integration"
        WS["WebSocketStream\n&lt;MaybeTlsStream&gt;"]
        Split["ws.split()"]
        WSRead["SplitStream\n(read half)"]
        WSWrite["SplitSink\n(write half)"]
    end

    subgraph "Application Layer"
        AppCall["async fn call()"]
        Future["impl Future\n&lt;Output=Result&gt;"]
    end

    AppCall -->|"1. create oneshot"| Client
    Client -->|"2. insert into"| PendingMap
    Client -->|"3. mpsc::send"| ReqChan
    ReqChan -->|"4. recv request"| SendLoop
    SendLoop -->|"5. bitcode::encode"| SendLoop
    SendLoop -->|"6. send binary"| WSWrite
    WSRead -->|"7. next binary"| RecvLoop
    RecvLoop -->|"8. bitcode::decode"| RecvLoop
    RecvLoop -->|"9. lookup by id"| PendingMap
    PendingMap -->|"10. oneshot::send"| Future
    Future -->|"11. return"| AppCall

    WS --> Split
    Split --> WSRead
    Split --> WSWrite
```

Implementation details:

| Component | Type | Purpose |
|---|---|---|
| RpcClient | Struct | Main client interface, owns WebSocket and spawns tasks |
| send_loop | tokio::task | Receives from mpsc, serializes, writes to SplitSink |
| recv_loop | tokio::task | Reads from SplitStream, deserializes, routes via DashMap |
| Pending requests | Arc<DashMap<u64, oneshot::Sender>> | Thread-safe map for response routing |
| Request channel | mpsc::Sender/Receiver<RpcRequest> | Queue for outbound requests |
| WebSocket | tokio_tungstenite::WebSocketStream | Binary WebSocket with TLS support |
| Split streams | futures::stream::SplitStream/SplitSink | Separate read/write halves |

The multiplexing architecture uses DashMap for lock-free concurrent access to pending requests. The WebSocket stream is split into read and write halves, allowing the send_loop and recv_loop tasks to operate independently. Each request gets a unique request_id, and the recv_loop task matches responses back to waiting callers via oneshot channels.
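The response-routing half of this design can be sketched with std types: a `Mutex<HashMap>` stands in for the lock-free `DashMap`, a plain channel for the oneshot sender, and a thread for the recv_loop task.

```rust
use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Shared map of pending requests keyed by request_id. The real client
// uses Arc<DashMap<u64, oneshot::Sender>>; Mutex<HashMap> shows the same
// shape with std only.
type Pending = Arc<Mutex<HashMap<u64, mpsc::Sender<Vec<u8>>>>>;

// recv_loop step: deliver a response to whichever caller registered this
// request_id, removing the entry so each response is delivered once.
fn route_response(pending: &Pending, request_id: u64, body: Vec<u8>) -> bool {
    match pending.lock().unwrap().remove(&request_id) {
        Some(tx) => tx.send(body).is_ok(),
        None => false, // no caller is waiting on this id
    }
}

fn main() {
    let pending: Pending = Arc::new(Mutex::new(HashMap::new()));

    // Two callers register their receivers before sending requests.
    let mut receivers = Vec::new();
    for id in [1u64, 2] {
        let (tx, rx) = mpsc::channel();
        pending.lock().unwrap().insert(id, tx);
        receivers.push((id, rx));
    }

    // Responses arrive out of order on the recv-loop thread and are
    // matched back to their callers by request_id.
    let map = Arc::clone(&pending);
    let recv_loop = thread::spawn(move || {
        assert!(route_response(&map, 2, vec![20]));
        assert!(route_response(&map, 1, vec![10]));
    });

    for (id, rx) in receivers {
        assert_eq!(rx.recv().unwrap(), vec![(id * 10) as u8]);
    }
    recv_loop.join().unwrap();
}
```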

Sources: Cargo.lock:1302-1318 experiments/simd-r-drive-ws-client/Cargo.toml:16 Cargo.lock:681-693


Server-Side Components

muxio-tokio-rpc-server

The muxio-tokio-rpc-server crate implements the WebSocket server runtime with connection management and request dispatching:

Server Runtime with axum WebSocket Integration

```mermaid
graph TB
    subgraph "muxio-tokio-rpc-server Crate"
        Server["RpcServer\nStruct"]
        AxumApp["axum::Router\nwith WebSocket route"]
        AcceptLoop["tokio::spawn\n(per connection)"]
        ConnHandler["handle_connection\nasync fn"]
        Dispatcher["RpcServiceEndpoint\n&lt;ServiceImpl&gt;"]
    end

    subgraph "axum WebSocket Integration"
        Route["GET /ws\nWebSocket upgrade"]
        WSUpgrade["axum::extract::ws\nWebSocketUpgrade"]
        WSStream["axum::extract::ws\nWebSocket"]
    end

    subgraph "Service Implementation"
        ServiceImpl["Arc&lt;ServiceImpl&gt;\n(e.g., DataStore)"]
        Methods["#[async_trait]\nRpcService methods"]
    end

    subgraph "Method Dispatch"
        MethodMap["HashMap&lt;u64,\nBox&lt;dyn Fn&gt;&gt;\n(method_id → handler)"]
    end

    AxumApp -->|upgrade| WSUpgrade
    WSUpgrade -->|on_upgrade| WSStream
    WSStream -->|tokio::spawn| AcceptLoop
    AcceptLoop --> ConnHandler
    ConnHandler -->|recv Message::Binary| ConnHandler
    ConnHandler -->|bitcode::decode| ConnHandler
    ConnHandler -->|dispatch by id| MethodMap
    MethodMap -->|invoke| Methods
    Methods -.implemented by.- ServiceImpl
    Methods -->|return Result| ConnHandler
    ConnHandler -->|bitcode::encode| ConnHandler
    ConnHandler -->|send Message::Binary| WSStream

    Dispatcher -->|owns| MethodMap
    Dispatcher -->|holds Arc| ServiceImpl
```

The server runtime architecture:

| Component | Type | Purpose |
|---|---|---|
| RpcServer | Struct | Main server, creates axum::Router with WebSocket route |
| axum::Router | HTTP router | Handles WebSocket upgrade at /ws endpoint |
| WebSocketUpgrade | axum::extract | Performs HTTP → WebSocket protocol upgrade |
| Connection handler | async fn per client | Spawned via tokio::spawn for each connection |
| RpcServiceEndpoint | Generic struct | Routes method_id to service methods via HashMap |
| Method dispatcher | HashMap<u64, Box<dyn Fn>> | O(1) lookup and async invocation of methods |
| Service implementation | Arc<ServiceImpl> | Shared DataStore instance across connections |

Request Processing Pipeline

Each incoming request follows this pipeline:

Server Request Processing Pipeline with Code Entities

The dispatcher performs O(1) method lookup using the method_id hash from the HashMap, then invokes the corresponding service implementation. All service methods use #[async_trait], allowing concurrent request handling. The use of Arc<ServiceImpl> enables safe sharing of the DataStore across multiple client connections.
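The sharing pattern can be shown with std threads standing in for `tokio::spawn`. The counter-bearing `ServiceImpl` below is a hypothetical stand-in for the DataStore; only the `Arc` cloning per connection mirrors the server's design.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical service: a write counter stands in for the DataStore.
struct ServiceImpl {
    writes: Mutex<u64>,
}

impl ServiceImpl {
    fn write(&self) -> u64 {
        let mut n = self.writes.lock().unwrap();
        *n += 1;
        *n
    }
}

// Per-connection handler: each connection dispatches into the one shared
// instance, as the server does with Arc<ServiceImpl>.
fn handle_connection(svc: Arc<ServiceImpl>) {
    svc.write();
}

fn main() {
    let svc = Arc::new(ServiceImpl { writes: Mutex::new(0) });
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let svc = Arc::clone(&svc); // one clone per "connection"
            thread::spawn(move || handle_connection(svc))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*svc.writes.lock().unwrap(), 4);
}
```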

Sources: Cargo.lock:1320-1336 experiments/simd-r-drive-ws-server/Cargo.toml:16 Cargo.lock:305-340


Request/Response Flow

Complete RPC Call Sequence

End-to-End RPC Call Flow

Message Format

The Muxio RPC wire protocol uses WebSocket binary frames with bitcode-encoded messages. The exact frame structure is managed by the muxio framework, but the logical message structure is:

| Component | Encoding | Description |
|---|---|---|
| Request message | bitcode | Contains request_id, method_id, and method arguments |
| Response message | bitcode | Contains request_id and result (success/error) |
| WebSocket frame | Binary | Single frame per request/response for small messages |
| Fragmentation | Automatic | Large payloads may use multiple frames |

The use of WebSocket binary frames and bitcode serialization provides:

  • Compact encoding: Smaller than JSON or MessagePack
  • Zero-copy potential: bitcode can deserialize without copying
  • Type safety: Compile-time verification of message structure

Sources: Cargo.lock:133-143 Cargo.lock:648-656 Cargo.lock:1213-1222


Error Handling

The framework provides comprehensive error handling across the RPC boundary:

RPC Error Classification and Propagation

Error Categories

| Category | Origin | Handling |
|---|---|---|
| Serialization errors | Bitcode encoding/decoding failure | Logged and returned as RpcError |
| Network errors | WebSocket connection issues | Automatic reconnect or error propagation |
| Application errors | DataStore operation failures | Serialized and returned to client |
| Timeout errors | Request took too long | Client-side timeout with error result |

Error Recovery

The framework implements several recovery strategies:

  • Connection loss: Client automatically attempts reconnection
  • Request timeout: Client cancels pending request after configured duration
  • Serialization failure: Error logged and generic error returned
  • Invalid method ID: Server returns “method not found” error
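The request-timeout strategy can be sketched with a blocking receive. The real client uses tokio timers on async channels; std's `recv_timeout` plays the same role here, and the function name and error string are illustrative.

```rust
use std::sync::mpsc;
use std::time::Duration;

// Client-side timeout stand-in: if no response arrives within the
// configured duration, the pending call resolves to an error instead of
// blocking forever. Names here are illustrative, not the real API.
fn await_response(rx: mpsc::Receiver<Vec<u8>>, timeout: Duration) -> Result<Vec<u8>, String> {
    rx.recv_timeout(timeout).map_err(|_| "request timed out".to_string())
}

fn main() {
    // No sender ever responds, so the call times out with an error result.
    let (_tx, rx) = mpsc::channel::<Vec<u8>>();
    let result = await_response(rx, Duration::from_millis(10));
    assert_eq!(result, Err("request timed out".to_string()));
}
```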

Sources: Cargo.lock:1261-1336


Performance Characteristics

The Muxio RPC framework is optimized for high-performance remote storage access:

| Metric | Characteristic | Impact |
|---|---|---|
| Serialization overhead | ~50-100 ns for typical payloads | Minimal CPU impact |
| Request multiplexing | Thousands of concurrent requests | High throughput |
| Binary protocol | Compact wire format | Reduced bandwidth usage |
| Zero-copy deserialization | Direct memory references | Lower latency for large payloads |

The use of bitcode serialization and WebSocket binary frames minimizes overhead compared to text-based protocols like JSON over HTTP. The multiplexed architecture allows clients to issue multiple concurrent requests without blocking, essential for high-performance batch operations.

Sources: Cargo.lock:392-414 Cargo.lock:1250-1336
