WebSocket Server
Relevant source files
- Cargo.lock
- experiments/simd-r-drive-muxio-service-definition/Cargo.toml
- experiments/simd-r-drive-ws-client/Cargo.toml
- experiments/simd-r-drive-ws-server/Cargo.toml
Purpose and Scope
This document covers the simd-r-drive-ws-server WebSocket server implementation, which provides remote access to the SIMD R Drive storage engine over WebSocket connections using the Muxio RPC framework. The server accepts connections from both native Rust clients (see Native Rust Client) and Python applications (see Python WebSocket Client API), routing RPC requests to the underlying DataStore.
For information about the RPC protocol and serialization format, see Muxio RPC Framework. For details on the core storage operations being exposed, see DataStore API.
Architecture Overview
The WebSocket server is built on the Axum web framework and Tokio async runtime, providing a high-performance, concurrent RPC endpoint for storage operations. The server acts as a thin network wrapper around the DataStore, translating WebSocket messages into storage API calls.
```mermaid
graph TB
    CLI["Server CLI\n(clap parser)"]
    Axum["Axum Web Framework\nHTTP/WebSocket handling"]
    MuxioServer["muxio-tokio-rpc-server\nRPC server runtime"]
    Endpoint["muxio-rpc-service-endpoint\nRequest routing"]
    ServiceDef["simd-r-drive-muxio-service-definition\nRPC contract"]
    DataStore["simd-r-drive::DataStore\nStorage engine"]
    Tokio["Tokio Runtime\nAsync executor"]
    Tungstenite["tokio-tungstenite\nWebSocket protocol"]
    CLI --> Axum
    Axum --> MuxioServer
    MuxioServer --> Endpoint
    MuxioServer --> Tungstenite
    Endpoint --> ServiceDef
    Endpoint --> DataStore
    Axum -.-> Tokio
    MuxioServer -.-> Tokio
    Tungstenite -.-> Tokio
```
Component Stack
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:1-23 Cargo.lock:305-339 Cargo.lock:1320-1335
Crate Structure
The server is located in the experiments workspace and has a minimal dependency footprint focused on networking and RPC:
| Dependency | Purpose | Version Source |
|---|---|---|
| simd-r-drive | Core storage engine | workspace |
| simd-r-drive-muxio-service-definition | RPC service contract | workspace |
| muxio-tokio-rpc-server | RPC server runtime | workspace (0.9.0-alpha) |
| muxio-rpc-service | RPC abstractions | workspace (0.9.0-alpha) |
| axum | Web framework | 0.8.4 |
| tokio | Async runtime | workspace |
| clap | CLI argument parsing | workspace |
| tracing / tracing-subscriber | Structured logging | workspace |
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:13-22 Cargo.lock:305-339 Cargo.lock:1320-1335
Server Initialization Flow
The server follows a standard initialization pattern: parse CLI arguments, configure logging, initialize the DataStore, and start the WebSocket server.
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:13-22
```mermaid
sequenceDiagram
    participant Main as "main()"
    participant CLI as "clap::Parser"
    participant Tracing as "tracing_subscriber"
    participant DS as "DataStore::open()"
    participant RPC as "muxio_tokio_rpc_server"
    participant Axum as "axum::serve()"
    Main->>CLI: Parse CLI arguments
    CLI-->>Main: ServerArgs{port, path, log_level}
    Main->>Tracing: init() with env_filter
    Note over Tracing: Configure RUST_LOG levels
    Main->>DS: open(data_file_path)
    DS-->>Main: DataStore instance
    Main->>RPC: Create endpoint with DataStore
    RPC-->>Main: ServiceEndpoint
    Main->>Axum: bind() and serve()
    Note over Axum: Listen on 0.0.0.0:port
    Axum->>Axum: Accept WebSocket connections
```
Request Processing Pipeline
When a client connects and sends RPC requests, the server processes them through multiple layers before reaching the storage engine.
Sources: Cargo.lock:305-339 Cargo.lock:1287-1299 Cargo.lock:1320-1335
```mermaid
graph LR
    Client["WebSocket Client"]
    WS["tokio-tungstenite\nWebSocket frame"]
    Axum["Axum Router\nRoute: /ws"]
    Server["muxio-tokio-rpc-server\nMessage decode"]
    Router["muxio-rpc-service-endpoint\nMethod dispatch"]
    Handler["Service Handler\n(read/write/delete)"]
    DS["DataStore\nStorage operation"]
    Client -->|Binary frame| WS
    WS --> Axum
    Axum --> Server
    Server -->|Bitcode decode| Router
    Router --> Handler
    Handler --> DS
    DS -.->|Result| Handler
    Handler -.-> Router
    Router -.->|Bitcode encode| Server
    Server -.-> WS
    WS -.->|Binary frame| Client
```
RPC Service Definition Integration
The server uses the simd-r-drive-muxio-service-definition crate to define the RPC interface contract. This crate acts as a shared dependency between the server and all clients, ensuring type-safe communication.
```mermaid
graph TB
    subgraph "simd-r-drive-muxio-service-definition"
        Service["Service Trait\nRPC method definitions"]
        Types["Request/Response Types\nBitcode serializable"]
        Bitcode["bitcode crate\nBinary serialization"]
    end
    subgraph "Server Side"
        Endpoint["muxio-rpc-service-endpoint\nImplements Service Trait"]
        Handler["Request Handlers\nCall DataStore methods"]
    end
    subgraph "Client Side"
        Caller["muxio-rpc-service-caller\nInvokes Service methods"]
        ClientImpl["ws-client implementation"]
    end
    Service --> Endpoint
    Service --> Caller
    Types --> Bitcode
    Endpoint --> Handler
    Caller --> ClientImpl
```
Service Definition Structure
Sources: experiments/simd-r-drive-muxio-service-definition/Cargo.toml:1-17 experiments/simd-r-drive-ws-server/Cargo.toml:14-17
The service definition uses the bitcode crate for efficient binary serialization, providing compact message sizes and high throughput compared to JSON-based protocols.
Sources: experiments/simd-r-drive-muxio-service-definition/Cargo.toml:14-15 Cargo.lock:392-402
Dependency Graph
The server's dependency structure shows clear separation between web framework, RPC layer, and storage:
```mermaid
graph TD
    Server["simd-r-drive-ws-server"]
    Server --> Axum["axum 0.8.4\nWeb framework"]
    Server --> MuxioServer["muxio-tokio-rpc-server\nRPC runtime"]
    Server --> ServiceDef["simd-r-drive-muxio-service-definition\nRPC contract"]
    Server --> DataStore["simd-r-drive\nStorage engine"]
    Server --> Tokio["tokio\nAsync runtime"]
    Server --> Clap["clap\nCLI parsing"]
    Server --> Tracing["tracing + tracing-subscriber\nLogging"]
    Axum --> Hyper["hyper\nHTTP implementation"]
    Axum --> TokioTung["tokio-tungstenite\nWebSocket protocol"]
    MuxioServer --> Muxio["muxio\nCore RPC abstractions"]
    MuxioServer --> Endpoint["muxio-rpc-service-endpoint\nRouting"]
    ServiceDef --> Bitcode["bitcode\nSerialization"]
    ServiceDef --> MuxioService["muxio-rpc-service\nService traits"]
    Hyper --> Tokio
    TokioTung --> Tokio
    Endpoint --> Tokio
```
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:13-22 Cargo.lock:305-339 Cargo.lock:1320-1335
Configuration and CLI
The server is configured via command-line arguments using the clap crate's derive API. The server binary accepts the following parameters:
| Argument | Type | Default | Description |
|---|---|---|---|
| --port | u16 | 8080 | Port number to bind |
| --path | String | Required | Path to DataStore file |
| --log-level | String | "info" | Tracing log level (error/warn/info/debug/trace) |
The server supports the RUST_LOG environment variable through tracing-subscriber with env-filter for fine-grained logging control.
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:19-22 experiments/simd-r-drive-ws-server/Cargo.toml:21
Concurrency Model
The server leverages Tokio's work-stealing scheduler to handle multiple concurrent WebSocket connections efficiently:
```mermaid
graph TB
    subgraph "Tokio Runtime"
        Scheduler["Work-Stealing Scheduler\nThread pool"]
    end
    subgraph "Connection Handlers"
        Conn1["WebSocket Task 1\nRPC message loop"]
        Conn2["WebSocket Task 2\nRPC message loop"]
        ConnN["WebSocket Task N\nRPC message loop"]
    end
    subgraph "Shared State"
        DS["Arc<DataStore>\nThread-safe storage"]
        Endpoint["Arc<ServiceEndpoint>\nRequest router"]
    end
    Scheduler --> Conn1
    Scheduler --> Conn2
    Scheduler --> ConnN
    Conn1 --> Endpoint
    Conn2 --> Endpoint
    ConnN --> Endpoint
    Endpoint --> DS
    style DS fill:#f9f9f9
    style Endpoint fill:#f9f9f9
```
Each WebSocket connection runs as an independent async task, with the DataStore wrapped in Arc for shared access. The DataStore's internal locking (see Concurrency and Thread Safety) ensures safe concurrent reads and serialized writes.
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:18 Cargo.lock:305-339
Transport Protocol
The server uses binary WebSocket frames over TCP, with the following characteristics:
| Property | Value | Notes |
|---|---|---|
| Protocol | WebSocket (RFC 6455) | Via tokio-tungstenite |
| Message Format | Binary frames | Not text frames |
| Serialization | Bitcode | Compact binary format |
| Framing | Message-per-frame | Each RPC call is one frame |
| Multiplexing | Muxio protocol | Request/response correlation |
The binary format and bitcode serialization provide significantly better performance than text-based protocols like JSON over WebSocket.
Sources: Cargo.lock:305-339 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:14
Error Handling
The server implements error handling at multiple layers:
- WebSocket Layer: Connection errors and protocol violations handled by tokio-tungstenite
- RPC Layer: Serialization errors and invalid method calls handled by muxio-tokio-rpc-server
- Storage Layer: I/O errors and validation failures propagated from DataStore
- Tracing: All errors logged with structured context via tracing spans
Errors are serialized back to clients as RPC error responses, preserving error context across the network boundary.
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:19-20 Cargo.lock:1320-1335
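The storage-to-RPC error path can be illustrated with a hypothetical error enum; the actual wire-level error type is defined by the Muxio framework and is not shown in the sources:

```rust
use std::io;

// Illustrative RPC error shape only; the real muxio error type
// may carry different variants and metadata.
#[derive(Debug, PartialEq)]
enum RpcError {
    Io(String),
    InvalidMethod(String),
}

impl From<io::Error> for RpcError {
    fn from(e: io::Error) -> Self {
        // Preserve the error message so the client sees why the
        // storage operation failed.
        RpcError::Io(e.to_string())
    }
}

// Simulates a storage-layer read whose failure propagates up
// as an RPC error response.
fn read_entry(exists: bool) -> Result<Vec<u8>, RpcError> {
    if exists {
        Ok(b"payload".to_vec())
    } else {
        Err(io::Error::new(io::ErrorKind::NotFound, "key not found").into())
    }
}

fn main() {
    assert!(read_entry(true).is_ok());
    // The I/O failure arrives at the RPC layer with its context intact.
    assert_eq!(
        read_entry(false),
        Err(RpcError::Io("key not found".to_string()))
    );
}
```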
Performance Characteristics
The server is designed for high-throughput operation with minimal overhead:
- Zero-Copy Reads: The DataStore's EntryHandle (see Memory Management and Zero-Copy Access) allows serving read responses without copying payload data
- Async I/O: Tokio's epoll/kqueue-based I/O enables efficient handling of thousands of concurrent connections
- Binary Protocol: Bitcode serialization reduces CPU overhead and bandwidth usage compared to text formats
- Write Batching: Clients can use batch operations (see Write and Read Modes) to amortize RPC overhead
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:14 experiments/simd-r-drive-muxio-service-definition/Cargo.toml:14
Deployment Considerations
When deploying the WebSocket server, consider:
- Single DataStore Instance: The server opens one DataStore file; multiple servers require separate files or external coordination
- Port Binding: The default bind address is 0.0.0.0 (all interfaces); use firewall rules or a reverse proxy for access control
- No TLS: The server does not implement TLS; use a reverse proxy (nginx, HAProxy) for encrypted connections
- Resource Limits: Memory usage scales with DataStore size (memory-mapped file) plus per-connection buffers
- Graceful Shutdown: The Tokio runtime handles SIGTERM/SIGINT for clean connection closure
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:1-23
Experimental Status
As indicated by its location in the experiments/ workspace directory, this server is currently experimental and subject to breaking changes. The API surface and configuration options may evolve as the Muxio RPC framework stabilizes.
Sources: experiments/simd-r-drive-ws-server/Cargo.toml:2 experiments/simd-r-drive-ws-server/Cargo.toml:11