# Architecture

Three roles: seekers request compute, coordinators match and route jobs, and workers execute them. All communication runs over standard HTTP and WebSocket.
## System overview
```
┌──────────┐                          ┌─────────────┐                         ┌───────────┐
│          │  1. GET /v1/quote        │             │                         │           │
│  Seeker  │ ───────────────────────▶ │ Coordinator │ ◀─── WebSocket ───────▶ │  Worker   │
│  (SDK)   │  2. POST /v1/jobs/       │ (Express +  │    register             │ (Desktop) │
│          │     commit/{tier}        │  WebSocket) │    heartbeat            │           │
│          │  3. GET /v1/jobs/{id}    │             │    job_assign ────────▶ │           │
│          │ ◀────── poll ─────────── │             │ ◀──── job_complete ──── │           │
└──────────┘                          └──────┬──────┘                         └───────────┘
                                             │
                                      ┌──────┴──────┐
                                      │   SQLite    │
                                      │  ─ jobs     │
                                      │  ─ trust    │
                                      │  ─ receipts │
                                      └─────────────┘
```

## Roles
### Seeker
A seeker is any client that submits jobs:

- A backend service using the `ComputeRouterSDK`
- A mobile app making REST calls
- A CLI tool like the `cloudbot-demo`

Seekers interact exclusively over HTTP. They never open WebSocket connections.
### Coordinator
The coordinator is the central routing node. It runs an Express HTTP server and a WebSocket server on the same port (via `server.on('upgrade')`).
What it does:

- Accepts job submissions via REST
- Gates paid endpoints with x402 middleware (when enabled)
- Maintains a live WebSocket connection pool of workers (the `WorkerHub`)
- Matches jobs to workers using atomic `claimWorker()` — a synchronous select-and-mark-busy that prevents time-of-check/time-of-use races
- Stores jobs, trust pairings, and receipts in SQLite
- Enforces privacy routing — PRIVATE jobs go only to trust-paired workers
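The atomic matching above can be sketched as follows. This is a minimal in-memory illustration, not the real `claimWorker()`: the pool shape, `releaseWorker`, and field names are assumptions. The point is that the find and the busy-flag write happen in one synchronous step on the single JS thread, so no other request can claim the same worker in between.

```typescript
// Illustrative worker shape; the real WorkerHub tracks live sockets too.
type Worker = { pubkey: string; capabilities: string[]; busy: boolean };

const pool: Worker[] = [];

function claimWorker(jobType: string): Worker | null {
  // Synchronous select-and-mark-busy: nothing can run between the
  // find() and the busy = true write, which closes the TOCTOU window.
  const worker = pool.find(w => !w.busy && w.capabilities.includes(jobType));
  if (!worker) return null;
  worker.busy = true; // mark busy before returning; this is the atomic step
  return worker;
}

// Hypothetical counterpart: free a worker when its job finishes.
function releaseWorker(pubkey: string): void {
  const worker = pool.find(w => w.pubkey === pubkey);
  if (worker) worker.busy = false;
}
```

An `await` between the check and the mark would reopen the race; keeping both in one synchronous function is the whole design.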
Two coordinator instances run in parallel:

- Monad (port 4010) — EVM chain, `eip155:10143`, uses `ExactEvmScheme`
- Solana (port 4020) — `solana:EtWTRABZaYq6iMfeYKouRu166VU2xqa1`, uses `ExactSvmScheme`

Each has its own SQLite database (`monad.db` / `solana.db`).
### Worker
Workers connect to a coordinator via WebSocket and execute assigned jobs:
| Type | Capabilities | Use case |
|---|---|---|
| Desktop | LLM_INFER, TASK | Full compute — runs Ollama for LLM inference + built-in task execution |
| Seeker | TASK only | Lightweight — text summarization, classification, JSON extraction |
Workers generate an ed25519 keypair on first run, register with the coordinator, send heartbeats every 10 seconds, and sign cryptographic receipts for every completed job.
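The register-then-heartbeat handshake can be sketched like this. The message field names (`type`, `pubkey`, `ts`) and the `send` helper are assumptions for illustration; only the 10-second cadence comes from the text above.

```typescript
// send() stands in for ws.send(JSON.stringify(...)) on an open socket.
type Send = (msg: object) => void;

// Register once, then heartbeat every intervalMs (10 s in the real worker).
// Returns a stop function that cancels the heartbeat timer.
function startHeartbeat(send: Send, pubkey: string, intervalMs = 10_000): () => void {
  send({ type: "register", pubkey, capabilities: ["LLM_INFER", "TASK"] });
  const timer = setInterval(() => {
    send({ type: "heartbeat", pubkey, ts: Date.now() });
  }, intervalMs);
  return () => clearInterval(timer);
}
```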
## Data flow: job lifecycle
### 1. Quote

```
Seeker → GET /v1/quote?job_type=LLM_INFER&policy=AUTO
       ← { price: "$0.010", endpoint: "/v1/jobs/commit/fast", policy_resolved: "FAST" }
```

The quote endpoint resolves AUTO policy (LLM_INFER defaults to FAST, TASK defaults to CHEAP) and returns the price and commit endpoint.
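The AUTO resolution can be sketched as below. Only the FAST price and the defaults are stated above; the CHEAP price and the price-table shape are placeholders.

```typescript
type JobType = "LLM_INFER" | "TASK";
type Policy = "AUTO" | "FAST" | "CHEAP";

// Hypothetical price table; "$0.010" for FAST is from the example response,
// the CHEAP value is made up for illustration.
const PRICES: Record<string, string> = { FAST: "$0.010", CHEAP: "$0.002" };

// AUTO resolves by job type: LLM_INFER -> FAST, TASK -> CHEAP.
function resolvePolicy(jobType: JobType, policy: Policy): Exclude<Policy, "AUTO"> {
  if (policy !== "AUTO") return policy;
  return jobType === "LLM_INFER" ? "FAST" : "CHEAP";
}

function quote(jobType: JobType, policy: Policy) {
  const resolved = resolvePolicy(jobType, policy);
  return {
    price: PRICES[resolved],
    endpoint: `/v1/jobs/commit/${resolved.toLowerCase()}`,
    policy_resolved: resolved,
  };
}
```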
### 2. Commit with payment

```
Seeker → POST /v1/jobs/commit/fast   (with X-PAYMENT header if x402 enabled)
       ← { job_id: "uuid" }
```

If x402 is enabled, the first request returns 402 Payment Required with payment details. The SDK handles this transparently — it signs a stablecoin payment and retries.
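The transparent retry can be sketched as below. This is not the real SDK: the 402 body shape and the `signPayment` callback are assumptions; only the 402-then-retry-with-`X-PAYMENT` flow comes from the text.

```typescript
// Minimal fetch-shaped interface so the flow is testable without a network.
type FetchLike = (
  url: string,
  init?: { method?: string; body?: string; headers?: Record<string, string> },
) => Promise<{ status: number; json: () => Promise<any> }>;

// POST once; on 402, sign the payment details from the response body and
// retry the same request with the X-PAYMENT header attached.
async function commitWithPayment(
  fetchFn: FetchLike,
  url: string,
  body: string,
  signPayment: (details: unknown) => Promise<string>, // hypothetical signer
): Promise<any> {
  const first = await fetchFn(url, { method: "POST", body });
  if (first.status !== 402) return first.json();
  const details = await first.json(); // 402 body carries payment details
  const header = await signPayment(details);
  const second = await fetchFn(url, {
    method: "POST",
    body,
    headers: { "X-PAYMENT": header },
  });
  return second.json();
}
```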
### 3. Worker assignment

The coordinator finds an available worker using `claimWorker()`:

- For PUBLIC jobs: any online worker with matching capabilities
- For PRIVATE jobs: only workers that the user has trust-paired with

If no worker is available, the coordinator retries every 2 seconds for up to 30 seconds. Private jobs with no trusted worker fail immediately with 422.
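The retry behavior can be sketched as below, with `findWorker` standing in for the real matcher; the interval, timeout, and fail-fast 422 are from the text, everything else is illustrative.

```typescript
// Retry matching every intervalMs until timeoutMs elapses; PRIVATE jobs
// with no trusted worker fail immediately with a 422-tagged error.
async function assignWithRetry<T>(
  findWorker: () => T | null,
  isPrivateWithNoTrusted: boolean,
  intervalMs = 2_000,
  timeoutMs = 30_000,
): Promise<T> {
  if (isPrivateWithNoTrusted) {
    throw Object.assign(new Error("no trusted worker"), { status: 422 });
  }
  const deadline = Date.now() + timeoutMs;
  while (true) {
    const worker = findWorker();
    if (worker) return worker;
    if (Date.now() >= deadline) throw new Error("no worker available");
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```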
```
Coordinator → WebSocket → Worker: job_assign { job_id, job_type, payload, policy, privacy_class }
```

### 4. Execution and receipt
The worker executes the job (Ollama for LLM, built-in logic for TASK), then:

- Hashes the output with SHA-256
- Builds a receipt with job_id, provider_pubkey, output_hash, timestamp
- Signs the receipt with ed25519
- Sends `job_complete` with the output and bundled receipt over WebSocket
The coordinator stores the job result and receipt atomically.
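The hash-build-sign steps can be sketched with Node's built-in crypto module; the real worker may use a different keypair library, and the exact receipt field layout and encodings here are assumptions.

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// One keypair per worker, generated on first run in the real system.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function buildSignedReceipt(jobId: string, output: string) {
  // SHA-256 the job output, then assemble the receipt fields named above.
  const outputHash = createHash("sha256").update(output).digest("hex");
  const receipt = {
    job_id: jobId,
    provider_pubkey: publicKey.export({ type: "spki", format: "der" }).toString("base64"),
    output_hash: outputHash,
    timestamp: new Date().toISOString(),
  };
  const receiptJson = JSON.stringify(receipt);
  // ed25519 signs the raw bytes directly, so the algorithm argument is null.
  const signature = sign(null, Buffer.from(receiptJson), privateKey).toString("base64");
  return { receipt, receiptJson, signature };
}

// Coordinator-side check before marking the receipt verified.
function verifyReceipt(receiptJson: string, signatureB64: string): boolean {
  return verify(null, Buffer.from(receiptJson), publicKey, Buffer.from(signatureB64, "base64"));
}
```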
### 5. Poll for result

```
Seeker → GET /v1/jobs/{id}
       ← { id, status: "completed", result: {...}, receipt: {...} }
```

The SDK polls every 500 ms. Task jobs time out after 30 seconds, LLM jobs after 60 seconds.
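The polling loop can be sketched as below; `getJob` stands in for the HTTP call, and only the 500 ms interval and the 30 s / 60 s timeouts are from the text.

```typescript
type JobStatus = { status: string; result?: unknown; receipt?: unknown };

// Poll until the job reports completed or the per-type deadline passes.
async function pollForResult(
  getJob: () => Promise<JobStatus>, // stand-in for GET /v1/jobs/{id}
  jobType: "TASK" | "LLM_INFER",
  intervalMs = 500,
): Promise<JobStatus> {
  const timeoutMs = jobType === "LLM_INFER" ? 60_000 : 30_000;
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const job = await getJob();
    if (job.status === "completed") return job;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`job timed out after ${timeoutMs} ms`);
}
```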
## Database schema

The coordinator uses SQLite with WAL mode and foreign keys:

```sql
-- Jobs table
CREATE TABLE jobs (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  policy TEXT NOT NULL,
  privacy_class TEXT NOT NULL DEFAULT 'PUBLIC',
  user_id TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  payload TEXT NOT NULL,
  result TEXT,
  worker_pubkey TEXT,
  created_at TEXT NOT NULL,
  completed_at TEXT
);

-- Trust pairings
CREATE TABLE trust_pairings (
  id TEXT PRIMARY KEY,
  user_id TEXT NOT NULL,
  provider_pubkey TEXT,
  pairing_code TEXT NOT NULL UNIQUE,
  claimed INTEGER NOT NULL DEFAULT 0,
  expires_at TEXT NOT NULL,
  created_at TEXT NOT NULL
);

-- Receipts
CREATE TABLE receipts (
  id TEXT PRIMARY KEY,
  job_id TEXT NOT NULL REFERENCES jobs(id),
  provider_pubkey TEXT NOT NULL,
  receipt_json TEXT NOT NULL,
  signature TEXT NOT NULL,
  verified INTEGER NOT NULL DEFAULT 0,
  created_at TEXT NOT NULL
);
```

## Single-port design
Both HTTP and WebSocket run on the same port per coordinator. The Express HTTP server is created with `http.createServer(app)`, and the WorkerHub upgrades WebSocket connections on that same server:

```typescript
const httpServer = http.createServer(app);
const hub = new WorkerHub(httpServer, db);
```

One port, one process, one URL for both REST and WebSocket.
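A runnable sketch of the single-port layout using only `node:http`, with a stub where the real WorkerHub would sit. In the real system the `'upgrade'` handler would hand the socket to a WebSocket server (e.g. the `ws` package's `handleUpgrade`); here it only demonstrates that plain requests and upgrade requests arrive on the same server.

```typescript
import http from "node:http";

// Stand-in for the Express app: any (req, res) handler works here.
const app = (req: http.IncomingMessage, res: http.ServerResponse) => {
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
};

const httpServer = http.createServer(app);

// WebSocket handshakes arrive as HTTP Upgrade requests on the same server.
httpServer.on("upgrade", (_req, socket) => {
  // The real WorkerHub would complete the handshake here; this stub
  // just closes the socket to keep the sketch self-contained.
  socket.end();
});
```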