SDK Reference

The @dispatch/compute-router package is the main SDK for submitting jobs. It routes to decentralized workers first and falls back to hosted BYOK providers (OpenAI, Anthropic) if the decentralized network is unavailable.

Installation

pnpm add @dispatch/compute-router @dispatch/protocol

ComputeRouter

Constructor

import { ComputeRouter } from "@dispatch/compute-router";

const router = new ComputeRouter({
  coordinatorUrls: {
    monad: "http://localhost:4010",
    solana: "http://localhost:4020",
  },
  preferredHosted: "openai",  // optional, default: "openai"
  x402Clients: {              // optional, for production with payments
    monad: monadX402Client,
    solana: solanaX402Client,
  },
});

ComputeRouterConfig

interface ComputeRouterConfig {
  coordinatorUrls: {
    monad: string;
    solana: string;
  };
  preferredHosted?: "openai" | "anthropic";
  x402Clients?: {
    monad?: X402ClientLike;
    solana?: X402ClientLike;
  };
}

Field            Type                     Required  Description
coordinatorUrls  object                   Yes       URLs for the Monad and Solana coordinators
preferredHosted  "openai" | "anthropic"   No        Preferred hosted fallback. Default: "openai"
x402Clients      object                   No        x402 HTTP clients for automatic payment handling
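
For local development without payments, only coordinatorUrls is required. A minimal sketch (the localhost ports mirror the constructor example above; without x402Clients, 402 responses cannot be paid automatically):

import { ComputeRouter } from "@dispatch/compute-router";

// Minimal config: only coordinatorUrls is required. Hosted fallback
// defaults to OpenAI, and no x402 payment handling is wired in.
const devRouter = new ComputeRouter({
  coordinatorUrls: {
    monad: "http://localhost:4010",
    solana: "http://localhost:4020",
  },
});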

Methods

runLLM

Run an LLM inference job.

import { Policy, PrivacyClass } from "@dispatch/protocol"; // assumed export location

const result = await router.runLLM({
  prompt: "Explain quantum computing in one paragraph.",
  max_tokens: 256,
  policy: Policy.AUTO,          // optional, default: AUTO
  privacy: PrivacyClass.PUBLIC, // optional, default: PUBLIC
  user_id: "user_abc123",
  chainPreference: "monad",     // optional, default: "monad"
});

Parameters

Param            Type                Required  Description
prompt           string              Yes       The prompt to send to the LLM
max_tokens       number              No        Maximum tokens in the response
policy           Policy              No        Pricing tier. Default: AUTO (resolves to FAST)
privacy          PrivacyClass        No        Privacy class. Default: PUBLIC
user_id          string              Yes       User identifier
chainPreference  "monad" | "solana"  No        Which coordinator to use. Default: "monad"

Returns Promise<ComputeResult>

runTask

Run a task job (summarize, classify, or extract_json).

import { Policy, PrivacyClass } from "@dispatch/protocol"; // assumed export location

const result = await router.runTask({
  task_type: "classify",
  input: "This product is amazing and works perfectly.",
  policy: Policy.AUTO,          // optional, default: AUTO
  privacy: PrivacyClass.PUBLIC, // optional, default: PUBLIC
  user_id: "user_abc123",
  chainPreference: "monad",     // optional, default: "monad"
});

Parameters

Param            Type                                        Required  Description
task_type        "summarize" | "classify" | "extract_json"   Yes       Task type
input            string                                      Yes       Input text to process
policy           Policy                                      No        Pricing tier. Default: AUTO (resolves to CHEAP)
privacy          PrivacyClass                                No        Privacy class. Default: PUBLIC
user_id          string                                      Yes       User identifier
chainPreference  "monad" | "solana"                          No        Which coordinator to use. Default: "monad"

Returns Promise<ComputeResult>
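
As a further example, an extract_json task with the optional fields left at their defaults. This is a sketch: the input string is illustrative, and the shape of output is defined by the worker, so treat it as unknown.

const extracted = await router.runTask({
  task_type: "extract_json",
  input: "Order #1042: 3 widgets at $4.99 each.",
  user_id: "user_abc123",
});

// output is typed as unknown; the JSON shape is worker-defined,
// so narrow or validate it before relying on specific fields.
console.log(extracted.output);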

ComputeResult

Both runLLM and runTask return a ComputeResult:

interface ComputeResult {
  output: unknown;
  route: string;
  price: string | null;
  latency_ms: number;
  receipt: unknown | null;
}

Field       Type            Description
output      unknown         The job output (shape depends on job type)
route       string          Which adapter handled the job
price       string | null   Price in USD (e.g., "$0.010"), or null for hosted routes
latency_ms  number          End-to-end latency in milliseconds
receipt     unknown | null  Cryptographic receipt from the worker (decentralized only)
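
A typical way to consume a ComputeResult, shown as a sketch (the prompt and logging are illustrative):

const result = await router.runLLM({
  prompt: "Summarize the x402 payment flow.",
  user_id: "user_abc123",
});

console.log(`route: ${result.route}`);        // e.g. "decentralized:monad"
console.log(`latency: ${result.latency_ms}ms`);
console.log(`price: ${result.price ?? "n/a (hosted)"}`);

if (result.receipt !== null) {
  // Receipts are only present on decentralized routes.
  console.log("worker receipt:", result.receipt);
}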

Route values

Route                   Description
"decentralized:monad"   Handled by the Monad coordinator
"decentralized:solana"  Handled by the Solana coordinator
"hosted:openai"         Fallback to OpenAI BYOK
"hosted:anthropic"      Fallback to Anthropic BYOK

Routing logic

The SDK tries adapters in this order:

  1. Decentralized (selected chain) — submits to coordinator, polls for result
  2. Hosted fallback — tries preferred hosted provider, then the other

runLLM / runTask

  ├─ Try decentralized:{chainPreference}
  │   ├─ Success → return result
  │   └─ Fail → log warning, continue

  ├─ Try hosted:{preferredHosted}
  │   ├─ Success → return result
  │   └─ Fail → log warning, continue

  ├─ Try hosted:{other}
  │   ├─ Success → return result
  │   └─ Fail → throw Error

  └─ "All compute adapters failed"

Decentralized flow

When routing through the decentralized network, the SDK:

  1. Gets a quote — GET /v1/quote?job_type=...&policy=...
  2. Submits the job — POST /v1/jobs/commit/{tier}
  3. Handles x402 — if a 402 is returned and an x402 client is configured, signs the payment and retries
  4. Polls for the result — GET /v1/jobs/{id} every 500ms until the job completes or times out (a raw-HTTP sketch of this loop follows the timeouts below)

Timeouts:

  • TASK jobs: 30 seconds
  • LLM_INFER jobs: 60 seconds
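
The SDK performs this loop internally; the equivalent raw HTTP flow looks roughly like the sketch below. Response field names such as tier, id, status, and output are assumptions for illustration, not the coordinator's documented schema:

// Illustrative only: mirrors the SDK's internal quote -> commit -> poll loop.
async function runDecentralizedTask(base: string, timeoutMs = 30_000) {
  // 1. Quote (job_type/policy values here are examples)
  const quote = await fetch(
    `${base}/v1/quote?job_type=TASK&policy=CHEAP`,
  ).then((r) => r.json());

  // 2. Commit; without an x402 client a 402 cannot be auto-paid
  const commit = await fetch(`${base}/v1/jobs/commit/${quote.tier}`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ task_type: "summarize", input: "..." }),
  });
  if (commit.status === 402) {
    throw new Error("payment required: configure an x402 client");
  }
  const { id } = await commit.json();

  // 3. Poll every 500ms until completed or the deadline passes
  //    (the SDK uses 30s for TASK jobs, 60s for LLM_INFER jobs)
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const job = await fetch(`${base}/v1/jobs/${id}`).then((r) => r.json());
    if (job.status === "completed") return job.output;
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  throw new Error("job timed out");
}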

Hosted BYOK fallback

For hosted fallback, set the corresponding environment variables:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
PREFERRED_HOSTED_PROVIDER=openai

The hosted adapters call provider APIs directly — no coordinator, no x402 payments. The route field in the result shows which hosted provider was used.
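
If your runtime does not pick up PREFERRED_HOSTED_PROVIDER automatically, it can be wired into the constructor by hand. A sketch, assuming a Node environment and that the SDK does not read the variable itself:

import { ComputeRouter } from "@dispatch/compute-router";

const hostedRouter = new ComputeRouter({
  coordinatorUrls: {
    monad: "http://localhost:4010",
    solana: "http://localhost:4020",
  },
  // Falls back to the SDK default ("openai") when the variable is unset.
  preferredHosted:
    (process.env.PREFERRED_HOSTED_PROVIDER as
      | "openai"
      | "anthropic"
      | undefined) ?? "openai",
});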