Digital Twin QPU — API Reference
v1.1 · 34 endpoints

All requests go via the gateway at /api/v1/digital-twin/. The gateway requires a valid JWT; it injects X-API-Key when forwarding to the twin — clients never send the raw API key directly. Endpoints marked FPGA are designed for hardware integration and are not exposed through the frontend UI.
Authentication
POST /api/v1/auth/login, then send Authorization: Bearer <access_token> on every request.

Step 1 — Get a JWT
```shell
curl -s -X POST "http://localhost/api/v1/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"email":"admin@qcraft.local","password":"YOUR_PASSWORD"}'
# Response: {"access_token":"eyJ...","refresh_token":"eyJ...","token_type":"Bearer"}
```

Step 2 — Use the JWT
```shell
# Save the token
export ACCESS_TOKEN="eyJ..."

# Health check
curl -s "http://localhost/api/v1/digital-twin/health" \
  -H "Authorization: Bearer $ACCESS_TOKEN"

# List backends
curl -s "http://localhost/api/v1/digital-twin/backends" \
  -H "Authorization: Bearer $ACCESS_TOKEN" | jq .
```
Client Examples
Replace http://localhost with your gateway base URL. All calls use the access_token from login.
curl — submit a job
```shell
export BASE="http://localhost/api/v1/digital-twin"
export TOKEN="YOUR_ACCESS_TOKEN"

# Submit a Bell-state circuit (QASM string)
curl -s -X POST "$BASE/jobs" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "backend_id": "ibm_jakarta_like",
    "circuits": ["OPENQASM 2.0; include \"qelib1.inc\"; qreg q[2]; creg c[2]; h q[0]; cx q[0],q[1]; measure q -> c;"],
    "shots": 1024
  }'

# Poll job status
curl -s "$BASE/jobs/JOB_ID" -H "Authorization: Bearer $TOKEN"

# Get results
curl -s "$BASE/jobs/JOB_ID/result" -H "Authorization: Bearer $TOKEN"
```

Python (requests)
```python
import requests, time

BASE = "http://localhost/api/v1/digital-twin"
headers = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

# Submit job
r = requests.post(f"{BASE}/jobs", headers=headers, json={
    "backend_id": "ibm_jakarta_like",
    "circuits": ['OPENQASM 2.0; include "qelib1.inc"; qreg q[2]; creg c[2]; h q[0]; cx q[0],q[1]; measure q -> c;'],
    "shots": 1024,
})
job_id = r.json()["job_id"]
print("Submitted:", job_id)

# Poll until done
while True:
    status = requests.get(f"{BASE}/jobs/{job_id}", headers=headers).json()
    print("Status:", status["status"])
    if status["status"] in ("COMPLETED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

# Get counts
result = requests.get(f"{BASE}/jobs/{job_id}/result", headers=headers).json()
print("Counts:", result["results"][0]["counts"])
```

TypeScript / Node (fetch)
```typescript
const BASE = "http://localhost/api/v1/digital-twin";
const headers = {
  "Authorization": `Bearer ${ACCESS_TOKEN}`,
  "Content-Type": "application/json",
};

// Submit job
const res = await fetch(`${BASE}/jobs`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    backend_id: "ibm_jakarta_like",
    circuits: ['OPENQASM 2.0; include "qelib1.inc"; qreg q[2]; creg c[2]; h q[0]; cx q[0],q[1]; measure q -> c;'],
    shots: 1024,
  }),
});
const { job_id } = await res.json();

// Poll until the job reaches a terminal state
let status;
do {
  await new Promise(r => setTimeout(r, 1000));
  status = await (await fetch(`${BASE}/jobs/${job_id}`, { headers })).json();
} while (!["COMPLETED", "FAILED", "CANCELLED"].includes(status.status));

const result = await (await fetch(`${BASE}/jobs/${job_id}/result`, { headers })).json();
console.log(result.results[0].counts);
```

Java (HttpClient)
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;

var client = HttpClient.newHttpClient();
var headers = new String[]{"Authorization", "Bearer " + ACCESS_TOKEN, "Content-Type", "application/json"};
String base = "http://localhost/api/v1/digital-twin";

// Submit (a text block must open with """ followed by a line break)
var body = """
    {"backend_id":"ibm_jakarta_like","circuits":["OPENQASM 2.0; ..."],"shots":1024}""";
var req = HttpRequest.newBuilder()
    .uri(URI.create(base + "/jobs"))
    .headers(headers)
    .POST(BodyPublishers.ofString(body))
    .build();
var resp = client.send(req, BodyHandlers.ofString());
System.out.println(resp.body());
```

Postman
- Create a collection variable ACCESS_TOKEN
- Add a pre-request script to auto-login and set ACCESS_TOKEN
- On each request: Headers → Authorization: Bearer <ACCESS_TOKEN> (use Postman's collection variable in double-curly notation)
- Import the OpenAPI spec from /api/v1/digital-twin/openapi.json
Core (provider-style)
| Method | Path | Description |
|---|---|---|
| GET | /health | Health check — returns {"status":"healthy","timestamp":"..."} |
| GET | /backends | List all operational backends with topology, basis gates, qubit count, and limits |
| GET | /backends/{backend_id} | Full details for one backend including qubit_positions for topology rendering |
| GET | /metrics | Per-backend JSON metrics: queue depth, running/completed/failed jobs, average latencies |
| GET | /metrics/prometheus | Same metrics in Prometheus text exposition format for scraping |
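The Prometheus endpoint emits the standard text exposition format. As a minimal client-side parsing sketch (the metric names in the sample are invented for illustration; the twin's real names may differ):

```python
def parse_prometheus(text: str) -> dict:
    """Parse Prometheus text exposition lines into {metric: value}.

    Minimal sketch: skips HELP/TYPE comments, keeps labels as part of the key,
    and ignores optional timestamps and malformed lines.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blanks
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            pass
    return metrics

# Hypothetical sample output; metric names are illustrative only
sample = """\
# HELP twin_queue_depth Jobs waiting per backend
# TYPE twin_queue_depth gauge
twin_queue_depth{backend="ibm_jakarta_like"} 3
twin_jobs_completed_total 42
"""
parsed = parse_prometheus(sample)
```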
Jobs
| Method | Path | Description |
|---|---|---|
| POST | /jobs | Submit circuits for execution. Body: {"backend_id","circuits":[],"shots":1024,"metadata":{}}. Returns: {"job_id","status":"QUEUED",...} |
| GET | /jobs/{job_id} | Job status — QUEUED, RUNNING, COMPLETED, FAILED, or CANCELLED |
| POST | /jobs/{job_id}/metadata | Attach decoder or QEC metadata to a job. Used by FPGA decoders to push decoder_metrics and qec_metrics. |
| GET | /jobs/{job_id}/result | Fetch per-circuit counts. Returns 202 while still queued/running, 410 if cancelled. |
| DELETE | /jobs/{job_id} | Cancel a queued job (cannot cancel a running job). |
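Counts in a completed result map bitstrings to shot totals (see the example response below). A small, server-free helper to normalize them into probabilities; the payload here simply mirrors the documented result shape:

```python
def counts_to_probs(counts: dict, shots: int) -> dict:
    """Convert raw measurement counts into per-bitstring probabilities."""
    return {bitstring: n / shots for bitstring, n in counts.items()}

# Shape mirrors the documented /jobs/{job_id}/result payload
result = {"shots": 1024, "results": [{"circuit_index": 0, "shots": 1024,
                                      "counts": {"00": 512, "11": 512}}]}
first = result["results"][0]
probs = counts_to_probs(first["counts"], first["shots"])
# An ideal Bell state yields roughly equal weight on "00" and "11"
```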
Example response — job result

```json
{
  "job_id": "abc123",
  "backend_id": "ibm_jakarta_like",
  "status": "COMPLETED",
  "shots": 1024,
  "results": [
    {
      "circuit_index": 0,
      "shots": 1024,
      "counts": {"00": 512, "11": 512}
    }
  ]
}
```

Telemetry
All telemetry endpoints require the X-API-Key header (passed automatically by the gateway). Config base values are the defaults; controller overrides are reflected in real time.

| Method | Path | Description |
|---|---|---|
| GET | /telemetry/qubits?backend_id= | Per-qubit snapshot: T1, T2, readout error, single-qubit error, drift multiplier — all with current controller overrides applied |
| GET | /telemetry/qec?backend_id=&job_id=&circuit_index= | QEC metrics: uses decoder-pushed metadata first (metric_source=decoder), falls back to heuristic. Persists the data point. |
| GET | /telemetry/qec/summary?backend_id=&job_id= | Aggregated QEC summary: latest point, average logical & physical error rates over all captured points |
| GET | /telemetry/qec/history?backend_id=&job_id=&limit= | Full time-series array of QEC metric points captured via prior /telemetry/qec calls |
| GET | /telemetry/events?backend_id=&limit=&since_timestamp= | Event log: job lifecycle events and controller commands |
| GET | /telemetry/events/stream?backend_id= | Server-Sent Events (SSE) live stream. Not recommended on Lambda (response buffering). Use polling instead. |
| GET | /telemetry/loop?backend_id= | Controller loop cadence: count, last timestamp, avg/min/max interval of policy_update commands |
| GET | /telemetry/trace/export?backend_id=&job_id= | Export all telemetry events + QEC points for a job (reproducibility trace) |
| GET | /telemetry/scenario/compatibility?backend_id= | Reports which YAML keys in the active scenario are recognized vs. silently ignored |
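Because SSE streaming is discouraged on Lambda, events are typically consumed by polling /telemetry/events with since_timestamp. A sketch of the cursor bookkeeping; the event dicts are fabricated purely for illustration (only the timestamp field is taken from this reference, the event field name is an assumption):

```python
def advance_cursor(events: list, since):
    """Return the newest timestamp seen, for the next since_timestamp query."""
    if not events:
        return since  # empty poll: keep the old cursor
    return max(e["timestamp"] for e in events)

# Illustrative batches, as if returned by two successive polls
batch1 = [{"timestamp": "2024-01-01T00:00:01Z", "event": "job_queued"},
          {"timestamp": "2024-01-01T00:00:02Z", "event": "job_running"}]
cursor = advance_cursor(batch1, None)
cursor = advance_cursor([], cursor)  # no new events leaves the cursor unchanged
```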
Controller
| Method | Path | Description |
|---|---|---|
| POST | /controller/{backend_id}/commands | Apply a command. See command types below. |
| GET | /controller/{backend_id}/state | Current controller state: all applied overrides, status, last heartbeat |
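As a sketch of the envelope that POST /controller/{backend_id}/commands expects; the set of known command types is transcribed from the command list below, and the helper name is invented:

```python
# Command types transcribed from this reference
KNOWN_COMMANDS = {
    "inject_noise_override", "set_qubit_noise", "reset_qubit_noise",
    "set_scenario", "policy_update", "set_pulse_schedule",
    "set_adc_gain", "set_dac_bias", "reset",
}

def make_command(command_type: str, **payload) -> dict:
    """Build the JSON body for POST /controller/{backend_id}/commands."""
    if command_type not in KNOWN_COMMANDS:
        raise ValueError(f"unknown command_type: {command_type}")
    return {"command_type": command_type, "payload": payload}

cmd = make_command("inject_noise_override", gate_error_multiplier=1.5)
# e.g. requests.post(f"{BASE}/controller/ibm_jakarta_like/commands",
#                    headers=headers, json=cmd)
```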
Command types

inject_noise_override
Scale gate and/or readout errors globally.

```json
{
  "command_type": "inject_noise_override",
  "payload": {
    "gate_error_multiplier": 1.5,
    "readout_error_multiplier": 2.0,
    "exec_time_scale": 1.2
  }
}
```

set_qubit_noise
Override noise for a specific qubit (config = default; this override takes precedence). Pass -1 to revert to the config default.

```json
{
  "command_type": "set_qubit_noise",
  "payload": {
    "qubit": 3,
    "t1_us": 80.0,
    "t2_us": 60.0,
    "readout_error": 0.05,
    "gate_error": 0.002
  }
}
```

reset_qubit_noise
Clear per-qubit overrides. Omit qubit to reset all qubits to config defaults.

```json
{
  "command_type": "reset_qubit_noise",
  "payload": { "qubit": 3 }
}
```

set_scenario
Hot-swap the active noise scenario at runtime (no restart needed).

```json
{
  "command_type": "set_scenario",
  "payload": {
    "scenario_name": "noisy_day"
  }
}
```

policy_update
Closed-loop controller push: gate, readout, and drift multipliers plus timing scales.

```json
{
  "command_type": "policy_update",
  "payload": {
    "gate_error_multiplier": 1.1,
    "readout_error_multiplier": 0.9,
    "drift_multiplier": 1.0,
    "exec_time_scale": 1.0
  }
}
```

set_pulse_schedule
Map a pulse schedule to a gate-error scaling (FPGA proxy).

```json
{
  "command_type": "set_pulse_schedule",
  "payload": { "error_multiplier": 1.3 }
}
```

set_adc_gain / set_dac_bias
Directly scale readout or gate errors via ADC/DAC proxy values.

```json
{
  "command_type": "set_adc_gain",
  "payload": { "readout_error_multiplier": 1.8 }
}
```

reset
Reset all overrides (global and per-qubit) to config-file defaults.

```json
{
  "command_type": "reset",
  "payload": {}
}
```

Controller API v1 FPGA
| Method | Path | Description |
|---|---|---|
| POST | /controller_api/v1/{backend_id}/pulse_schedule | Submit a pulse schedule (gate-like action list). Returns schedule_id for use in readout. Body: {"actions":[{"gate":"h","qubits":[0]},...],"n_qubits":2} |
| GET | /controller_api/v1/{backend_id}/readout?schedule_id=&round_index= | Run the stored pulse schedule as a circuit and return per-qubit outcomes + 32-bit packed meas_data_packed. Without schedule_id: returns synthetic Bernoulli outcomes based on avg readout error. |
| POST | /controller_api/v1/{backend_id}/readout_config | Set readout configuration for subsequent readouts. Body: {"channels":[0,1,2],"integration_time_ns":1000,"thresholds":{"0":0.5}} |
| GET | /controller_api/v1/{backend_id}/cryo | Get cryostat stub state: temperature, stage, T1/T2 multipliers, mixing-chamber and still temps |
| POST | /controller_api/v1/{backend_id}/cryo | Set cryo parameters. Setting temperature_mk automatically computes T1/T2 multipliers via a physics model (superconducting transmon curve); explicit t1_multiplier/t2_multiplier override the physics curve. Temperature model: 15 mK → 1.0×, 30 mK → 0.85×, 50 mK → 0.60×, 100 mK → 0.20× |
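The cryo endpoint's temperature model is stated only at four points. A hedged approximation that linearly interpolates between them (the twin's internal transmon curve may differ between these anchors):

```python
# Documented (temperature_mk, multiplier) anchor points
CURVE = [(15.0, 1.0), (30.0, 0.85), (50.0, 0.60), (100.0, 0.20)]

def coherence_multiplier(temp_mk: float) -> float:
    """Approximate the T1/T2 multiplier by linear interpolation.

    Clamps below 15 mK and above 100 mK to the nearest documented value.
    """
    if temp_mk <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_mk >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, m0), (t1, m1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_mk <= t1:
            return m0 + (m1 - m0) * (temp_mk - t0) / (t1 - t0)
```

At 40 mK, for example, this lands midway between the 30 mK and 50 mK anchors.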
Readout data format (meas_data_packed)

```json
// 32-bit integer where bit i = outcome of qubit i (LSB = qubit 0)
// Example: qubits 0 and 2 measured as |1⟩, qubits 1 and 3 as |0⟩ → packed = 0b0101 = 5
{
  "outcomes": [1, 0, 1, 0],
  "meas_data_packed": 5
}
```

QEC Session API FPGA
| Method | Path | Description |
|---|---|---|
| POST | /qec_session/v1/ | Create a session. Body: {"backend_id","code_family":"surface","code_distance":3}. Returns: session_id, num_stabilizers |
| GET | /qec_session/v1/{session_id}/round/{round_index}/syndrome | Get synthetic syndrome bits for this round. Advances internal round counter. Returns syndrome_bits[] + meas_data_packed. |
| POST | /qec_session/v1/{session_id}/round/{round_index}/correction | Submit decoder correction bits for this round. Stores correction; next syndrome GET advances to round+1. |
| GET | /qec_session/v1/{session_id}/status | Session status: current round, code family, code distance, active/ended |
| POST | /qec_session/v1/{session_id}/end | End session and return total rounds + stats (corrections_received) |
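Both the FPGA readout endpoint and the syndrome endpoint return meas_data_packed (bit i = qubit i, LSB first, per the readout data format above). Small helpers for unpacking and re-packing the word, written as an illustrative sketch:

```python
def unpack_outcomes(packed: int, n_qubits: int) -> list:
    """Unpack a 32-bit measurement word into per-qubit bits (LSB = qubit 0)."""
    return [(packed >> i) & 1 for i in range(n_qubits)]

def pack_outcomes(outcomes: list) -> int:
    """Inverse: pack per-qubit bits back into an integer."""
    return sum(bit << i for i, bit in enumerate(outcomes))

bits = unpack_outcomes(5, 4)  # documented example: 0b0101 → [1, 0, 1, 0]
```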
QEC session loop (Python pseudocode)
```python
import requests

BASE = "http://localhost/api/v1/digital-twin"
H = {"Authorization": "Bearer " + TOKEN}

# 1. Create session
s = requests.post(f"{BASE}/qec_session/v1/", headers=H, json={
    "backend_id": "ibm_jakarta_like",
    "code_family": "repetition",
    "code_distance": 3,
}).json()
session_id = s["session_id"]
print("Stabilizers:", s["num_stabilizers"])

# 2. Run rounds
for round_idx in range(10):
    syndrome = requests.get(
        f"{BASE}/qec_session/v1/{session_id}/round/{round_idx}/syndrome",
        headers=H,
    ).json()
    print(f"Round {round_idx} syndrome:", syndrome["syndrome_bits"])
    # Your decoder logic here...
    correction = [0] * len(syndrome["syndrome_bits"])
    requests.post(
        f"{BASE}/qec_session/v1/{session_id}/round/{round_idx}/correction",
        headers=H, json={"correction_bits": correction},
    )

# 3. End session
stats = requests.post(f"{BASE}/qec_session/v1/{session_id}/end", headers=H).json()
print("Total rounds:", stats["total_rounds"])
print("Corrections received:", stats["stats"]["corrections_received"])
```

Config (YAML files)
On Lambda, config files are read-only (PUT /config/content returns 501). Run the twin locally to edit files.

| Method | Path | Description |
|---|---|---|
| GET | /config | List all .yaml/.yml file paths and writable flag |
| GET | /config/content?path=backends.yaml | Get raw YAML content of a file. Path is relative to the config directory. |
| PUT | /config/content | Update a config file. Body: {"path":"noise/ibm_jakarta_like.yaml","content":"noise:\n ..."}. Returns 501 on Lambda. |
Parameter precedence (highest to lowest)
- Per-qubit API override — the set_qubit_noise command sets absolute T1/T2/readout/gate values for a specific qubit
- Global controller multiplier — inject_noise_override, policy_update, and the cryo API scale all qubits
- Scenario overlay — loaded from scenarios/*.yaml; multiplies base values; hot-swappable via set_scenario
- Config file default — values in noise/*.yaml, devices/*.yaml, scheduler/*.yaml; the baseline for all of the above
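A sketch of how these four layers might compose for a single qubit's readout error. The order follows the list above; the twin's actual resolution code is not shown in this reference, so treat the arithmetic as illustrative:

```python
def effective_readout_error(base, scenario_factor=1.0,
                            global_multiplier=1.0, per_qubit_override=None):
    """Resolve one qubit's readout error per the documented precedence."""
    if per_qubit_override is not None:
        return per_qubit_override  # 1. a set_qubit_noise value wins outright
    # 4. config default, scaled by 3. scenario overlay and 2. global multiplier
    return base * scenario_factor * global_multiplier

# Config default 0.02, scenario doubles it, controller scales by 1.5
val = effective_readout_error(0.02, scenario_factor=2.0, global_multiplier=1.5)
# A per-qubit override ignores the scenario and global layers entirely
ovr = effective_readout_error(0.02, 2.0, 1.5, per_qubit_override=0.05)
```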
