autonomy-orchestrator
When to use this reference
This reference covers running your own autonomy-orchestrator binary in
self-hosted operation: starting the HTTP API with serve, migrating data with
migrate, and the full flag surface for TLS, logging, and metrics.
If you have been issued a hosted control-plane URL by an AutonomyOps tenant
and you only need to point your autonomy CLI at it (no binary to run, no
local infrastructure to set up), use the SaaS tier’s Hosted Orchestrator
connect-path documentation.
Both paths share the same autonomy CLI surface (fleet, rollout,
logs, …) — the only difference is who operates the orchestrator process.
Deeper deployment guidance
This reference is the public-safe surface: flags, endpoints, error
responses. Topology decisions (single-node SQLite vs HA PostgreSQL
cutover timing), sizing rules, CRL distribution patterns for multi-replica
deployments, pre-deployment checklists, and operational
pitfalls observed in past pilots are consolidated in an
engineering-and-partner-tier deployment guide that is not published to
this site. If you are operating a tenant deployment and need that
material, reach out via autonomyops.ai
and an engineer can share the internal guide
(docs/internal/orchestrator-self-hosted-deployment.md in the adk
repository) under the appropriate access agreement.
Common usage
autonomy-orchestrator --help
autonomy-orchestrator version
autonomy-orchestrator serve --listen 127.0.0.1:8888 --data-dir /var/lib/autonomy/orchestrator
autonomy-orchestrator serve --listen 0.0.0.0:8443 --tls-cert-file server.crt --tls-key-file server.key --tls-ca-file ca.crt
autonomy-orchestrator migrate --sqlite-dir /var/lib/autonomy/orchestrator --postgres-url "$AUTONOMY_POSTGRES_URL"
autonomy-orchestrator migrate --dry-run --sqlite-dir /var/lib/autonomy/orchestrator --postgres-url "$AUTONOMY_POSTGRES_URL"
Commands
serve
Starts the control-plane HTTP API on --listen backed by a local SQLite store.
serve does not accept a PostgreSQL connection — SQLite is the only backend.
Use migrate to transfer data to PostgreSQL for HA deployments.
Endpoints:

| Method | Path | Description |
|---|---|---|
| … | … | Ingest a JSON batch: … |
| … | … | Query stored events (newest first); params: … |
| … | … | Liveness probe: … |
| … | … | Serve CRL bytes (Content-Type: …) |
| … | … | Publish a desired-state release |
| … | … | Get the latest release for a channel (…) |
| … | … | Get all node acks for a release |
| … | … | Upsert a node's ack for a release |
| … | … | Get all acks submitted by a node |
| … | … | Per-node observational summary (…) |
| … | … | Publish a rollout plan |
| … | … | List rollout plans |
| … | … | Get rollout plan and status |
| … | … | Immutable signed plan spec |
| … | … | Mutable rollout status |
| … | … | Pause an active rollout |
| … | … | Resume a paused or halted rollout |
| … | … | Manually promote the current stage |
| … | … | Halt a rollout |
| … | … | Cancel a rollout (terminal) |
| GET | `/metrics` | Prometheus metrics; only registered when `--metrics-addr` is set |
HA health endpoints are not available in standalone mode. The routes under
`/v1/health/read-ready`, `/v1/health/write-ready`, `/v1/health/quorum`,
`/v1/health/audit`, `/v1/health/leader`, and `/v1/ha/*` require a PostgreSQL
`HealthServer` to be wired at startup (`RegisterPGHealth`). The standalone
`autonomy-orchestrator serve` command does not call `RegisterPGHealth` and
therefore does not expose these endpoints.
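The exact route paths and request bodies are not spelled out on this page, so the sketch below is illustrative only: the `/v1/events` path and the `node` and `kind` fields are assumptions, and `event_id` is the only field name this reference confirms. The executable part just builds and prints a batch; the commented `curl` line shows how such a batch would be posted over mTLS.

```shell
# Build a JSON event batch. Re-posting the same batch is safe because
# duplicate event_id values are silently ignored (idempotent ingest).
BATCH='{"events":[{"event_id":"evt-001","node":"edge-7","kind":"telemetry"}]}'
echo "$BATCH"

# Illustrative mTLS POST (the /v1/events path is an assumption):
# curl -sS --cert client.crt --key client.key --cacert ca.crt \
#   -H 'Content-Type: application/json' -d "$BATCH" \
#   https://orchestrator.example:8443/v1/events
```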
Key flags:

| Flag | Default | Description |
|---|---|---|
| `--listen` | … | TCP address to listen on |
| `--data-dir` | XDG cache dir | Data directory for SQLite storage |
| `--metrics-addr` | (disabled) | Prometheus metrics listen address (e.g. …) |
| … | … | Log output format: … |
| … | … | Minimum log level: … |
| `--tls-cert-file` | — | Server TLS certificate (PEM); enables TLS with `--tls-key-file` |
| `--tls-key-file` | — | Server TLS private key (PEM) |
| `--tls-ca-file` | — | CA certificate (PEM) for client verification (enables mTLS) |
| `--tls-crl-file` | — | CRL file (PEM) to reject revoked client certificates; also enables … |
| … | — | Control-plane CRL endpoint to pull (repeatable) |
| … | … | Minimum CRL publishers that must agree before accepting an update |
| … | … | CRL pull refresh interval; … |
| … | — | TLS server name override for … |
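Putting the TLS rows together, a fully hardened invocation looks like the sketch below. Every flag shown is documented on this page; the command is assembled into a string and printed rather than executed, so the sketch stands on its own. Run the printed line on a host where the binary is installed.

```shell
# Assemble a hardened serve invocation: server TLS, mTLS client
# verification, and CRL-based revocation (fail-closed if the CRL is bad).
cmd="autonomy-orchestrator serve \
  --listen 0.0.0.0:8443 \
  --data-dir /var/lib/autonomy/orchestrator \
  --tls-cert-file server.crt \
  --tls-key-file server.key \
  --tls-ca-file ca.crt \
  --tls-crl-file ca.crl"
echo "$cmd"
```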
Notes:

- The control plane holds observational authority only: it stores events as received and derives nothing from them (v1.13 §1.2.3).
- Duplicate `event_id` values are silently ignored (idempotent ingest).
- When `--tls-crl-file` is set, the CRL is hot-reloaded on each new client handshake; no restart is required after running `autonomy cert revoke`.
- Fail-closed guarantee: if `--tls-crl-file` is set but the file is missing or has an invalid CA signature, the server refuses to start.
- `GET /metrics` is registered on the main mux in addition to the dedicated `--metrics-addr` server when `--metrics-addr` is set.
Error responses:

| Code | Condition |
|---|---|
| 400 | Malformed JSON or missing required field |
| 405 | Method not allowed |
| … | SQLite writer busy beyond … |
| … | Disk full (fatal) |
migrate
Migrates all control-plane data from a SQLite source database to a PostgreSQL target. The SQLite source is read-only during migration; no source data is modified or deleted.
Migration order: schema (idempotent) → releases + node_acks → events → rollout_plans + stage_status → row-count validation → HACutoverAt record.
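A cautious cutover can be staged with the flags documented below: a dry-run preview first, then the real copy. The commands are built as strings and echoed so the sketch runs without a live database; the connection URL is a placeholder.

```shell
# Placeholder connection URL; substitute your real HA PostgreSQL target.
AUTONOMY_POSTGRES_URL='postgres://cp:secret@pg.internal:5432/autonomy?sslmode=require'
SRC=/var/lib/autonomy/orchestrator

# 1. Preview row counts without writing to PostgreSQL.
preview="autonomy-orchestrator migrate --dry-run --sqlite-dir $SRC --postgres-url $AUTONOMY_POSTGRES_URL"
# 2. Copy the data (the SQLite source stays read-only throughout).
copy="autonomy-orchestrator migrate --sqlite-dir $SRC --postgres-url $AUTONOMY_POSTGRES_URL"

echo "$preview"
echo "$copy"
```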
Key flags:

| Flag | Default | Description |
|---|---|---|
| `--sqlite-dir` | same as … | Directory containing the SQLite … |
| `--postgres-url` | … | PostgreSQL connection URL |
| … | … | Number of rows per INSERT batch |
| `--dry-run` | false | Preview counts without writing to PostgreSQL |
| … | false | Check row counts only; skip data copy |
Rollback: stop the PostgreSQL CP nodes, revert config to backend=sqlite, restart.
The SQLite source is untouched and safe to roll back to.
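The rollback itself is a config flip. The sketch below assumes a key=value config file with a `backend=` key; the real config location and format are deployment-specific and covered in the internal deployment guide.

```shell
# Simulate the post-cutover state in a temp file, then flip the key back.
cfg=$(mktemp)
printf 'backend=postgres\nlisten=0.0.0.0:8443\n' > "$cfg"

# Revert to SQLite before restarting the CP nodes.
sed -i.bak 's/^backend=postgres$/backend=sqlite/' "$cfg"
grep '^backend=' "$cfg"   # prints: backend=sqlite
```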
version
Prints the binary version, build SHA, build date, tier, and runtime platform.
autonomy-orchestrator version
autonomy-orchestrator version --output json
Evidence

- cmd/autonomy-orchestrator/main.go
- cmd/autonomy-orchestrator/commands/root.go
- cmd/autonomy-orchestrator/commands/serve.go
- cmd/autonomy-orchestrator/commands/migrate.go
- cmd/autonomy-orchestrator/commands/version.go
- orchestrator/server.go — route registrations (`NewServer`, `SetMetrics`, `RegisterPGHealth`)
autonomy-orchestrator is the standalone fleet orchestrator binary for the
AutonomyOps ADK. It stores incoming telemetry events from edge nodes and
exposes a query interface for fleet-wide observability.

See also

- The SaaS tier's Hosted Orchestrator — Connect Path — the connect path for hosted tenants (no binary to run)