
Use this page to choose the infrastructure tier for a Superform validator. The node is lightweight: it observes SuperVault PPS values, participates in OCR2 consensus, signs reports, and persists round state. It does not run an archive node, index chains, or replace your RPC provider. In production, most operator cost comes from RPC endpoints, managed Postgres, monitoring, and signing-key custody rather than the validator binary itself.

How much infrastructure do validators need?

Most approved operators should start with the recommended baseline:
  • 2 vCPU / 2 GB RAM validator host
  • static public endpoint for ragep2p on TCP 6690
  • managed PostgreSQL 15+
  • production RPC provider coverage for every assigned chain
  • KMS-backed onchain signing where supported
  • Prometheus-compatible metrics, alerts, and basic logs
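One baseline item worth verifying early is direct TCP reachability of the ragep2p endpoint on port 6690. A minimal sketch (the hostname below is a placeholder, not a real endpoint):

```python
import socket


def endpoint_reachable(host: str, port: int = 6690, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False


if __name__ == "__main__":
    # Placeholder hostname: substitute your validator's static endpoint.
    print(endpoint_reachable("validator.example.com", 6690))
```

Run the check from outside your own network (a workstation or another cloud region) so NAT and firewall rules are actually exercised, not just loopback.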
The validator can run on AWS, GCP, Azure, Hetzner, OVH, or bare metal. Pick the platform you can operate reliably: a cheap VPS with weak backups, no alerting, and a rotating IP is worse than a slightly more expensive managed setup.

Reference production deployment for a single validator:
| Component | Recommended baseline |
| --- | --- |
| Compute | 1 validator host, 2 vCPU ARM/x86, 2 GB RAM |
| Disk | 16–20 GB encrypted SSD |
| Public networking | Static IP or stable DNS name; direct TCP 6690 reachability |
| Database | Managed PostgreSQL 15+, single-AZ, 20 GB, TLS, 7-day backups |
| Key custody | KMS-backed secp256k1 onchain signer; local or keystore-managed OCR2/P2P identities |
| Monitoring | Prometheus metrics, Grafana dashboards/alerts, log retention |
| RPC | Paid HTTPS + WSS endpoints for each assigned chain |
A reference AWS deployment maps this to one t4g.small validator host in eu-west-1, a 16 GB encrypted gp3 root disk, one Elastic IP, RDS PostgreSQL 16 on db.t4g.micro with TLS and deletion protection, an AWS KMS secp256k1 CMK for the onchain signer, AWS Managed Prometheus/Grafana, and alert routing through SNS or an on-call channel. The validator, optional snapshotd IPC sidecar, and metrics collector can run as separate containers on the same host, with localhost-only Prometheus endpoints.

This baseline is enough because the node is mostly stateless outside Postgres: OCR2 round state is small and recoverable, and RPC quality and network reachability matter more than raw CPU.

Sizing tiers

| Component | Minimum | Recommended | High-Availability |
| --- | --- | --- | --- |
| Use case | Testnet, staging, evaluation | Production validator | Operators with SLA requirements |
| Node CPU | 2 vCPU | 2 vCPU | 4 vCPU |
| Node RAM | 2 GB | 2 GB | 4–8 GB |
| Node disk | 20 GB SSD | 16–20 GB encrypted SSD | 20+ GB encrypted SSD |
| Architecture | x86_64 or arm64 | arm64 preferred | arm64 preferred |
| Public IP | Dynamic acceptable for tests | Static IP / stable DNS | Static IP / stable DNS with failover plan |
| Postgres | Local or self-hosted | Managed single-AZ, 20 GB | Managed multi-AZ, 20–50 GB |
| Key custody | Local encrypted keystore | Cloud KMS for onchain key | Cloud KMS or HSM-backed signer |
| Metrics | Local Prometheus | Prometheus + Grafana alerts | Managed metrics + paging |
| Backups | Manual | 7-day database backups | 7-day backups + cross-region snapshot policy |
| Network | Non-residential preferred | Datacenter network, 100 Mbps+ | Datacenter network, 1 Gbps / SLA-backed |
The Minimum tier is not a production recommendation. Use it to evaluate the software, not to operate a live validator with meaningful stake or uptime expectations.

AWS cost estimate

Approximate AWS eu-west-1 on-demand reference pricing for the recommended baseline. These are rough planning numbers, not a quote: they exclude Savings Plans, Reserved Instances, taxes, support plans, and RPC provider fees.
| Item | Reference spec | Approx. monthly cost |
| --- | --- | --- |
| EC2 t4g.small | 730 h × public on-demand rate | ~$12 |
| EBS gp3 root disk | 16 GB encrypted gp3 | ~$1.50 |
| RDS PostgreSQL | db.t4g.micro, single-AZ | ~$13 |
| RDS storage + backups | 20 GB gp3 + 7-day backups | ~$3 |
| AWS Managed Prometheus | ~10M allowlisted samples/month | ~$9 |
| KMS key + signing API calls | 1 asymmetric secp256k1 CMK | ~$1–3 |
| Secrets Manager | ~5 secrets | ~$2 |
| ECR / SSM / CloudWatch logs | Light operational usage | ~$1–3 |
| Data transfer | ragep2p + RPC client traffic | ~$2–5 |
| Total before RPC fees | Single validator | ~$45–55/month |
RPC fees are separate and may be the largest operating line item; budget for reliable HTTPS and WSS endpoints on every assigned chain through Alchemy, QuickNode, or an equivalent provider. Running the optional snapshot service on its own host typically adds about $20–25/month for another small EC2 instance, an Elastic IP, and extra metric ingestion. Redis or other cache costs depend on the deployment shape and whether the service is shared.
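The total row can be cross-checked by summing the per-item estimates. A small sketch using the figures from the table above, with each line item expressed as a (low, high) range in USD:

```python
# Approximate monthly AWS line items from the reference table, as (low, high) USD.
ITEMS = {
    "EC2 t4g.small": (12, 12),
    "EBS gp3 root disk": (1.5, 1.5),
    "RDS PostgreSQL": (13, 13),
    "RDS storage + backups": (3, 3),
    "AWS Managed Prometheus": (9, 9),
    "KMS key + signing API calls": (1, 3),
    "Secrets Manager": (2, 2),
    "ECR / SSM / CloudWatch logs": (1, 3),
    "Data transfer": (2, 5),
}


def monthly_range(items: dict) -> tuple[float, float]:
    """Sum the (low, high) per-item estimates into an overall monthly range."""
    low = sum(lo for lo, _ in items.values())
    high = sum(hi for _, hi in items.values())
    return low, high


low, high = monthly_range(ITEMS)
# Sums to (44.5, 51.5), consistent with the ~$45-55/month planning band.
print(f"~${low:g}-${high:g}/month before RPC fees")
```

Only the summing helper is new here; the per-item numbers are copied from the table, so the script stays a planning aid rather than a pricing source.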

Cloud and bare-metal equivalents

Rough equivalents for the recommended tier:
| Provider | Compute | Database |
| --- | --- | --- |
| AWS | t4g.small | RDS PostgreSQL db.t4g.micro, single-AZ |
| GCP | t2a-standard-1 or small x86 VM | Cloud SQL PostgreSQL small shared/burstable tier |
| Azure | B2pls v2 or comparable 2 vCPU VM | Azure Database for PostgreSQL burstable tier |
| Hetzner Cloud | CAX11 or comparable 2 vCPU VM | Managed Postgres if available, otherwise self-hosted with backups |
| OVHcloud | Small 2 vCPU public cloud or ARM bare-metal host | Public Cloud Database for PostgreSQL |
| Bare metal | Any reliable 2 vCPU / 2 GB server with SSD | Self-hosted PostgreSQL 15+ with tested backups |
Hetzner, OVH, and bare-metal deployments can be cheaper than AWS. The tradeoff is operational ownership: backups, monitoring, failover, OS patching, network reliability, and key custody become your responsibility.

Optional snapshot service

Some vaults need a price provider for cross-asset PPS conversion, such as Pendle, Spectra, or staking-style vaults that report shares in a different unit than the deposit asset. When those vaults are assigned, operators may run snapshotd as either:
  • IPC sidecar: same host as the validator, communicating over a Unix socket. Lowest cost and simplest when one validator consumes it.
  • Standalone HTTP service: separate host with JWT-gated HTTPS, useful when multiple validators or services share one snapshot endpoint.
Both modes can share the same Redis-compatible cache for vault metadata, so a validator restart does not need to re-fetch every vault domain separator. If all assigned vaults are standard ERC-4626 and do not require cross-asset conversion, the validator can read directly from configured vault contracts and the snapshot service may not be needed.
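In standalone mode, clients reach snapshotd over JWT-gated HTTPS. The base URL, endpoint path, and vault identifier below are illustrative assumptions, not the documented snapshotd API; only the bearer-token pattern comes from the description above:

```python
import urllib.request


def build_snapshot_request(base_url: str, vault_id: str, jwt: str) -> urllib.request.Request:
    """Build an authenticated GET for a hypothetical snapshot endpoint.

    /v1/snapshot/<vault_id> is a placeholder path, not snapshotd's real route.
    """
    return urllib.request.Request(
        f"{base_url}/v1/snapshot/{vault_id}",
        headers={"Authorization": f"Bearer {jwt}"},
    )


# Usage with placeholder values; urlopen() would perform the actual HTTPS call.
req = build_snapshot_request("https://snapshotd.internal.example", "0xVault", "example-jwt")
```

The IPC sidecar mode replaces this transport with a Unix socket on the validator host, which avoids TLS and token management entirely but limits the service to local consumers.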

Quickstart

Fast end-to-end checklist once you are approved.

Node Setup

Install the node, configure runtime files, and verify startup.

Configuration Reference

Full config.toml, chains.yaml, KMS, and OCR2 timing reference.

Monitoring

Health endpoints, priority metrics, alerts, and operating habits.