
Hardware requirements

Safrochain nodes have very different sizing depending on their role. Treat the numbers below as minimums; CPU and disk should grow with chain age and traffic.

Per-role baselines

| Role | CPU | RAM | Disk | Network | Pruning |
| --- | --- | --- | --- | --- | --- |
| Validator | 4 vCPU, dedicated | 8 GB | 200 GB NVMe | 100 Mbps, low-latency to peers and remote signer | default |
| Sentry / public seed | 4 vCPU | 8 GB | 200 GB NVMe | 1 Gbps, public-facing | default |
| Public RPC (pruned) | 4 vCPU | 16 GB | 500 GB NVMe | 1 Gbps | default |
| Public RPC (archive) | 8 vCPU | 32 GB | 2 TB NVMe (grows ~50 GB / month) | 1 Gbps | nothing |
| Remote signer / cosigner | 2 vCPU | 4 GB | 50 GB NVMe | 50 Mbps to validator + other cosigners | n/a |

Pruning explained

Pruning controls how much application state and how many block bodies the node keeps on disk:

| Strategy | Behaviour | Disk impact |
| --- | --- | --- |
| nothing | keeps everything (archive node) | full history; grows forever |
| default | keeps the most recent ~362 880 states + last 100 blocks | ~200 GB after 1 year |
| everything | keeps only the last 2 states + 2 blocks | tiny; useless for most queries |
| custom | precise control via pruning-keep-recent and pruning-interval | tune as needed |

A typical mainnet config:

# config/app.toml

pruning = "default"

# Note: pruning-keep-recent and pruning-interval are only read when
# pruning = "custom"; with "default" they are ignored.
pruning-keep-recent = "100"
pruning-interval = "10"

For an archive node:

pruning = "nothing"
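
And for fine-grained control, switch to the custom strategy; a sketch (the numbers are illustrative, not recommendations):

```toml
# config/app.toml -- custom pruning; values are illustrative
pruning = "custom"
pruning-keep-recent = "362880"   # ~21 days of states at ~5 s block times
pruning-interval = "100"         # run the pruning pass every 100 blocks
```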

State sizes (rules of thumb)

The Safrochain mainnet has not launched yet, so these are projections from similar Cosmos SDK 0.50 chains:

  • ~5 GB after 1M blocks (year 1)
  • ~50–80 GB after 10M blocks (year 3)
  • archive grows ~50 GB / month at steady traffic

Plan for 3× headroom on disk so a snapshot copy fits beside the live data dir during catch-up.
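The 3× rule above is easy to check mechanically. A minimal sketch in POSIX shell; the data-dir path in the commented example is an assumption, so point it at your node's actual home:

```shell
# check_headroom DIR: warn unless the free space on DIR's volume
# is at least 3x the size of DIR itself.
check_headroom() {
  dir="$1"
  used_kb=$(du -sk "$dir" | awk '{print $1}')          # size of the data dir
  free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')    # free space on its volume
  if [ "$free_kb" -ge $((used_kb * 3)) ]; then
    echo "OK: ${free_kb} KiB free for ${used_kb} KiB of state"
  else
    echo "WARN: want $((used_kb * 3)) KiB free, have ${free_kb} KiB"
  fi
}

# Example (path is an assumption -- use your node's real data dir):
# check_headroom "$HOME/.safrochain/data"
```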

Network

  • Validator → remote signer: ≤ 50 ms RTT (else signing margin shrinks).
  • Validator → sentry: ≤ 30 ms RTT.
  • Sentry → public peers: as good as your hosting provider.
  • Avoid running validators behind aggressive shared-hosting rate limits; trying to sign a block every second while the kernel is shedding packets is the fastest way to miss blocks.
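
The validator-to-sentry link is usually pinned in CometBFT's config.toml rather than discovered; a minimal sketch, where the node IDs and hostnames are placeholders:

```toml
# config/config.toml on the validator -- IDs/addresses are placeholders
[p2p]
pex = false                                            # don't gossip peers
persistent_peers = "<sentry-node-id>@sentry1.internal:26656"

# config/config.toml on each sentry:
# private_peer_ids = "<validator-node-id>"             # never gossip the validator
```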

File descriptors and ulimits

CometBFT P2P opens many sockets. Set:

# /etc/security/limits.d/safrochain.conf
safrochain soft nofile 65535
safrochain hard nofile 1048576

systemd users:

# /etc/systemd/system/safrochaind.service.d/limits.conf
[Service]
LimitNOFILE=1048576
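
After a re-login (or `systemctl daemon-reload` plus a service restart), verify the limits actually took effect; a quick check:

```shell
# Descriptor limits as seen by the current shell.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "nofile soft=$soft hard=$hard"

# For the running daemon, read its limits directly -- systemd settings can
# differ from the shell's (process name is an assumption):
# grep 'Max open files' /proc/"$(pidof safrochaind)"/limits
```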

Provider examples

| Provider | Validator | Sentry | RPC (pruned) | RPC (archive) |
| --- | --- | --- | --- | --- |
| AWS | c7i.xlarge + 200 GB gp3 | c7i.xlarge + 200 GB gp3 | m7i.xlarge + 500 GB gp3 | m7i.2xlarge + 2 TB gp3 |
| GCP | n2-standard-4 + 200 GB pd-ssd | n2-standard-4 + 200 GB pd-ssd | n2-standard-4 + 500 GB pd-ssd | n2-standard-8 + 2 TB pd-ssd |
| Hetzner | CCX13 + 200 GB | CCX13 + 200 GB | CCX23 + 500 GB | CCX33 + 2 TB |
| OVH bare metal | Advance-1 (Ryzen 5) + NVMe | Advance-1 | Advance-2 (Ryzen 7) | Infra-1 |

Bare metal is a noticeable win for validators: disk latency and CPU scheduling are far more predictable than under virtualised I/O, which keeps signing jitter low.

When in doubt

Spin up a local testnet on a development workstation (Run a Node → Local testnet), watch peak CPU, RAM, and disk I/O during a few hours of normal block production, and provision your real node with those resources.
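
To capture those peaks, sampling the process periodically is enough; a minimal sketch (the process name in the commented loop is an assumption):

```shell
# sample_proc PID: one-line CPU / resident-memory snapshot of a process.
sample_proc() {
  ps -o %cpu=,rss= -p "$1" | awk '{printf "cpu=%s%% rss=%.1fMiB\n", $1, $2/1024}'
}

# Sample the node every 5 s and keep a log for later review, e.g.:
# while sleep 5; do sample_proc "$(pidof safrochaind)"; done >> usage.log
```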