

Before deploying a validator, verify that your infrastructure meets the hardware, software, network, and operational requirements described here.

Hardware requirements

This section describes hardware requirements for running a validator. These are reference values; actual requirements vary with how your validator is used. We recommend monitoring CPU and memory usage of all components and disk usage of the database on your production validator nodes, and adjusting resourcing as needed. The requirements cover both the validator and participant containers. They are largely identical between the Docker Compose-based deployment and the Kubernetes deployment, but exclude overhead from Kubernetes itself or ingress.
| Usage | CPUs | Memory | DB CPUs | DB Memory | DB size |
|---|---|---|---|---|---|
| Experiments on local laptop or minimal VM | 1 | 6 GB | 1 | 1 GB | 1 GB |
| Production validator with little activity | 2 | 8 GB | 2 | 4 GB | 10 GB |
| Production validator for an app provider with moderate activity | 2 | 16 GB | 2 | 4 GB | 100 GB |

Database Latency

Components are sensitive to database latency. If you use a managed database offering such as GCP Cloud SQL, allocate it in the same region and zone as your cluster.

Validator node

| Usage level | CPU | RAM | Notes |
|---|---|---|---|
| Development / LocalNet | 1 core | 6 GB | Suitable for local testing only |
| Low-activity production | 2 cores | 8 GB | Small number of parties, low throughput |
| Moderate-activity production | 2 cores | 16 GB | Multiple parties, steady transaction load |

Database (PostgreSQL)

| Usage level | CPU | RAM | Storage | IOPS |
|---|---|---|---|---|
| Development | 1 core | 1 GB | 1 GB | Standard |
| Low-activity production | 2 cores | 4 GB | 10 GB | 1000+ |
| Moderate-activity production | 2 cores | 4 GB | 100 GB | 3000+ |
Storage requirements grow over time based on transaction volume. Plan for growth and consider implementing participant pruning to manage database size.
Database latency has a direct impact on validator performance. Place your database in the same region and availability zone as your validator compute. Use SSD or NVMe-backed storage.

Network bandwidth

  • Minimum: 100 Mbps
  • Recommended: 1 Gbps

Software requirements

Docker Compose deployments

  • Docker 24.0 or newer
  • Docker Compose 2.26.0 or newer
  • curl and jq utilities
  • AMD64 or ARM64 architecture
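
A short preflight script can confirm these prerequisites (a sketch; `ver_ge` is a local helper defined here, and the version thresholds mirror the list above):

```shell
# Preflight check for Docker Compose deployments (sketch).
ver_ge() {
  # Succeeds when version $1 >= version $2 (numeric version sort).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

docker_ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null)
compose_ver=$(docker compose version --short 2>/dev/null)

ver_ge "${docker_ver:-0}" 24.0    || echo "Docker 24.0+ required (found: ${docker_ver:-none})"
ver_ge "${compose_ver:-0}" 2.26.0 || echo "Docker Compose 2.26.0+ required (found: ${compose_ver:-none})"
command -v curl >/dev/null || echo "curl is missing"
command -v jq   >/dev/null || echo "jq is missing"
```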

Kubernetes deployments

  • Kubernetes 1.27 or newer (EKS, GKE, AKS, or self-managed)
  • Helm 3.11.1 or newer
  • An ingress controller (NGINX, Traefik, or cloud-native equivalent)
  • kubectl configured for your cluster
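
A similar preflight sketch for Kubernetes deployments (assumes `kubectl`, `helm`, and `jq` are on your PATH; version thresholds come from the list above):

```shell
# Preflight check for Kubernetes deployments (sketch).
ver_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

helm_ver=$(helm version --template '{{.Version}}' 2>/dev/null | tr -d v)
ver_ge "${helm_ver:-0}" 3.11.1 || echo "Helm 3.11.1+ required (found: ${helm_ver:-none})"

k8s_ver=$(kubectl version -o json 2>/dev/null | jq -r '.serverVersion.gitVersion // empty' | tr -d v)
ver_ge "${k8s_ver:-0}" 1.27 || echo "Kubernetes 1.27+ required (found: ${k8s_ver:-none})"
```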

All deployments

  • PostgreSQL 14 or newer (managed service recommended for production)
  • Linux operating system (Ubuntu 22.04+, RHEL 8+)
  • Java 17 or 21 (if running Canton binaries directly; not required for containerized deployments)
Java 22 and newer are not supported. Containerized deployments bundle the correct Java version.
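
If you do run Canton binaries directly, a quick check of the Java major version can catch an unsupported runtime early (a sketch; `java_major_from` is a local helper defined here):

```shell
# Check for a supported Java major version (sketch; only relevant when
# running binaries directly, not for containerized deployments).
java_major_from() {
  # Extracts the major version from a `java -version` banner line,
  # e.g. 'openjdk version "17.0.9" 2023-10-17' -> 17
  printf '%s\n' "$1" | sed -n 's/.*version "\([0-9]*\).*/\1/p'
}

banner=$(java -version 2>&1 | head -n1)
case "$(java_major_from "$banner")" in
  17|21) echo "Supported Java version: $banner" ;;
  *)     echo "Java 17 or 21 required (found: ${banner:-none})" ;;
esac
```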

Network requirements

Static egress IP

Your validator must have a static egress IP address. This IP is registered with Super Validators during onboarding and added to their firewall allowlists. Each network environment (DevNet, TestNet, MainNet) requires a separate, dedicated IP. If you run in a cloud environment, use a NAT Gateway (AWS), Cloud NAT (GCP), or NAT Gateway (Azure) to ensure all outbound traffic exits through a single static IP.
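
To confirm which IP your outbound traffic actually presents, you can query a public IP echo service from inside the validator's network (a sketch; ifconfig.me is one such service among several):

```shell
# Discover the egress IP as seen from the internet (sketch; run this
# from inside the validator's network).
looks_like_ipv4() {
  printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

egress_ip=$(curl -fsS --max-time 10 https://ifconfig.me 2>/dev/null || echo unknown)
if looks_like_ipv4 "$egress_ip"; then
  echo "Egress IP: $egress_ip (register this with your SV sponsor)"
else
  echo "Could not determine egress IP (got: $egress_ip)"
fi
```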

Outbound connectivity

Your validator initiates all connections — it does not need to accept inbound connections from the network. Ensure your firewall allows outbound HTTPS (port 443) to:
  • *.sync.global — Global Synchronizer endpoints
  • *.canton.network.digitalasset.com — Canton Network infrastructure
  • GitHub container registry (ghcr.io) — For pulling container images
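
Outbound reachability can be spot-checked with curl (a sketch; ghcr.io is listed above, while the `*.sync.global` hostnames are wildcards, so substitute the concrete endpoints your SV sponsor provides):

```shell
# Check outbound HTTPS reachability on port 443 (sketch). A response
# of any HTTP status counts as reachable; only connection failures
# report FAIL.
check_https() {
  curl -sS --connect-timeout 5 -o /dev/null "https://$1/" 2>/dev/null \
    && echo "OK   $1" || echo "FAIL $1"
}

check_https ghcr.io
# check_https <your-sequencer-host>.sync.global   # placeholder
```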

DNS resolution

Your infrastructure must resolve sync.global and related domains. If you use private DNS, add forwarders for these zones.
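
A minimal resolution check using `getent` (a sketch):

```shell
# Confirm DNS resolution from inside your network (sketch).
resolves() { getent hosts "$1" >/dev/null 2>&1; }

for name in sync.global; do
  resolves "$name" && echo "OK   $name" || echo "FAIL $name (check DNS forwarders)"
done
```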

Ports

| Service | Port | Direction | Purpose |
|---|---|---|---|
| gRPC Ledger API | 5001 | Inbound (from your apps) | Application access to the ledger |
| JSON Ledger API | 7575 | Inbound (from your apps) | HTTP/JSON application access |
| Admin API | 5002 | Inbound (admin only) | Node administration |
| Validator API | 5003 | Inbound (admin only) | Canton Network operations, Canton Coin management |
| Sequencer | 443 | Outbound | Connection to the Global Synchronizer |
Never expose the Admin API (5002) or Validator API (5003) to the public internet. Restrict access to VPN or private networks.
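
To verify that the admin-only ports are in fact not exposed, you can attempt a TCP connection from a host outside your private network (a sketch; `validator.example.com` is a placeholder for your node's public address, and the check relies on bash's `/dev/tcp`):

```shell
# Run from an EXTERNAL host: confirm the Admin and Validator APIs are
# not publicly reachable (sketch; replace the placeholder hostname).
port_reachable() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

for port in 5002 5003; do
  if port_reachable validator.example.com "$port"; then
    echo "WARNING: port $port is reachable from the internet"
  else
    echo "OK: port $port is not reachable externally"
  fi
done
```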

Operational requirements

Running a validator is an ongoing operational commitment, not a one-time setup.

Monitoring and alerting

Set up monitoring for:
  • Node health and connectivity status
  • Database disk usage and query performance
  • Canton Coin balance for traffic fees
  • Log aggregation for troubleshooting
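
As a starting point, two of these signals can be probed with standard tools (a sketch; the mount point, threshold, and connection details are illustrative):

```shell
# Sketch: basic health probes worth wiring into alerting.

# 1) Disk usage of the database volume (alert threshold of 80% here;
#    adjust the mount point to match your deployment).
usage=$(df -P /var/lib 2>/dev/null | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
if [ "${usage:-0}" -ge 80 ]; then
  echo "ALERT: disk usage at ${usage}%"
else
  echo "disk usage: ${usage:-unknown}%"
fi

# 2) Database size, using standard PostgreSQL functions (run via psql;
#    host and user are placeholders):
# psql -h db.internal -U validator \
#   -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
```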

Backup strategy

Back up your PostgreSQL database regularly. The database contains your validator identity, party keys, and contract data. Loss of this data means loss of your validator identity and all hosted party state. For production deployments, use managed database services with automated backups, point-in-time recovery, and cross-region replication.
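
For self-managed PostgreSQL, a periodic logical backup might look like the following (a sketch; the host, user, database name, and backup directory are placeholders, and managed services typically offer snapshots and point-in-time recovery instead):

```shell
# Sketch of a logical backup with pg_dump (placeholders throughout).
BACKUP_DIR=${BACKUP_DIR:-./backups}
mkdir -p "$BACKUP_DIR"
stamp=$(date -u +%Y%m%dT%H%M%SZ)
backup_file="$BACKUP_DIR/validator-$stamp.dump"

if command -v pg_dump >/dev/null; then
  pg_dump --host=db.internal --username=validator --dbname=validator \
    --format=custom --file="$backup_file" \
    || echo "pg_dump failed: check connectivity and credentials"
else
  echo "pg_dump not installed"
fi
```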

Upgrade capacity

The Global Synchronizer upgrades frequently. You need the operational capacity to apply updates within the required timeframes:
  • Security patches: Within 1 week
  • Minor updates: Within 2 weeks
  • Major version upgrades: Before the announced deadline
Validators running outdated software risk disconnection from the network.

Canton Coin balance

Validators pay traffic fees in Canton Coin (CC). You need a sufficient CC balance to cover transaction costs. On DevNet and TestNet, CC is available through faucets. On MainNet, CC has real economic value. The validator software can be configured to automatically purchase traffic when your balance runs low.

Kubernetes cluster sizing (production)

For a production Kubernetes deployment, plan for at least:
  • 3 worker nodes for high availability
  • 4 CPU cores per node
  • 16 GB RAM per node
  • SSD-backed storage class
  • An external managed PostgreSQL instance (not in-cluster)

Database hosting options

| Option | Pros | Cons |
|---|---|---|
| Managed (RDS, Cloud SQL, Azure Database) | Automated backups, high availability | Cost, less operational control |
| Self-managed PostgreSQL | Full control, cost-effective | Operational overhead |
| Container-local PostgreSQL | Simple setup | Not suitable for production |

Scaling considerations

Factors affecting scale

| Factor | Impact |
|---|---|
| Number of hosted parties | Database size, memory usage |
| Transaction volume | CPU, network, database IOPS |
| Contract complexity | CPU for Daml execution |
| Historical retention | Storage requirements |

Scaling strategies

| Strategy | When to use |
|---|---|
| Vertical scaling | Increase node resources for moderate growth |
| Database optimization | Tune PostgreSQL for workload |
| Pruning | Remove old data to manage storage |
| PQS offloading | Move read queries to a separate service |

Security requirements

Network security

| Requirement | Details |
|---|---|
| TLS | All API endpoints must use TLS 1.2+ |
| Firewall | Allowlist-based access control |
| Network isolation | Separate management and data planes |
| DDoS protection | Recommended for public endpoints |

Key management

| Key type | Storage |
|---|---|
| Party keys | HSM recommended for production |
| TLS certificates | Secure certificate management |
| Database credentials | Secrets management (Vault, KMS) |

Access control

| Access | Recommendation |
|---|---|
| Admin API | VPN or private network only |
| SSH/Console | Key-based, MFA enabled |
| Database | Network-restricted, strong passwords |

Cloud-specific guidance

AWS

| Service | Recommendation |
|---|---|
| Compute | EC2 (m6i.xlarge+) or EKS |
| Database | RDS PostgreSQL or Aurora |
| Storage | gp3 EBS volumes |
| Networking | VPC with NAT Gateway |

Google Cloud

| Service | Recommendation |
|---|---|
| Compute | GCE (n2-standard-4+) or GKE |
| Database | Cloud SQL PostgreSQL |
| Storage | SSD persistent disks |
| Networking | VPC with Cloud NAT |

Azure

| Service | Recommendation |
|---|---|
| Compute | VM (Standard_D4s_v5+) or AKS |
| Database | Azure Database for PostgreSQL |
| Storage | Premium SSD |
| Networking | VNet with NAT Gateway |

Next steps

Onboarding Process

Get onboarded with an SV sponsor.

Installation

Deploy your validator node.