This guide covers deploying a Super Validator (SV) node on Kubernetes using the Splice Helm charts. The charts deploy a complete SV node and connect it to a target Global Synchronizer network.

Documentation Index
Fetch the complete documentation index at: https://cantonfoundation-issue-365-details-history.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisites
- A running Kubernetes cluster with administrator access to create and manage namespaces
- kubectl (at least v1.26.1) and helm (at least v3.11.1) on your workstation
- A static egress IP for your cluster — propose to existing SVs to add it to the IP allowlist
- The release artifacts bundle containing sample Helm value files (download from the release page and extract)
- Knowledge of the current migration ID for the synchronizer (0 for initial deployment, incremented by 1 for each subsequent migration)
Generating an SV Identity
SV operators are identified by a human-readable name and an EC public key. This identification is stable across deployments of the Global Synchronizer — you reuse your SV name and public key between network resets. Generate a keypair in the format expected by the SV node software.

Preparing the Cluster
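Before preparing the cluster, generate the SV keypair described above. As a sketch using OpenSSL, assuming the node accepts a PEM-encoded EC P-256 keypair (confirm the exact format expected by your SV node version in the release documentation):

```shell
# Assumption: a PEM-encoded EC P-256 keypair; check the release
# documentation for the format your SV node version expects.
openssl ecparam -name prime256v1 -genkey -noout -out sv-identity-key.pem

# Derive the public key to share with the other SV operators.
openssl ec -in sv-identity-key.pem -pubout -out sv-identity-key.pub
```

Keep the private key offline and backed up; only the public key is shared with other SVs.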
Create the application namespace; the rest of this guide assumes it is named sv (for example, kubectl create namespace sv).

Configuring Authentication
The SV node components authenticate to each other and to external users using JWT access tokens issued by an OpenID Connect (OIDC) provider. You must:

- Set up an OIDC provider that supports both machine-to-machine (Client Credentials Grant) and user-facing (Authorization Code Grant) authentication flows
- Configure your backends to use that OIDC provider

The OIDC provider must:

- Be reachable at an HTTPS URL (OIDC_AUTHORITY_URL) from your cluster and from user browsers
- Provide a discovery document at OIDC_AUTHORITY_URL/.well-known/openid-configuration
- Expose a JWK Set document
- Support the OAuth 2.0 Client Credentials Grant for machine-to-machine authentication
- Support the OAuth 2.0 Authorization Code Grant for user-facing authentication
- Sign all JWTs using the RS256 algorithm

Configure the following values:

- OIDC_AUTHORITY_URL — URL for obtaining the openid-configuration and jwks.json
- VALIDATOR_CLIENT_ID / VALIDATOR_CLIENT_SECRET — For the validator app backend
- SV_CLIENT_ID / SV_CLIENT_SECRET — For the SV app backend
- WALLET_UI_CLIENT_ID — For the wallet web UI
- SV_UI_CLIENT_ID — For the SV web UI
- CNS_UI_CLIENT_ID — For the CNS web UI
Initially, you can use https://canton.network.global as the JWT audience for these clients. Once your setup is working, switch to dedicated audience values matching your deployment URLs.
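As an illustration, the authentication settings end up in your Helm value files along these lines. The key names below are hypothetical; the sample value files in the release artifacts bundle are the authoritative reference:

```yaml
# Hypothetical sketch of auth-related Helm values; use the key names
# from the sample value files in the release artifacts bundle.
auth:
  # Must serve /.well-known/openid-configuration and a JWK Set
  authorityUrl: "https://auth.example.com"    # OIDC_AUTHORITY_URL (assumed value)
  # Start with the shared audience, switch to dedicated values later
  audience: "https://canton.network.global"
validatorApp:
  clientId: "<VALIDATOR_CLIENT_ID>"
  # Supply the client secret via a Kubernetes secret, not inline
svApp:
  clientId: "<SV_CLIENT_ID>"
```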
Configuring CometBFT
The SV node includes a CometBFT node for BFT ordering. Configure your CometBFT Helm values with your node identity and network connection details. Replace MIGRATION_ID with the current migration ID of the Global Synchronizer (0 for initial deployment, incremented for each subsequent migration).
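A rough sketch of what such a value file covers. All key names and addresses here are hypothetical; the sample CometBFT value file in the release artifacts bundle defines the real structure:

```yaml
# Hypothetical sketch; use the sample CometBFT value file from the
# release artifacts bundle as the authoritative reference.
sv:
  nodeId: "<your-cometbft-node-id>"        # identity derived from your node key
migration:
  id: 0                                    # MIGRATION_ID: current migration ID
peers:
  - "<peer-node-id>@<peer-host>:26656"     # p2p address of an existing SV's CometBFT node
```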
State Sync
State sync bootstraps your node from the current network state instead of replaying from genesis. Enable it only when joining a chain that has already been running.

Installing PostgreSQL
The SV node requires separate PostgreSQL databases for the participant, sequencer, mediator, and application components. You can deploy PostgreSQL in-cluster or use a managed cloud service (AWS RDS, GCP Cloud SQL, Azure Database for PostgreSQL). Each component's Helm values reference a Kubernetes secret for database credentials. Replace MIGRATION_ID with the current migration ID. The release artifacts bundle includes separate PostgreSQL value files for each component (postgres-values-participant.yaml, postgres-values-sequencer.yaml, postgres-values-mediator.yaml).
For production deployments, a managed cloud service is recommended for automated backups, failover, and monitoring.
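The credentials secret can be created ahead of the Helm installs. The secret name and key below are illustrative assumptions; match whatever the sample PostgreSQL value files reference:

```yaml
# Hypothetical example; the secret name and key the charts expect
# are defined in the sample PostgreSQL value files.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
  namespace: sv
type: Opaque
stringData:
  postgresPassword: "change-me"   # replace with a strong generated password
```

Equivalently, kubectl create secret generic postgres-secrets -n sv --from-literal=postgresPassword=... creates the same secret without a manifest file.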
Installing the Helm Charts
The SV node consists of several Helm chart releases:

- CometBFT — The BFT consensus node
- Canton Participant — The Canton participant node
- Global Domain — The sequencer and mediator for the Global Synchronizer
- Scan — The Scan application for network observation
- Validator — The validator application
- SV Node — The SV application and its automation
- Info App — Optional informational endpoints
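The releases are typically installed in the order listed, each with its value file from the release artifacts bundle. As a hedged sketch, with hypothetical repository and chart names (substitute the actual names from the bundle):

```shell
# Hypothetical chart and value-file names; use the ones shipped in the
# release artifacts bundle for your target network and release version.
helm install cometbft splice/cometbft -n sv -f cometbft-values.yaml
helm install participant splice/participant -n sv -f participant-values.yaml
helm install global-domain splice/global-domain -n sv -f global-domain-values.yaml
helm install scan splice/scan -n sv -f scan-values.yaml
helm install validator splice/validator -n sv -f validator-values.yaml
helm install sv splice/sv -n sv -f sv-values.yaml
```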
Configuring Ingress
Configure cluster ingress to expose the SV node's web UIs and APIs. The standard setup uses Istio service mesh for routing and TLS termination. Exposed endpoints typically include:

- SV web UI (sv.sv.YOUR_HOSTNAME)
- Wallet web UI (wallet.sv.YOUR_HOSTNAME)
- CNS web UI (cns.sv.YOUR_HOSTNAME)
- Scan UI and API
- Sequencer public API
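With Istio, each endpoint is typically routed by a VirtualService bound to a shared ingress gateway. The gateway, service name, and port below are assumptions for illustration; they depend on your mesh setup and the services the charts create:

```yaml
# Hypothetical Istio routing sketch for the SV web UI; gateway name,
# backend service name, and port depend on your deployment.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: sv-ui
  namespace: sv
spec:
  hosts:
    - "sv.sv.YOUR_HOSTNAME"
  gateways:
    - cluster-ingress/cn-gateway   # assumed shared ingress gateway
  http:
    - route:
        - destination:
            host: sv-app           # assumed Kubernetes service for the SV web UI
            port:
              number: 80
```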
Static Egress IP
Your cluster's egress IP must be static and allowlisted by other SVs. Configure your cloud provider's NAT gateway or load balancer to use a fixed IP address for outbound traffic from the SV namespace.

Verifying the Deployment
After deployment, verify your node is operational:

- Check that all pods are running: kubectl get pods -n sv
- Log into the SV web UI and confirm your node status
- Log into the Wallet web UI and verify your wallet balance
- Check the Scan UI to confirm your node is observing network activity
- Monitor CometBFT consensus participation in the logs
Next Steps
- Bootstrap Network — Network bootstrapping and SV operations
- Console Overview — Accessing the Canton Console for debugging
- Security Operations — Hardening your SV deployment