This guide covers deploying a Super Validator (SV) node on Kubernetes using the Splice Helm charts. The charts deploy a complete SV node and connect it to a target Global Synchronizer network.

Prerequisites

  • A running Kubernetes cluster with administrator access to create and manage namespaces
  • kubectl (at least v1.26.1) and helm (at least v3.11.1) on your workstation
  • A static egress IP for your cluster; you must propose it to the existing SVs so they can add it to their IP allowlist
  • The release artifacts bundle containing sample Helm value files (download from the release page and extract)
  • Knowledge of the current migration ID for the synchronizer (0 for initial deployment, incremented by 1 for each subsequent migration)
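You can check that your workstation tools meet these minimum versions before proceeding:
# Print client versions to compare against the minimums above
kubectl version --client
helm version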

Generating an SV Identity

SV operators are identified by a human-readable name and an EC public key. This identification is stable across deployments of the Global Synchronizer — you reuse your SV name and public key between network resets. Generate a keypair in the format expected by the SV node software:
# Generate the keypair
openssl ecparam -name prime256v1 -genkey -noout -out sv-keys.pem

# Encode the keys
public_key_base64=$(openssl ec -in sv-keys.pem -pubout -outform DER 2>/dev/null | base64 | tr -d "\n")
private_key_base64=$(openssl pkcs8 -topk8 -nocrypt -in sv-keys.pem -outform DER 2>/dev/null | base64 | tr -d "\n")

# Output the keys
echo "public-key = \"$public_key_base64\""
echo "private-key = \"$private_key_base64\""

# Clean up
rm sv-keys.pem
Store both keys in a safe location. You will use them every time you deploy a new SV node, including when deploying to a different Global Synchronizer deployment or redeploying after a network reset. The public key and your desired SV name must be approved by a threshold of currently active SVs before you can join the network. For DevNet and TestNet, send the public key and your desired SV name to your point of contact at Digital Asset and wait for confirmation.

Preparing the Cluster

Create the application namespace:
kubectl create ns sv
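With the namespace in place, store the SV identity keypair generated earlier as a Kubernetes secret so the node software can read it at startup. A minimal sketch, assuming a secret named splice-app-sv-key with public and private entries (confirm the exact secret and key names against the value files in your release bundle):

# Store the base64-encoded keys from the generation step above
kubectl create secret generic splice-app-sv-key -n sv \
  --from-literal=public="$public_key_base64" \
  --from-literal=private="$private_key_base64"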

Configuring Authentication

The SV node components authenticate to each other and to external users using JWT access tokens issued by an OpenID Connect (OIDC) provider. You must:
  1. Set up an OIDC provider that supports both machine-to-machine (Client Credentials Grant) and user-facing (Authorization Code Grant) authentication flows
  2. Configure your backends to use that OIDC provider
Your OIDC provider must:
  • Be reachable at an HTTPS URL (OIDC_AUTHORITY_URL) from your cluster and from user browsers
  • Provide a discovery document at OIDC_AUTHORITY_URL/.well-known/openid-configuration
  • Expose a JWK Set document
  • Support the OAuth 2.0 Client Credentials Grant for machine-to-machine authentication
  • Support the OAuth 2.0 Authorization Code Grant for user-facing authentication
  • Sign all JWTs using the RS256 algorithm
The following configuration values are required from your OIDC provider:
  • OIDC_AUTHORITY_URL — URL for obtaining the openid-configuration and jwks.json
  • VALIDATOR_CLIENT_ID / VALIDATOR_CLIENT_SECRET — For the validator app backend
  • SV_CLIENT_ID / SV_CLIENT_SECRET — For the SV app backend
  • WALLET_UI_CLIENT_ID — For the wallet web UI
  • SV_UI_CLIENT_ID — For the SV web UI
  • CNS_UI_CLIENT_ID — For the CNS web UI
When first starting out, configure all three JWT token audiences (for the ledger API, the validator app, and the SV app) to https://canton.network.global. Once your setup is working, switch to dedicated audience values matching your deployment URLs.
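A quick way to validate the provider setup is to fetch the discovery document and request a machine-to-machine token. This is a sketch: TOKEN_ENDPOINT comes from the token_endpoint field of the discovery document, and the audience parameter shown is the Auth0-style form, which may differ for other providers:

# Confirm the discovery document and JWK Set are reachable
curl -fsS "$OIDC_AUTHORITY_URL/.well-known/openid-configuration" | jq '.token_endpoint, .jwks_uri'

# Request a Client Credentials Grant token for the validator app backend
curl -fsS -X POST "$TOKEN_ENDPOINT" \
  -d grant_type=client_credentials \
  -d client_id="$VALIDATOR_CLIENT_ID" \
  -d client_secret="$VALIDATOR_CLIENT_SECRET" \
  -d audience="https://canton.network.global"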

Configuring CometBFT

The SV node includes a CometBFT node for BFT ordering. Configure your CometBFT Helm values with your node identity and network connection details:
nodeName: "YOUR_SV_NAME"

node:
  id: YOUR_COMETBFT_NODE_ID
  identifier: "global-domain-MIGRATION_ID-cometbft"
  externalAddress: "global-domain-MIGRATION_ID-cometbft.sv.YOUR_HOSTNAME:26MIGRATION_ID56"
  keysSecret: "cometbft-keys"

genesis:
  chainId: "TARGET_CLUSTER-MIGRATION_ID"
  chainIdSuffix: "0"
Replace MIGRATION_ID with the current migration ID of the Global Synchronizer (0 for an initial deployment, incremented by 1 for each subsequent migration). Note that MIGRATION_ID also appears inside the external address port: for migration ID 0 the port becomes 26056.
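The keysSecret referenced above must exist before the chart is installed. A sketch of creating it, assuming you generated the CometBFT node and validator keys under CometBFT's default file names (confirm the key layout your chart version expects):

# Package the CometBFT key files into the secret named in keysSecret
kubectl create secret generic cometbft-keys -n sv \
  --from-file=node_key.json \
  --from-file=priv_validator_key.json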

State Sync

State sync bootstraps your node from the current network state instead of replaying from genesis. Enable it only when joining a chain that has already been running:
stateSync:
  enable: false  # Set to true only for initial join
  rpcServers: "https://sv.sv-2.TARGET_HOSTNAME:443/api/sv/v0/admin/domain/cometbft/json-rpc,https://sv.sv-2.TARGET_HOSTNAME:443/api/sv/v0/admin/domain/cometbft/json-rpc"
State sync introduces a dependency on the sponsoring node for fetching the state snapshot on startup. CometBFT requires at least two RPC server entries, which is why the example repeats the sponsor's endpoint. Disable state sync once initial synchronization is complete, and again after any network reset.

Installing PostgreSQL

The SV node requires separate PostgreSQL databases for the participant, sequencer, mediator, and application components. You can deploy PostgreSQL in-cluster or use a managed cloud service (AWS RDS, GCP Cloud SQL, Azure Database for PostgreSQL). Each component’s Helm values reference a Kubernetes secret for database credentials:
# Example: participant database configuration
persistence:
  host: participant-pg
  port: 5432
  secretName: participant-pg-secret
  databaseName: participant_MIGRATION_ID
  schema: participant
Replace MIGRATION_ID with the current migration ID. The release artifacts bundle includes separate PostgreSQL value files for each component (postgres-values-participant.yaml, postgres-values-sequencer.yaml, postgres-values-mediator.yaml). For production deployments, a managed cloud service is recommended for automated backups, failover, and monitoring.
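Each secretName referenced in the values must exist before the corresponding chart is installed. A sketch for the participant database, assuming the secret key postgresPassword and illustrative chart and release names (take the exact names from your release bundle):

# Create the credentials secret referenced by persistence.secretName
kubectl create secret generic participant-pg-secret -n sv \
  --from-literal=postgresPassword="$(openssl rand -base64 20)"

# Install an in-cluster PostgreSQL instance for the participant
helm install participant-pg splice/splice-postgres -n sv \
  -f postgres-values-participant.yaml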

Installing the Helm Charts

The SV node consists of several Helm chart releases:
  • CometBFT — The BFT consensus node
  • Canton Participant — The Canton participant node
  • Global Domain — The sequencer and mediator for the Global Synchronizer
  • Scan — The Scan application for network observation
  • Validator — The validator application
  • SV Node — The SV application and its automation
  • Info App — Optional informational endpoints
Each chart has a corresponding values file that you customize with your deployment-specific configuration (namespaces, storage classes, resource limits, authentication settings, and network endpoints).
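Installation follows the dependency order of the list above. A sketch of the sequence with illustrative release and chart names (substitute the chart names and value files from your release bundle, and wait for each release's pods to become ready before installing the next):

helm install global-domain-0-cometbft splice/splice-cometbft -n sv -f cometbft-values.yaml
helm install participant splice/splice-participant -n sv -f participant-values.yaml
helm install global-domain splice/splice-global-domain -n sv -f global-domain-values.yaml
helm install scan splice/splice-scan -n sv -f scan-values.yaml
helm install validator splice/splice-validator -n sv -f validator-values.yaml
helm install sv splice/splice-sv-node -n sv -f sv-values.yaml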

Configuring Ingress

Configure cluster ingress to expose the SV node’s web UIs and APIs. The standard setup uses Istio service mesh for routing and TLS termination. Exposed endpoints typically include:
  • SV web UI (sv.sv.YOUR_HOSTNAME)
  • Wallet web UI (wallet.sv.YOUR_HOSTNAME)
  • CNS web UI (cns.sv.YOUR_HOSTNAME)
  • Scan UI and API
  • Sequencer public API
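As an illustration of the Istio-based routing, a minimal VirtualService for the wallet UI might look like the following; the gateway reference and backend service name are assumptions to adapt to your mesh configuration:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: wallet-web-ui
  namespace: sv
spec:
  hosts:
    - "wallet.sv.YOUR_HOSTNAME"
  gateways:
    - istio-ingress/sv-gateway   # assumed gateway name
  http:
    - route:
        - destination:
            host: wallet-web-ui.sv.svc.cluster.local   # assumed service name
            port:
              number: 80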

Static Egress IP

Your cluster’s egress IP must be static and allowlisted by other SVs. Configure your cloud provider’s NAT gateway or load balancer to use a fixed IP address for outbound traffic from the SV namespace.
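To confirm which IP other SVs will see, you can run a throwaway pod that calls an external echo service:

# Runs curl in a temporary pod and prints the cluster's egress IP
kubectl run egress-check -n sv --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s https://ifconfig.me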

Verifying the Deployment

After deployment, verify your node is operational:
  1. Check that all pods are running: kubectl get pods -n sv
  2. Log into the SV web UI and confirm your node status
  3. Log into the Wallet web UI and verify your wallet balance
  4. Check the Scan UI to confirm your node is observing network activity
  5. Monitor CometBFT consensus participation in the logs
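A command-line sketch of these checks, with an illustrative CometBFT statefulset name (match it to what kubectl get pods shows in your cluster):

# All pods should be Running or Completed
kubectl get pods -n sv

# Follow CometBFT logs and filter for consensus activity
kubectl logs -n sv statefulset/global-domain-0-cometbft --tail=100 -f | grep -i consensus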

Next Steps