Canton Network upgrades fall into two categories: minor upgrades that each node handles independently, and major upgrades that require network-wide coordination and downtime.
Minor Upgrades
Validator nodes
There are two types of upgrades: version upgrades (an upgrade from 0.A.X to 0.B.Y)
and protocol upgrades (the version can remain the same; only the protocol is upgraded, and this requires no action on your part).
Version upgrades can be performed by each node independently and only require
updating the docker-compose file, or running a helm upgrade for a
Kubernetes deployment.
For a version upgrade, you must not delete or uninstall any Postgres database, and you must not change migration IDs or secrets.
Make sure to read the Release Notes to learn
about changes you may need to make as part of the upgrade.
Note that for docker-compose deployments you must update the full bundle, including
the docker compose files and the start.sh script, and adjust
IMAGE_TAG. Updating only IMAGE_TAG is insufficient, as the old
docker compose files might be incompatible with the new version.
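As a concrete illustration, a docker-compose version upgrade might look like the sketch below. The download URL, bundle name, and version are placeholders, and start.sh may need the same flags you normally pass; follow the Release Notes for the exact steps.

```bash
# Hypothetical sketch of a docker-compose version upgrade.
# The download URL, bundle name, and version are placeholders.
VERSION=0.B.Y

# Stop the running node before swapping the bundle.
./stop.sh

# Fetch and unpack the FULL release bundle (compose files + start.sh),
# not just a new image tag.
curl -fLO "https://example.com/splice-node-${VERSION}.tar.gz"
tar -xzf "splice-node-${VERSION}.tar.gz"

# Point the deployment at the new images and restart with the
# updated bundle (pass the same flags you normally use).
export IMAGE_TAG="${VERSION}"
./start.sh
```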
Super Validator nodes
There are two types of upgrades: version upgrades (an upgrade from 0.A.X to 0.B.Y)
and protocol upgrades (the version can remain the same; only the protocol is upgraded).
Version upgrades can be done by each node independently and only require
a helm upgrade. Make sure to read the Release Notes to learn
about changes you may need to make as part of the upgrade.
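For a Kubernetes deployment, a version upgrade amounts to a plain helm upgrade per chart. The sketch below is illustrative only; the repository, chart, and release names are assumptions for your actual deployment.

```bash
# Illustrative sketch of a Helm version upgrade; repository, chart,
# and release names are placeholders.
CHART_VERSION=0.B.Y

# Re-apply the chart at the new version, reusing the existing values
# (the migration ID and secrets must not change for a version upgrade).
helm upgrade sv-app splice/splice-sv-node \
  --version "${CHART_VERSION}" \
  -f sv-values.yaml
```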
Protocol upgrades are performed through logical synchronizer upgrades,
which allow upgrading the protocol version with very limited network downtime.
Major Upgrades (Synchronizer Upgrades with Downtime)
Major upgrades involve non-backwards-compatible changes to the Canton software and require a coordinated synchronizer migration. The existing synchronizer is paused, state is exported, and a new synchronizer is deployed with updated components.
Overview for Validators
- New Canton and Splice releases become available
- SVs vote on and confirm a specific downtime window. Validators are informed.
- At the start of the window, SVs automatically pause traffic on the existing synchronizer
- Shortly after pausing, the validator software automatically exports migration dumps to attached volumes
- Verify your node has caught up to the paused synchronizer
- Create full backups of your node
- Wait until SVs signal that the migration is successful
- Upgrade your deployment with the new migration ID
- The validator app automatically consumes the migration dump and initializes the new participant
This process creates a new synchronizer instance. Traffic balances are per-synchronizer, so your balance resets to zero on the new instance. Purchase traffic on a pay-as-you-go basis in small increments to minimize losses from migration.
Overview for SV Operators
The SV upgrade process is more involved:
- New releases become available
- SVs vote on a downtime window via on-ledger governance
- Deploy new Canton components and CometBFT node alongside the existing ones (can be done before downtime)
- At the downtime start, SV apps automatically pause the synchronizer
- Migration dumps are automatically exported
- Verify all SV components have caught up
- Create full backups
- Upgrade SV, scan, and validator apps with the new migration ID
- Apps consume the migration dump and initialize new components
- Once 2/3+ of SVs complete, the new synchronizer becomes operational automatically
- Keep old components running until validators have had time to catch up
Migration IDs
Each synchronizer migration increments the migration ID (starting from 0). The migration ID:
- Determines which PostgreSQL database the participant uses (a fresh database per migration)
- Is tracked by the SV/validator apps for internal consistency
- Forms part of the CometBFT chain ID
- Must be correctly set in all Helm values during upgrade
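Since the same ID must appear in every component's Helm values, a quick consistency check before upgrading can catch mismatches early. This is only a sketch; the values file names and the key layout are assumptions.

```bash
# Sketch: confirm every values file carries the same incremented
# migration ID before upgrading (file names are placeholders).
grep -n "id:" sv-values.yaml scan-values.yaml validator-values.yaml
```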
State Preserved Across Migrations
- All identities (party IDs, participant IDs, node identities)
- Active ledger state (via migration dump)
- Historical app state in SV, scan, and validator databases (via database reuse)
State Not Preserved
- Transaction history on the participant (the participant only serves history from the migration point forward)
- CometBFT blockchain state (fresh chain on new synchronizer)
- Traffic balances (reset to zero on new synchronizer instance)
Catching Up Before Migration
Before proceeding with the upgrade deployment, verify your node is fully caught up (see the sketch below):
- Check for `Wrote domain migration dump` messages in the validator-app (or sv-app) logs
- Check for `Ingested transaction` messages in app logs; if the latest message is 10+ minutes old, the app has likely caught up
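On Kubernetes, both checks can be done by grepping the pod logs, as in this sketch (namespace and workload names are placeholders):

```bash
# Placeholder namespace/workload names; adjust for your deployment.
# 1. The migration dump was written:
kubectl logs -n validator deploy/validator-app \
  | grep "Wrote domain migration dump"

# 2. The last ingestion message; if its timestamp is 10+ minutes old,
#    the app has likely caught up with the paused synchronizer.
kubectl logs -n validator deploy/validator-app \
  | grep "Ingested transaction" | tail -n 1
```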
Deploying the Upgrade (Validators)
Kubernetes:
- Confirm the migration dump exists in the logs
- Re-run the Helm install steps with the incremented `MIGRATION_ID`
- Set `migration.migrating` to `true` in the validator Helm values (sketched after this list)
- Use `helm upgrade` (not `helm install`) for the validator chart
- After a successful migration: set `migration.migrating` back to `false` and keep the incremented `MIGRATION_ID`
- The old participant database can be pruned after at least one week
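Put together, the two Helm phases might look like the following sketch. The chart, release, and value key names (`migration.id`, `migration.migrating`) are assumptions for illustration; follow the documented install steps for the real invocation.

```bash
# Value keys below are assumptions, not the verbatim chart schema.
# Phase 1: migrate. Incremented ID plus the migrating flag.
helm upgrade validator splice/splice-validator \
  --set migration.id=1 \
  --set migration.migrating=true \
  -f validator-values.yaml

# Phase 2: after the migration succeeds, keep the new ID but clear
# the migrating flag.
helm upgrade validator splice/splice-validator \
  --set migration.id=1 \
  --set migration.migrating=false \
  -f validator-values.yaml
```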
Docker Compose:
- Confirm the migration dump: `docker compose logs validator | grep "Wrote domain migration dump"`
- Stop the validator: `./stop.sh`
- Update the bundle and `IMAGE_TAG` if needed
- Restart with the incremented migration ID (`-m <new_id>`) and the `-M` flag to trigger the migration (see the sketch after this list)
- After a successful migration: restart without `-M` and keep the new migration ID
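The whole Docker Compose sequence, as a sketch (verify the flags against your bundle's start.sh usage):

```bash
# Sketch only; check your bundle's start.sh usage for exact flags.
docker compose logs validator | grep "Wrote domain migration dump"
./stop.sh
# (update the bundle and IMAGE_TAG here if the release requires it)

# Restart with the incremented migration ID; -M triggers the migration.
./start.sh -m 1 -M

# After the migration succeeds: restart without -M, keeping the new ID.
./stop.sh
./start.sh -m 1
```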
Troubleshooting Failed Upgrades
If the upgrade fails, check:
- The correct versions were deployed both before and after the migration
- The migration dump file exists on the volume; if it is missing, remove any stale dump and restart the app on the old version to trigger a fresh dump (see the sketch after this list)
- The participant uses a fresh (empty) database for the new migration ID
- The correct incremented `MIGRATION_ID` is set
- The `migrating: true` flag (Helm) or `-M` argument (Docker Compose) is present
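Two of these checks can be scripted. The sketch below assumes placeholder namespaces, mount paths, and workload names; adjust them to your deployment.

```bash
# Placeholder namespace, mount path, and workload names.
# Does the migration dump exist on the attached volume?
kubectl exec -n validator deploy/validator-app -- \
  ls -l /domain-upgrade-dump

# Is there a fresh (empty) participant database for the new
# migration ID?
kubectl exec -n validator statefulset/postgres -- \
  psql -U postgres -c '\l'
```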
Recovering from a Failed SV Upgrade
If a major upgrade fails at the network level, each SV must submit a topology transaction to resume the old synchronizer.
Next Steps
- Backup and Recovery — Backup procedures before upgrades
- Release Notes — Current release changes and requirements