
Canton Network upgrades fall into two categories: minor upgrades that each node handles independently, and major upgrades that require network-wide coordination and downtime.

Minor Upgrades

Validator nodes

There are two types of upgrades: version upgrades (an upgrade from 0.A.X to 0.B.Y) and protocol upgrades (the version can remain the same; only the protocol is upgraded, and no action is required on your part). Version upgrades can be done by each node independently and only require an update of the docker-compose deployment, or a helm upgrade for a Kubernetes deployment. For a version upgrade, you must not delete or uninstall any Postgres database, and you must not change migration IDs or secrets. Make sure to read the Release Notes to learn about changes you may need to make as part of the upgrade. Note that for Docker Compose you must update the full bundle, including the docker compose files and the start.sh script, and adjust IMAGE_TAG. Updating IMAGE_TAG alone is insufficient, as the old docker compose files might be incompatible with the new version.
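A minimal sketch of that flow, assuming the standard Docker Compose deployment with its stop.sh/start.sh scripts; the bundle location, version tag, and start.sh arguments below are placeholders, not real release artifacts:

```bash
# Illustrative version-upgrade flow for a Docker Compose validator.
# The bundle URL, version tag, and arguments are placeholders.
./stop.sh                                   # stop the running stack

# Fetch and unpack the NEW full bundle (compose files + start.sh),
# not just a tag bump on the old files
curl -fsSL -o bundle.tar.gz "<new-release-bundle-url>"
tar -xzf bundle.tar.gz

# Point the deployment at the new images and restart; do NOT touch
# databases, migration IDs, or secrets for a version upgrade
export IMAGE_TAG=<new-version>
./start.sh <your-usual-arguments>
```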

Super Validator nodes

There are two types of upgrades: version upgrades (an upgrade from 0.A.X to 0.B.Y) and protocol upgrades (the version can remain the same; only the protocol is upgraded). Version upgrades can be done by each node independently and only require a helm upgrade. Make sure to read the Release Notes to learn about changes you may need to make as part of the upgrade. Protocol upgrades are performed through logical synchronizer upgrades, which allow upgrading the protocol version with very limited network downtime.
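For a Helm-based deployment, a version upgrade amounts to a plain helm upgrade of each chart to the new chart version. A minimal sketch; the release names, chart repository, and values files here are illustrative, not the actual chart names:

```bash
# Illustrative only: upgrade each deployed chart in place to the new version,
# reusing the existing values. Do not change migration IDs or secrets.
NEW_VERSION=<new-chart-version>
for release in participant sv-app scan; do
  helm upgrade "$release" "<splice-repo>/$release" \
    --version "$NEW_VERSION" \
    -f "$release-values.yaml"
done
```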

Major Upgrades (Synchronizer Upgrades with Downtime)

Major upgrades involve non-backwards-compatible changes to the Canton software and require a coordinated synchronizer migration. The existing synchronizer is paused, state is exported, and a new synchronizer is deployed with updated components.

Overview for Validators

  1. New Canton and Splice releases become available
  2. SVs vote on and confirm a specific downtime window. Validators are informed.
  3. At the start of the window, SVs automatically pause traffic on the existing synchronizer
  4. Shortly after pausing, the validator software automatically exports migration dumps to attached volumes
  5. Verify your node has caught up to the paused synchronizer
  6. Create full backups of your node
  7. Wait until SVs signal that the migration is successful
  8. Upgrade your deployment with the new migration ID
  9. The validator app automatically consumes the migration dump and initializes the new participant
This process creates a new synchronizer instance. Traffic balances are per-synchronizer, so your balance resets to zero on the new instance. Purchase traffic on a pay-as-you-go basis in small increments to minimize losses from migration.
For the full validator-side procedure (state preservation, migration dumps, catching up, deploying validator app and participant on Kubernetes or Docker Compose), see Validator Major Upgrades.

Overview for SV Operators

The SV upgrade process is more involved:
  1. New releases become available
  2. SVs vote on a downtime window via on-ledger governance
  3. Deploy new Canton components and CometBFT node alongside the existing ones (can be done before downtime)
  4. At the downtime start, SV apps automatically pause the synchronizer
  5. Migration dumps are automatically exported
  6. Verify all SV components have caught up
  7. Create full backups
  8. Upgrade SV, scan, and validator apps with the new migration ID
  9. Apps consume the migration dump and initialize new components
  10. Once more than two thirds of SVs have completed the migration, the new synchronizer becomes operational automatically
  11. Keep old components running until validators have had time to catch up
For the full SV-side procedure (deploying new Canton components and CometBFT, migration dumps, updating apps, recovering from a failed upgrade, organizational details), see SV Major Upgrades.

Migration IDs

Each synchronizer migration increments the migration ID (starting from 0). The migration ID:
  • Determines which PostgreSQL database the participant uses (a fresh database per migration)
  • Is tracked by the SV/validator apps for internal consistency
  • Forms part of the CometBFT chain ID
  • Must be correctly set in all Helm values during upgrade
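To make this concrete, here is an illustrative sketch of how an incremented migration ID might thread through a deployment. The variable names and naming schemes are hypothetical; check your charts and values files for the actual keys:

```bash
# Hypothetical illustration: the same migration ID surfaces in several places.
MIGRATION_ID=3                                 # incremented once per synchronizer migration
PARTICIPANT_DB="participant_${MIGRATION_ID}"   # a fresh Postgres database per migration
CHAIN_ID="<network>-${MIGRATION_ID}"           # the migration ID forms part of the CometBFT chain ID
echo "db=${PARTICIPANT_DB} chain=${CHAIN_ID}"  # every Helm values file must agree on the same ID
```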

State Preserved Across Migrations

  • All identities (party IDs, participant IDs, node identities)
  • Active ledger state (via migration dump)
  • Historical app state in SV, scan, and validator databases (via database reuse)

State Not Preserved

  • Transaction history on the participant (only serves history going forward)
  • CometBFT blockchain state (fresh chain on new synchronizer)
  • Traffic balances (reset to zero on new synchronizer instance)

Catching Up Before Migration

Before proceeding with the upgrade deployment, verify your node is fully caught up:
  • Check for "Wrote domain migration dump" messages in the validator-app (or sv-app) logs
  • Check for "Ingested transaction" messages in the app logs; if the latest such message is 10+ minutes old, the app has likely caught up
SVs keep old sequencers available for a limited time after migration. If your node doesn’t catch up and migrate within that window, you won’t be able to catch up later.
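A sketch of both checks for a Kubernetes deployment; the deployment and namespace names are illustrative, so adapt them to your cluster (for Docker Compose, use docker compose logs instead):

```bash
# Confirm the migration dump was written
kubectl logs deployment/validator-app -n validator | grep "Wrote domain migration dump"

# Inspect the most recent ingestion message; if its timestamp is 10+ minutes
# old, the app has likely caught up with the paused synchronizer
kubectl logs deployment/validator-app -n validator | grep "Ingested transaction" | tail -n 1
```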

Deploying the Upgrade (Validators)

Kubernetes:
  1. Confirm migration dump exists in logs
  2. Re-run the Helm install steps with the incremented MIGRATION_ID
  3. Set migration.migrating to true in validator Helm values
  4. Use helm upgrade (not helm install) for the validator chart
  5. After successful migration: set migration.migrating back to false, keep the incremented MIGRATION_ID
  6. The old participant database can be pruned after at least one week
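A sketch of steps 2 through 5; the release name, chart reference, and the migration.id values key are assumptions, so check your chart's values file for the actual names:

```bash
# Illustrative Kubernetes migration flow; names and values keys are assumptions.
MIGRATION_ID=3   # the incremented migration ID for this network upgrade

# During the migration window: helm upgrade (not install) with migrating=true
helm upgrade validator <splice-repo>/validator \
  -f validator-values.yaml \
  --set migration.id="$MIGRATION_ID" \
  --set migration.migrating=true

# After the migration succeeds: turn migrating off, keep the incremented ID
helm upgrade validator <splice-repo>/validator \
  -f validator-values.yaml \
  --set migration.id="$MIGRATION_ID" \
  --set migration.migrating=false
```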
Docker Compose:
  1. Confirm migration dump: docker compose logs validator | grep "Wrote domain migration dump"
  2. Stop the validator: ./stop.sh
  3. Update the bundle and IMAGE_TAG if needed
  4. Restart with incremented migration ID (-m <new_id>) and the -M flag to trigger migration
  5. After successful migration: restart without -M, keep the new migration ID
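Put together as one hedged sketch; the migration ID 3 is an example, and any other start.sh arguments your deployment needs are elided:

```bash
# 1. Confirm the dump was written
docker compose logs validator | grep "Wrote domain migration dump"

# 2-4. Stop, update the bundle and IMAGE_TAG if needed, then restart with the
#      incremented migration ID and -M to trigger consumption of the dump
./stop.sh
./start.sh -m 3 -M <other-arguments>

# 5. Once the migration succeeds, restart without -M but keep the new ID
./stop.sh
./start.sh -m 3 <other-arguments>
```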

Troubleshooting Failed Upgrades

If the upgrade fails, check:
  • Correct versions deployed both before and after migration
  • Migration dump file exists on the volume. If missing, remove any stale dump and restart the app on the old version to trigger a fresh dump.
  • The participant uses a fresh (empty) database for the new migration ID
  • The correct incremented MIGRATION_ID is set
  • The migrating: true flag (Helm) or -M argument (Docker Compose) is present
If the validator app database contains data from a failed migration, check for ingested data:
SELECT * FROM update_history_last_ingested_offsets
WHERE history_id = (SELECT DISTINCT history_id FROM update_history_last_ingested_offsets)
  AND migration_id = <failed_migration_id>;
If rows are returned, restore the app database from the pre-upgrade backup and drop the failed migration’s participant database.
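One way to run the check, with connection parameters left as placeholders; substitute the failed migration's ID before running:

```bash
# Query the validator app database for rows ingested under the failed migration
# (host, user, and database names are illustrative)
psql "host=<db-host> user=<db-user> dbname=<validator-app-db>" -c "
  SELECT * FROM update_history_last_ingested_offsets
  WHERE history_id = (SELECT DISTINCT history_id FROM update_history_last_ingested_offsets)
    AND migration_id = <failed_migration_id>;"
```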

Recovering from a Failed SV Upgrade

If a major upgrade fails at the network level, each SV must submit a topology transaction to resume the old synchronizer:
curl -X POST "https://sv.sv.YOUR_HOSTNAME/api/sv/v0/admin/domain/unpause" \
  -H "authorization: Bearer <token>"
The command completes once enough SVs have executed it. The old synchronizer then resumes operation.
