Canton and Splice nodes use Logback for logging. Proper log configuration helps you diagnose issues quickly, meet audit requirements, and feed data into your monitoring stack.
Where to find logs
When launched locally, splice-node creates a log/ directory at the root of the repository and logs into canton_network.log. Canton logs into canton.log.
The default log level, initially set to DEBUG, can be changed using the --log-level-canton flag, for example: splice-node --config "${OUTPUT_CONFIG}" --log-level-canton=DEBUG .... You can use lnav to read the logs; a guideline is provided in this documentation.
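A minimal example of opening both local log files with lnav, assuming the default layout described above (lnav merges the files onto a single timeline):

```bash
# Open both local log files; lnav interleaves them chronologically.
lnav log/canton_network.log log/canton.log
```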
Logging in Kubernetes (note that this only provides logs for a limited timeframe):
- kubectl describe pod <pod-name> to get a detailed status of the given pod,
- kubectl logs <pod-name> -n <namespace-name> or kubectl logs -l app=<app-name> -n <namespace-name> --tail=-1 to get logs for a given pod in a given namespace.
Log levels
Canton uses standard Logback levels. The most relevant for operations:
- ERROR — Something failed and likely needs immediate attention. Transaction failures, database connectivity issues, synchronizer disconnections.
- WARN — Conditions that are unusual but not necessarily broken. Slow queries, retried operations, configuration deprecations.
- INFO — Normal operational events. Node startup, synchronizer connections established, health check results. This is the recommended default level for production.
- DEBUG — Detailed internal state for troubleshooting. Produces high volume; enable selectively and temporarily.
- TRACE — Extremely detailed protocol-level logging. Only useful for deep debugging with guidance from support.
Configuring log levels
Logback XML configuration
Canton nodes load their logging configuration from a Logback XML file. You can mount a custom configuration into your container.
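A minimal sketch of such a custom configuration, assuming a console appender and an illustrative package-level override (file contents and logger names are examples, not the configuration shipped with the node):

```xml
<!-- Illustrative logback.xml: console output at INFO, one Canton package raised to DEBUG. -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{ISO8601} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Raise verbosity for a specific package only (package name is an example). -->
  <logger name="com.digitalasset.canton.participant" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```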
Runtime log level changes
You can adjust log levels at runtime through the Canton Console without restarting the node.
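A sketch of what this can look like in the Canton console, assuming your Canton version exposes the logging.set_level command (check the console help for the exact name and signature):

```scala
// Raise verbosity for a specific logger while troubleshooting, then reset it.
logging.set_level("com.digitalasset.canton.participant", "DEBUG")

// ... reproduce the issue ...

logging.set_level("com.digitalasset.canton.participant", "INFO")
```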
Structured logging (JSON format)
For log aggregation systems, JSON-formatted logs are easier to parse and index. Configure a JSON encoder in your Logback configuration, and make sure structured fields such as the trace-id are preserved for correlating log entries across a single transaction flow.
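One common way to do this, assuming the logstash-logback-encoder library is available on the node's classpath (an assumption, not something this page guarantees), is to switch the appender's encoder:

```xml
<!-- Illustrative JSON console appender using logstash-logback-encoder. -->
<appender name="JSON_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>

<root level="INFO">
  <appender-ref ref="JSON_STDOUT"/>
</root>
```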
Log rotation
If you write logs to files (rather than stdout captured by Kubernetes), configure rotation to avoid filling the disk.
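A minimal rotation sketch using Logback's size- and time-based rolling policy (file paths, sizes, and retention are illustrative):

```xml
<!-- Illustrative rolling file appender: daily rollover, 100 MB per file, 14 days / 2 GB retention. -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>log/canton_network.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>log/canton_network.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
    <maxFileSize>100MB</maxFileSize>
    <maxHistory>14</maxHistory>
    <totalSizeCap>2GB</totalSizeCap>
  </rollingPolicy>
  <encoder>
    <pattern>%d{ISO8601} %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```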
Metrics exposure
For Prometheus metrics endpoints, scraping configuration, and Grafana dashboards, see Monitoring Setup.
Health endpoints
You can check your validator’s health using the readiness endpoints. All CN applications provide the /readyz and /livez endpoints, which are used for readiness and liveness probes.
Checking readiness
- In Kubernetes: readiness and liveness probes are already configured. You can also manually check validator readiness with the following command (see the sketch after this list).
- In Docker: run for example this command to check validator liveness inside a container (see the sketch after this list).
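A sketch of those manual checks; the namespace, workload and container names, port, and base path below are placeholders to replace with the values from your own deployment:

```bash
# Kubernetes: query the readiness endpoint from inside a validator pod.
kubectl exec -n validator deploy/validator-app -- curl -sSf http://localhost:5003/readyz

# Docker: check liveness from inside a running container.
docker exec validator-app curl -sSf http://localhost:5003/livez
```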
Using metrics
The splice_store_last_ingested_record_time_ms metric represents the last ingested record time in each validator store. It can be used to track the general activity of the node (an example alerting rule follows this list):
- If this value continues to increase over time, your node is active and stays in sync with the network. Note that it only advances if your node actually ingests new transactions. For a validator collecting validator liveness rewards this happens every round, so you should expect your lag to never exceed 20 minutes.
- If it remains static, further investigation may be required.
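As a sketch, assuming Prometheus scrapes this metric and it is exported in epoch milliseconds, an alerting rule on ingestion lag could look like this (rule name and thresholds are illustrative):

```yaml
groups:
  - name: validator-activity
    rules:
      - alert: ValidatorIngestionLagging
        # Lag between "now" and the last ingested record time, in milliseconds.
        expr: (time() * 1000) - splice_store_last_ingested_record_time_ms > 30 * 60 * 1000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Validator store has not ingested new records for more than 30 minutes"
```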
See the Metrics reference for the full list of available metrics.
Grafana dashboards
The release bundle contains a set of Grafana dashboards that are built based on the metrics above. These dashboards can be imported into a Grafana instance. They are built assuming a K8s deployment and may need to be modified for other deployment types. The dashboards can be found under the grafana-dashboards folder in the release bundle, and are built using queries specific to Prometheus native histograms.
Log aggregation
ELK Stack (Elasticsearch, Logstash, Kibana)
With JSON-formatted logs, ship them to Elasticsearch using Filebeat or Fluentd. Key configuration points (a Filebeat sketch follows the list):
- Index logs by node type (participant, sequencer, mediator) for easier filtering
- Parse the trace-id field to correlate entries across nodes involved in the same transaction
- Set up index lifecycle management to handle log retention
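A minimal Filebeat sketch along those lines; the paths, processor settings, and output are assumptions to adapt to your cluster:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*participant*.log   # one input (or an add_fields tag) per node type
    processors:
      - decode_json_fields:          # parse the JSON log line so fields such as trace-id are searchable
          fields: ["message"]
          target: ""
          overwrite_keys: true
      - add_fields:
          target: canton
          fields:
            node_type: participant   # tag entries so they can be filtered by node type

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]

setup.ilm.enabled: true              # let index lifecycle management handle retention
```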
Grafana Loki
If you use Grafana for monitoring, Loki provides a lighter-weight alternative to Elasticsearch. Deploy Promtail as a DaemonSet to ship container logs to Loki, and query them alongside your metrics in Grafana dashboards.
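If you do not already run Loki, one way to get both Loki and a Promtail DaemonSet is the community Helm chart; the chart name and values below are assumptions to adapt to your environment:

```bash
# Install Loki together with a Promtail DaemonSet that ships container logs.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install loki grafana/loki-stack \
  --namespace monitoring --create-namespace \
  --set promtail.enabled=true
```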
Key log messages to watch for
Certain log messages indicate conditions that warrant investigation or alerting:
- ACS_COMMITMENT_MISMATCH — The Active Contract Set commitment between your validator and the synchronizer does not match. This can indicate data corruption, a missed transaction, or a bug. Investigate immediately.
- SEQUENCER_SUBSCRIPTION_LOST — Your validator lost its connection to the sequencer. It will attempt to reconnect, but prolonged disconnection means you are not processing transactions.
- MEDIATOR_REJECTION — A transaction was rejected by the mediator. Check the rejection reason — it could be a legitimate conflict or an operational issue.
- DB_STORAGE_DEGRADATION — Database response times are exceeding thresholds. Check your PostgreSQL performance.
- TRAFFIC_BALANCE_LOW — Your traffic balance is running low. Purchase more traffic using Canton Coin to continue submitting transactions.
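With logs in Loki, a quick way to scan for these messages is a LogQL line filter; the app label below is an assumption about how your logs are labelled:

```
{app=~"participant|validator"} |= "ACS_COMMITMENT_MISMATCH"
{app=~"participant|validator"} |= "SEQUENCER_SUBSCRIPTION_LOST"
```

The same filters can back Loki alerting rules so that these messages trigger an alert rather than waiting to be found during an investigation.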
Helm configuration for logging
If you deploy with Helm, you can mount a custom Logback configuration.
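The exact values keys depend on the chart you use; as a purely hypothetical sketch, mounting a logback.xml from a ConfigMap could look like this in a values file:

```yaml
# Hypothetical values snippet: the keys below are illustrative, not any chart's actual schema.
extraVolumes:
  - name: logback-config
    configMap:
      name: canton-logback          # ConfigMap containing your logback.xml

extraVolumeMounts:
  - name: logback-config
    mountPath: /app/logback.xml     # point the JVM here, e.g. via -Dlogback.configurationFile
    subPath: logback.xml
```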
Troubleshooting with logs
When investigating an issue, find the trace-id from the error or transaction submission and search for it across all node logs to reconstruct the full flow. Check timestamps for gaps — Canton protocol messages follow a defined sequence, and delays often point to the bottleneck. For intermittent issues, temporarily increase the log level for the relevant package, reproduce, then reset.
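A sketch of that trace-id search, assuming the local file layout from the top of this page (adjust paths, labels, and namespaces for Kubernetes):

```bash
# Search for one trace ID across the local log files and sort the merged output by timestamp.
TRACE_ID="<trace-id-from-the-error>"
grep -h "$TRACE_ID" log/canton_network.log log/canton.log | sort

# In Kubernetes, grep across all pods of an application (label and namespace are placeholders).
kubectl logs -l app=participant -n canton --tail=-1 | grep "$TRACE_ID"
```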