State of Testnet: Bug Sweeping
By Robin Massini

Mar 01, 2022

Dusk Network’s public testnet will launch as soon as possible.

Key takeaways from this update

  • The critical bug announced in our Critical Bug Report has been resolved with the Canonical patch.
  • Continuing the process of integration and testing, developers identified other issues, many of which have now been resolved.
  • A stack update is currently pending; once it lands, a new cluster will be redeployed on the updated stack, requiring further testing before community nodes can be deployed.

This article documents the state of our testnet Daybreak, which was postponed due to a critical bug. Since that announcement, our team has not only made great strides in fixing this specific bug but also implemented additional changes, completed further integrations, and begun performing cluster testing.

This article details these developments, as well as the progress being made to ensure a successful launch of testnet Daybreak.

Critical bug that prevented cluster formation is resolved

The ‘Situation of the state’ bug has been resolved. With the Canonical patch in place, the test harness now passes. This enabled us to launch the cluster in the pre-production environment, giving us the opportunity to scrutinize the network after it formed.

Unfortunately, once the network had formed, some other issues surfaced.

State bloat & missing gas charges are resolved

State bloat was caused by copying the entire contract bytecode into the state, an action erroneously performed after the acceptance of every single block. The team placed the bytecode behind a link and cached it indefinitely, so the bytecode is retained without being rewritten every time. While this solution works, it is merely a temporary fix intended for testnet. The issue will be resolved more thoroughly once we update the network to VM version 2, which introduces a more sophisticated strategy for persisting state on disk.
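The link-and-cache idea can be sketched as follows. This is an illustrative model, not Dusk's actual implementation: the cache type, the `link`/`resolve` names, and the toy hash are all assumptions made for the example. The point is that identical bytecode is written once and thereafter referenced by a short link, instead of being copied on every accepted block.

```rust
use std::collections::HashMap;

/// Hypothetical sketch: the state stores only a short link (here a
/// content hash) while the bytecode itself lives in a cache that is
/// written once and reused on every subsequent block.
struct BytecodeCache {
    store: HashMap<u64, Vec<u8>>, // link -> bytecode, written once
}

impl BytecodeCache {
    fn new() -> Self {
        Self { store: HashMap::new() }
    }

    /// Insert the bytecode only if it is not cached yet; return its link.
    fn link(&mut self, bytecode: Vec<u8>) -> u64 {
        let link = toy_hash(&bytecode); // stand-in for a real content hash
        self.store.entry(link).or_insert(bytecode);
        link
    }

    /// Look the bytecode up by its link instead of copying it around.
    fn resolve(&self, link: u64) -> Option<&[u8]> {
        self.store.get(&link).map(Vec::as_slice)
    }
}

/// Toy FNV-style stand-in for a cryptographic content hash.
fn toy_hash(bytes: &[u8]) -> u64 {
    bytes
        .iter()
        .fold(1469598103934665603u64, |h, b| {
            (h ^ *b as u64).wrapping_mul(1099511628211)
        })
}
```

Linking the same bytecode twice returns the same link and performs no second write, which is precisely what removes the per-block state growth.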

Concurrently, a minor issue where gas was not being charged was resolved.
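For readers unfamiliar with the term, "charging gas" means deducting a cost from a transaction's gas budget for each operation it executes. A minimal metering sketch, purely illustrative and not Dusk's code:

```rust
/// Minimal gas-metering sketch (illustrative): each executed operation
/// deducts its cost, and execution must halt once the budget runs out.
struct GasMeter {
    remaining: u64,
}

impl GasMeter {
    fn new(limit: u64) -> Self {
        Self { remaining: limit }
    }

    /// Charge `cost` gas; returns false when the budget is exhausted,
    /// signalling the VM to abort the transaction.
    fn charge(&mut self, cost: u64) -> bool {
        if self.remaining < cost {
            self.remaining = 0;
            return false;
        }
        self.remaining -= cost;
        true
    }
}
```

Without such charges, transactions execute for free, which is the gap the patch closed.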

Both patches can be found on our GitHub here:

Minor staking issue outside of the scope of testnet

Once we were able to create and test transactions in the cluster, transfers appeared to behave as expected. However, minor glitches became evident in staking. These glitches are known to the team and are scheduled as a deliverable in the first biweekly release cycle after testnet launch.

Memory leak & Microkelvin proof regression

Once we were able to test the network over an increasing number of blocks, we identified a new issue: a memory leak. In addition, we discovered a regression in our stack, specifically in our Microkelvin implementation.

When the cluster hit a block height of approximately #23,000, the nodes began to report an out-of-memory error when creating a transfer transaction. As the state was under control, this error initially appeared implausible. After further scrutiny, the developers pinpointed the source of the problem: the loading of the prover key and the precompiled circuits used to create the ZK proof.

This can be explained as follows:

To keep node requirements low for a healthy decentralized network, we are currently running nodes on a constrained VPS with 1 GB of memory, while the prover key and precompiled circuits take up approximately 200 MB. Since creating a transaction at network formation had been working perfectly, this should not have been an issue. To reproduce it, we created a stress environment with faster block times of 100 ms and concluded that the problem surfaces reliably around block #25,500.
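On a 1 GB machine, a ~200 MB prover key leaves little headroom, so it matters whether the key is loaded once or repeatedly. The following sketch shows the one-time-load pattern; it is an assumption-laden illustration (the `prover_key` and `create_proof` names and the zero-filled placeholder key are invented for the example), not Dusk's proving code.

```rust
use std::sync::OnceLock;

/// Illustrative sketch: re-loading a large prover key for every transfer
/// transaction on a constrained VPS risks exhausting memory. Loading it
/// once and sharing the single copy keeps the footprint bounded.
static PROVER_KEY: OnceLock<Vec<u8>> = OnceLock::new();

fn prover_key() -> &'static [u8] {
    PROVER_KEY.get_or_init(|| {
        // Placeholder: a real node would read the ~200 MB key from disk.
        vec![0u8; 1024]
    })
}

fn create_proof(tx: &[u8]) -> Vec<u8> {
    let key = prover_key(); // reuses the cached key; no reload per call
    // Toy "proof": real code would run the ZK circuit here.
    tx.iter().zip(key).map(|(a, b)| a ^ b).collect()
}
```

Every call to `prover_key` after the first returns the same cached slice, so memory usage no longer grows with the number of transactions proved.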

The issue is recorded at:

Over the weekend, we successfully applied a modification to the Canonical library, which has now been upgraded to Canonical v0.7. Not only did this upgrade fix a regression in the proof system (which can be found here:), it also resolved the out-of-memory issue, or at the very least significantly lessened its impact.

Before this upgrade, the error pointed to a problem with the growing number of entries in Canonical's STATIC_MAP (as highlighted here:), whereas the memory profiler currently reports no leaks. Memory now builds up more slowly, and the increase is not significant enough to call for prioritization, nor does it impact network formation and stability, especially now that persistence appears to be working properly. In the unlikely case of problems, nodes can be restarted and will synchronize correctly.

State of cluster testing

At the moment, consensus runs on the single Provisioner included in the genesis block. This was necessary to allow the block time to be set arbitrarily, for instance to 100 ms when reproducing the state bloat issue. A new update is currently pending, after which the cluster will be redeployed on the updated stack. Although this will require further testing of multiple consensus configurations, it brings us one step closer to community node deployment.
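A single-Provisioner setup with a tunable block time can be pictured as a small configuration value. This is a hypothetical sketch (the `ConsensusConfig` type and its fields are invented for illustration), showing only why one provisioner makes the block interval freely adjustable for stress tests.

```rust
use std::time::Duration;

/// Hypothetical consensus configuration: with a single Provisioner there
/// is no multi-party timing to coordinate, so the block time can be set
/// to any value, e.g. 100 ms for a stress environment.
struct ConsensusConfig {
    provisioners: usize,
    block_time: Duration,
}

impl ConsensusConfig {
    fn single_provisioner(block_time: Duration) -> Self {
        Self {
            provisioners: 1,
            block_time,
        }
    }
}
```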

We aim to provide further transparency on our progress in the lead-up to launching testnet Daybreak. That’s why we started the State of Testnet article series in the first place. You can find details on previous updates below:
