Sovrin Network Performance

Have there been any performance metrics published for the public Sovrin network or are there any performance metric goals?
(Such as those identified in the Hyperledger whitepaper - https://www.hyperledger.org/resources/publications/blockchain-performance-metrics?utm_source=twitter&utm_medium=social-media&utm_campaign=performance-metrics-whitepaper)

A year ago, this question was somewhat top-of-mind. I know that the test team was running tests to prove the network could sustain particular read and write speeds for long periods of time under heavy load. IIRC, the minimum acceptable read speed they wanted to see was 1000 reads per second, and the minimum acceptable write speed was between 1 and 2 orders of magnitude slower (roughly 10 to 100 writes per second, which is still significantly faster than Ethereum and Bitcoin). I believe they exceeded their goals, but I don’t have crisp data.

Since then, I believe the network performance has remained approximately constant, but the urgency of increasing performance has diminished due to the move to peer DIDs. Peer DIDs offload maybe 99% of all network reads and writes, and they scale with perfect horizontalness, so they radically change the bottlenecks.

Maybe someone closer to the performance testing can comment with better data.

Thanks for your response. Would you mind explaining this a little more? I understand that the move to pairwise DIDs reduces read/write activity on Sovrin, but I’m confused by ‘scales with perfect horizontalness’.

Are you saying that as the number of users/connections grows, Sovrin network performance is not impacted at all?

Or are you saying that the Sovrin network can be scaled horizontally to maintain network performance?

How can the Sovrin network be scaled horizontally? I would have thought that increasing the number of Steward nodes would negatively impact performance.

Thanks!

I am saying that peer DIDs don’t use the ledger directly, at all. Instead, they require storage and bandwidth from the two parties that are exchanging and using those DIDs. They are never looked up on a ledger, but instead are used via peer message exchange. Because of this, if you add 100 or 10,000 times the number of peer DIDs, you have zero impact on the ledger for the increased DID storage and communication burden. You probably do have an indirect impact on the ledger in that all those parties using peer DIDs will use the ledger occasionally for other stuff (e.g., to test credential revocation or look up a schema), but in terms of the pure DID-related burden, the impact is zero.

For more on peer DIDs, see https://openssi.github.io/peer-did-method-spec/index.html
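
To make the horizontal scaling point concrete, here is a minimal sketch of peer DID usage, using hypothetical helper names rather than any real Indy or peer DID library API: each party derives the DID from key material received over the p2p channel and resolves it from its own local storage, so no ledger read or write ever occurs.

```python
import base64
import hashlib

def create_peer_did(verkey: bytes) -> str:
    # Simplified stand-in for the spec's encoding: derive the DID value from
    # the initial key material so either party can validate it without a
    # ledger lookup.
    digest = hashlib.sha256(verkey).digest()[:16]
    return "did:peer:1z" + base64.b32encode(digest).decode().lower().rstrip("=")

class PeerDidStore:
    """Each party keeps its own local map of peer DID -> DID document."""

    def __init__(self):
        self._docs = {}

    def save(self, did: str, did_doc: dict) -> None:
        self._docs[did] = did_doc

    def resolve(self, did: str) -> dict:
        # Resolution is a local dictionary lookup; no ledger read happens,
        # which is why 100x or 10,000x more peer DIDs add zero load on the
        # validator pool.
        return self._docs[did]

# Alice creates a peer DID for her relationship with Bob; Bob caches the
# DID document he received over their p2p connection and resolves it locally.
alice_verkey = b"\x01" * 32  # placeholder key bytes
alice_did = create_peer_did(alice_verkey)
bob_store = PeerDidStore()
bob_store.save(alice_did, {"id": alice_did, "verkey": alice_verkey.hex()})
print(bob_store.resolve(alice_did)["id"])
```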

Daniel’s response addresses an important part of our architecture for global usage.

For other people who ask this question, I’ll capture key points from our discussion at http://chat.hyperledger.org. Regarding the performance of the global ledger:

  • We planned our ledger analysis before the Hyperledger paper was available, and our work influenced that paper.
  • We have been using 10 writes per second and 100 reads per second as our performance minimum (if we fall below that, we are worried). Benchmarks can go much faster, of course. The Sovrin Foundation uses those numbers as a guideline for our production configuration on a global pool of 24 nodes.
  • Our load testing is recorded in Jira. See the issues linked to: https://jira.hyperledger.org/browse/INDY-1343
  • We target different performance characteristics than many other chains. Because of the protections against correlation, performance metrics don’t need to be a primary focus (but any and all help here is very much appreciated).
  • The system queues requests to normalize load, and we encourage clients to preallocate the things they need and to postpone posting objects to the ledger when possible; this spreads out the load (a minimal client-side sketch of that pattern follows this list). Also, because information exchange happens in the peer-to-peer part of the system, we don’t anticipate “holiday season” type load spikes, so we don’t have to worry about “Visa network” type numbers.
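
As a rough illustration of the queueing/postponement point above, here is a minimal, hypothetical client-side sketch (not the indy-sdk API) that buffers non-urgent ledger writes and submits them at a steady rate instead of in bursts:

```python
import time
from collections import deque
from typing import Callable

class PostponedWriteQueue:
    """Buffer non-urgent ledger writes and drain them at a steady rate."""

    def __init__(self, submit: Callable[[dict], None], max_per_second: float = 2.0):
        self._submit = submit          # function that actually sends the transaction
        self._pending = deque()
        self._interval = 1.0 / max_per_second

    def enqueue(self, txn: dict) -> None:
        # Called whenever the application *could* write now but doesn't need
        # the result immediately (e.g., a schema it will only use next week).
        self._pending.append(txn)

    def drain(self) -> None:
        # Run during quiet periods: submit writes slowly so this client never
        # contributes to a burst of load on the validator pool.
        while self._pending:
            self._submit(self._pending.popleft())
            time.sleep(self._interval)

# Usage: queue writes as they arise, drain them in the background.
queue = PostponedWriteQueue(submit=lambda txn: print("submitting", txn["type"]))
queue.enqueue({"type": "SCHEMA", "data": "..."})
queue.enqueue({"type": "CRED_DEF", "data": "..."})
queue.drain()
```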

I totally disagree with @esplinr's statement that performance should not be a primary focus. In DLT solutions, the blockchain working underneath the application is the main bottleneck, especially in end-user-centric DLTs. Developers can improve their dapps by using off-chain computation, oracles, lightning networks, plasma solutions, and more, but those require a strong base first layer. Without a strong first layer, not much can be done, as second-layer solutions hit their limits quite soon.

Looking at https://jira.hyperledger.org/browse/INDY-1343, I can safely assume the current performance of the Sovrin network is the same: around 10 TPS for writes, with finalization time under a few seconds. This is a very low value for any blockchain, and an especially low one for a permissioned chain created with a single specialized purpose in mind. It is performance that could easily be achieved by ten-year-old Bitcoin. The natural questions that come to mind after reading that are:

  • How did that happen, and how was a result like that classified as “production readiness” for a “global” network?
  • How is a “customized” blockchain architecture able to perform worse than most general-purpose ones out there?

I expect I will get answers saying that most communication is done via a p2p network, so there is no need to write to the ledger all the time, or that other characteristics matter more, such as security, finalization time, correlation protection, and so on. But the problem with that argument is that all of these, including ZK proofs, are second-layer solutions; they should have no impact on first-layer performance at all if implemented correctly, and they should not be presented as excuses. What’s more, as a permissioned blockchain with a limited number of validators, neither the p2p-delay problem nor network fragmentation nor state-sharding problems are present.

My question, just to confirm, is this: if I wanted to onboard 1 million users now, would that mean the network requires more than 1 day to process them? If Facebook wanted to onboard their users, would it take 8 years? Is there any method to improve that? Batching, perhaps? Is there a roadmap for what kind of performance improvements will take place, and when?
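
For reference, the back-of-the-envelope math behind those numbers, assuming one ledger write per onboarded user at the ~10 TPS figure from INDY-1343 (and roughly 2.4 billion Facebook users):

```python
WRITE_TPS = 10  # write throughput suggested by INDY-1343

def onboarding_time_days(users: int, tps: float = WRITE_TPS) -> float:
    # Assumes one ledger write per onboarded user.
    return users / tps / 86_400  # 86,400 seconds per day

print(onboarding_time_days(1_000_000))             # ~1.16 days for 1 million users
print(onboarding_time_days(2_400_000_000) / 365)   # ~7.6 years for ~2.4 billion users
```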

How did that happen, and how was a result like that classified as “production readiness” for a “global” network?

The current network performance is sufficient for production use, and should be sufficient for the growth we expect over the next year.

How is a “customized” blockchain architecture able to perform worse than most general-purpose ones out there?

There are a lot of reasons why performance is not as good as we would like. Most are documented in this thread: Researching Sovrin Ledger 2.0

I expect I will get answers saying that most communication is done via a p2p network, so there is no need to write to the ledger all the time, or that other characteristics matter more, such as security, finalization time, correlation protection …

Most of the answers you expect are the ones I would give. Improving the performance of the system has not been as important as providing the stability, security, correlation protection, and other features necessary for production use.

if I wanted to onboard 1 million users now, would that mean the network requires more than 1 day to process them? If Facebook wanted to onboard their users, would it take 8 years?

The key thing to recognize is that writes to the ledger are currently only required to define the credentials. The number of credentials issued, or the number of users of those credentials, is completely independent of the number of ledger writes. The Government of British Columbia only required a handful of writes to the ledger in order to support the millions of credentials that they issued to different organizations. Facebook could issue a single credential to their entire user base with only three ledger writes.
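
As a rough sketch of what those few writes look like (hypothetical helper names, not the actual indy-sdk calls; the three transactions are plausibly a schema, a credential definition, and a revocation registry definition), note that the write count stays fixed no matter how many credentials are issued afterwards:

```python
class FakeLedger:
    """Stand-in for the validator pool; counts writes to make the point."""

    def __init__(self):
        self.writes = 0

    def write(self, txn_type: str, payload: dict) -> str:
        self.writes += 1
        return f"{txn_type}:{self.writes}"

def setup_issuer(ledger: FakeLedger, issuer_did: str, attr_names: list) -> str:
    # Write 1: the schema describing the credential's attributes.
    schema_id = ledger.write("SCHEMA", {"did": issuer_did, "attrs": attr_names})
    # Write 2: the credential definition (issuer keys bound to that schema).
    cred_def_id = ledger.write("CRED_DEF", {"schema": schema_id})
    # Write 3 (optional): a revocation registry definition.
    ledger.write("REVOC_REG_DEF", {"cred_def": cred_def_id})
    return cred_def_id

def issue_to_all(users, cred_def_id: str) -> list:
    # Issuance itself is peer-to-peer: zero additional ledger writes,
    # whether there are a thousand holders or two billion.
    return [f"credential({cred_def_id}) -> user{u}" for u in users]

ledger = FakeLedger()
cred_def_id = setup_issuer(ledger, "did:sov:issuer123", ["name", "member_id"])
issue_to_all(range(1_000), cred_def_id)  # could be any number of holders
print(ledger.writes)  # still 3
```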

Some of the proposed use cases for the network would require more frequent writes or writes by individual users. As we work on those features, we will need to reevaluate what network performance is acceptable.

Is there any method to improve that? Batching, perhaps?

See the discussion thread about Ledger 2.0.

Is there a roadmap for what kind of performance improvements will take place, and when?

Historically, my team consisted of the principal contributors to the ledger. Improving performance is not currently a priority for us. But if someone else wants to work on this effort, we would enjoy collaborating with them. The best way to continue the discussion is to join the #indy-ledger-next channel at http://chat.hyperledger.org