Sovrin Network Performance


#1

Have there been any performance metrics published for the public Sovrin network or are there any performance metric goals?
(Such as those identified in the Hyperledger whitepaper - https://www.hyperledger.org/resources/publications/blockchain-performance-metrics?utm_source=twitter&utm_medium=social-media&utm_campaign=performance-metrics-whitepaper)


#2

A year ago, this question was somewhat top-of-mind. I know that the test team was running tests to prove the network could sustain a particular read and write speed for long periods of time under heavy load. IIRC, the minimum acceptable read speed they wanted to see was 1000 reads per second, and the minimum acceptable write speed was between 1 and 2 orders of magnitude slower (which is still significantly faster than Ethereum and Bitcoin). I believe they exceeded their goals, but I don’t have crisp data.

Since then, I believe the network performance has remained approximately constant, but the urgency of increasing it has faded because of the move to peer DIDs. Peer DIDs offload perhaps 99% of all network reads and writes, and they scale with perfect horizontalness, so they radically change where the bottlenecks are.

Maybe someone closer to the performance testing can comment with better data.


#3

Thanks for your response. Would you mind explaining this a little more? I understand that the move to pairwise DIDs reduces read/write activity on Sovrin, but am confused by ‘scales with perfect horizontalness’.

Are you saying as number of users/connections grow, the Sovrin network performance is not impacted at all?

Or are you saying that the Sovrin network can be scaled horizontally to maintain network performance?

How can the Sovrin network be scaled horizontally? I would have thought that by increasing the number of Steward nodes, this would negatively impact performance.

Thanks!


#4

I am saying that peer DIDs don’t use the ledger directly at all. Instead, they require storage and bandwidth from the two parties that are exchanging and using those DIDs. They are never looked up on a ledger; they are used via peer message exchange. Because of this, adding 100 or 10,000 times the number of peer DIDs has zero impact on the ledger for the increased DID storage and communication burden. You probably do have an indirect impact on the ledger, in that all those parties using peer DIDs will use the ledger occasionally for other things (e.g., to test credential revocation or look up a schema), but in terms of the pure DID-related burden, the impact is zero.
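To make the scaling claim concrete, here is a toy sketch in Python. These classes are purely illustrative (they are not Indy or Sovrin APIs): a `Ledger` counts reads and writes, while peer DIDs live only in each party's local wallet and travel over the peer-to-peer channel.

```python
# Toy model contrasting ledger-anchored DIDs with peer DIDs.
# All names here are illustrative assumptions, not real Sovrin/Indy APIs.

class Ledger:
    """Counts operations so we can see the load each DID style imposes."""
    def __init__(self):
        self.writes = 0
        self.reads = 0
        self.dids = {}

    def write_did(self, did, doc):
        self.writes += 1
        self.dids[did] = doc

    def resolve(self, did):
        self.reads += 1
        return self.dids[did]


class Peer:
    """A party that stores peer DID documents in its own wallet."""
    def __init__(self, name):
        self.name = name
        self.wallet = {}  # did -> DID document, held locally

    def exchange_peer_did(self, other, did, doc):
        # The DID doc travels over the peer-to-peer channel;
        # the ledger is never touched.
        other.wallet[did] = doc


ledger = Ledger()
alice, bob = Peer("alice"), Peer("bob")

# 10,000 pairwise relationships via peer DIDs: zero ledger operations.
for i in range(10_000):
    alice.exchange_peer_did(bob, f"did:peer:{i}", {"keys": ["..."]})

print(ledger.writes, ledger.reads)  # -> 0 0
```

However many peer DIDs the parties mint, the ledger counters stay at zero; only the two peers spend storage and bandwidth, which is what "scales with perfect horizontalness" means here.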

For more on peer DIDs, see https://openssi.github.io/peer-did-method-spec/index.html


#5

Daniel’s response addresses an important part of our architecture for global usage.

For other people who ask this question, I’ll capture key points from our discussion at http://chat.hyperledger.com. Regarding the performance of the global ledger:

  • We planned our ledger analysis before the Hyperledger paper was available, and our work influenced that paper.
  • We have been using 10 writes per second and 100 reads per second as our performance minimum (if we fall below that, we worry). Benchmarks can go much faster, of course. The Sovrin Foundation uses those numbers as a guideline for our production configuration on a global pool of 24 nodes.
  • Our load testing is recorded in Jira. See the issues linked to: https://jira.hyperledger.org/browse/INDY-1343
  • We target different performance characteristics than many other chains. Because of our protections against correlation, raw performance metrics don’t need to be a primary focus (though any and all help here is very much appreciated).
  • The system queues requests to normalize load, and we encourage clients to preallocate things they need and to postpone posting objects to the ledger when possible; this spreads out the load. Also, because information exchange happens in the peer-to-peer part of the system, we don’t anticipate “holiday season” load spikes, so we don’t have to worry about Visa-network-scale transaction numbers.
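The queueing point above can be sketched with a small Python example. This is an assumed client-side pattern, not an Indy API: buffer write requests and drain them at a fixed rate, so a burst of submissions becomes a steady trickle of ledger writes.

```python
# Illustrative sketch (assumed pattern, not a real Sovrin/Indy API):
# buffer ledger write requests and drain at most a fixed number per tick.

from collections import deque

MAX_WRITES_PER_TICK = 10  # e.g., the 10 writes/second floor mentioned above


def drain(queue, ticks):
    """Submit at most MAX_WRITES_PER_TICK requests per tick;
    return how many were submitted in each tick."""
    per_tick = []
    for _ in range(ticks):
        submitted = 0
        while submitted < MAX_WRITES_PER_TICK and queue:
            queue.popleft()  # in reality: send the request to the ledger
            submitted += 1
        per_tick.append(submitted)
    return per_tick


burst = deque(f"req-{i}" for i in range(35))  # a 35-request burst
print(drain(burst, ticks=5))  # -> [10, 10, 10, 5, 0]
```

A 35-request spike never hits the ledger faster than the configured rate; it is flattened into several ticks at or below the floor, which is the "normalize loads" behavior the bullet describes.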