Given that other DLTs use mechanisms like proof of stake or proof of work to discourage spamming of the network, I assume a spam-prevention mechanism must be present in Plenum, but I'm not aware of any documentation that explains it. Can anyone clarify?
Since Plenum is a permissioned DLT, a higher-privileged entity (called a Sponsor) has to onboard a public key on the ledger before that public key can be used to make requests on the ledger. We also plan to implement throttling of requests from onboarded keys.
I understand that each validator node will need to be onboarded so they have permission to act as a validator, but does the onboarding also apply to individual users? I ask because several other “traditional” blockchains have a cost (some coin value) for submitting a transaction to the network in order to deter malicious parties from submitting spam transactions and overwhelming the network with garbage.
If every user has to be onboarded by a sponsor, then I guess throttling would reduce the number of malicious requests they could submit, and some mechanism for removing them from the network would also have to be available. The throttling thresholds would have to be set based on the type of actor participating in the network; an e-commerce provider, for example, would need a significantly higher threshold than an individual human actor, and determining that it is a malicious actor would require that many more bad transactions.
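To make the per-actor-class throttling idea concrete, here is a minimal sketch of how a validator might rate-limit requests from onboarded keys using a token bucket per key, with different rates per actor class. The class names, rates, and burst sizes are my own assumptions for illustration, not anything from the Plenum design; the real thresholds would presumably be a governance decision.

```python
import time

# Hypothetical actor classes and per-second request rates (assumptions,
# not from any Plenum/Sovrin specification).
CLASS_RATES = {"individual": 1.0, "ecommerce": 100.0}

class TokenBucket:
    """Simple token-bucket throttle for one onboarded key."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if one request may proceed, consuming a token."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def bucket_for(actor_class):
    """Create a bucket sized for a given (hypothetical) actor class."""
    rate = CLASS_RATES[actor_class]
    # Allow roughly a 10-second burst before throttling kicks in.
    return TokenBucket(rate, capacity=rate * 10)
```

So an `ecommerce` key would get a bucket 100x deeper than an `individual` key, matching the intuition that spotting abuse from a high-volume actor takes many more bad requests.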
Will the onboarding process be complex or take long for users (as opposed to validators)?
I’m sorry - didn’t really explain myself terribly well in my original question - I’ll go with the excuse that it was late and I’m jet-lagged.
@srottem: I have participated in various discussions that explore the issue in your question. Among other things, I have heard proposals for a proof of memory (instead of a proof of work like Bitcoin's); also a proof of patience, a proof of social connection, a captcha, or a proof of work that anchors to Bitcoin/Ethereum. The underlying tension that all of these proposals address is that, on the one hand, we want users to be bootstrapped quickly and easily, but on the other hand, we don't want someone to be able to create 200 million bogus identities because bootstrapping is so easy. I don't believe your question has been fully resolved, but I believe the final answer will be "pick your favorite mechanism from a list of several acceptable alternatives": an ordinary human user will be able to bootstrap for next to zero cost, but not in such a way that bulk fake bootstrapping is attractive for fraudsters.
The final answer is one that Sovrin must specify as part of its governance judgments, and will be codified through a series of public meetings in time for the Q1 2017 “go live” milestones.
I would be curious to hear if you have an opinion about the pros and cons of the various ideas, or if you have new ones to add.
I've also been involved in a few discussions on this topic, and one approach that seemed interesting is the idea of tokens that must be spent by a party that wants to record a transaction, where the tokens are issued by the party they want to transact with and can only be used with that party.
If the relationship were between an individual and an organization (e.g. a vendor), the organization would issue the individual the tokens required to interact with it, and the individual could only spend them with that organization. The vendor would need to obtain the tokens it issues somewhere along the line, and they might represent a cost of participating in the network. The tokens could be destroyed once used, which would require the organization to purchase more when it has exhausted its supply, but would prevent a malicious party from setting up an organization and an individual account and simply cycling the tokens between the two accounts in an infinite loop to create malicious transactions.
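The single-use, issuer-bound property described above can be sketched in a few lines. This is purely illustrative; the registry, its method names, and the use of opaque random strings as tokens are my own assumptions, and a real design (as discussed later in this thread) would likely use anonymous credentials so spending isn't correlatable.

```python
import secrets

class IssuerTokenRegistry:
    """Sketch of single-use transaction tokens bound to their issuer.

    Assumptions (not from any Plenum/Sovrin spec): tokens are opaque
    random strings tracked in a per-issuer pool and destroyed on use.
    """

    def __init__(self):
        self.unspent = {}  # issuer -> set of outstanding tokens

    def issue(self, issuer):
        """Issuer mints a token that can only be spent back with it."""
        token = secrets.token_hex(16)
        self.unspent.setdefault(issuer, set()).add(token)
        return token

    def spend(self, issuer, token):
        """Spend a token with its issuer; destroys it so it can't recycle."""
        pool = self.unspent.get(issuer, set())
        if token in pool:
            pool.remove(token)
            return True
        return False
```

Because `spend` both checks the issuer's own pool and deletes the token, a token presented to a different party, or presented twice, is rejected, which is what blocks the infinite-loop recycling attack described above.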
Of course, transactions between individuals directly would probably have to be handled differently as individuals aren’t going to want to pay to participate, but a throttling approach might work here.
Using the two different approaches would allow different transaction throughputs for different classes of network participant.
Anyway, just my brain steaming away so don’t read anything into it.
Yes, @srottem, that idea resembles another scenario we've discussed, where an issuer basically gives a coupon that an identity owner can spend on identity transactions. The way I heard it discussed, this would use anonymous credentials so the spending of the coupon doesn't produce any correlatable data for the issuer or other observers. We should definitely discuss further. Do you attend the user community calls? I think the next one is around Nov 21. Might be a worthwhile topic. Or we can continue discussing here.
I have attended the last two calls but I’m based in Melbourne Australia at the moment so given that they’re at 3am my time I don’t think I’m likely to attend too often. I’ll certainly watch any recordings though. Then again, if I have a sleepless night…
I agree that it's a topic that should get some attention, however. Any identity solution like Sovrin that takes off is potentially going to have a large number of participants submitting truly huge numbers of transactions, and there will be others out there with an incentive (or just a horrible inclination) to break the system (think of the DDoS attacks we see in the news regularly these days). Once people become reliant on a network like this, an outage or degradation of service could have wide-ranging consequences.