Reputational systems and "negative" claims


#1

I was just wondering if it made any sense to consider the ability to make “negative” claims about something… How should a “web of trust” like Sovrin deal with negative behavior or false claims made by a given actor? If there is a way to attest to one’s legitimacy for building trust and reputation, how do we model and deal with negative reputation within the system? Have any ideas been discussed on this matter so far?


#2

It seems to me that a key question is: What is a given reputation system supposed to measure? Many valid answers are possible, and they suggest different postures with regard to negative behavior and claims. A system that scores a company according to the quality of its products might reasonably suggest a bad reputation for manufacturers with products that regularly receive negative reviews. On the other hand, a system that guesses whether people are likely to be healthy may not be able to use hundreds of negative observations (“No symptoms of any kind observed”) as a useful indicator that a person is healthy today.

I spent a couple of years of my career building and maintaining AI/ML-based reputation systems for cybersecurity, and we found that negative observations (“No malware observed here”) were lousy indicators that a website deserved a good reputation. Part of the reason is that a lack of evidence has unknown completeness: maybe there is no evidence because someone did a poor job of gathering information, rather than a thorough job that genuinely turned up nothing.
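To make that “unknown completeness” point concrete, here is a minimal sketch (purely illustrative, not the system we actually built; the names and weights are assumptions) of weighting “nothing observed” evidence by how thorough the inspection behind it was:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    malicious: bool   # True = malware seen, False = nothing seen
    coverage: float   # 0..1, how thoroughly the site was inspected

def reputation_score(observations: list[Observation]) -> float:
    """Very rough site score in [0, 1]; higher is better.

    A positive finding (malware) counts at full weight, but a negative
    finding ("nothing observed") only counts in proportion to how complete
    the inspection was -- a shallow crawl that finds nothing tells us little.
    """
    good = bad = 0.0
    for obs in observations:
        if obs.malicious:
            bad += 1.0                 # real evidence of badness
        else:
            good += obs.coverage       # weak evidence, scaled by coverage
    if good + bad == 0:
        return 0.5                     # no usable evidence: stay neutral
    return good / (good + bad)

shallow = [Observation(False, 0.05)] * 100   # 100 shallow "clean" scans
deep    = [Observation(False, 0.9)] * 100    # 100 thorough "clean" scans
hit     = [Observation(True, 1.0)]           # one confirmed malware sighting

print(reputation_score(shallow + hit))   # ~0.83: one hit outweighs shallow negatives
print(reputation_score(deep + hit))      # ~0.99: thorough negatives carry more weight
```

The point is simply that a pile of shallow “clean” observations should not be able to drown out one solid piece of contrary evidence.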

However, it certainly must be the case that malicious or abusive behavior has an impact on a person’s reputation…


#3

Carlos, based on my experience with reputation systems and specifically the Respect Trust Framework, let me say this about negative reputation statements (I would not call them “negative claims” because they are just claims like any other; however, if a claim is about reputation, then the reputation statement can be positive or negative):

  • All reputation systems are hard to get right; negative reputation is MUCH harder than positive reputation (meaning negative reputation is easier to game or abuse than positive reputation). This is why, for example, LinkedIn does not have “negative endorsements”. However, Amazon does offer the ability to give bad reviews (e.g., 1 star).
  • Negative reputation often requires some form of moderation or adjudication that positive reputation systems usually do not.
  • Negative reputation SHOULD be contextual just like positive reputation. Saying you are a bad singer does not make you a bad person.

That’s just the tip of the iceberg. There are shelves full of books on the subject—a good starting place is Building Web Reputation Systems by Randy Farmer and Bryce Glass.


#4

Thank you very much for your replies; they’re pretty much spot on.

In any case, I completely agree that a generic claim system design allows for many types of claims, whether they are reputational or not, and whether they are “negative”, “positive” or any other sort of qualitative category.

A reputational system (with “negative” votes, for example) can perfectly well be implemented using the general claim system that is already in place in Sovrin. How these reputational scores are then used is up to the relying parties.
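To illustrate the idea (this is not Sovrin’s actual claim format or API, just a hypothetical sketch), a reputational vote could travel as an ordinary claim whose attributes happen to describe a rating, with aggregation left entirely to the relying party:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Claim:
    issuer_did: str     # who is making the statement
    subject_did: str    # who the statement is about
    attributes: dict    # generic payload -- the claim system doesn't care what's inside

# A "negative" reputation vote is just another claim; the rating schema
# (context, score range, etc.) is a convention between issuers and verifiers.
vote = Claim(
    issuer_did="did:example:reviewer-1",
    subject_did="did:example:acme-corp",
    attributes={"type": "reputation", "context": "product-quality", "score": 1},
)

def average_score(claims: list[Claim], context: str) -> float | None:
    """How one relying party might aggregate votes; another might weight
    issuers, discard outliers, or ignore the scores entirely."""
    scores = [
        c.attributes["score"]
        for c in claims
        if c.attributes.get("type") == "reputation"
        and c.attributes.get("context") == context
    ]
    return mean(scores) if scores else None

print(average_score([vote], "product-quality"))   # 1 -- a bad review in that context
```

Note the `context` attribute: it is what keeps a bad score for singing from leaking into an unrelated context, per the earlier point about contextual reputation.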

However, getting a bit more into the realm of governance: is there any work or consideration being done on how to measure reputation for the relevant actors within the system (trust anchors, stewards, etc.), possibly by utilizing the platform itself to do so? This pretty much touches on the subject of privacy vs. accountability, specifically the accountability aspect of it…


#5

Carlos, you are practically reading my mind—or at least reading the mind of the Sovrin Trust Framework Working Group. There is indeed a very important role that a basic reputation system can play in the establishment of what I call the “public trust graph”—the set of Sovrin trust anchors whose trust relationships need to be explicitly in the public view: trustees, stewards, agencies, etc.

I’m trying to do a short writeup on this topic called The Sovrin Public Trust Graph before next Tuesday’s Trust Framework Working Group meeting. As soon as I finish it I’ll publish a link to it here on the Forum.


#6

Reputation is going to be a vital part of Sovrin’s growth. One approach is based on positive reputation only: you are able to publish your “score”, and if you choose not to publish it, that absence can itself be taken as a fair indicator that your score is not very good.

A nice example is the food hygiene ratings that restaurants have here in the UK. It’s a 5-star system. Displaying the score (restaurants get a nice sticker for their window) is not compulsory. And amazingly you see very few restaurants displaying a sticker with 3 or fewer stars. So you can take absence of a positive score as a negative.

I love the idea of a public trust graph. I am often asked “but how will I know that The Bank I am dealing with on Sovrin is actually The Bank, and not “Andy’s Dodgy Bank” set up from a corner shop?”. The answer is that if The Bank published their public trust graph I’d be able to see information about who (or how many people/orgs) trusts The Bank, who The Bank trusts, how active they have been etc. Being based on Sovrin’s ledger, this will be extremely hard to fake and there’s no way Andy’s Dodgy Bank could create a similar public trust graph.
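Here is a toy sketch of how a relying party might query such a graph (the edge structure and the DIDs are purely illustrative assumptions, not Sovrin’s actual ledger format):

```python
# Edges are public, ledger-anchored attestations: (truster, trustee).
trust_edges = [
    ("did:example:steward-1", "did:example:the-bank"),
    ("did:example:steward-2", "did:example:the-bank"),
    ("did:example:trustee-1", "did:example:steward-1"),
    # "Andy's Dodgy Bank" appears in no edge issued by a known anchor.
]

# Publicly known trust anchors (trustees, stewards, agencies).
known_anchors = {"did:example:steward-1",
                 "did:example:steward-2",
                 "did:example:trustee-1"}

def anchored_endorsers(subject: str) -> set[str]:
    """Which publicly known trust anchors attest to this subject?"""
    return {truster for truster, trustee in trust_edges
            if trustee == subject and truster in known_anchors}

print(anchored_endorsers("did:example:the-bank"))          # two stewards endorse it
print(anchored_endorsers("did:example:andys-dodgy-bank"))  # empty set -- no anchored trust
```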


#7

Andy, I couldn’t agree with you more about the power of a public trust graph. I’ll be posting more about that soon.

But one cautionary note about interpreting “lack of reputation” as negative reputation. The downside of that interpretation is that it also taints anyone who does not have a reputation yet. A newcomer to a reputation system needs to have a way to earn a positive reputation without the stigma of having no reputation to begin with. This bootstrap problem is well-known in reputation systems and I’ve yet to see an easy answer. So we’ll need to be creative about how we handle that.
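One common mitigation (again just a sketch of a standard technique, not a Sovrin design decision) is to aggregate with a neutral prior, so that “no history yet” reads as neutral rather than negative:

```python
def smoothed_score(positive: int, negative: int,
                   prior_weight: float = 5.0, prior_mean: float = 0.5) -> float:
    """Beta/Laplace-style smoothing: with no observations the score equals the
    prior mean (neutral), and it only drifts toward the observed ratio as real
    evidence accumulates."""
    return (prior_mean * prior_weight + positive) / (prior_weight + positive + negative)

print(smoothed_score(0, 0))    # 0.5   -- a newcomer starts neutral, not "bad"
print(smoothed_score(2, 0))    # ~0.64 -- a little positive history helps
print(smoothed_score(50, 2))   # ~0.92 -- an established good actor
```

This does not solve the bootstrap problem by itself, but it at least avoids punishing newcomers for the mere absence of a track record.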