Own trust should override Web of Trust #329
There's a user who through my WoT got `medium` trust. I didn't trust them so much, so I made a trust proof with `low` trust. It didn't change the level of trust. I assume the WoT picks the maximum trust it can find, but I think my own Id should have the final say.

It worked for the `distrust` level, but that seems overly negative. I'd like to be able to override the WoT's trust with `none` and `low` too.

Comments
Hmmm... That's a tough one. Seems a bit like special-casing. And then what about transitive trust etc.? Hmmm... Very similar to #264. In cases of organizations where there's a root of trust and employees are one step away from it, this doesn't even work.

I was generally hoping for a WoT that has a reasonably simple, consistent rule. Maybe the problem is with the pure additiveness of the trust level, which isn't flexible enough. Maybe the rule should be that the effective trust level is the trust level reported by the highest-trust-level reporter. Then the graph traversal would just be highest effective trust level first, no adding anything. Seems simple enough.

I guess there's still the question of ties between people with the same trust level. There we could make it additive, just so the result is always deterministic and doesn't depend on the random order of traversal. We'd start with the root, mark all direct neighbors, then repeatedly take from a sorted list of pending nodes everyone at the highest available trust level, and mark their direct neighbors that weren't yet visited (using the max level if multiple are reported). This would probably even solve distrust without special-casing it.
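To make that concrete, here's a minimal sketch of one possible reading of this traversal (hypothetical Rust, not crev's actual code; the `Id`/`Level` types, the proof-map shape, and the tie-breaking are all made up, and the additive tie rule is glossed over):

```rust
use std::collections::{BinaryHeap, HashMap};

type Id = &'static str;
type Level = u8; // 0 = none, 1 = low, 2 = medium, 3 = high

// `proofs[&a]` lists (b, level) pairs: "a reports trust `level` in b".
fn effective_trust(root: Id, proofs: &HashMap<Id, Vec<(Id, Level)>>) -> HashMap<Id, Level> {
    // The root is fully trusted by definition.
    let mut settled: HashMap<Id, Level> = HashMap::from([(root, Level::MAX)]);
    // Max-heap of (reporter's effective level, reported level, target).
    // Popping highest-reporter-first means the first proof to reach a
    // node comes from its highest-level reporter, whose value wins.
    let mut pending: BinaryHeap<(Level, Level, Id)> = BinaryHeap::new();
    for &(to, reported) in proofs.get(root).into_iter().flatten() {
        pending.push((Level::MAX, reported, to));
    }
    while let Some((_, reported, id)) = pending.pop() {
        if settled.contains_key(id) {
            continue; // a higher-level reporter already settled this node
        }
        settled.insert(id, reported);
        for &(to, lvl) in proofs.get(id).into_iter().flatten() {
            if !settled.contains_key(to) {
                pending.push((reported, lvl, to));
            }
        }
    }
    settled
}

fn main() {
    let proofs = HashMap::from([
        ("me", vec![("a", 3), ("b", 1)]),
        ("a", vec![("b", 3)]), // a (high) reports high trust in b...
    ]);
    // ...but my own proof for b carries the highest reporter level,
    // so b stays at 1 (low): own trust wins over the rest of the WoT.
    assert_eq!(effective_trust("me", &proofs)["b"], 1);
}
```

A nice side effect of this sketch: the root's own proofs always carry the highest reporter level, so your own trust naturally overrides the WoT, which is exactly what this issue asks for.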
So as far as I understand, the intent was that if I set a low trust level myself, the WoT can still raise it based on what others report. But in this case I'm confident I don't like someone's reviews, so my WoT doesn't give extra assurance; rather, it disagrees with me. So the missing piece here is that there are two dimensions: how much I trust someone, and how confident I am in that judgment.

The single trust level is a mix of these two, so I can't express my high confidence that someone should have low trust. Adding confidence as an extra field would be a huge complication for users, so I'm not sure how to resolve this problem.
As for the dimensions: lack of confidence is a sign of lack of trust, so it all comes down to one metric. It makes sense. It's just that sometimes we disagree even with people we trust, and it's important to us to get our own way. It's rare, but it happens. I think for both overriding trust and overriding a package review, a simple flag that says "my judgment should take precedence over anything anyone else is saying" would express this well. We could get such behavior with the algorithm I described above, except that a trust level set with that flag would never be overridden by other reporters.
I was also thinking about adding more negative levels of trust. With just one level of distrust, there's no way to distinguish a personal preference from a warning to everyone. In technical terms, it could be "I don't trust this id" vs "nobody should trust this id". For an additive trust algorithm, it could have a negative trust value.
@kornelski There used to be such a level.
Was it removed for a reason? I was thinking about some neutral-sounding level in between.
@kornelski There was, but it was removed.
How about a compromise: the WoT can increase my level by one step? So if I explicitly set `low`, the highest the WoT could raise it to would be `medium`.
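For clarity, this compromise is just a clamp. A minimal sketch, assuming the usual 0..3 level scale (the function name and numeric mapping are made up):

```rust
/// Hypothetical "one step up" rule (levels: 0 = none .. 3 = high):
/// the WoT may raise my explicit rating by at most one level.
fn effective(own: u8, wot: u8) -> u8 {
    wot.clamp(own, own.saturating_add(1))
}

fn main() {
    // I set `low` (1); the WoT computes `high` (3) -> effective `medium` (2).
    assert_eq!(effective(1, 3), 2);
}
```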
I have mixed feelings about that. :D We could maybe change the formula to something like that.
You mean the whole WoT algorithm, or just the step that caps trust based on the user's own trust proofs?
@kornelski: I thought about it in these few spare moments that I sometimes have. I'm leaning towards adding a confidence field to trust proofs. By default the confidence would be a normal, middle level, so existing proofs would keep working as before.
It would affect the effective trust level calculation: the effective trust level would be additive within the same confidence level, but a trust level from a higher confidence level would overwrite the trust level from a lower one. This might be hard to understand, so a couple of examples might be in order; a sketch follows below. The new algorithm is a superset of the existing one, where everyone uses the default confidence. On the other hand, if you don't feel strongly about someone's trust level, you can make it a lower-confidence proof and let people who are more certain overwrite it.
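A minimal sketch of my reading of that rule (hypothetical code, not a crev API; the confidence scale is made up, and the existing additive rule is stubbed as a plain capped sum):

```rust
type Level = u8;      // 0 = none .. 3 = high
type Confidence = u8; // e.g. 0 = low, 1 = default, 2 = high

fn effective_level(proofs: &[(Confidence, Level)]) -> Level {
    // Only the highest confidence layer present counts...
    let top = match proofs.iter().map(|&(c, _)| c).max() {
        Some(c) => c,
        None => return 0,
    };
    // ...and within that layer the existing additive rule applies
    // (stubbed here as a plain sum capped at `high`).
    proofs
        .iter()
        .filter(|&&(c, _)| c == top)
        .map(|&(_, l)| l as u32)
        .sum::<u32>()
        .min(3) as Level
}

fn main() {
    // One high-confidence `low` (1) overrides two default-confidence `high`s (3).
    assert_eq!(effective_level(&[(2, 1), (1, 3), (1, 3)]), 1);
    // With uniform confidence it degenerates to the current additive rule.
    assert_eq!(effective_level(&[(1, 2), (1, 2)]), 3);
}
```

The second assertion shows the "superset" property: if everyone uses the default confidence, the result is the same as today's purely additive calculation.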
I still would have to attempt to implement it and spend some time researching how it behaves, but intuitively it seems to me that it works quite well and is not a big computational problem. For every Id (node in the graph) I'd just have to keep track of the highest confidence seen so far, along with the trust level it established. Feedback & opinions very welcome and appreciated.
If this works well, we might want to add confidence to package reviews as well. I think right now understanding and thoroughness partially play that role.
It's true that understanding and thoroughness are already kind of like a confidence measure.
The confidence system introduces a sort-of negative trust: a previously visited node can now get lower trust, which messes up the graph traversal algorithm a lot. :/ Pretty much any form of general trust override does. Right now the WoT algorithm is quite simple and efficient.
I guess even regular distrust can cause a paradox:

If A trusts B, and B trusts C, then A transitively trusts C. But if C distrusts B, then A distrusts B, which cuts off A's only path to C, so A no longer trusts C; and with C gone, the distrust of B goes away too, and the cycle repeats.
Yes. Thinking more about it: we're just missing the forest for the trees. Tracking a particular user's effective trust level works only in a tiny network that is being bootstrapped. If the project is to succeed, there will be hundreds of thousands of nodes in the WoT, and a couple of users having a slightly higher trust level, because someone else with a higher trust level than yours trusts them more than you do... is just not a problem. What is supposed to protect you are the requirements you set on the reviews themselves.

The main effective trust level calculation needs to be simple and quick, and give good deterministic results. So I would say... if you gave A a lower trust level, and then B bumped it higher... consider lowering the trust level of B if it really bothers you. :) Because sooner or later B is going to give that trust level to C and D, and you have better things to do than tracking everyone in your WoT.
If I don't trust one person that much, lowering trust in everyone around them is not a good solution. It's very heavy-handed, and I would be lowering trust in many more reviewers than I wanted. It isn't even reliable, because other people in the WoT could increase trust in the unwanted person at any time. I don't have a way to permanently record my certainly-low trust, and that trust may change without me doing anything. For now I've just set an explicit `distrust`.
How about using the user's own high-confidence proofs as a cap on maximum trust? Keep the existing algorithm, but when evaluating trust of an Id, if there's a trust proof by the current user with high confidence, use its trust level as the maximum the WoT can assign. For other people's trust proofs, confidence wouldn't change anything.
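Roughly this post-processing step (the names and data shapes are illustrative, not crev's actual API):

```rust
use std::collections::HashMap;

type Id = &'static str;
type Level = u8; // 0 = none .. 3 = high

/// Run the existing WoT calculation unchanged, then clamp every Id for
/// which *I* issued a high-confidence proof down to the level I stated.
fn apply_own_caps(wot: &mut HashMap<Id, Level>, my_high_conf: &HashMap<Id, Level>) {
    for (id, level) in wot.iter_mut() {
        if let Some(&cap) = my_high_conf.get(id) {
            *level = (*level).min(cap);
        }
    }
}

fn main() {
    let mut wot = HashMap::from([("b", 2)]); // the WoT says `medium`
    let mine = HashMap::from([("b", 1)]);    // my high-confidence `low`
    apply_own_caps(&mut wot, &mine);
    assert_eq!(wot["b"], 1); // my own proof caps the result
}
```

The appeal of this variant is that it leaves the existing traversal untouched and only adds a final clamping pass over the results.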
But the more I think about growth of the WoT, the more I think it shouldn't have additive behavior at all. I'd rather have my own trust level always used as the max. If I have low trust in many people, this should never result in anyone getting high trust out of it. They may all be bad at managing their WoT, which is why I have low trust in them.
I guess that's good. This problem made you honestly re-evaluate your trust in "many more reviewers". If you trust A with a high level, and they trust B with a high level, and you don't want B to have high trust, then... you have to re-evaluate your trust level in A. You might want to just lower trust in B manually, but in 10 minutes A will go and give high trust to some C that you don't know about and also wouldn't approve of, and everything turns into a game of Whac-A-Mole.

@pimotte just suggested that crev in such a situation should automatically downgrade trust in A. And I guess I disagree with both of you. I think it's best if crev uses the approach that is simplest to implement, scale, and reason about, and the user is forced to manage their trust decisions somewhat more carefully and face the consequences of their own decisions.
Yeah. This is not right. Distrust is for explicitly banning people. And if you ban people that other people value, you will eventually get banned yourself.

In a way I'm actually happy that tensions like this arise. As much as it seems inconvenient for the user, IMO it is good that users are kept in a state of tension and have to face dilemmas like this. I also want to avoid "having one trust-set privately" and "one trust-set publicly"; that's part of the reason why I want to avoid having separate "transitive trust" vs "direct trust" and other such splits. I thought that the confidence level didn't break any of these principles and allowed someone to honestly express their lack of confidence, but since it doesn't work well on the implementation side, it seems I have to drop it.

So my answer to "A caused me to trust B higher than I want" is: "you should have trusted A less". :)
We could have tools helping with all this. E.g. a tool that lists all users who caused an increase in the effective trust level of people that you directly rated lower, just so you can re-evaluate your trust in them... or maybe just talk with them about it.

Personally, I don't believe in some magical smart algorithm that is going to eliminate all the problems and smoothly calculate perfect "trust scores" for everyone, because at scale such systems slowly degrade, with people doing a lazier and lazier job, since "the system is going to take care of it". My perspective is that complex systems are complex and messy, and what keeps a balance in them and makes them functional are tensions, conflicts, and the need for communication. Self-organization. I'd rather have a system in which sometimes people have to be unpleasant to each other and say "hey A, you gave B high trust, and they are doing a crappy job, so I'm downgrading trust in you" and keep each other honest and careful, than a system with false security and/or an anti-social system where everyone just keeps a private trust list.
I don't like the all-or-nothing situation here. "A" may be an important user who writes good reviews, and such a disagreement would be mainly my loss. If "B" is some central user, I may be unable to use crev at all.

Trust is not objective, and is not context-free. "Trust" is a simplified symbolic representation of multiple dimensions of complex relationships. These reasons still exist, and may get more and more misaligned the more degrees you hop across the WoT. "A" may have a great reason to highly trust "B", and I don't. People may trust each other because they're friends IRL, or because they're coworkers. But I may want to lower trust because one of them misunderstood how to use crev, or they work for my competitor, or I just don't like their face. With an additive WoT, everyone is forced to express their different reasons on the same scale. But I don't want to impose my perspective and my trust requirements on my entire WoT. In a way, if I have something against person "B", that's my problem, and I don't need to rally the whole world against them.

Another way to look at it is that an "additive" WoT is oriented towards growth. As members pull in more people, the WoT grows automatically, and trust levels rise everywhere. I may end up with high trust in people that are several degrees away from me. I may end up highly trusting people I've never heard of. That's useful for a public site like lib.rs, because little curation of the WoT is needed to pull in almost everyone and derive a trust score for every member from the other members. It's more like PageRank than a friend list.

But I'd prefer a "multiplicative" WoT where distance from me can only lower the trust, and never raise it (roughly, if calculating trust of "C" reached through "A" and "B", it'd be trust(me, A) × trust(A, B) × trust(B, C), with levels mapped to the 0..1 range). With a "multiplicative" WoT, if I add someone with a `low` trust level, everyone they pull in can get at most `low` trust from me.
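A toy version of that idea (the 0..1 factor values are made up for illustration; this is a sketch of the proposal, not anything crev implements):

```rust
/// Map levels to 0..=1 factors and multiply along the path, so distance
/// from me can only lower trust, never raise it.
fn path_trust(edges: &[f64]) -> f64 {
    edges.iter().product()
}

fn main() {
    let (high, medium) = (1.0, 0.5);
    // me -(medium)-> A -(high)-> B -(medium)-> C
    let c = path_trust(&[medium, high, medium]);
    assert_eq!(c, 0.25); // C can never exceed my trust in the first hop
}
```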
Please excuse me if anything is incoherent or sounds rude; I was distracted and had to write it down in a rush.
This has to be the case for a wide-scale WoT to work. Otherwise the ratings can be arbitrary and not very useful as a global source of trust.
You either trust A, and the people A trusts, at a certain level, or you don't. All the people that have you in their WoT depend on your decision, and that's why it's important for you to make up your mind. It's OK for you to go cautiously lower if you're not sure, since other people with more certainty can increase the effective trust level for those who trust them highly enough.
Then don't. Accept the fact that the world generally sides with them. Don't let the perfect be an enemy of the good enough.
Yes. For the whole network to be usable it has to be wide. There are thousands (millions?) of crates in many versions to cover, and we're trying to build a loose network of thousands of people to take a closer look at them. I think there's a fundamental disagreement here about the purpose. The fact that you even look into such details as "why does B have trust level medium instead of low" suggests that you're trying to optimize for your small personalized network. But I don't think that's practical. I don't want "1 carefully selected person looked at this crate, so it must be OK". I want "10 people with a low trust level and 1 with medium looked at it, so maybe it's OK, and I'll review one that wasn't reviewed by anyone with a medium level yet".
It does grow automatically, but trust levels don't rise everywhere. They gradually get lower, because if you haven't made someone's trust level high, they cannot produce anyone else with a high trust level. If you hadn't made A's level high, B wouldn't be high. The trust level degrades the further from the root we go, and the fact that it's additive just makes that degradation a bit slower.
Exactly. I don't want a friend list. I want something that can filter out spammers, fraud, sockpuppets (systematic problems) and leave all other reviews somewhat trustworthy for everyone else, so you can narrow down the dependencies that most need direct review. I want people to use information provided by other people they don't know, as much as they need to (when they don't have better sources).
That's already the case.
That's kind of how it works: min(trust(me, A), trust(A, B), trust(B, C)). The further you go, the higher the chance that there was no link at all, or it was "low" and the trust eroded. But you also have ways to limit how far the WoT reaches.

BTW, I don't think multiplication of 0..1 numbers works well. First, it goes downhill really quickly: 0.5^3 = 0.125. And then "this person's trust level is 0.125". What does it even mean? A lot? A little? Good? Bad? And then you get multiple reviews, with different understanding etc., and it all quickly turns into meaningless, abstract aggregates. "To trust my deps I want their score to be at least 0.1552345234". Yyyy... why not 0.19? It's much more concrete and simpler to work with trust requirements like "I want at least 3 people, with an unbroken link of trust of at least medium, to review my deps, with at least a medium level of understanding", or "a person in my WoT with an effective trust level of medium must have an unbroken chain of trust to me of at least medium".

I also don't understand the worry about narrowing down the WoT when, even now, when I do `cargo crev verify` on cargo-crev's source code, I don't get even one review of any version of most of the dependencies. It's not like I have a choice. I have to take a review from anyone as better than nothing. After I actually have good coverage of most crates, I can start to get picky and figure out ways to filter people out.

I've recently added an explicit comment that Crev does not force any WoT implementation, so anyone is free to calculate trust in any way they desire. So we could add a trait or something, so that anyone can plug in their own implementation for their own purposes, even in cargo-crev itself. I hope that after a while, if things go well, there will be many, many people to research this problem on a wider network and figure it out.

Also, note that you still have `distrust` for the worst cases. Also 2: nothing is set in stone, and if we get more voices, arguments, feedback, usage etc., the simpler things are right now, the easier it will be to change direction in the future.
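To make the min-vs-product point concrete, a toy comparison (made-up 0..1 values; a sketch of the two semantics being discussed, not crev's actual code):

```rust
fn main() {
    let chain = [0.5_f64, 0.5, 0.5]; // three "medium" (0.5) links

    // Weakest-link semantics, roughly what's described above:
    let min_based = chain.iter().copied().fold(f64::INFINITY, f64::min);
    // Multiplicative semantics, which decay fast:
    let product: f64 = chain.iter().product();

    assert_eq!(min_based, 0.5);  // still reads as "medium"
    assert_eq!(product, 0.125);  // what does 0.125 mean to a user?
}
```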