News of computer scientist Luca de Alfaro’s Wikipedia trust-coloring system revived – and improved – an idea I’ve been playing with: automated reputation-management for politicians. The idea is to make the concept of honor meaningful again, by creating new social rewards and penalties for behavior that affects the rest of us. (It could, of course, also be applied to journalists, corporate leaders or other public figures.)
De Alfaro’s system, now operating in demo form on a sample of a few hundred Wikipedia pages, ranks the trustworthiness of Wikipedia authors by measuring how long their contributions survive without being edited. Each author’s text is color-coded for trustworthiness:
> Text on white background is trusted text; text on orange background is untrusted text. Intermediate gradations of orange indicate intermediate trust values.
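To make that gradient concrete, here is a minimal sketch of the display step in TypeScript. It assumes the trust value has already been normalized to a 0-to-1 scale; the function name and the exact color arithmetic are my own illustration, not de Alfaro’s implementation.

```typescript
// Hypothetical sketch: map a normalized trust score (0 = untrusted, 1 = trusted)
// to a background color on the white-to-orange scale described above.
// The score itself would come from the survival-based measure; this only
// illustrates how intermediate values become intermediate shades of orange.
function trustToBackground(trust: number): string {
  const t = Math.min(1, Math.max(0, trust)); // clamp to [0, 1]
  // Full orange rgb(255, 165, 0) at trust = 0, fading to white at trust = 1.
  const g = Math.round(165 + 90 * t);
  const b = Math.round(255 * t);
  return `rgb(255, ${g}, ${b})`;
}

// Example: a half-trusted passage gets an intermediate shade of orange.
console.log(trustToBackground(0.5)); // "rgb(255, 210, 128)"
```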
I think it would be useful to be able to do the same thing with politicians’ names every time they appear on the web. Here’s how I think it might be spec’d (rough code sketches follow the list):
- Our software would crawl the pages of factcheck.org, looking for the names of politicians.
- The software would check to see if each name appeared in the context of a correction of an untruth/exaggeration/”misstatement”.
- The reputation of each politician would be scored according to how many appearances his/her name made in such negative contexts.
- Any time the politician’s name appeared on a web page, it would be displayed in a box of the appropriate color. In this case white might not be the best choice for “trustworthy”, since the politician might not be trustworthy, just unranked. So we might go spectrum-wise from green for “honest” to red for “frequent liar”. (On a relative scale – I’m not enough of a Puritan to believe there are people who are 100% honest or 100% dishonest.)
- This color-coded display could be accomplished either on the client side or the server side: on the client side as a browser plug-in, or on the server side as an extension of the publisher’s content management system.
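Here is a rough TypeScript sketch of the scoring crawl. The politician names, the page URLs, and the “correction context” keywords are placeholders I made up; a real version would walk factcheck.org’s actual archive and use proper HTML parsing and language analysis rather than a keyword-proximity heuristic.

```typescript
// Keywords that crudely signal a correction of an untruth, exaggeration, or
// "misstatement". Purely illustrative; a real system would need NLP here.
const CORRECTION_WORDS = ["false", "misleading", "exaggerat", "distort", "misstate"];

// Count appearances of a name with a correction keyword nearby (a crude proxy
// for "named in the context of a correction").
function negativeMentions(pageText: string, name: string): number {
  const text = pageText.toLowerCase();
  const target = name.toLowerCase();
  let count = 0;
  for (let i = text.indexOf(target); i !== -1; i = text.indexOf(target, i + target.length)) {
    const context = text.slice(Math.max(0, i - 300), i + target.length + 300);
    if (CORRECTION_WORDS.some((w) => context.includes(w))) count++;
  }
  return count;
}

// Crawl the given pages and total up each politician's negative mentions,
// giving a relative "dishonesty" score the display layer can map to a color.
async function scorePoliticians(pageUrls: string[], names: string[]): Promise<Map<string, number>> {
  const scores = new Map<string, number>(names.map((n): [string, number] => [n, 0]));
  for (const url of pageUrls) {
    const html = await (await fetch(url)).text();
    for (const name of names) {
      scores.set(name, (scores.get(name) ?? 0) + negativeMentions(html, name));
    }
  }
  return scores;
}
```

And here is a sketch of the client-side option, written as a browser-extension content script. It assumes the scores from the crawl above (0 for “honest” through 1 for “frequent liar”, on a relative scale) have already been shipped to the extension or fetched from a server; none of this is a worked-out design.

```typescript
// Map a relative dishonesty score onto the green-to-red spectrum.
function scoreToColor(score: number): string {
  const s = Math.min(1, Math.max(0, score));
  const hue = 120 * (1 - s);      // 120 = green ("honest"), 0 = red ("frequent liar")
  return `hsl(${hue}, 70%, 85%)`; // pale background so the text stays readable
}

// Walk the page's text nodes and wrap each known politician's name
// in a box of the appropriate color.
function highlightNames(scores: Map<string, number>): void {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  const textNodes: Text[] = [];
  let node: Node | null;
  while ((node = walker.nextNode())) textNodes.push(node as Text);

  for (const textNode of textNodes) {
    for (const [name, score] of scores) {
      const idx = textNode.data.indexOf(name);
      if (idx === -1) continue;
      // Split the text node and wrap the matched name in a colored span.
      const match = textNode.splitText(idx);
      match.splitText(name.length);
      const span = document.createElement("span");
      span.style.backgroundColor = scoreToColor(score);
      span.style.padding = "0 2px";
      span.textContent = name;
      match.replaceWith(span);
      break; // keep the sketch simple: one highlight per text node
    }
  }
}
```

A server-side version would do the same wrapping inside the publisher’s content management system before the page is ever served, so readers without the plug-in would still see the colors.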
I think there would be a strong value proposition for both consumers and publishers. Imagine the impact of seeing your news presented this way:
In response to a question on why the US is in Iraq, Senator X [name boxed in green] said, “….”

vs.

In response to a question on why the US is in Iraq, Senator X [name boxed in red] said, “….”
And imagine the possible impact on politicians’ respect for the truth. Currently, if factcheck.org or some other organization calls you out on a fabrication, the impact is more or less safely sequestered within their limited reach. This way, the impact could spread everywhere, the way good or bad word on one’s reputation spreads through small real-world communities.
Why use factcheck.org as opposed to open ratings? If the reputation ranking were open, I think we could count on enormous amounts of abuse by partisans, including attempts to undermine all trust in the system. The people behind factcheck.org are journalism experts, and the site is avowedly non-partisan. But it might work to make the ranking system “porous” as opposed to fully open, like the new publish2 journalism community, or in fact like granddaddy slashdot. People who had themselves earned a reputation for honesty could be allowed to rank the honesty of others.
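To show what “porous” could mean mechanically, here is a hypothetical TypeScript sketch: anyone may submit a rating, but it only counts in proportion to the rater’s own earned reputation, roughly in the spirit of Slashdot’s moderation. The threshold and weighting below are illustrative, not a worked-out design.

```typescript
// A single honesty rating, weighted by how much reputation the rater has earned.
interface Rating {
  raterReputation: number; // 0..1, earned over time (e.g. past ratings later confirmed)
  dishonestyVote: number;  // 0 = honest, 1 = frequent liar, in the rater's judgment
}

// Reputation-weighted average: a newcomer's vote moves the score far less
// than a long-trusted rater's, and raters below a minimum reputation are ignored.
function porousScore(ratings: Rating[]): number | null {
  const eligible = ratings.filter((r) => r.raterReputation >= 0.2);
  if (eligible.length === 0) return null;
  const totalWeight = eligible.reduce((sum, r) => sum + r.raterReputation, 0);
  const weighted = eligible.reduce((sum, r) => sum + r.raterReputation * r.dishonestyVote, 0);
  return weighted / totalWeight; // still on the 0..1 relative scale
}
```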
There would probably be claims, especially by those with names of an embarrassing color, that factcheck.org (or any other arbiter) is not in fact non-partisan. And so consumers might choose alternative arbiters, if it came to that. But here, too, some reputations would weigh more than others, as they always have.