Study of Trust in Fact-Checking Services

Petter Bae Brandtzaeg and Asbjørn Følstad write this month about Trust and distrust in online fact-checking services [1].

Everyone knows that the Internet is a perfect medium for disseminating information of all kinds, including rumors, errors, propaganda, and malicious lies. Social media have proved to be just as susceptible to misinformation, despite their filtering mechanisms (which are problematic in other ways).

One response to this flood of junk information is a proliferation of “fact-checking” services, which attempt to verify claims in public statements using primary and secondary sources.

The very fact that there are 100 or more such services would seem to be significant, though I’m not sure what it means exactly. This must be the ‘golden age of fact-checking’.

Bae Brandtzaeg and Følstad point out that a fact-checking service depends on establishing a reputation and the trust of users. In particular, what matters is how the user (consumer) perceives the service. There isn’t much point to a “fact checker” that you don’t believe is accurate and honest.

Their study analyzed social media discussion of selected widely used fact-checking services. This data is unstructured (to say the least!), but it does represent unfiltered, publicly stated opinions about the fact-checking services by actual users. These sentiments were coded for statements about “usefulness” and “trust”.

One of their findings is that negative comments were often about “trust”, while positive comments were about “usefulness”.

Many negative comments complained about perceived bias in the service, which is certainly consistent with the vast body of research indicating that people do not readily change strong opinions in the light of facts. In this case, they dispute the motives of the messenger, rather than their own opinions.

Positive comments about the “usefulness” indicate that the service may have achieved enough trust (or congruence with preconceptions) that the information influences the user’s opinion. This is consistent with the idea that someone who is both skeptical of a claim and trusts a fact checker will find the check useful.

The authors note that there may be a great need and desire for fact checking, but most people don’t use it. (For example, me.) If nothing else, the perceptions of these systems might well evolve if they are more widely used.

The authors point out that for many users the distrust in the fact-checking service isn’t really specific to the behavior of the service itself; it is a distrust of everything. As they put it, this

“lack of trust extends beyond a particular service to encompass the entire social and political system” (p. 70).

The long and the short of it is that fact checking needs to be highly transparent. Trust is created by knowing who is “checking” and how they do it. The authors also suggest that reliance on “expert” opinion should be minimized, and that “crowd sourced” verification may be especially useful.

Reading about this study, I am struck by the contrast with the widely held dogma of “trustlessness” of the cryptocurrency and blockchain world. Nakamotoan blockchains are a cure for everything, including fake news, some say.

It is thought that these “trustless” systems “can’t be evil”. Furthermore, in a medal-winning rhetorical judo throw, the anonymous (or at least unaccountable) blockchain is considered “transparent”. Some even explicitly imagine that such “trustless” systems can fix journalism (generally through “market-based” processes).

The Brandtzaeg and Følstad study makes pretty clear that the key to trustworthy information is transparent and accountable processes. I don’t see how you can hope to build trusted information on a foundation of “trustless” technology. Frankly, I think blockchain and other technologies are largely irrelevant to the problem of “fake news”.

Finally, I note that “trust” is an end-to-end property. People trust people. The technology in between the two people is relevant only to the degree that it obscures or enhances the ability of people to trust each other.

The challenge is that digital technology is naturally opaque, and it is easy to be deliberately deceptive. In order to be trusted, a digital service must work hard to make clear to its human users who the human and other sources really are, and what their motives really are.

This is surprisingly difficult, and I think that “trustless”, peer-to-peer systems make it even harder to establish this trust.

  1. Petter Bae Brandtzaeg and Asbjørn Følstad, Trust and distrust in online fact-checking services. Commun. ACM, 60 (9):65-71, 2017.
