
How Facebook Fact-Checking Can Backfire


Since Facebook came under substantial fire for the sheer amount of cruddy, misleading, or outright wrong information on its platform, the company has tried a number of interventions to help make users more aware of what they’re seeing. One of those solutions has been to establish a network of third-party fact-checkers and to allow those checkers to slap labels on articles that are false or “disputed.”

It makes sense — if something is wrong, tell users that it’s wrong — but a new study released last week found that the approach has a significant downside. The report (from academics Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand) points out that the labels generate what the researchers call an “implied truth effect”: when some posts carry a fact-checked-and-false label, users tend to assume that content without a label has also been fact-checked and is therefore true.

“This is a huge problem, because fact-checking is hard and slow, whereas making up fake stories is fast and easy,” David Rand, an associate professor of Management Science and Brain and Cognitive Sciences at MIT, said over the phone. “Fact-checkers only ever wind up fact-checking a tiny fraction of all the bad content that’s out there. And we show evidence that, indeed, in the condition where we put warnings on some of the false headlines, it makes people believe those headlines less, but it makes them believe all the other false headlines more.”

Rand said that he wasn’t particularly surprised by the findings. “I think the thing that we found surprising is that nobody had pointed this out before,” he added. “This ‘implied truth effect’ is actually totally rational and reasonable. It’s not some weird cognitive bias or something.”

Much of the misinformation floating around Facebook has partisan leanings, but the study found that, contrary to conventional wisdom, the warning labels actually made people less likely to share false headlines that they agreed with. “When people saw the false warnings on headlines that align with their ideology, the warning had a much bigger effect than when it was put on headlines that didn’t align with the ideology,” Rand said. Some of that might be, he theorized, because “if you’re decreasing belief in sharing, the more belief in sharing there is in the first place, the more room you have to decrease it.”

The implied truth effect makes intuitive sense, and it poses a challenge for every large platform. Just this weekend, Twitter marked a video posted by White House social-media director Dan Scavino, and retweeted by President Trump, as “manipulated media.” The conclusions also put these large platforms in a tough spot, because they operate at a scale and speed at which fact-checking can never keep up with the deluge of user-generated content. It is impossible to verify everything.

“I respect the criticism and the suggestions a lot but claiming that users believe/share more when something is not fact-checked, does seem to sacrifice the role and the value of fact-checking for Facebook’s policies and journalism’s scalability issues,” Baybars Örsek, the director of Poynter’s International Fact-Checking Network, wrote on Twitter.

What, then, can Facebook do? Rather than abandon fact-checking altogether, Rand suggested that large platforms could rely on the “wisdom of the crowd.” The idea, supported by various studies, is that “even if each individual is not very good, you get a bunch of people to do it, and you average their responses, the result actually comes out surprisingly well.” This is a more scalable approach than relying solely on trained fact-checkers, because it requires less individual expertise.

Crowdsourced ratings of truthiness come with their own obstacles. For one thing, a poorly designed system could be easily gamed by motivated and coordinated users. The solution there would be to poll users at random, rather than allow any user to provide feedback (an example of the latter is Reddit’s upvote/downvote system). Theoretically, enough feedback would be solicited that you end up with a workable average. Even if the randomly polled users voted with the intent of skewing the reliability ratings, Rand said that “most people actually don’t care about politics. I think Americans have a way exaggerated sense of the extent to which Americans care about politics, because the people on Twitter yelling really loud are very loud.”
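To make the mechanics concrete, here is a minimal sketch of how a randomly polled crowd rating could work. It is purely illustrative: the function, the data structures, and the sample size are hypothetical, and nothing below reflects Facebook’s or the researchers’ actual systems.

```python
import random
import statistics

def crowd_rating(article_id, users, ratings_by_user, sample_size=50):
    """Hypothetical sketch of a randomly polled crowd rating.

    `users` is a list of user IDs; `ratings_by_user` maps a user ID to a
    dict of {article_id: accuracy score, e.g. 1-5}. All names and numbers
    here are illustrative assumptions, not a real platform's API.
    """
    # Poll a random subset of users rather than letting anyone vote,
    # which makes coordinated manipulation harder.
    polled = random.sample(users, min(sample_size, len(users)))

    # Collect whatever ratings the polled users have for this article.
    scores = [
        ratings_by_user[user][article_id]
        for user in polled
        if article_id in ratings_by_user.get(user, {})
    ]

    if not scores:
        return None  # not enough feedback yet

    # Averaging many noisy individual judgments is the "wisdom of the
    # crowd" step: individual errors and biases tend to wash out.
    return statistics.mean(scores)
```

In practice a platform would presumably also weight raters, require minimum sample sizes, and audit for manipulation, but the two moves above, random polling plus averaging, are the core of the idea Rand describes.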

The fact-check labels, for instance, seem like an obvious fix — tell people that what they are seeing is wrong — but that feature has a cascading effect that warps the perception of every piece of content in a user’s feed. They might think that anything not deemed false is true. On the other hand, crowdsourcing the veracity of a piece of news or a news outlet seems like a bad idea — everyone has significant biases and knowledge gaps, and the system can be gamed — but on an empirical level, it seems to work out. A considerable challenge in coming up with the right solutions for combating misinformation on social media is that, as Rand put it, some solutions seem “intuitively compelling” but are not empirically effective.
