
WhatsApp Says It’s Too Late to Stop Far-Right Fake News in Brazil

On a smartphone, journalists watch Jair Bolsonaro, an extreme right-wing presidential candidate, speak live about the results of the first round of the election. Photo: picture alliance via Getty Images

In the New York Times yesterday, three authors of a report on misinformation in Brazil begged the Facebook-owned messaging service WhatsApp to make three changes to stop the spread of fake news. Brazil is currently in the midst of electing its next president and is a heavy user of WhatsApp: 44 percent of Brazilians get their political news from the service. Right now, right-wing politician Jair Bolsonaro — who has a history of supporting torture, military dictatorships, and violent repression of minorities — is heavily favored to win. And part of his potential victory can be credited to a campaign waged on WhatsApp, some of it reportedly orchestrated by “a multimillion-dollar ‘anti-Workers’ party campaign,” per a report in the Guardian. Brazil isn’t the only country that has seen this problem. As fake news rumors on WhatsApp in India began causing riots and deaths, the government decided that the only solution was to shut off the entire internet.

The New York Times op-ed writers Cristina Tardáguila, Fabrício Benevenuto, and Pablo Ortellado analyzed political images shared in 347 popular public group chats on WhatsApp. What they found was disturbing:

From a sample of more than 100,000 political images that circulated in those 347 groups, we selected the 50 most widely shared. They were reviewed by Agência Lupa, which is Brazil’s leading fact-checking platform. Eight of those 50 photos and images were considered completely false; 16 were real pictures but used out of their original context or related to distorted data; four were unsubstantiated claims, not based on a trustworthy public source. This means that 56 percent of the most-shared images were misleading. Only 8 percent of the 50 most widely shared images were considered fully truthful.

They asked WhatsApp to make three changes within the next two weeks: restrict to five the number of times a message can be forwarded, reduce from 256 the number of users a single WhatsApp user can contact at once, and limit the size of new groups.

A WhatsApp spokesperson, when asked for comment by New York, said, “It’s not technically possible for WhatsApp to make a product change in a week. It requires time to roll out the change and then each user has to upgrade their app. This process usually takes months to complete.”

In the Brazilian newspaper Folha de S.Paulo, WhatsApp vice-president Chris Daniels wrote: “While the desire to spread and consume sometimes harmful sensational information pre-dates the Internet, it certainly makes it easier. And because information — both good and bad — can go viral on WhatsApp even with these limits in place, we have a responsibility to amplify the good and mitigate the harm.”

The authors of the New York Times op-ed responded to these claims simply: “In India, it took only a few days for WhatsApp to start making adjustments. The same is possible in Brazil.”

On a larger scale, the idea of actually moderating WhatsApp is a true nightmare. WhatsApp is fundamentally a one-to-one messaging service, and an end-to-end encrypted one at that. Most of the activity on WhatsApp is either two people chatting or, at most, conversations in groups capped at 256 members. It's much more akin to Messages on iOS or text messaging than to one-to-many platforms like Facebook, Twitter, or YouTube, the preferred vehicles of political manipulation here in the United States.

While this means bad actors on WhatsApp can't gin up a half-million followers to broadcast fake news or propaganda, the fake news stories that do propagate are much, much harder to stop. To put it in high-school terms, Facebook and its ilk are someone printing something mean in the yearbook. WhatsApp is gossip whispered in the hallways: harder to track, and perceived by recipients as fundamentally more trustworthy.

It’s easy to imagine how Facebook and Twitter and YouTube could moderate their platforms better, though the answer would cost each company potentially billions of dollars: hire many, many more human moderators. When I look at the structure of WhatsApp, though, I don’t see many easy answers — just human nature, cancerous political actors, and the ability of technology to amplify both of those to a terrifying new degree.
