The Guardian has published a fascinating investigation by the journalists Micah Loewinger and Hampton Stall about how the violent far-right rioters who breached the U.S. Capitol last week used the walkie-talkie app Zello to both plan the attack and, in the case of at least two of them, communicate during it.
Of course, the invaders used many different apps to plan and post and message during the attack. What makes the Zello angle noteworthy is that for a long time, the app had been seen as a haven for far-right extremism but had refused to take any but the most perfunctory steps to address the problem. “Zello ultimately banned some ‘boogaloo’ and outright white nationalist groups and users,” report Loewinger and Stall. The app also de-indexed militia groups so they wouldn’t be easily accessible via Google, and “block[ed] terms such as ‘Oath Keeper’ from its in-app search function.” But there was still a lot of genuinely ugly and scary stuff on there, and the app had staked out a rather assertive pro-free-speech stance that was at odds with the perspectives of many journalists and activists who monitor online hate.
The Guardian story, then, gives rise to a pretty straightforward accusation that seems to have caught on online: Zello contributed directly to last week’s terrible events. This is part of a larger story line, which is that online radicalism has been fueled by lax content-moderation policies on the part of the major platforms where hundreds of millions or billions of people communicate. Hence the flurry of clampdowns over the last week, including the banning of Donald Trump from Twitter and other platforms and the decisions by Apple and Amazon to cut off Parler.
It goes without saying that people often misunderstand the connection between private platforms and the First Amendment — namely, that there isn’t one. Private platforms can ban whom they want, for pretty much any reason they want. There are strong arguments to be made that they should ban certain content, even if there’s fierce disagreement over where the lines should be drawn.
But in the wake of a historically terrifying attack on the seat of U.S. government, it’s possible some people, understandably fearful of the threat of far-right terror encouraged by a deranged outgoing president with nothing to lose, are overstating the impact — and underestimating the potential unintended adverse consequences — of banning far-right users and platforms.
Let’s imagine that, prior to the attack on the Capitol, the outrage over Zello had reached a boiling point and the platform had, as a result of legal action or the pullout of a hosting provider, been knocked offline entirely. What effect would this have had on the ability of the radical Trumpians to plot their attack and communicate during it? Approximately zero. To communicate, they could have simply picked another walkie-talkie platform — there are many to choose from — and set up fresh accounts. Or they could have bought actual walkie-talkies. Or they could have bought burner phones. Their options for communicating untethered from their legal identities would have been too numerous to count.
As for the plotting, they would have had a similar multitude of options. They could have done some of it on Facebook, for example. As the New York Times’ Sheera Frenkel told Michael Barbaro on a recent episode of The Daily, Facebook hosted Stop the Steal, likely the single internet group most responsible for gathering and riling up conspiracy-addled pro-Trump dead-enders. It was shut down just two days after it launched, but that was more than enough time for it to set last week’s events in motion:
And in its short life of 48 hours, it had managed to attract 320,000 people. But more importantly, it spawned hundreds of other Stop the Steal groups on Facebook and on Twitter. And now they’ve got Reddit boards. And so that inaction by Facebook, that two days that it took them to notice the group, to shut it down, was enough time to get their followers united against this banner of Stop the Steal.
This group was hosted, albeit briefly, on the most mainstream social-media site of all. Surely Facebook continues to host far-right content in private groups that exist beyond the reach of overstretched moderators.
The real pros — the scary types with designs on actual terror, rather than just LARPing — can simply organize underground anyway. As Frenkel said in another episode of The Daily, the recent clampdowns have driven countless right- and far-right social-media users to encrypted services like Signal and Telegram (Zello itself offers encrypted channels, but the messaging that took place during the Capitol attack was public).
In these venues, any hint of restraint tends to fall away:
And so their language, their rhetoric, just becomes, day after day, more extreme. They start to see themselves as the true believers, the true soldiers of Trump, who have been kicked off and banned and silenced by the mainstream platforms and have now found their way to these encrypted channels where they can plan what they say is real revolution.
Again, none of this is to say platforms shouldn’t ban at least some radical content. But if two minutes of reflection reveals that a given instance of non-moderation currently in the news would have had no measurable effect on the sort of thing everyone wants to prevent, it suggests we might not be looking at this problem the right way.
In the rush to deplatform, it’s also possible to neglect unintended consequences. There has been disturbingly little research or journalistic coverage of the possibility, implied by Frenkel’s reporting, that deplatforming might make real-world acts of terror more likely to occur. One reason for that is simple and practical: Law enforcement has a tougher time monitoring groups that flee underground. “[I]magine you’re a local police officer in a state like Ohio or Pennsylvania, and you now have to follow dozens of Signal groups and perhaps hundreds of Telegram channels to figure out exactly what these militias are planning next,” she told Barbaro in yesterday’s episode. “By dividing their efforts like this, they’re really making it as hard as possible for law enforcement to decide what to do ahead of these rallies.”
But there’s also the psychology of radicalism to consider. We don’t actually know enough about radicalization to be able to say for sure whether, when it comes to the luckily tiny group of Americans who are considering violent acts, it is better to have them in public social-media groups (by “public” I’m including pseudo-public groups, like large “private” Facebook groups that are easy to join via invite) or in truly private ones. This is all speculation, but there are certainly reasons to think the answer might be “public.” For one thing, a public group is more likely to have a mix of relative moderates (someone who is suspicious of the election results but not 100 percent convinced Biden stole the election) and true radicals (someone who is so furious over the “steal” that they are ready to bring a rifle to Washington). In such a setting, it might be the case that the moderates can temper the worst impulses of the radicals, providing strong social sanction against (for example) explicit threats.
Underground channels, though, are much more likely to purge moderates or bar them entry. As Frenkel notes, that’s where things get much scarier — that’s where you go to actually plot something. It’s reasonable to argue that the more underground far-right channels are scattered across the internet landscape, the more peril we may face from their denizens. There could be real advantages to developing moderation practices that have the effect of corralling at least some of these would-be plotters into larger, more diverse (again, relatively speaking) public groups.
I’m leavening this story with a lot of italicized mights for a reason. People are far too confident, and develop far too deterministic theories, about the nature of online radicalization. There is not always a straightforward correlation between the amount of public online radicalism and the amount of resultant real-world violence. Keeping the latter at bay is the important challenge here — and cracking down on public speech probably feels to a lot of well-intentioned people like a useful step toward that goal. But it’s important not to mistake that feeling for strong, well-founded policy. Countering extremist attacks requires an intelligent, evidence-based approach rather than a knee-jerk fixation on short-term fixes.