
Can 10,000 Moderators Save YouTube?

Spider-Man gives Princess Elsa an ultrasound (not canon). Photo: Spiderman TV/YouTube

YouTube’s status as a breeding ground for wannabe far-right media personalities is fairly well-known at this point, but recently a weirder corner has come to light: violent and disturbing videos apparently aimed at, and sometimes even about, children. Hundreds of thousands of channels and millions of videos exploiting children, either as viewers or, in some cases, as video subjects, have been removed over the last few weeks — everything from animations of bootleg versions of famous children’s characters engaging in violent activities to live-action videos featuring sobbing children. Nervous advertisers have been pulling campaigns, and so, last night, YouTube CEO Susan Wojcicki announced that the company would hire up to 10,000 human moderators to keep tabs on videos and comments submitted to the site.

If this sounds familiar, it’s because it’s directly from the megaplatform playbook: Whenever you’re criticized for serving up awful content submitted by users, announce that you’re hiring four or five digits’ worth of moderators who will make the site a safer place or whatever. Facebook did it earlier this year after a spate of news stories about crimes being committed on its much-hyped live-video platform, and then again in the midst of criticism over its role in last year’s election.

In almost all cases, these new hires are contract workers for third-party firms, generally in the global south. They’re getting paid to view heinous things — from bizarre videos about children’s characters to actual murder, suicide, and sexual assault — so you don’t have to. The narrative the tech companies like to sell is that the work these human moderators perform will be used to train computers to eventually take over the job. Consider the Rube Goldbergian New York Times headline: “YouTube Hiring More Humans to Train Computers to Police the Site.” This is a central premise of the megaplatform business model: It has low overhead because complex, self-teaching software handles most of the day-to-day and minute-to-minute functions, like flagging offensive content. Hard to make such an immense profit if you’re paying tens of thousands of humans with human brains and human living standards to mind the store. (It’d be harder still if those humans were protected by American labor laws.)

But the fact that these megaplatforms seem unable to stop hiring moderators forces a question: Is this auto-moderated future actually coming? And if it isn’t, is it physically possible to effectively moderate a site with billions of users and petabytes of data using humans? Some 400 hours of video are uploaded to YouTube every minute, which works out to roughly 65 years of footage going online every day. How can human beings — even 10,000 of them — properly process that much video with the kind of thoughtfulness and cultural nuance good moderation requires? Moderation is a much harder job than most people assume. Not only are there contextual and social barriers to overcome (is a comment to a friend different from a general statement? How can I tell if something is sarcastic?), but the work itself is exhausting: watching violent, hateful, exploitative, and disturbing content for hours each day is a lot to ask of hourly workers who likely don’t receive adequate mental-health resources on top of it. There’s a reason moderation jobs get shipped overseas.

It’s not just YouTube: Facebook faced a similar question in a congressional hearing last month, when Louisiana senator John Kennedy pointed out that there’s no way in hell Facebook has authenticated all 5 million of the advertisers it does business with every month. The immense scale that makes these megaplatforms so useful (and profitable) also makes them uncontrollable. Moderators, even ones who know precisely what they’re looking for, amount to a drop in the bucket, stuck playing a cosmic-scale game of Whac-A-Mole.

YouTube, until now, has primarily relied on user reporting as a first line of defense, the idea being that a community, even at a global scale, will be able to self-police. This strategy, as Ben Thompson wrote earlier today, is “Pollyannish.” The assumption that web tools will usually be used for the collective benefit of all is, at this point, a farce. And who defines what “benefit” is? White supremacists believe that preserving the white ethno-state is working toward a beneficial end. Few seeking out videos about the benefits of ethnic cleansing are going to flag them as harmful. YouTube’s automated filters don’t fare much better: In March, users discovered that videos concerning LGBTQ issues were being hidden for anyone who had the site’s “Restricted Mode” enabled. In August, videos documenting violence in Syria were taken down by the automated system. Last month, a Google-made ad for Google’s Chromebook was flagged as spam.

Arguably worse, though, is that the very automation that is supposed to solve this bias problem actually works against the moderators by drawing links between things that shouldn’t, or ordinarily wouldn’t, be linked together. At the core of the most recent moral panic over YouTube was a strategy uploaders were using to game the recommendation process. Parents plop their kid down in front of the screen, put on a Peppa Pig video, and let the algorithm do its thing — theoretically displaying related videos until the sun burns out. But by creating fake videos involving popular characters like Peppa Pig, Elsa from Frozen, and Spider-Man, the bootleg YouTube creators — whose content ranged from merely weird to deeply unsettling — were able to place their inappropriate content in the autoplay and related-video queues. YouTube’s automated system couldn’t tell the difference between a Spider-Man video that teaches counting and one that features Spider-Man operating on a pregnant Elsa (to name one persistent theme of the disturbing bootleg videos).

This is the contradictory nature of moderation at immense scale. As they’re designed now, Facebook and Google need you to explicitly inform them that something is bad, but they’re more than happy to interpret any other type of engagement with that content as endorsement. A hundred reports flagging a video as child exploitation tells Google that it’s child exploitation. A hundred comments saying the same thing below the video tells Google that the video is getting a lot of community interaction, which is great for the company. They’ve created algorithms that make it incredibly easy — too easy, in fact — to spread and share information, even without actively thinking about sharing it. A like, or a comment, and suddenly a video gets boosted for other people to see. But these same systems are somehow unequipped to take content back out of the database. That’s a technical problem, a deeper strategic problem, or both. And throwing more people at the problem — even 10,000 — isn’t going to help.