On Thursday, ProPublica published a report demonstrating that Facebook advertisers were able to buy ads to reach “the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or ‘History of why jews ruin the world.’” ProPublica paid $30 to test the ads, which were immediately approved and published by Facebook.
When the news organization contacted Facebook, the company removed those categories from its ad targeting. But enterprising journalists at other publications began to undertake their own experiments. A few hours after ProPublica’s story, Slate found that even though Facebook had removed a host of anti-Semitic categories, you could still buy ads targeted at people who’d expressed interest in “killing Muslim radicals” and the “Ku-Klux-Klan.”
On Friday morning, BuzzFeed tested Google’s ad targeting using a similar method, and found similarly disheartening results: If you try to buy ads pegged to an offensive search term or phrase, Google will suggest that you buy ads against other similar phrases. For example, BuzzFeed reported that “Why do Jews ruin everything” prompted suggestions like “the evil jew” and “jewish control of banks.” Later in the day, the Daily Beast completed the tech-advertising trifecta and reported that you could use Twitter to successfully target your ads to people who would likely engage with the N-word.
Google and Facebook have said they’ve blacklisted the offensive terms highlighted by ProPublica and BuzzFeed. Twitter did not offer comment to the Daily Beast, but it seems likely that the company will follow suit. But, of course, the question is: How is it possible that all three have the same problem?
On a technical level, the answer is fairly simple: Facebook, Google, and Twitter rely on user input to create the categories advertisers can target. In other words, you were able to target “Jew haters” on Facebook because people on Facebook have described themselves as “Jew haters,” and Facebook has scooped up that personal description, along with hundreds of millions of others, and placed it in a database waiting to be selected by an advertiser (or, in this case, a reporter). Google’s algorithms have determined that people who search for things like “Why do Jews ruin everything” also search for “the evil Jew,” and therefore recommended the latter as an associated target for any advertiser choosing the former.
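To make the mechanism concrete, here is a minimal sketch in Python of how a pipeline like this produces offensive targeting categories. It is an illustration of the general pattern only, not any platform’s actual code, and every name in it is hypothetical.

```python
# Illustrative sketch only: a toy model of interest-based ad targeting.
# All names and data structures here are hypothetical, not any platform's real code.
from collections import defaultdict

# Users describe themselves in free text; the platform indexes those
# self-descriptions verbatim as targetable "interest" categories.
profiles = {
    "user_1": ["cycling", "jew hater"],  # offensive input is stored as-is
    "user_2": ["cycling", "baking"],
}

def build_targeting_index(profiles):
    """Map each raw self-described interest to the set of users who listed it."""
    index = defaultdict(set)
    for user, interests in profiles.items():
        for interest in interests:
            index[interest.lower()].add(user)  # note: no moderation step anywhere
    return index

index = build_targeting_index(profiles)

# An advertiser (or a reporter) types a phrase into the ad-buying tool, and the
# platform returns a matching audience, because the category was generated by
# users and never reviewed by a human.
print(index.get("jew hater"))  # {'user_1'}

# Keyword suggestion works analogously: phrases that co-occur in search
# sessions get recommended alongside whatever term the advertiser picks.
co_searches = {
    "why do jews ruin everything": ["the evil jew", "jewish control of banks"],
}

def suggest_keywords(term):
    """Return co-occurring search phrases as suggested ad targets."""
    return co_searches.get(term.lower(), [])

print(suggest_keywords("Why do Jews ruin everything"))
```

The point of the sketch is the absent step: nothing between the user’s raw input and the advertiser-facing index ever asks whether the category should exist at all.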
On an organizational level, the answer is a bit more difficult. As is often the case, Facebook and Google want to hide behind the technical answer, blaming the algorithms for their lack of moral sentiment or historical understanding. But blaming algorithms is another way of saying you didn’t think that examining them for blind spots was worth the time — it’s essentially an admission of laziness.
Are Google and Facebook lazy? Maybe. It certainly seems unlikely that the problem is incompetence, that the companies are incapable of building their systems to reflect moral values. Facebook has previously said that its algorithms are capable of identifying hate speech; Google has similarly emphasized AI rather than humans as the solution to hate speech on YouTube. These platforms talk constantly about AI and machine learning and neural networks as their future. If they can’t stop hate speech on the profit-generating portion of their systems, why should we trust that they’ll be able to do it elsewhere?
But as unlikely as they both seem, laziness and incompetence are each better than the third option: greed. It seems highly unlikely that these platforms knowingly allow offensive language to slip through the cracks to turn a profit — Facebook says, “We have looked at the use of these audiences and campaigns and it’s not common or widespread” — but it’s also hard to understand why the companies didn’t prevent it long ago. Facebook and Google want to hide behind their tech, but the buck has to stop somewhere, and they’ll need to figure out which is worse: being lazy, being incompetent, or being greedy.