On Thursday, the Financial Times outlined how Facebook-owned messaging app WhatsApp — which is among the world’s biggest messaging platforms — has become an open market for child pornography, with groups dedicated to trading illegal images that are both easy to find and easy to join.
Israeli nonprofits Netivei-Reshet and Screensaverz warned Facebook in September that the app had a problem: groups of 256 users were trading images of child exploitation, discoverable through simple search terms like “cp” or avatars featuring explicit images.
The Times found that the groups are still very much active, even after Facebook was warned. And there may be a reason for that. WhatsApp has only 300 employees in total, and it’s unclear how many are assigned to moderate content on the platform. Even if WhatsApp tasked every single one of them with monitoring for child pornography, the service’s 1.5 billion users would leave each employee responsible for 5 million of them.
The problem of moderation is not unfamiliar to Facebook. After facing criticism for letting its own platform run amok, Facebook beefed up its moderation team, doubling it from 10,000 to 20,000 people over the past year. Still, these are contract positions, usually based outside the United States, that require workers to look at extremely unpleasant material and make often bizarre judgment calls.
And if WhatsApp were to follow Facebook’s model and build out its own (low-paid, quickly trained) moderation force, the job would be even harder. WhatsApp encrypts users’ communications end-to-end by design, meaning that even if its employees wanted to look at the messages flying through the service, they couldn’t.
“WhatsApp has a zero-tolerance policy around child sexual abuse. We deploy our most advanced technology, including artificial intelligence, to scan profile photos and images in reported content, and actively ban accounts suspected of sharing this vile content,” the company said in a statement. It said it had banned hundreds of thousands of users for sharing child pornography. Still, the Times was easily able to find examples of the material still being shared.
But the Israeli nonprofits and the Financial Times didn’t ferret out these groups by going undercover into secret rings of child pornographers; they just used the search functionality built into WhatsApp.
“Crimes against children are getting worse and worse, the kids are getting younger and younger, and the acts are getting more violent. It’s all being fueled by these platforms,” said Hany Farid, a professor of computer science at Berkeley who developed the PhotoDNA program used by hundreds of companies to prevent the spread of child pornography, speaking to the Times. “The problem is deep-rooted in these companies. It’s the ‘move fast and break things’ model.”