A consortium of media organizations is reporting revelations from a trove of leaked internal Facebook documents. Most of the documents were provided to Congress, the Securities and Exchange Commission, and a consortium of news organizations by lawyers representing Facebook whistleblower Frances Haugen. She recently testified before Congress about a range of troubling issues and policies at the social-media giant and has filed at least eight whistleblower complaints with the SEC alleging that the company puts profit over public safety. In addition, the Washington Post has reported that a new Facebook whistleblower, a former employee like Haugen, has submitted a sworn affidavit to the SEC making similar allegations. Facebook, which is preparing to announce a company name change, has been pushing back against the reports. The company has denied that it values profit over public safety, emphasized the effectiveness of its safeguards, and claimed that the leaked documents present a cherry-picked, negative view of its internal operations. Below, a guide to the latest revelations from the leaked Facebook papers.
Facebook is facing an existential crisis over younger users
Facebook founder and CEO Mark Zuckerberg announced on Monday that the company was undertaking a long-term “retooling” to make engaging younger users its “north star.” As our Verge colleague Alex Heath explains, the leaked documents detail why:
Teenage users of the Facebook app in the US had declined by 13 percent since 2019 and were projected to drop 45 percent over the next two years, driving an overall decline in daily users in the company’s most lucrative ad market. Young adults between the ages of 20 and 30 were expected to decline by 4 percent during the same timeframe. Making matters worse, the younger a user was, the less on average they regularly engaged with the app. The message was clear: Facebook was losing traction with younger generations fast. The “aging up issue is real,” the researcher wrote in an internal memo. They predicted that, if “increasingly fewer teens are choosing Facebook as they grow older,” the company would face a more “severe” decline in young users than it already projected.
And young adults really don’t like Facebook:
“Most young adults perceive Facebook as a place for people in their 40s and 50s,” according to [a March presentation to the company’s chief product officer, Chris Cox]. “Young adults perceive content as boring, misleading, and negative. They often have to get past irrelevant content to get to what matters.” It added that they “have a wide range of negative associations with Facebook including privacy concerns, impact to their wellbeing, along with low awareness of relevant services.”
The March presentation to Cox showed that, in the US, “teen acquisition is low and regressing further.” Account registrations for users under 18 were down 26 percent from the previous year in the app’s five top countries. For teens already on Facebook, the company was continuing to “see lower or worsening levels of engagement compared to older cohorts.” Messages sent by teens were down 16 percent from the previous year, while messages sent by users aged 20–30 were flat.
Heath notes that there are warning signs for Instagram, too:
Instagram was doing better with young people, with full saturation in the US, France, the UK, Japan, and Australia. But there was still cause for concern. Posting by teens had dropped 13 percent from 2020 and “remains the most concerning trend,” the researchers noted, adding that the increased use of TikTok by teens meant that “we are likely losing our total share of time.”
The company also estimated that teenagers spend two to three times more time on TikTok than Instagram.
Facebook did not crack down on some of its most toxic and prolific individual users
Some individuals who operate multiple Facebook accounts (which the company calls Single User Multiple Accounts, or SUMAs) have been responsible for a lot of the most divisive and harmful content on Facebook. But as Politico reports, the leaked documents indicate the company failed to address the problem after identifying it:
[A] significant swath of [SUMAs] spread so many divisive political posts that they’ve mushroomed into a massive source of the platform’s toxic politics, according to internal company documents and interviews with former employees. While plenty of SUMAs are harmless, Facebook employees for years have flagged many such accounts as purveyors of dangerous political activity. Yet, the company has failed to crack down on SUMAs in any comprehensive way, the documents show. That’s despite the fact that operating multiple accounts violates Facebook’s community guidelines.
Company research from March 2018 said accounts that could be SUMAs were reaching about 11 million viewers daily, or about 14 percent of the total U.S. political audience. During the week of March 4, 2018, 1.6 million SUMA accounts made political posts that reached U.S. users.
Facebook’s software thought Trump violated the rules, but humans overruled it
One of Trump’s most inflammatory social-media posts came on May 28, 2020, when he warned those protesting George Floyd’s murder in Minneapolis that “when the looting starts the shooting starts!” The AP reports that Facebook’s automated software determined with almost 90 percent certainty that the president had violated its rules. But Trump’s post, and his account, stayed up, even as the company found that conditions on Facebook deteriorated rapidly immediately after his message:
The internal analysis shows a five-fold increase in violence reports on Facebook, while complaints of hate speech tripled in the days following Trump’s post. Reports of false news on the platform doubled. Reshares of Trump’s message generated a “substantial amount of hateful and violent comments,” many of which Facebook worked to remove. Some of those comments included calls to “start shooting these thugs” and “f—- the white.”
On May 29, CEO Mark Zuckerberg wrote on his Facebook page that Trump had not violated Facebook’s policies, since he did not “cause imminent risk of specific harms or dangers spelled out in clear policies.” The company told the AP that its software is not always correct, and that humans are more reliable judges.
Politics has often informed internal decision-making
The Wall Street Journal notes that there has been contentious internal debate about the far right’s use of Facebook, and that political considerations loom large within company management:
The documents reviewed by the Journal didn’t render a verdict on whether bias influences its decisions overall. They do show that employees and their bosses have hotly debated whether and how to restrain right-wing publishers, with more-senior employees often providing a check on agitation from the rank and file. The documents viewed by the Journal, which don’t capture all of the employee messaging, didn’t mention equivalent debates over left-wing publications.
Other documents also reveal that Facebook’s management team has been so intently focused on avoiding charges of bias that it regularly places political considerations at the center of its decision making.
Facebook’s efforts to address harmful content in the Arab world failed
Politico reports that the internal documents show that in late 2020, Facebook researchers concluded that the company’s efforts to moderate hate speech in the Middle East were failing, and not without consequence:
Only six percent of Arabic-language hate content was detected on Instagram before it made its way onto the photo-sharing platform owned by Facebook. That compared to a 40 percent takedown rate on Facebook. Ads attacking women and the LGBTQ community were rarely flagged for removal in the Middle East. In a related survey, Egyptian users told the company they were scared of posting political views on the platform out of fear of being arrested or attacked online. …
In Afghanistan, where 5 million people are monthly users, Facebook employed few local-language speakers to moderate content, resulting in less than one percent of hate speech being taken down. Across the Middle East, clunky algorithms to detect terrorist content incorrectly deleted non-violent Arabic content 77 percent of the time, harming people’s ability to express themselves online and limiting the reporting of potential war crimes. In Iraq and Yemen, high levels of coordinated fake accounts — many tied to political or jihadist causes — spread misinformation and fomented local violence, often between warring religious groups.
How Facebook’s amplification of provocative content backfired
The leaked documents reveal more about how, starting in 2017, Facebook tuned its ranking algorithms to boost engagement by promoting posts that provoked emotional responses. The effort was an attempt to reverse a decline in how much users were posting and communicating on the site. Per the Washington Post:
Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content — including content likely to make them angry. Starting in 2017, Facebook’s ranking algorithm treated emoji reactions as five times more valuable than “likes,” internal documents reveal.
Facebook for three years systematically amped up some of the worst of its platform, making it more prominent in users’ feeds and spreading it to a much wider audience. The power of the algorithmic promotion undermined the efforts of Facebook’s content moderators and integrity teams, who were fighting an uphill battle against toxic and harmful content.
The “angry” emoji itself has also prompted internal controversy.
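As a rough illustration of how that kind of weighting can tilt a feed (this is a hypothetical sketch, not Facebook’s actual code, and the post data is invented), a ranking score that counts each emoji reaction five times as heavily as a like will tend to surface whichever posts draw the strongest emotional responses:

```python
# Hypothetical sketch of reaction-weighted ranking, based on the reported
# 2017 change: emoji reactions counted five times as much as "likes."
# The weights and the example posts are illustrative, not Facebook's real data.

LIKE_WEIGHT = 1
REACTION_WEIGHT = 5  # applied to emoji reactions such as "angry" or "love"

def engagement_score(post):
    """Return a simple weighted engagement score for a post."""
    return post["likes"] * LIKE_WEIGHT + post["emoji_reactions"] * REACTION_WEIGHT

posts = [
    {"id": "calm_update", "likes": 500, "emoji_reactions": 20},
    {"id": "provocative_rant", "likes": 100, "emoji_reactions": 300},
]

# The provocative post (score 1,600) outranks the calmer one (score 600),
# even though it received far fewer likes.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```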
There are more details on Facebook’s emotional-manipulation experiments
The Washington Post also highlights new information in the leaked documents about Facebook’s research into manipulating users’ emotions:
The culture of experimentation ran deep at Facebook, as engineers pulled levers and measured the results. An experiment in 2014 sought to manipulate the emotional valence of posts shown in users’ feeds to be more positive or more negative, and then watch to see if the posts changed to match, raising ethical concerns, The Post reported at the time. Another, reported by [whistleblower Frances] Haugen to Congress this month, involved turning off safety measures for a subset of users as a comparison to see if the measures worked at all.
A previously unreported set of experiments involved boosting some people more frequently into the feeds of some of their randomly chosen friends — and then, once the experiment ended, examining whether the pair of friends continued communication, according to the documents. A researcher hypothesized that, in other words, Facebook could cause relationships to become closer.
Facebook appears to have been reluctant to quickly implement measures combatting COVID-vaccine misinformation
The Associated Press reports that, according to the leaked documents, last March, as the U.S. vaccine rollout was picking up steam, Facebook employees researched ways to counter anti-vaccine claims on the platform, but the solutions they suggested were adopted slowly or not at all:
By altering how posts about vaccines are ranked in people’s newsfeeds, researchers at the company realized they could curtail the misleading information individuals saw about COVID-19 vaccines and offer users posts from legitimate sources like the World Health Organization.
“Given these results, I’m assuming we’re hoping to launch ASAP,” one Facebook employee wrote, responding to the internal memo about the study. Instead, Facebook shelved some suggestions from the study. Other changes weren’t made until April. When another Facebook researcher suggested disabling comments on vaccine posts in March until the platform could do a better job of tackling anti-vaccine messages lurking in them, that proposal was ignored.
And Facebook had already been struggling to detect and address user comments expressing opposition or hesitancy toward the vaccines:
[C]ompany employees admitted they didn’t have a handle on catching those comments. And if they did, Facebook didn’t have a policy in place to take the comments down. The free-for-all was allowing users to swarm vaccine posts from news outlets or humanitarian organizations with negative comments about vaccines.
“Our ability to detect (vaccine hesitancy) in comments is bad in English — and basically non-existent elsewhere,” another internal memo posted on March 2 said.
Facebook has struggled to address the negative impact of its like button, share button, and groups feature
The New York Times reports that according to internal documents, the company has scrutinized some of its core features and how they could cause harm:
What researchers found was often far from positive. Time and again, they determined that people misused key features or that those features amplified toxic content, among other effects. In an August 2019 internal memo, several researchers said it was Facebook’s “core product mechanics” — meaning the basics of how the product functioned — that had let misinformation and hate speech flourish on the site.
“The mechanics of our platform are not neutral,” they concluded.
The Times adds that while the internal documents do not reveal how Facebook acted in response to the research, most of the platform’s core experience has remained the same, and “Many significant modifications to the social network were blocked in the service of growth and keeping users engaged, some current and former executives said.”
Facebook uses an opaque tier-based system to designate which countries get the most harm-prevention resources
Our Verge colleague Casey Newton explains that one theme that stands out from the leaked documents is “the significant variation in content moderation resources afforded to different countries based on criteria that are not public or subject to external review”:
Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously. They created dashboards to analyze network activity and alerted local election officials to any problems. Germany, Indonesia, Iran, Israel, and Italy were placed in tier one. They would be given similar resources, minus some resources for enforcement of Facebook’s rules and for alerts outside the period directly around the election. In tier two, 22 countries were added. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”
The rest of the world was placed into tier three. Facebook would review election-related material if it was escalated to them by content moderators. Otherwise, it would not intervene.
Facebook failed to address its language gaps, leaving harmful content undetected abroad
The Associated Press reports that the internal documents reveal that Facebook did not dedicate the necessary resources to tackle hate speech and incitements to violence in numerous countries around the world — and knew it:
An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages. …
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
Facebook’s human-trafficking problem
The company has known about human traffickers using its platforms since at least 2018, the documents show …
Facebook documents describe women trafficked in this way being subjected to physical and sexual abuse, being deprived of food and pay, and having their travel documents confiscated so they can’t escape. Earlier this year, an internal Facebook report noted that “gaps still exist in our detection of on-platform entities engaged in domestic servitude” and detailed how the company’s platforms are used to recruit, buy and sell what Facebook’s documents call “domestic servants.”
Apple threatened to ban Facebook over maids being sold and traded on the platform
In 2019, Apple threatened to remove Facebook and Instagram from its App Store, citing the reported sale and trade of women as maids on Facebook in the Middle East. CNN reports that according to internal documents, Facebook employees then “rushed to take down problematic content and make emergency policy changes to avoid what they described as a ‘potentially severe’ consequence for the business.” The Associated Press adds that Facebook acknowledged internally that it was “under-enforcing on confirmed abusive activity.” And it’s still a problem, per the AP:
Facebook’s crackdown seems to have had a limited effect. Even today, a quick search for “khadima,” or “maids” in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images. That’s even as the Philippines government has a team of workers that do nothing but scour Facebook posts each day to try and protect desperate job seekers from criminal gangs and unscrupulous recruiters using the site.
Mark Zuckerberg sided with an authoritarian crackdown in Vietnam
When Facebook’s CEO was hit with an ultimatum from Vietnam’s autocratic government — censor posts from anti-government pages or be forced to cease operations in the country — he placated the autocrats. The Washington Post reports that ahead of an election in the country, Zuckerberg personally gave the okay to bend to the government’s demands. As a result, “Facebook significantly increased censorship of ‘anti-state’ posts, giving the government near-total control over the platform, according to local activists and free-speech advocates.” The country has strict rules restricting dissent on social media, and its authorities often detain and prosecute citizens who run afoul of them.
Zuckerberg’s decision illustrates how Facebook’s stated commitment to free speech shifts drastically across countries. It is also illustrative of the crucial role the social network plays in disseminating information around the world — a reality often overlooked in the conversation around its American operations.
It took two days for Facebook to start recommending QAnon content to a new conservative user in an internal experiment
In 2019 and 2020, a researcher at Facebook created fictitious user accounts on the platform to test how the company’s recommendation systems fed users misinformation and polarizing content. One test user, created in the summer of 2019, was a conservative mother from North Carolina named Carol Smith who expressed an interest in politics, parenting, and Christianity. Within two days, Facebook was already recommending QAnon groups to the account, and the recommendations continued even though the test user never followed the suggested groups. In a report titled “Carol’s Journey to QAnon,” the researcher concluded that Facebook ultimately supplied “a barrage of extreme, conspiratorial, and graphic content.” Facebook has since banned QAnon groups from the platform, but NBC News reports: “The body of research consistently found Facebook pushed some users into ‘rabbit holes,’ increasingly narrow echo chambers where violent conspiracy theories thrived. People radicalized through these rabbit holes make up a small slice of total users, but at Facebook’s scale, that can mean millions of individuals.”
The researcher left the company in 2020, citing Facebook’s slow response to the rise of QAnon as a reason in her exit letter.
There were alarming warning signs following the November 3, 2020, election
On November 5, 2020, a Facebook employee alerted colleagues that election misinformation had proliferated in comments responding to posts and that the worst of these messages were being amplified to the tops of comment threads. On November 9, a Facebook data scientist informed his colleagues that about 10 percent of all U.S. views of political content on the platform were of content alleging there had been election fraud — as much as one in every 50 views on Facebook at the time. He added that there was “also a fringe of incitement to violence” in the content, according to the New York Times.
Facebook policies and procedures failed to stem the growth of Stop the Steal groups
Facebook dismantled some of the safeguards it had put in place to counter misinformation ahead of and immediately after the 2020 election, according to the leaked documents. The New York Times reports that, according to three former employees, the company, in part concerned about user backlash, began winding down some of those safeguards in November. It also disbanded a 300-person “Civic Integrity” team in early December, just as the Stop the Steal movement was gaining momentum, including on Facebook. Some Stop the Steal Facebook groups grew faster than any other group on the platform had up to that point, and it was apparent that their organizers were actively trying to get around Facebook’s moderation efforts.
The documents, including a company postmortem analysis, indicate that Facebook failed to address the movement as a whole and thus didn’t do all it could have to counter the spread of Stop the Steal on the platform. Facebook was then left scrambling to implement emergency measures on January 6 when the movement became an insurrection at the U.S. Capitol.
On January 6, outraged Facebook employees called out the company in an internal discussion
On January 6, after Facebook CEO Mark Zuckerberg and CTO Mike Schroepfer posted notes condemning the Capitol riot on the company’s internal discussion platform, some employees responded with outrage. Among their comments:
- “I’m struggling to match my value to my employment here. I came here hoping to affect change and improve society, but all I’ve seen is atrophy and abdication of responsibility.”
- “Leadership overrides research-based policy decisions to better serve people like the groups inciting violence today. Rank-and-file workers have done their part to identify changes to improve our platforms but have been actively held back.”
- “This is not a new problem. We have been watching this behavior from politicians like Trump, and the — at best — wishy-washy actions of company leadership, for years now. We have been reading the [farewell] posts from trusted, experienced, and loved colleagues who write that they simply cannot conscience working for a company that does not do more to mitigate the negative effects on its platform.”
- “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time, and we shouldn’t be surprised it’s now out of control.”
- “I wish I felt otherwise, but it’s simply not enough to say that we’re adapting, because we should have adapted already long ago. There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions.”
The leaks offer only a partial view of what happened
As the New York Times emphasized in its report on Friday:
What the documents do not offer is a complete picture of decision-making inside Facebook. Some internal studies suggested that the company struggled to exert control over the scale of its network and how quickly information spread, while other reports hinted that Facebook was concerned about losing engagement or damaging its reputation. Yet what was unmistakable was that Facebook’s own employees believed the social network could have done more, according to the documents.
Facebook has been in over its head in India, where it has struggled to address misinformation, hate speech, and other toxic content
Facebook’s largest national user base is in India, where 340 million people use one of the company’s social-media platforms. But the New York Times reports that the leaked documents “provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on issues once they occur.” One leaked document indicated that only 13 percent of Facebook’s global budget for time spent classifying misinformation was set aside for markets beyond the U.S., despite the fact that 90 percent of Facebook’s user base is abroad. (Facebook told the Times those figures were incomplete.)
According to the documents, Facebook has struggled to address the spread of misinformation, hate speech (including anti-Muslim content), and celebrations of violence on the platform in India. The company’s efforts have been hampered by a lack of resources, a lack of expertise in the country’s numerous languages, and other problems like the use of bots linked to some of India’s political groups. As one stark example, a Facebook researcher ran an experiment in 2019 in which a test user in India followed all the recommendations made by the platform’s algorithms; the researcher later wrote in a report, “Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total.”
EU politicians warned Facebook about political polarization on its site in 2019
The Washington Post reported that a team from Facebook traveled to the European Union and heard critiques from politicians about a change made to the site’s algorithm in 2018. The politicians said the adjustment had changed politics “for the worse,” according to an April 2019 document. The team noted particular concerns in Poland, where political-party members believed Facebook was contributing to a “social civil war” in which negative politicking received more weight and attention on the platform:
In Warsaw, the two major parties — Law and Justice and the opposition Civic Platform — accused social media of deepening the country’s political polarization, describing the situation as “unsustainable,” the Facebook report said.
“Across multiple European countries, major mainstream parties complained about the structural incentive to engage in attack politics,” the report said. “They see a clear link between this and the outsize influence of radical parties on the platform.”
This post has been updated.