
When the Alt-right Loves Your App


Last fall, the video and image-editing start-up Kapwing was just starting to hit its stride. After almost a year of self-funding, it had raised seed money from investors and was beginning to attract paying customers. By January, users were creating more than 20,000 clips a day on the platform, the vast majority of them relatively innocuous edits of social events or animal GIFs.

Then came the fake news.

One video shared on Twitter had “thousands of views and the Kapwing watermark on it,” said Julia Enthoven, Kapwing’s CEO. “Someone tagged us in the comments saying, ‘This video has been doctored.’ It was some Armenian politician talking so I had no idea what he was saying, but it made me realize people could be using our tools to combine videos with someone else’s voice.” Kapwing, which offers a Photoshop-like interface with which users can create and embellish photos, videos, and GIFs, was inadvertently facilitating a deception.

This was an uncomfortable position for a small start-up to be in. Much of the conversation around tech and political responsibility has centered on behemoths like Facebook, Google, and Twitter. With their size, power, and visibility, the assumption is that they have both the responsibility and the resources to police content that could be destructive to a liberal democratic process. But their much-smaller peers, too, are wrestling with many of the same issues, from fake news to free speech, and doing it without billions of dollars in cash reserves, vast user bases, or years of built-up goodwill. These companies often face costly and ambiguous trade-offs.

Enthoven and her co-founder Eric Lu found out recently, for example, that one of Kapwing’s most loyal longtime paying customers runs a popular alt-right Facebook page and uses Kapwing to edit all her videos. “They’re pretty much always divisive and antagonistic. They’re not necessarily lies or dishonest, but it’s very much a gray area. There are some videos where she’s overgeneralizing or straw-man-ing,” Enthoven says. The discovery triggered hand-wringing at the company: banning the user would mean the loss of valuable feedback and revenue. But to allow her to continue to use Kapwing’s tools felt like a loss of integrity, some contribution toward a movement that Enthoven and her co-founder vehemently disagreed with. “I think we wish that people [would] use Kapwing entirely for noble purposes, but they don’t; we are such a small company that it’s hard for us to investigate every report. And we’re so early on that we really need revenue; we need every dollar that comes in, so there’s some conflict here between making money and upholding ethical standards for creators.”

Complicating the balancing act is the fact that the ethical standards for creators are hardly clear-cut. The tech landscape today is one where hundred-billion-dollar companies with teams of lawyers and lobbyists find themselves struggling to define and defend what role they have to play in public discourse. For some, banning a user who makes borderline racist videos is a matter of course. For others it’s political censorship of the variety that recently landed Facebook in front of a congressional committee. While small start-ups have historically looked toward bigger companies for guidance on hiring and engineering practices, when it comes to content, the bigger companies are still figuring things out themselves. Facebook made its content policies public only this past year. YouTube has appended to its policies so many times that their summary now runs several web pages long. None of these companies has been able to reliably preempt potential issues, lurching instead from one bad PR incident to another. Nevertheless, in some ways that’s exactly the point: for all that the PR has, indeed, been very, very bad, big tech remains a robust business. Revenue is up 23 percent year over year at Google, and Instagram gained 200 million users even as Mark Zuckerberg said “sorry” more times than Justin Bieber. These hard numbers give Facebook and Google leeway when it comes to dealing with social-responsibility issues. At the end of the day, no matter what, the companies will probably be fine.

For start-ups, though, the landscape is much more fraught. One reason is limited resources. Start-ups are often cash- and manpower-strapped, fighting fires in their code bases and fending off competitors online, all while trying to make payroll. They don’t have content moderators, legal teams, or the time to check every photo, message, and comment. Sometimes, as with the doctored video of the Armenian politician, they have little technical recourse even when the content is flagged; by that point the clip has been downloaded onto the user’s own computer and is no longer in the company’s custody.

A second reason is that start-ups tend to attract more of a fringe crowd to begin with. Social and communications networks and the services that support them, like communities in real life, each have their own tone and ethos. Many of the start-ups that have struggled with the propagation of fake news and hate speech — companies like the gaming chat app Discord and the encrypted chat app Signal — have explicitly positioned themselves as alternatives to the Facebooks of the world. Their communities are often more outré and tech-savvy, more likely to voice opinions on computer gaming, cryptocurrency, and politics. This edgier nature is exactly their appeal, and while their users and founders may not have set out to build places where nefarious actors can create virulent videos, they often view it as an inevitable consequence of freedom.

Finally — and this is perhaps the biggest reason why issues like fake news and free speech hit start-ups extra hard — the highest imperative in Silicon Valley today is growth. This is doubly true for early- and mid-stage start-ups that still need to prove themselves. One of the best ways to growth-hack is to be associated with viral content, whether that’s media coverage, a blog post, or some controversy. This creates unholy incentives: the sort of content that trades in negativity and falsehoods can actually benefit start-ups. For instance, while Kapwing’s paying users receive unmarked videos, most users access the video editor for free, and the company slaps a Kapwing watermark on the finished product. These videos, when shared on users’ social networks, organically serve as marketing. Despite the team’s idealism, Kapwing gains publicity when videos are inflammatory and thus likely to be widely viewed. “Right now, the honest answer is that we don’t have any policing or enforcement processes or a system for banning users or creators,” Enthoven says. “Definitely the watermark helps us grow, and in more than 99 percent of cases, the watermark is being shared on great or harmless videos, but when shared on stuff that’s really negative — I mean obviously it’s great for us if stuff is shared and reshared on the internet, but it also can associate our brand with stuff that’s negative-energy.”

Yet despite all these challenges and misalignments, today’s start-ups may be best positioned to lead the fight against internet toxicity. Unlike Facebook, Twitter, and Google, which have had to play catch-up, these companies have grown up in an age of misinformation. The coping mechanisms that the FAANG companies developed only in their adulthoods — transparent content policies and community guidelines, for example — are now what start-ups cut their teeth on. While often rough and inconsistent, veering wildly from legalese to feel-good exhortation — “Be positive, be cool, make the world a better place,” one reads — these guidelines nevertheless provide a framework to build on.

Perhaps most crucially, start-ups are motivated to compromise in a way that the bigger companies are not. While Google can unilaterally ban 100,000 YouTubers without blinking an eye, start-ups, for reasons both cultural and financial, are more beholden to their communities. Free from the pressure brought by the spotlight, they’re better able to craft a moderation and appeals process that works in concert with, instead of over the heads of, their users. One example is Patreon, a crowd-funding platform for creators where patrons can pledge a recurring monthly payment. For many writers, comedians, and artists, Patreon has been transformative, allowing them to pursue their creative passions while ensuring something of a steady paycheck. This puts Patreon in a position of unusual power: if the company decides that a certain creator is making content that violates its policies, it can remove the financial wherewithal that lets that creator keep working.

Perhaps as a result, Patreon’s approach to policing content is bespoke. While Facebook and Google use machine-learning algorithms to take down accounts, Patreon has a human Trust and Safety team that responds to complaints and flags problematic accounts for review. And while YouTube has a three-strikes rule, Patreon reaches out to noncompliant creators with specific explanations of their violations and offers reformation plans, typically involving public apologies, that allow their accounts to be reinstated. This process of reintegration, Patreon claims, is ultimately good for both the creators and the company: in December 2018, it allowed creators to recover more than $200,000 in earnings. Patreon takes a 5 percent cut.
