Are Facebook and YouTube OK with being used to both support racism and undermine it?

Technology companies need to start grappling with oppression when they design their algorithms, not as an afterthought
One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the U.S. Capitol in Washington on April 10, 2018. Advocacy group Avaaz is calling attention to what the group says are hundreds of millions of fake accounts still spreading disinformation on Facebook. Saul Loeb / AFP - Getty Images

Recently in San Francisco, I observed bystanders stopping and recording public police interactions, using their phones to serve as witnesses and digital shields; it's an increasingly common sight in public and increasingly easy to watch the results from the safety of your own screen. And YouTube or Facebook Live have become the platforms of choice for bystanders to use to amplify miscarriages of justice to the world.

An example of this was seen at Yale University this week, when graduate student Sarah Braasch called the police on a classmate, Lolade Siyonbola, for falling asleep in her own dorm’s common room while studying. Siyonbola documented part of her interaction with Braasch and then her interaction with the police, and broadcast both on Facebook. In the first clip, she recorded Braasch recording her: one party using her mobile phone as a shield against the potential outcomes of a tense, racist interaction, the other wielding hers as a weapon in service of it.

Ironically, though, the technology platforms used by bystanders and targets to broadcast their experiences are powered by algorithms increasingly credited as the most efficient engines of radicalization and extremism, funneling hate, racial violence and bigotry into public culture.

The technology platforms perceived by the public as shields are then, in practice, consumer-facing weapons of mass discord.

Increasingly, technology is used to navigate, facilitate and validate our existence as people of color in public life, but technology companies are ill-equipped to reckon with the complicated nuances of race and power. Without integrating the realities of racism and oppression into how we build technology, we will continually fail to face the world as it presently exists.

This constraint keeps our culture stuck in a narrow definition of innovation, places ceilings on possibilities and, ultimately, reinforces dated social standards that will prevent us from building the type of future we all deserve.

If technology leaders seek to understand how exactly their tools will be shaped into shields or weapons, they have to turn to people from the groups and communities that are consistently and disproportionately at risk of having technology used against them unjustly. It will be impossible for technology companies to create platforms and systems that enable equity and share power if they do not incorporate principles of equity into how they create products or source leadership from the innovators among us.

But neither is happening.

In the technology sector, issues of race and power are usually housed in units dedicated to addressing diversity and equity that do not regularly interface, inform or influence the teams building the tools that reach the public. This siloing means that entire divisions responsible for driving the growth and sustainability of most platforms — product development, stack infrastructure, developer relations and operations, the driving forces behind most modern technology companies — lack any meaningful analysis on race and power as they host millions (and billions) of users who operate in a world constantly interfacing with the dynamics of race and power.

At its purest levels, that is by design: The principles of capitalism, not humanity, power technology companies. But now that technology powers so many aspects of civic life, we need to examine how to infuse human rights principles into the future of technology, rather than slapping makeshift solutions to unconsidered problems on top of existing platforms. And the only way to make that possible is to have inclusive organizations that seek to recognize those problems in the first place.

However, despite all the media attention on Silicon Valley's lack of diversity and the concrete problems caused by that lack of diversity, efforts to address the problem are, at best, spotty.

Take, for instance, Facebook: More women than men use it, and people of color from emerging markets make up the largest segment of the platform’s growth. Yet, in a reorganization of its technical leadership team announced this week, the technology product division will be led by 13 men and one woman. The move was heralded as the biggest executive shuffle in the company’s history and considered part of the fallout from reports on how the platform has been weaponized to undermine democracy. And yet, given the broad mandate to change how the company deals with its own blind spots, it decided to continue to uphold one of the largest ones.

At the end of this reshuffle and every reshuffle — and, really, Facebook is hardly alone in this — there remains scant gender and racial inclusion in the most senior technical ranks in the technology sector. Thus, as the stakes and the rights for people of color to exist in public life rise, the struggle to reckon with the complicated nuances of race and power on the very platforms that shape public discourse will only continue.

If technology leaders seek to meet this moment of necessary leadership and face systemic oppression as it materially exists now, and not just the fear of what it might become in the future, then grappling with oppression and inclusion should explicitly be built into the DNA of a company, its culture, its source code and its leadership at all levels. Or else what comes out of those companies will never live up to the promise of a world with fewer constraints for all.

Sabrina Hersi Issa is an award-winning human rights technologist and CEO of Be Bold Media. She organizes Rights x Tech, a gathering for technologists and activists, runs Survivor Fund, a political fund dedicated to supporting the rights of survivors of sexualized violence, and serves as a Venture Partner at Jump Canon.