Over the last few months, Select All has interviewed more than a dozen prominent technology figures about what has gone wrong with the contemporary internet for a project called “The Internet Apologizes.” We’re now publishing lengthier transcripts of each individual interview. This interview features Ellen Pao, who was CEO of Reddit from 2014 to 2015. In 2012, Pao brought a gender-discrimination lawsuit against her former employer, the venture-capital firm Kleiner Perkins. She is now a partner at Kapor Capital and a founder of the diversity-advocacy nonprofit Project Include.
You can find other interviews from this series here.
The tech industry is going through a difficult moment. It’s facing issues ranging from data privacy to information abuse to monopoly. Where do you see the roots of these issues? Like, what’s the poisoned well here, so to speak?
I think it’s two things. One is the idealistic view of the internet, the idea that this is the great place to share information, to connect with other people, to find like-minded people who might share your ideas and might be able to help you form communities around different ideas, especially ones where it might be harder to find people who might share those ideas, and to bring them out into the open, into the public.
The second part is that the people who started a lot of these companies were a very homogeneous group. You had one set of experiences and one set of views that drove all of the platforms on the internet. So the combination of this belief that the internet was a bright, positive place and a set of very similar people who all shared that view and that experience ended up creating this set of platforms that were designed and oriented around free speech.
And now we see that “free speech” is a misnomer, because now that things have changed, the free-speech argument is more, just, “protect my speech.” In the early days, free speech was a great principle because it ties to our ideals about openness and this sense that all this communication will be positive. And it also gives us this ability to do nothing on our platforms to limit speech because we don’t have to create any type of moderation, we don’t have to create any type of protection, we don’t need to staff up or build technology around eliminating speech.
The interesting thing is, though, that people did build tools for spam, so it was not always the complete free-speech approach. The idea that the non-spam speech would all be acceptable and people wouldn’t have to make any hard decisions around it led to a much easier platform to manage.
To what extent do you think platforms have not invested in moderation that could resolve some of these problems because of business concerns? Do you think they’re subordinating the goal of building better communities or healthier user bases because they’re interested in sacrificing it all for growth at all costs or monetization at all costs?
Reddit, when I was there, was about growth at all costs. I think some of these other platforms are similar, where their main metric is number of users, and that’s where Reddit was heading. And I don’t think resources are finite. Like when you look at how much money Facebook prints every day and how much money Google and YouTube make and how much money Twitter is now making, it was more about priorities.
And in that case, yes, building the user base was important. Building engagement was important, and people didn’t care about the nature of engagement. Or maybe they actually did in a bad way — like, the more people who got angry on those sites, the more engagement and attention you would get. On Reddit especially. The more there was an outrage, the more people would go to the site and the more press would get around it. That was an unhealthy dynamic.
Can you think of some specific decisions or events, either from your experience at Reddit or elsewhere, that you’ve observed that you think reflect this?
I’d like to think that while I was at Reddit, the decisions were made based on principles. I don’t think there was a sense of wanting to upset users so we could get more traction, but you could definitely see it happening. I think the times when we got a lot of traction were really uncomfortable for us. When we had celebrity-gate — all the unauthorized nude photos on Reddit — it was super uncomfortable for us. We got a ton of attention. We got a ton of usage. We got a ton of press, but we were not happy about it.
I don’t know the current leadership right now, so I don’t know what they’re doing, but it does seem like there is a push to get more users, and a push to focus on usage over the rules. Take the The_Donald subreddit — there have been so many violations from that subreddit.
There’s a new crop of tech executives and others coming out of the woodwork to say, “We fucked up,” or “We’re sorry,” or “This is wrong, and this is bad.” I can’t think of any time in the last decade or two in which you’ve had that kind of outpouring. To what extent do you think this reflects a seriousness among Silicon Valley to get this right? Or do you think that it’s not really reflective of what a lot of tech executives at these platforms are thinking?
I don’t try to guess what tech executives are thinking because I don’t understand it usually, so I’m going to withhold judgment there. I did think people knew what they were doing, but I don’t know if they understood the impact it would have. I don’t know that anybody wants responsibility for the election based on how people were able to manipulate their platform, right? That is something that I think is part of what’s generating the increased attention and the outrage.
If you look at the information that Facebook was sharing and offering to developers, I don’t think it’s a huge surprise. I think people knew, like, “Hey, I gave Facebook permission to share this information,” and maybe the average user didn’t understand the implications. I know I didn’t understand the implications. But the fact that the data was there and was being used isn’t a huge shock. I think it’s just the extent of the usage and the ability to have such impact with it.
What do you think allowed tech companies to get away with soaking up information like this for so many years?
I think there’s this idea that, yes, they can use this information to manipulate other people, but I’m not gonna fall for that. I’m protected from being manipulated. I think you become addicted, and they just keep taking more and more, and they keep pushing more and more fake news. And it’s slow and it’s over time, and you’re already addicted to the interactions and the engagement, so it’s hard to opt out. And I think it’s just easier not to think about the thing, right? Easier to just go about your life and assume that things are being taken care of.
For a long time, people thought that tech was this great, awesome, democratic tool for good. That was the hope and that was the messaging. And people believed it, and I think there has been a ton of good that’s come out of it, but that flip side of it being used for more nefarious purposes? People just chose not to think about it.
When you think back to the early 2000s — to the post-dotcom, pre-web 2.0 era — there were a lot of folks who you knew personally, or worked with, who were the vanguard of this new emancipatory movement. I’m curious if you ever saw any ideological or intellectual shift among them.
I think it all goes back to Facebook, where it was successful so quickly and raised so much money, and people got wealthy so fast. And they were so young and they were so admired that it changed the culture, and it went from “I’m going to build this company to change the way people do things and improve people’s lives” to “I’m gonna build this company that builds this product that everybody uses, so I can make a lot of money.”
Google went public, and all of a sudden you have these instant billionaires. No longer did you have to toil for decades. They were able to make billions in a very short amount of time.
And then in 2008, when the markets crashed, all those people who were motivated by money ended up coming out to Silicon Valley and going into tech and starting companies. And that’s when values shifted more.
There was, like, an optimism early on around good coming out of the internet that ended up getting completely distorted in the 2000s, when you had these people coming in with a different idea and a different set of goals.
I’m curious, because of your work on increasing diversity and representation of minorities within tech companies, about the ways in which you think that tech’s homogeneity sort of negatively influences its actions and makes it a more destructive force.
I think that with the goal of finding the next Mark Zuckerberg, investors became oriented around finding that young white man who had dropped out of Stanford or Harvard. That became the pattern to match. And that ended up reinforcing some of the exclusion that was already there, and it made it that much harder for women and people of color, and especially women of color, to raise money.
When you have this idea that’s fundamentally wrong — like, the only people who can succeed are people who look like Mark Zuckerberg — you might go and test it out on people. But if you’re only testing it out on people who look like you and believe the same thing, it’s just going to be reinforced.
So you’re building social truth in a very small bubble. I don’t know if it’s confirmation bias, or just self-fulfilling prophecy. It’s like a combination of all these different phenomena that end up just validating the homogeneity and maintaining it.
You see more and more people talking in political terms about the ways to counteract the coercive impact of Big Tech. I was kind of curious about the extent to which you think those solutions are possible, and what other things need to be done in order to end the crisis.
I don’t know about Paul Ford’s idea of a Digital Protection Agency. I do know it’s a sad day when we need to go to regulation to solve problems, but I do think we’re at a point where the current situation is not working, and it doesn’t look like there’s any change that’s meaningful in sight. So I don’t know if regulation is the right answer, but I don’t know what else there is.
So it seems like maybe a necessity, but probably not the best solution because we know regulation is often slow, and it’s often driven by a political motive. And there’s often lobbying involved, and it’s gonna be far from perfect.
And I think the point that a lot of people have been making that’s really important that I’m not sure would be addressed is that there’s this huge, huge disparity of wealth at the top levels of these companies and at the bottom. And when you look at the massive creation of wealth at the top, it has not trickled all the way down, and you’re looking at people who are making hourly wages and unable to live in Silicon Valley. But I don’t know that that’s something that any of this regulation would address.
So how do we think about these broader social issues in a way that is meaningful? And the hope is that whatever solution comes about is going to address the whole host of problems and not just cherry-pick ones that are easier or that the people who are voters care about. These are huge problems that should be troublesome across the board, but we haven’t started talking about some of these other issues yet.
After the financial crisis, people were calling, metaphorically, for the heads of bankers, and a bunch of them were forced to quit or resign. They got huge golden parachutes obviously, and they came out just fine, but there was a sense that leaders, like Travis Kalanick at Uber, need to go when the situation reveals itself to be so destructive, and so toxic, and so regressive. Do you think that we have reached that tipping point with the platforms?
I guess my concern is that the solution so far is more of the same. You’ve got these people who are incredibly innovative. They can solve huge technical issues. They can bring the internet to Africa at a low cost, but we can’t pay our employees a fair wage that allows them to live above the poverty line and feed their families and buy a home? We can’t get harassment off of our sites? You have these people who haven’t used their skill, their innovation, their giant teams, or their huge coffers to solve these problems. What makes you think they’re going to change and care about these problems today? And that’s the piece that I’m not really sure about. And when you look at a company like Uber, and you bring somebody who looks very similar in to solve the problem, I don’t really know that the problem is going to get solved.
I’ve suggested that Facebook bring in a bunch of people who are not part of a homogeneous majority to their executive team, to every product team, to every strategy discussion, because you need the people who are living the problem to help solve it and to help people who clearly don’t understand the impact of their network and the nature and extent of the problems on their platforms. To actually be able to solve the problem, you need to understand it, and there are not enough people in powerful positions who can have an impact at those platforms.
This interview has been edited and condensed for length and clarity.