on with kara swisher

Snap’s Evan Spiegel on Why He’s Not an AI Doomer

The CEO tells Kara Swisher that he’s more worried about humans than machines.

Photo-Illustration: Intelligencer; Photo: Getty Images

The sudden rise of AI chatbots is upending a tech industry that’s already facing significant turbulence. Like many major companies, Snap is attempting to harness the power of large language models — in its case, with a new product called My AI — as it tries to chart a sustainable path forward. But are products like this ready for prime time? The company’s CEO, Evan Spiegel, is unsurprisingly optimistic on that front. In the latest episode of On With Kara Swisher (which was recorded before a weak earnings report bruised Snap’s stock price), Spiegel tells a skeptical Kara that sufficient guardrails are in place for a social-media AI revolution. Below is an excerpt from their conversation:

Kara Swisher: You released My AI, an in-app chatbot powered by ChatGPT. Explain the product, and whether AI is a game-changer or just an incremental thing. Because I know everyone’s cheerleading and rushing into it. My worry is the arms race around this.

Evan Spiegel: Our artificial intelligence is definitely a game-changer for Snapchat. If you think about the evolution of our use of AI, we’ve long used it to power recommendations on our content platforms; our ad platform is powered by artificial intelligence; and a lot of our lenses are powered by generative artificial intelligence. So AI is a huge driver of our business today. What’s so exciting about My AI is that it’s the first time we’re bringing artificial intelligence into communication, into the chat page on Snapchat, which is really the core of our business. Snapchat’s really about communicating with your friends and family. Being able to bring artificial intelligence into Snapchat makes me believe that we could be one of the best ways, if not the best way, to communicate with AI.

Swisher: So My AI is your new friend, your new pal.

Spiegel: I think the way our community’s been using it so far is more as a creative tool than as a friend, because it actually really enriches your conversations with your friends. That’s one of the reasons why we’re so excited to release @mentions, so you can bring My AI into conversations with friends. Whether that’s coming up with an itinerary for a trip or — I used My AI to get feedback on a wedding speech I’d written, which was a lot of fun.

Or for kids — I use it for story time. We do these generative stories — I’m exhausted at the end of the day, so I come home and it’s really fun to do that with our kids. Where I see these large language models really succeeding today is in creativity. They’re incredibly good at coming up with new ideas, new concepts, new stories. Where they’re struggling still is around informational use cases and accuracy, because some of the things that make them so good at creativity actually make them not so great at retrieving perfectly accurate information. So I think that’s why My AI is the right place for this product today.

Swisher: Let’s talk about that, because there are two issues I see. One is, broadly speaking, IP issues around — it’s not search, it’s technology companies taking from all over, and you reusing it. And I think that’s going to be a big legal problem for all of you going forward; it’s sort of like a superpowered YouTube or something else. And the second thing is the creepy factor. Which one would you like to discuss first?

Spiegel: Uh…

Swisher: Say “I’ll take creepy for a hundred.”

Spiegel: That’s exactly what I was thinking. I’ll take copyright.

Swisher: Copyright. Well, we’re going to do both. But talk about copyright. Are you worried about that issue of it pulling in information? And who’s really responsible for that?

Spiegel: I think what’s interesting, to your point, is actually that it’s not reusing the content. It’s actually generating totally new content —

Swisher: From content.

Spiegel: Yeah.

Swisher: If you’re a songwriter, you’re not going to love it. Or if you took the whole of The Godfather and started playing with it, I’m sure Francis Ford Coppola would have a thing or two to say about that.

Spiegel: But I think as artists, we all get inspired by other people’s work, right? Their music, their paintings. That’s long been a feature of art and creativity.

And so I actually see it as a continuation of that vein. I think the way these systems have been architected, they of course learn from a lot of content that’s out there. In the case of our models, for example, we use first-party data, we use licensed third-party data. Of course we use synthetic data as well. And that’s how we think about managing the rights issues. But I think more broadly, when you look at this technology, it’s actually generating something totally new — not reusing, not copying a piece of content.

Swisher: It’s an interesting problem from a legal point of view. I’ve talked to a lot of lawyers about this — because some of it is taking and remaking. And so photographers are starting to have issues. Artists are starting to have issues. Moviemakers, soon. That’s not something you’re worried about?

Spiegel: That’s why, as I mentioned, when we build our own models internally, we use licensed data, or we use our own first-party data, or we sometimes generate synthetic data to train on. So that’s how we’re thinking about the risk management of this rights issue. But more broadly speaking, in response to your question, the thing that is so fascinating about these systems is that they are generating totally new content from what they’ve learned.

Swisher: And the issue is sometimes it’s not correct. You were talking about the misinformation — they’re called “hallucinations.” I just call it “wrong,” but that’s fine. But let’s get to creepy. This is controversial for everybody, not just you. Aza Raskin from the Center for Humane Technology did a test posing as a 13-year-old girl who asked how to make losing her virginity to a man 18 years older than her special. Your AI provided tips on how to set the mood with candles and music, rather than saying “call the police.” Talk about that, because every single company — Google’s had this problem. Again, I don’t want to just blame it on you, but every one of these companies has been releasing these things and reporters are able to generate this.

Spiegel: In that specific scenario, as you mentioned, I believe that was a researcher who was adversarially using My AI to try to get it to say things that were inappropriate. And in a way, I think that sort of research is actually very helpful. That’s exactly what people should be doing with it. Whenever we come across new technology, one of the first things we try to do as humans is break it. Which is why building a service that is safe is so important to us. So if you think about how we’ve architected My AI — storing the conversations that people have with it and reviewing them — 99.5% of My AI responses comply with our community guidelines.

Swisher: Right. I’m worried about the 0.5% part.

Spiegel: When we dive into that 0.5%, what we find oftentimes is that when it’s not complying with our community guidelines, it’s either repeating something inappropriate that somebody said to it, or it could be citing an inappropriate song lyric, for example. So when we dive into the reasons why it’s breaking or not working, what we find, even in the failure cases, makes us feel comfortable with a broader rollout. And that’s not to say that it’s perfect. It’s going to make mistakes, but we can learn from that and continue to evolve the technology.

Swisher: I’m not particularly worried about Snapchat. A couple of weeks ago, you had a long blog post about the early learnings from My AI on safety enhancements, where you talk about guardrails and age-appropriate design. I think you are concerned about it.

Summarize what you’re doing differently, and more importantly, tell me how you’re going to know what you don’t know, because you have the responsibility for a young audience at Snapchat. And should this be age-gated? You just released it to all the Snapchatters. I would be comfortable with my older kids using it. I definitely wouldn’t be comfortable with younger kids using it.

Spiegel: In order to use Snapchat, you have to be over the age of 13. When you’re interacting with My AI —

Swisher: I still think 13 is young, but go ahead.

Spiegel: — we pass the age of the Snapchatter to My AI to make sure the conversation is age-appropriate for the person communicating with My AI. So your experience does change based on your age. We’ve also built My AI on the foundation we already have — things like Family Center, so parents can see who their teens are chatting with, or change the content settings, for example, or very quickly report an inappropriate message so that you can get help. I think the strong foundation we have of managing trust and safety at Snapchat is, again, one of the reasons why we feel confident in rolling out My AI more broadly, even though we know it’s not perfect.

Swisher: You have, I think, more of a responsibility to a young audience than others, since your audience is so young. How do you make the calculations? Because with this rush toward AI — I do think it’s a rush. I think it’s another thoughtless rush — not necessarily you, but in general — just to win ground. Is that a dangerous attitude? Tristan Harris and others have talked about this.

Spiegel: I see it pretty differently. I can’t remember the last time a new technology was rolled out this thoughtfully. That there’s been this much thoughtful debate this early — I think it’s actually very promising. I actually think it shows we’ve learned a lot about the evolution of the internet, for example, or the evolution of social media. So I actually think people are asking a lot of harder questions sooner than with any other piece of technology I’ve ever seen in my life. So that actually makes me quite optimistic about how thoughtful folks are being in rolling out these products.

So I’m not sure that I agree with the rush narrative, but I do think the sense of excitement people are feeling is very real because folks all over the world are embracing this technology.

Swisher: What’s your worst-case scenario here, though? I remember when Facebook Live came out, I said, “What if people start killing people and putting it on Facebook Live? What are your tools to stop it?” And literally the room was like, “Kara, you’re a bummer.” And I was like, “Yeah, I’m a bummer. People are terrible.”

Spiegel: I’m much more concerned about the way humans will misuse this technology than I am about a response that My AI might provide.

Swisher: I see.

Spiegel: And I can’t necessarily imagine what they might do, but for example, fraudsters might use this GPT technology to write a really convincing phishing email, and be able to do that at scale. So I worry that this sort of technology will be very useful to bad actors, and that’s why it’s so important for us, again, to monitor the conversations people are having with My AI so that we can detect that behavior and learn from that misuse. We’ve rolled out timeouts, so that if people are misusing My AI, we can slow that conversation down.

Swisher: So AI is not killing people, people are killing people. It’s true, though — I think you’re right.

This interview has been edited for length and clarity.

On With Kara Swisher is produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Rafaela Siewert, with mixing by Fernando Arruda, engineering by Christopher Shurtleff, and theme music by Trackademics. New episodes will drop every Monday and Thursday. Follow the show on Apple Podcasts, Spotify, or wherever you get your podcasts.
