The Scarlett Johansson Incident Makes OpenAI Look Desperate

On May 13, OpenAI demonstrated a new model with a series of live conversations between its staffers and an AI voice assistant. The chatbot felt familiar in both senses of the word. Its responses were casual, agreeable, and sometimes uncomfortably flattering. There was also the way it sounded — like a voice you might have heard before in a related context. During the presentation, Sam Altman cleared up any doubts as to what his company was going for with a post on X:

AI companies have a funny relationship with sci-fi. On one hand, they’re in the rare and useful position of operating in a space that’s been explored in speculative fiction for years. Rather than explaining what they do, how they do it, or why they’re doing it, they have the option of simply referencing ideas and concepts that have existed in the popular imagination for generations as harmless cartoon robots, fearsome invisible superintelligences, and iconic disembodied voices. It’s a bit like saying you’re going to Mars or building a flying car. People know what you’re talking about. Some of them might feel like you’re fulfilling an overdue promise; others might point out that a lot of the thinking people have done about these things is complicated. On balance, though, it’s a pretty good deal for companies like OpenAI, unless of course they take it too far.

For example: It’s one thing to release a product that invites comparisons to movies like Her, in which the protagonist falls in love with an AI persona voiced by Scarlett Johansson. You’re playing with tropes and expectations, sure, but you’re also confident that your chatbot sounds fluent enough that such comparisons come easily, at least to some viewers. It’s quite another to insist that the public make such a comparison — to say, “Hey, you know that movie with the AI voice? Well, we did that here, see?” This is altogether sweatier and less confident behavior, and an admission, perhaps, that OpenAI is in some way in the speculative-fiction business, too. It’s something else entirely when the possessor of the voice on which your product was evidently based — according to you! — claims that you asked her permission to use it, were told no, and decided to go ahead anyway:

Setting aside the legal questions here, such behavior would align with some of the harshest criticism of Sam Altman and OpenAI — that it’s a company with little regard for the value of creative work led by a scheming, untrustworthy operator. This episode also complicates the company’s preferred narrative of unstoppable inevitability: You’re either the company harnessing the barely controlled phenomenon of imminent self-replicating machine intelligence, leading humanity into its next technological epoch, or you’re a mid-stage start-up that for some reason really needs to copy that voice from that movie to market an incremental product upgrade.

OpenAI’s lopsided public dispute with one of the most recognizable human beings on earth is the latest in a string of episodes in which OpenAI and Sam Altman have struggled to keep their stories straight. Shortly after OpenAI’s demo, news broke that the leaders and much of the staff associated with its “superalignment” team — a group that was established last year and tasked with figuring out how to “steer and control AI systems much smarter than us” — had resigned, among them OpenAI co-founder and former chief scientist Ilya Sutskever. With the exception of superalignment head Jan Leike, who said that he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time,” most of the resignations were quiet or tersely announced — by necessity, it turned out, as Vox reported that they’d signed an “extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions” that “forbids them, for the rest of their lives, from criticizing their former employer.” Altman responded that he was “genuinely embarrassed” about how this had played out, but also pled ignorance:

If you’re fully invested in the narrative that OpenAI is the tech company to end all tech companies — the one firm that can bring about artificial general intelligence, a rightful steward of a future that most people can’t comprehend — then you might worry that OpenAI has turned away from safety and alignment toward uninhibited AI development, consequences be damned. But the company’s recent behavior, and that of its CEO, who was briefly deposed by a board that accused him of a lack of “candor” before clawing his way back to power, is also consistent with that of a firm facing a lot of competition, whose product is less differentiated than ever, and which pronounced its commitment to safety mainly to imply how powerful it would one day become: a company that has raised huge amounts of money not just on the strength of its technology or a clearly defined business model, but with a series of grand — and suspiciously familiar! — stories about the future, drawn from the fiction of the past.
