screen time

Sam Altman’s Mixed Signals


OpenAI is finishing out the year with a spectacle: the “12 Days of OpenAI” — a.k.a. “Shipmas,” as in shipping products — began last week, promising a lineup of new tools, features, and announcements from the company.

First up were upgrades to the free and $20-a-month models, with a higher-performing $200-a-month model for heavy users with more specialized needs. Next was Sora, OpenAI’s video-generation software, now released to the general public with more to come. Online, OpenAI’s executives and staff are performing enthusiasm. The new model “is powerful but it’s not so powerful that the universe needs to send us a tsunami,” joked Sam Altman on X. “It’s exhilarating—and maybe a bit humiliating—to take direction from a model that’s clearly smarter than you,” posted an OpenAI VP. OpenAI is, as they say, “unbelievably back.”

But wait — back from where? OpenAI is still, by the benchmarks and the vibes, the leading AI lab with the most widely used generative-AI products, the company to beat on the way to wherever one might imagine all this is going. To a casual observer, this extended event looks like a show of force for the firm, which now reports more than 300 million weekly users for ChatGPT and recently raised another $6.6 billion at a valuation of more than $150 billion. In context, though, it scans as at least a wee bit … defensive?

In AI circles, OpenAI’s tendency to release demos far ahead of actual products, like the one for Sora, has made some users skeptical of the company’s marketing and momentum, giving “Shipmas” a reactive undertone. Over the past year, it has lost a significant chunk of its senior staff and leadership, including some of the people most responsible for its early success, which is strange for a business that has repeatedly suggested it’s on a glide path to world-altering AGI. The company is fighting with its primary partner, Microsoft, over computing resources. In recent months, a series of stories has suggested that, at least by some metrics and for some techniques, progress in AI-model development has hit obstacles and slowed. OpenAI, for the first time in its relatively short life, isn’t acting like a company that believes it can’t lose. It’s acting instead like a late-stage startup with problems to solve and fights to win. It’s not just releasing products and watching the world chase — it’s pushing back, reframing, justifying, and explaining itself.

OpenAI has been emitting a lot of mixed messages lately, many of them through its CEO. At the recent DealBook Summit, Altman told the audience that his guess is “we will hit AGI sooner than most people in the world think and it will matter much less,” echoing an argument he’s been making, in between indulging more flattering and spectacular narratives about superintelligence, for a couple of years. Again, you can read this two ways. Perhaps progress really has been incredible and AGI is imminent, but our capacity to take things for granted is so profound that we’ll barely notice. Or maybe we’re just watching a brazen downward redefinition of the industry’s most important marketing term, prepping it for fresh deployment. Privately, according to the Financial Times, OpenAI is considering a related change in its relationship with Microsoft:

Under current terms, when OpenAI creates AGI — defined as a “highly autonomous system that outperforms humans at most economically valuable work” — Microsoft’s access to such a technology would be void. The OpenAI board would determine when AGI is achieved.

The start-up is considering removing the stipulation from its corporate structure, enabling the Big Tech group to continue investing in and accessing all OpenAI technology after AGI is achieved, according to multiple people with knowledge of the discussions.

Again, mixed signals! Are we looking at a company moving faster than expected, or are we watching a firm wriggle out of a bizarre arrangement — under which the largest investor in a startup loses access to its technology whenever the startup says so — so it can claim to have achieved AGI and get more resources from Microsoft, while also converting itself into an explicitly for-profit enterprise? Perhaps the future of OpenAI’s messaging is aligned with this post from one of its engineers:

Among the other reasons OpenAI might want to pivot away from accelerationist rhetoric is another piece of recent news: The company is now working with the defense contractor Anduril to “rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness” for an unelaborated “air defense” initiative. It’s the sort of potentially lucrative contract that would have been wildly controversial in the safety-obsessed AI discourse of just a couple of years ago; in 2024, Altman is nervously wondering aloud whether Elon Musk, a former OpenAI collaborator and investor turned nemesis and competitor, will use his newfound political influence to deprive the company of future government contracts.

To recap: OpenAI has maybe achieved AGI, and has not hit a wall, but AGI also isn’t that big of a deal, and you might not even notice it now that it’s here, so we don’t have to worry so much about a for-profit entity having access to the latest models, or, for that matter, what the defense industry might do with them. It’s fine! Everything is fine. The future will be glorious, but also there’s nothing to see here.

What OpenAI is saying might be incoherent, but what it’s doing is a little easier to grasp. It’s a fast-growing, debt-laden company that needs more investment to cover immense and ballooning costs. While it has a lot of users, what it needs most are customers, the bigger the better. This incoherence isn’t evidence of OpenAI becoming strange or heading down a surprising path — it’s evidence above all of normalcy. OpenAI needs investment, and it needs clients. Now, as ever, it will say whatever it must to get them.
