On Monday, U.S. markets were shaken by DeepSeek, a relatively new Chinese AI firm that poses a significant threat to its American competitors. After the company revealed its model, which answers questions and solves equations with the quality of OpenAI’s ChatGPT but at a fraction of the computing cost, U.S. tech stocks like Nvidia — which provides the computing power for American AI companies — plummeted. On the most recent episode of Pivot, co-hosts Kara Swisher and Scott Galloway discuss how DeepSeek is disrupting the burgeoning industry.
Kara Swisher: This is a really interesting story. I think we have discussed the amount of spending on AI that U.S. companies do — the price of chips, the run-up of Nvidia. But there’s a new AI model on the scene that’s smart, cheap, and made in China. It’s called DeepSeek, and it’s causing a panic in Silicon Valley, which is paying a lot of attention, and also on Wall Street. DeepSeek has reportedly outperformed models from OpenAI, Meta, and Anthropic in some third-party tests, and it operates at a fraction of the cost of those models, using fewer high-end chips. The markets are not reacting well to DeepSeek as of this recording.
There’s a lot to talk about, and I’ve seen different analyses of exactly what DeepSeek does. Yann LeCun from Meta was saying they’re doing a cheap and dirty version. The stuff the U.S. companies are doing is much more advanced. We’re going to talk about Meta’s AI plans in a bit — they’ve reportedly set up several war rooms to dissect and analyze DeepSeek. It’s currently No. 1 on Apple’s free Top Apps chart — China invading this country in a very different way.
So, thoughts on this situation?
Scott Galloway: Well, first, you just have to temper this. Nvidia has shed something like half a trillion dollars, which, basically, if you take out Tesla, is the value of the entire global automobile industry. So this is pretty dramatic. But, at the same time, that just takes it back to its valuation in October.
And when you look at market dynamics, when these companies have experienced these types of run-ups, it is like a balloon inflating beyond its natural capacity, and the slightest touch can pop it. In some ways, the market was probably looking for an excuse to take these stocks down a bit. What’s interesting is Nvidia will have a pretty interesting argument on Capitol Hill, saying, “When you refuse to let us sell into these countries, they come up with workarounds and, in this case, this workaround might tank the U.S. economy.” Supposedly, OpenAI’s models, their LLMs, cost a hundred million to train, and DeepSeek is claiming this thing — and they’ve been public, it’s open-source — cost a little over $5 million to train. So whereas most AI companies have taken a brute-force approach to their LLMs — buy as many chips as possible — this is saying maybe you don’t need as many chips.
The thing I find equally interesting is the second-order effects here. Constellation Energy and some of these nuclear stocks have skyrocketed because the choke point was supposedly going to be energy. But now with this model, which appears to have chips speaking to each other in a more efficient, less energy-consuming way, nuclear stocks are crashing. Constellation Energy and all these companies that have had incredible run-ups are saying, “The assumptions we made about the supply chain — the brute force of chips we were going to need, the amount of energy — it’s all now coming into a little bit of question.” But to be clear, the correction here is that it’s taken them back three months. And all of the stocks that have crashed, quote-unquote “crashed,” are only up 70 percent for the year now, not 98 percent.
I think you have to put it in context. The smart analysts I’ve read have said that, like any sector, this is going to bifurcate into a cheap layer and a high-end layer, and the high end will still go hard at massive computing and massive energy and do more sophisticated things. Everything eventually goes Walmart/Tiffany’s. And they’re saying this might be the Walmart. But it’s fascinating to see that the conventional wisdom — that you would need massive GPUs and massive energy — may not be the iron law we thought it was going to be.
Swisher: Let me read from Yann LeCun, Meta’s chief AI scientist. I just recently interviewed him, and you can go listen to that long interview about this. But he wrote, “To the people who see the performance of DeepSeek and think China’s surpassing the U.S. in AI, you’re reading this wrong. The correct reading is open-source models are surpassing proprietary ones. DeepSeek has profited from open research and open-source. For example, PyTorch and Llama from Meta. They came up with new ideas and built on top of other people’s work. Because their work is published and open-source, everyone can profit from it. This is the power of open research and open-source.”
Galloway: He’s talking his own book.
Swisher: That’s correct. I was just going to make that point.
Galloway: Llama is open-source.
Swisher: Yes, that’s correct. That’s what I was going to say. But it’s interesting — he’s having really interesting arguments. Gary Marcus, this guy who’s somewhat of a crank, was saying that “Congress needs to bring in Zuckerberg and LeCun to discuss how their unilateral open-sourcing decision rapidly undermined the U.S. advantage in generative AI.” LeCun goes, “An absolutely hilarious take revealing the complete misunderstanding of the fact that open research/open-source accelerates progress for everyone, from someone who’s repeatedly claimed that deep learning was hitting a wall.”
But one of the things he just wrote, again because he’s getting in there very deeply: “Major misunderstanding about AI infrastructure investments: Much of those billions are going into infrastructure for inference, not training. Running AI assistant services for billions of people requires a lot of compute. Once you put video understanding, reasoning, large-scale memory, and other capabilities into AI systems, inference costs are going to increase. The only real question is whether users will be willing to pay enough (directly or not) to justify CapEx and OpEx.” He thinks these market reactions are woefully unjustified and, at the same time, he’s sort of arguing that they aren’t, right? Which is interesting.
Galloway: It’s just so typical of the Chinese — the entire Chinese economy was sorta built on more for less. My guess is they had a mandate, or they said, “All right, we’re not going to have access to the same level of high-end chips. We need workarounds.” And it appears they responded with really interesting innovation.
Swisher: Using open-source?
Galloway: Yeah, using open-source. I mean, the scary thing — in typical Meta fashion, you can download a version of Llama with absolutely no guardrails, and you can request information on anything. The most politically correct of them, I find, is Anthropic. If I start asking questions about insider trading from Speaker Emerita Pelosi, it immediately gives me all this back: “We cannot endorse nor promote strategies around insider trading.” ChatGPT goes straight into it, and I think Llama will say, “Well, here’s what you do. You call your cousin.”
Swisher: You’re right. These open-source models have been a boon for China for sure, in keeping up.
Galloway: It’ll be interesting to see what happens to the stocks. These companies have already let some air out. The question is — and I don’t know the answer — is this the beginning of a massive correction that will infect the entire NASDAQ, the entire S&P? And quite frankly, now, these companies — I don’t want to say they’ve become too big to fail, but if they sneeze, the U.S. economy’s going to catch a cold.
Swisher: That’s correct.
This interview has been edited for length and clarity.