Nvidia looks to expand its AI reign with robots, personal supercomputers
Nvidia Corp., looking to cement its place at the heart of the artificial intelligence boom, laid out plans for more powerful chips, a model for robotics, and “personal AI supercomputers” that will let developers work on desktop machines.
Speaking at the company’s annual GTC event in San Jose, California, Chief Executive Officer Jensen Huang unveiled a platform called Isaac GR00T N1 that will “supercharge humanoid robot development.” Nvidia is working with Walt Disney Co. and Google’s DeepMind on the project, which will be open to outside developers.
The GTC conference, once a little-known gathering of developers, has become a closely watched event since Nvidia took a central role in AI, with the tech world and Wall Street taking their cues from the presentation. Huang introduced a variety of hardware, software and services during his roughly two-hour speech, though there were no bombshell revelations for investors. The stock closed down more than 3% on Tuesday.
During the speech, Huang said Nvidia was working with General Motors Co. to use AI in next-generation cars, factories and robots. He also unveiled a separate wireless project involving companies such as T-Mobile U.S. Inc. and Cisco Systems Inc. Nvidia will help create “AI-native” wireless network hardware for new 6G networks, the successor to today’s 5G.
Dell Technologies Inc., HP Inc. and other manufacturers, meanwhile, will make the new personal supercomputer systems. Huang also introduced Blackwell Ultra, a successor to Nvidia's flagship AI processor. That chip line, due in the second half of 2025, will be followed by a more dramatic upgrade called "Vera Rubin" in the latter half of 2026.
It’s a pivotal moment for Nvidia. After two years of stratospheric growth for both its revenue and market value, investors in 2025 have begun to question whether the frenzy is sustainable. These concerns were brought into focus earlier this year when Chinese startup DeepSeek said it had developed a competitive AI model using a fraction of the resources.
DeepSeek’s claim spurred doubts over whether the pace of investment in AI computing infrastructure was warranted. But it was followed by commitments from Nvidia’s biggest customers, a group that includes Microsoft Corp. and Amazon.com Inc.’s AWS, to keep spending this year.
The biggest data center operators — a group known as hyperscalers — are projected to spend $371 billion on AI facilities and computing resources in 2025, a 44% increase from the year prior, according to a Bloomberg Intelligence report published Monday. That amount is set to climb to $525 billion by 2032, growing at a faster clip than analysts expected before the viral success of DeepSeek.
But broader concerns about trade wars and a possible recession have weighed on Nvidia’s stock, which is down 14% this year. The shares fell 3.3% to $115.53 by the close of New York trading Tuesday.
The GM news hurt shares of Mobileye Global Inc., which develops self-driving technology. The stock fell 3.5% to $14.44. The company, majority-owned by Intel Corp., had its initial public offering in 2022.
The weeklong GTC event is a chance to convince the tech industry that Nvidia’s chips are still must-haves for AI — a field that Huang expects to spread to more of the economy in what he’s called a new industrial revolution. Huang noted that the event has been described as the “Super Bowl of AI.”
The most important issue facing Nvidia is whether AI capital spending will continue to climb in 2026, Wolfe Research analyst Chris Caso said in a note previewing the event. “AI stocks have been down sharply on recession fears, and while we think AI spending is the last place cloud customers would wish to trim budgets, if the areas that fund those budgets suffer, that could put some pressure on capex.”
On that front, Huang didn’t seem to soothe investors’ concerns. But he offered a road map for future chips and unveiled a breakthrough system that relies on a combination of silicon and photonics — light waves.
Nvidia also announced plans for a quantum computing research lab in Boston, aiming to capitalize on another emerging technology.
Nvidia, based in Santa Clara, California, has suffered some production snags as it works to rapidly upgrade its chips. Some early versions of Blackwell required fixes, delaying the release. Nvidia has said that those challenges are behind it, but the company still doesn’t have enough supply to meet demand. It has increased spending to get more of the chips out the door, something that will weigh on margins this year.
Huang said the top four public cloud vendors — Amazon, Microsoft, Alphabet Inc.’s Google, and Oracle Corp. — bought 1.3 million of Nvidia’s older-generation Hopper AI chips last year. So far in 2025, the same group has bought 3.6 million Blackwell AI chips, he said.
After Vera Rubin debuts in the second half of next year, Nvidia plans to release a version a year later called Rubin Ultra. Vera Rubin’s namesake was an American astronomer credited with helping discover the existence of dark matter.
The generation of chips after that will be named Feynman, Huang said. The name is a likely reference to Richard Feynman, the American theoretical physicist known for his foundational contributions to quantum mechanics and quantum electrodynamics.
©2025 Bloomberg L.P. Visit bloomberg.com. Distributed by Tribune Content Agency, LLC.