Those tracking Nvidia had much to celebrate at CES this week, with news about the company’s latest GPU, Vera Rubin. These powerful AI chips, the engines driving the AI boom, are after all what helped establish Nvidia as the world’s most valuable company.
But in his keynote address, CEO Jensen Huang once again made plain that Nvidia does not view itself as just a chip firm. It is also a software company, with its influence spanning nearly every layer of the AI stack—and with a significant focus on physical AI: AI systems that operate in the real world, including robotics and self-driving cars.
In a press release promoting Nvidia’s CES announcements, Huang declared that “the ChatGPT moment for robotics is here.” Breakthroughs in physical AI—models that grasp the real world, reason, and plan actions—“are enabling entirely new applications,” he said.
In the keynote itself, however, Huang was more reserved, noting the ChatGPT moment for physical AI is “nearly here.” It may seem like a trivial difference, but the distinction carries weight—especially when considering Huang’s prior comments, when he introduced Nvidia’s Cosmos world platform and described robotics’ “ChatGPT moment” as merely “around the corner.”
So has that moment truly arrived, or is it still just out of reach?
Huang himself appeared to recognize the gap. “The challenge is evident,” he said in yesterday’s keynote. “The physical world is varied and unpredictable.”
Nvidia is no newcomer to physical AI, either. Over the past decade, the company has built a foundation by developing an ecosystem of AI software, hardware, and simulation systems for robots and autonomous vehicles. But its strategy has never been to build its own robots or AVs. As Rev Lebaredian, Nvidia’s vice president of simulation technology, put it last year, the strategy remains centered on providing the essential tools.
There is no doubt that Nvidia has made progress in this area over the past year. On the self-driving front, it unveiled today the Alpamayo family of open AI models, simulation tools, and datasets designed to help AVs operate safely across a range of rare, complex driving scenarios—considered among the most difficult challenges for autonomous systems to master.
Nvidia also released new Cosmos and GR00T open models and data for robot learning and reasoning, and highlighted companies—including Boston Dynamics, Franka Robotics, Humanoid, and NEURA Robotics—that are launching new robots and autonomous machines built on Nvidia technologies.
Even with increasingly capable models, simulation tools, and computing platforms, Nvidia is not constructing the self-driving cars or robots themselves. Automakers must still transform these tools into systems that can safely operate on public roads—navigating regulatory scrutiny, real-world driving conditions, and public acceptance. Robotics companies, meanwhile, need to translate AI into machines that can reliably interact with the physical world at scale and at a commercially viable cost.
That work—integrating hardware, software, sensors, safety systems, and real-world constraints—remains extremely difficult, slow, and costly. And it is far from certain that faster AI progress alone will overcome these obstacles. After all, the ChatGPT moment was not just about the underlying model; such models had existed for years. It was about a user experience, and a company, that managed to capture something extraordinary.
Nvidia has achieved such a breakthrough before—GPUs proved to be the unexpected yet ideal engine for modern AI. Whether that kind of success can be repeated in physical AI, a far more complex and less standardized field, remains an open question.
