Nvidia’s new $3,500 AI ‘brain’ is poised to supercharge automation—and could give rise to what Jensen Huang calls the final phase of AI

  • Nvidia’s new $3,499 robot “brain” aims to make humanoid machines as quick-thinking and sure-footed as experienced workers, bringing more AI decision-making out of distant data centers and onto factory floors and into warehouses, hospitals, and farms.

When Nvidia CEO Jensen Huang delivered the keynote address at the company’s annual GTC conference in March, he outlined the “four waves” of the AI revolution. Perception AI was the first wave, which began about 10 years ago and focused on recognizing speech and classifying images. Generative AI, the second wave, has dominated the past five years; characterized by large language models like those behind ChatGPT, it creates text and images based on predictive patterns. The current wave, agentic AI, allows models to reason and perform tasks independently. But the next and final wave, according to Huang, will be “physical AI,” in which AI is integrated into real-world applications and advanced automation systems, including human-like robots.

On Monday, Nvidia brought Huang one step closer to making that fourth wave a reality. The company announced a new “brain” for robots: Jetson Thor, a $3,499 developer kit that starts shipping next month. The company’s stock rose slightly on the news and had climbed higher as of Tuesday morning. Powered by Nvidia’s top-end Blackwell chips, which are sought after by most countries trying to build AI at scale, Jetson Thor promises “unmatched performance and scalability,” Nvidia says, delivering the computing power needed to run generative AI models on the robot itself. Compared with its predecessor, Jetson Thor “provides up to 7.5x higher AI compute and 3.5x better energy efficiency,” according to the company.

The entire system is pitched as a foundation for robots that can perceive their surroundings and respond in real time, a capability Nvidia frames as essential for the next leg of AI adoption in the physical world.

What is Jetson Thor?

Jetson Thor is a compact computer designed to sit inside a robot and run multiple AI models at once—seeing, understanding, and acting without round trips to the cloud. In plain terms, it’s like putting a seasoned foreman, safety officer, and navigator into the same hard hat, so the machine can recognize a loose cable, reroute around a spill, and still keep working.

Nvidia’s argument is that Thor’s combination of on-board power and software lets a robot handle many senses and skills simultaneously—like a driver checking mirrors, listening for a siren, and changing lanes—without lag, which should translate into smoother movement, faster decisions, and reliable operation under pressure.

Why investors should care

Nvidia is busy extending its AI franchise from servers that train chatbots to the bleeding edge where AI must interact with the messy physical world. If successful, this broadens Nvidia’s total addressable market into logistics, manufacturing, healthcare, construction, retail, and autonomous systems—sectors that prize uptime and safety. In its announcement post, Nvidia highlighted early adopters and evaluators across blue-chip names in e-commerce, industrials, and tech, suggesting near-term pilots that could convert into volume orders if returns pencil out.

For the engineers and operators actually working in robotics, Nvidia’s pitch boils down to fewer pauses and fewer mistakes: Thor enables robots to react in milliseconds, the difference between dropping a package and catching it, or between bumping a pallet and steering around it. That responsiveness matters because every second saved compounds across thousands of picks, scans, or steps, and because safety incidents are expensive and disruptive. Nvidia also emphasizes power efficiency and the ability to run several AI tasks at once, which can reduce the number of computers per robot and simplify system design.

Nvidia says established robotics companies and large enterprises are adopting Jetson Thor now, with a developer ecosystem built around its Isaac tools to speed prototyping and deployment. For business leaders, the near-term takeaway is that trials can start with the developer kit, then scale via production modules if pilots show efficiency or safety gains. The longer-term story is Nvidia’s push to make robots as common—and as dependable—as other capital equipment, with AI handled locally to keep them fast, capable, and controllable.

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing. 

This story was originally featured on Fortune.com
