Day two of GTC. If you missed day one, Jensen Huang Just Told Every Restaurant Owner to Pay Attention covers the keynote.
I was watching a live panel on robotics simulation. Five NVIDIA researchers and the guy behind Two Minute Papers, talking about how they train robots to move through the real world. Not motivational speakers. Scientists.
Here is the problem they solved. A self-driving car pulls up to an intersection. A pedestrian is ten feet away. The car stops. Good. But what happens when the pedestrian runs? What happens when they freeze and change direction? What happens when a piece of space junk falls out of the sky at the same time? What happens when you introduce a new menu item on a Friday night and walk out the door?
Real-world data cannot cover every scenario. You would have to drive every road on earth a million times and still miss edge cases. So NVIDIA did something else. They simulated it. Thousands of permutations of every scenario. Simulations generated inside physics-accurate digital environments, then used to train the AI. The same physics that make your character die from falling in an MMORPG.
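The idea is simple enough to sketch. Here is a toy version of that permutation machine, in Python: randomize one intersection scenario thousands of times and let basic physics (the stopping-distance formula, standing in for NVIDIA's full physics engine) label each one. Every name and number below is a hypothetical illustration, not NVIDIA's actual pipeline.

```python
import random

random.seed(0)

def stopping_distance(speed_ftps, decel_ftps2=15.0):
    # Basic kinematics: d = v^2 / (2a). This is the "physics engine"
    # that keeps the synthetic data honest.
    return speed_ftps ** 2 / (2 * decel_ftps2)

def random_scenario():
    # One randomized permutation of the intersection encounter.
    return {
        "car_speed_ftps": random.uniform(5, 60),
        "pedestrian_distance_ft": random.uniform(5, 120),
    }

def label(scenario):
    # Synthetic training label: can the car brake in time?
    return stopping_distance(scenario["car_speed_ftps"]) < scenario["pedestrian_distance_ft"]

# Thousands of permutations, each labeled by physics, no real-world miles driven.
dataset = [(s, label(s)) for s in (random_scenario() for _ in range(10_000))]
safe = sum(1 for _, ok in dataset if ok)
print(f"{len(dataset)} simulated scenarios, {safe} resolvable by braking")
```

Ten thousand intersections in a fraction of a second, including combinations no test driver would ever encounter. That is the whole trick, just scaled up by a few billion.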
The robots trained on imagined experience outperformed robots trained only on real-world data. You had robots performing tasks for the first time and doing them well.
That is visualization. That is what every coach who ever said "see the shot before you take it" was teaching. That is what sports psychologists have been publishing research on for decades. Close your eyes, rehearse the scenario, perform better when it counts. NVIDIA built the computational version of visualization. They proved it works at a scale no human brain could match.
The inverse is also true. Bad data in, bad data out. If you feed the simulation garbage, the model degrades. Your brain runs simulations too, just without a physics engine to keep it honest. Negative self-talk, catastrophic thinking, replaying the miss over and over. That is training yourself on bad synthetic data. The principle does not care whether the processor is silicon or biological.
The part that should keep you up tonight: they ran out of real-world data. So they had AI generate new scenarios. Then they trained AI on those AI-generated scenarios. And it worked. The models learned from scenarios they never physically experienced. The reason it did not spiral into nonsense is that every simulation runs inside a physics engine. Gravity does not negotiate. The model can imagine anything it wants, but it cannot imagine its way past the laws of the universe.
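That physics-engine-as-referee idea can be sketched in a few lines. Below, a random generator stands in for the AI "imagining" scenarios, and a plausibility check stands in for the physics engine rejecting the impossible ones. The specific names and thresholds are assumptions for illustration only.

```python
import random

random.seed(1)

def imagine_scenario():
    # Stand-in for an AI generating a new scenario. The values are
    # unconstrained, so some of what it dreams up is physically impossible.
    return {
        "pedestrian_speed_ftps": random.gauss(6, 8),   # humans top out around 35 ft/s
        "falling_object_accel": random.gauss(32, 10),  # gravity is ~32 ft/s^2
    }

def physically_plausible(s):
    # The physics engine as referee: imagined data must obey the universe.
    return (0 <= s["pedestrian_speed_ftps"] <= 35
            and 30 <= s["falling_object_accel"] <= 34)

imagined = [imagine_scenario() for _ in range(5_000)]
training_set = [s for s in imagined if physically_plausible(s)]
print(f"kept {len(training_set)} of {len(imagined)} imagined scenarios")
```

The generator can hallucinate; the filter cannot. Train only on what survives the filter and the spiral into nonsense never starts.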
The machine is teaching itself by imagining. We have a word for that in humans. We call it thinking.
Zig Ziglar told you to visualize success before you could achieve it. NVIDIA just proved he was right with a billion dollars in GPU compute.
The leather jacket is still optional.