Jensen Huang, CEO of Nvidia, covered a lot of high-level concepts and low-level tech talk in his GTC 2025 keynote last Tuesday at the sprawling SAP Center in San Jose, California. My big takeaway was that humanoid robots and self-driving cars are coming sooner than we realize.
Huang, who runs one of the most valuable companies on earth with a market value of $2.872 trillion, talked about synthetic data and how new models will enable humanoid robots and self-driving cars to hit the market faster.
He also noted that we're about to shift from data-intensive, retrieval-based computing to a different kind enabled by AI: generative computing, where AI reasons out an answer and provides the information, rather than having a computer fetch data from memory to provide it.
I was fascinated by how Huang moved from topic to topic with ease, without a script. But there were moments when I needed an interpreter to give me more context. There were some deep topics: humanoid robots, digital twins, the intersection with games, and the Earth-2 simulation that uses a lot of supercomputers to figure out both global and local climate change effects and the daily weather.
Just after the keynote, I spoke with Dion Harris, Nvidia's senior director of its AI and HPC AI factory solutions group, to get more context on the announcements Huang made.
Here's an edited transcript of our interview.

VentureBeat: Did you own anything in particular in the keynote up there?
Harris: I worked on the first two hours of the keynote. All the stuff that had to do with AI factories, up until he handed it over to the enterprise material. We're very involved in all of that.
VentureBeat: I'm always interested in the digital twins and the Earth-2 simulation. Recently I interviewed the CTO of Ansys, talking about the sim-to-real gap. How far do you think we've come on that?
Harris: There was a montage that he showed, just after the CUDA-X libraries. That was interesting in describing the journey of closing that sim-to-real gap. It describes how we've been on this path for accelerated computing, accelerating applications to help them run faster and more efficiently. Now, with AI brought into the fold, it's creating this real-time acceleration through simulation. But of course you need the visualization, which AI is also helping with. You have this interesting confluence of core simulation accelerating to train and build AI. You have AI capabilities that are making the simulation run much faster and deliver accuracy. You also have AI assisting in the visualization elements it takes to create these realistic, physics-informed views of complex systems.
When you think of something like Earth-2, it's the culmination of all three of those core technologies: simulation, AI and advanced visualization. To answer your question about how far we've come: in just the last couple of years, working with folks like Ansys, Cadence and all these other ISVs who built legacies and expertise in core simulation, and then partnering with folks building AI models and AI-based surrogate approaches, we think this is an inflection point where we're going to see a huge takeoff in physics-informed, reality-based digital twins. There's a lot of exciting work happening.

VentureBeat: He started with this computing concept pretty early on, talking about how we're shifting from retrieval-based computing to generative computing. That's something I hadn't noticed [before]. It seems like it could be so disruptive that it has an impact on this space as well. 3D graphics has always seemed like such a data-heavy kind of computing. Is that somehow being alleviated by AI?
Harris: I'll use a term that's very current within AI. It's called retrieval-augmented generation. They use that in a different context, but I'll use it to explain the idea here as well. There will still be retrieval elements of it. Obviously, if you're a brand, you want to maintain the integrity of your car design, your branding elements, whether it's materials, colors, what have you. But there will be elements within the design principle or practice that can be generated. It will be a mix of retrieval, having stored database assets and classes of objects or images, but there will be plenty of generation that helps streamline that, so you don't have to compute everything.
It goes back to what Jensen was describing at the beginning, where he talked about how ray tracing works: calculating one ray and using AI to generate the other 15. The design process will look very similar. You'll have some assets that are retrieval-based, that are very much grounded in a specific set of artifacts or IP assets you need to build, specific elements. Then there will be other pieces that will be completely generated, because they're elements where you can use AI to help fill in the gaps.
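The "calculate one, generate the rest" idea can be sketched in miniature. This is a toy illustration, not Nvidia's actual neural rendering pipeline: one pixel per 4x4 tile is shaded exactly, and a trivial fill stands in for the trained network that would infer the other 15.

```python
import numpy as np

def expensive_shade(x, y):
    """Stand-in for a costly path-traced sample."""
    return np.sin(0.3 * x) * np.cos(0.3 * y)

H = W = 8
img = np.full((H, W), np.nan)

# Shade only 1 pixel per 4x4 tile -- 1 out of every 16 samples
for ty in range(0, H, 4):
    for tx in range(0, W, 4):
        img[ty, tx] = expensive_shade(tx, ty)

# "Generate" the other 15 per tile. A trained network would go here;
# nearest-neighbor fill is just a placeholder for the learned inference.
for ty in range(0, H, 4):
    for tx in range(0, W, 4):
        img[ty:ty + 4, tx:tx + 4] = img[ty, tx]

computed = H * W // 16
print(f"shaded {computed} of {H * W} pixels; the fill produced the rest")
```

The point is the cost structure, not the fill method: the expensive function runs 16 times less often, and the quality of the result depends entirely on what replaces the placeholder.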
VentureBeat: If you're faster and more efficient, it starts to alleviate the burden of all that data.
Harris: The speed is cool, but it's really interesting when you think about the new kinds of workflows it enables, the things you can do in terms of exploring different design spaces. That's when you see the potential of what AI can do. You see certain designers get access to some of the tools and understand that they can explore thousands of possibilities. You mentioned Earth-2. One of the most fascinating things some of the AI surrogate models allow you to do is not just a single forecast a thousand times faster, but a thousand forecasts. Getting a stochastic representation of all the possible outcomes, so you have a much more informed view when making a decision, versus having a very limited view. Because it's so resource-intensive, you can't explore all the possibilities. You have to be very prescriptive in what you pursue and simulate. AI, we think, will create a whole new set of possibilities to do things very differently.
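The difference between one fast forecast and a thousand forecasts can be made concrete with a toy ensemble. Nothing here reflects Earth-2 or CorrDiff internals; `toy_forecast` is a hypothetical stand-in for a cheap AI surrogate model, and the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forecast(initial_temp, noise):
    """Hypothetical surrogate: advance one temperature value
    14 days with some nonlinear drift plus random perturbation."""
    temp = initial_temp
    for _ in range(14):
        temp += 0.5 * np.sin(temp) + noise.normal(0, 0.8)
    return temp

# Because each run is cheap, run 1,000 forecasts with perturbed inputs
ensemble = np.array(
    [toy_forecast(15.0 + rng.normal(0, 0.2), rng) for _ in range(1000)]
)

# The payoff is a distribution, not a single number: e.g. the
# probability of exceeding a threshold two weeks out
p_heat = (ensemble > 20.0).mean()
print(f"ensemble mean: {ensemble.mean():.1f} C, P(>20 C) = {p_heat:.2f}")
```

A single deterministic run would return one value; the ensemble returns the stochastic representation of outcomes Harris describes, which is what actually informs a decision.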

VentureBeat: With Earth-2, you might say, "It was foggy here yesterday. It was foggy here an hour ago. It's still foggy."
Harris: I'd take it a step further and say that you'd be able to understand not just the impact of the fog now, but a range of possibilities for where things could be two weeks out in the future. Getting very localized, regionalized views of that, versus doing broad generalizations, which is how most forecasts are used now.
VentureBeat: The real advance we now have in Earth-2 today, what was that again?
Harris: There weren't many announcements in the keynote, but we've been doing a ton of work throughout the climate tech ecosystem in terms of the timetable. Last year at Computex we unveiled the work we've been doing with the Taiwan climate administration. That was demonstrating CorrDiff over the region of Taiwan. More recently, at Supercomputing, we did an upgrade of the model, fine-tuning and training it on the U.S. data set. A much larger geography, with completely different terrain and weather patterns to learn. Demonstrating that the technology is both advancing and scaling.

As we look at some of the other regions we're working with--at the show we announced we're working with G42, which is based in the Emirates. They're taking CorrDiff and building on top of their platform to create regional models for their specific weather patterns. Much like what you were describing about fog patterns, I assumed that most of their weather and forecasting challenges would be around things like sandstorms and heat waves. But they're actually very concerned with fog. That's one thing I never knew. A lot of their meteorological systems are used to help manage fog, especially for transportation and infrastructure that relies on that information. It's an interesting use case, where we've been working with them to deploy Earth-2, and CorrDiff in particular, to predict that at a very localized level.
VentureBeat: It's actually getting very practical use, then?
Harris: Absolutely.
VentureBeat: How much detail is in there now? At what level of detail do you have everything on Earth?
Harris: Earth-2 is a moonshot project. We're going to build it piece by piece to get to that end state we talked about, the full digital twin of the Earth. We've been doing simulation for quite a while. AI, we've obviously done some work with forecasting and adopting other AI surrogate-based models. CorrDiff is a unique approach in that it takes any data set and super-resolves it. But you have to train it on the regional data.
If you think of the globe as a patchwork of regions, that's how we're doing it. We started with Taiwan, like I mentioned. We've expanded to the continental United States. We've expanded to EMEA regions, working with some weather agencies there to use their data and train it to create CorrDiff versions of the model. We've worked with G42. It's going to be a region-by-region effort. It's reliant on a couple of things. One, having the data, whether observed data, simulated data or historical data, to train the regional models. There's plenty of that out there. We've worked with a lot of regional agencies. And then also making the compute and platforms available to do it.
The good news is we're committed. We know it's going to be a long-term project. In terms of the ecosystem coming together to lend the data and bring the technology together, it feels like we're on a good trajectory.
VentureBeat: It's interesting how hard that data is to get. I figured the satellites up there would just fly over some number of times and you'd have all of it.

Harris: That's a whole other data source, taking all the geospatial data. In some cases, because that's proprietary data--we're working with some geospatial companies, for example Tomorrow.io. They have satellite data that we've used to capture--in the montage that opened the keynote, you saw the satellite roving over the planet. That was imagery we took from Tomorrow.io specifically. OroraTech is another one we've worked with. To your point, there's a lot of satellite geospatial observed data that we can and do use to train some of these regional models as well.
VentureBeat: How do we get to a complete picture of the Earth?
Harris: One of what I'll call the magic elements of the Earth-2 platform is Omniverse. It allows you to ingest a number of different kinds of data and stitch it together using temporal consistency, spatial consistency, even if it's satellite data versus simulated data versus other observational sensor data. When you look at that challenge--for example, we were talking about satellites. We were talking with one of the partners. They have great detail, because they literally scan the Earth every day at the same time. They're in an orbital path that allows them to catch every strip of the earth every day. But it doesn't have great temporal granularity. That's where you want to take the spatial data we might get from a satellite company, but then also take the modeling and simulation data to fill in the temporal gaps.
It's taking all these different data sources and stitching them together through the Omniverse platform that will ultimately allow us to deliver on this. It won't be gated by any one approach or modality. That flexibility provides us a path toward getting to that goal.
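The gap-filling idea Harris describes can be sketched with made-up numbers: a hypothetical satellite observes once a day, an hourly simulation fills the temporal gaps, and the simulated series is anchored to the sparse observations. None of this reflects Omniverse's actual APIs; it only shows the blending concept.

```python
import numpy as np

# Daily satellite passes: sparse but trusted observations (t in hours)
sat_times = np.array([0.0, 24.0, 48.0])
sat_vals = np.array([290.0, 293.0, 291.0])   # e.g. surface temperature (K)

# Hourly simulation output: dense but biased
sim_times = np.arange(0.0, 49.0, 1.0)
sim_vals = 291.0 + 2.0 * np.sin(2 * np.pi * sim_times / 24.0)

# Anchor the dense series to the sparse one: compute the
# satellite-minus-simulation bias at observation times, then
# linearly interpolate that bias across the hourly timeline
bias_at_sat = sat_vals - np.interp(sat_times, sim_times, sim_vals)
bias = np.interp(sim_times, sat_times, bias_at_sat)
stitched = sim_vals + bias

# The stitched series matches the satellite exactly at observation times
print(stitched[[0, 24, 48]])   # -> [290. 293. 291.]
```

Between passes, the simulation supplies the temporal structure; at each pass, the observation pins the series down, which is the satellite-plus-simulation complementarity Harris is pointing at.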
VentureBeat: Microsoft, with Flight Simulator 2024, mentioned that there are some cases where countries don't want to hand over their data. [Those countries asked,] "What are you going to do with this data?"
Harris: Airspace definitely presents a limitation there. You have to fly over it. Satellite, obviously, you can capture at a much higher altitude.
VentureBeat: With a digital twin, is that just a far simpler problem? Or do you run into other challenges with something like a BMW factory? It's only so many square feet. It's not the entire planet.

Harris: It's a different problem. With the Earth, it's such a chaotic system. You're trying to model and simulate air, wind, heat, moisture. There are all these variables that you have to either simulate or account for. That's the real challenge of the Earth. It isn't the scale so much as the complexity of the system itself.
The trickier thing about modeling a factory is that it's not as deterministic. You can move things around. You can change things. Your modeling challenges are different because you're trying to optimize a configurable space versus predicting a chaotic system. That creates a very different dynamic in how you approach it. But they're both complex. I wouldn't downplay it and say that a digital twin of a factory isn't complex. It's just a different kind of complexity. You're trying to achieve a different goal.
VentureBeat: Do you feel like things like the factories are pretty well mastered at this point? Or do you also need more and more computing power?
Harris: It's a very compute-intensive problem, for sure. The key benefit of where we are now is that there's a pretty broad recognition of the value of producing a lot of these digital twins. We have incredible traction not just within the ISV community, but also with actual end users. In those slides we showed up there as he was clicking through, a lot of those enterprise use cases involve building digital twins of specific processes or manufacturing facilities. There's a pretty general acceptance of the idea that if you can model and simulate it first, you can deploy it much more efficiently. Wherever there are opportunities to deliver more efficiency, there are opportunities to leverage the simulation capabilities. There's a lot of success already, but I think there's still a lot of opportunity.
VentureBeat: Back in January, Jensen talked a lot about synthetic data. He was explaining how close we are to getting really good robots and autonomous cars because of synthetic data. You drive a car billions of miles in a simulation and you only have to drive it a million miles in real life. It's tested and it's going to work.
Harris: He made a couple of key points today. I'll try to summarize. The first thing he touched on was describing how the scaling laws apply to robotics, specifically for the point he mentioned, synthetic generation. That provides an incredible opportunity for both the pre-training and post-training elements introduced in that whole workflow. The second point he highlighted was also related to that. We open-sourced, or made available, our own synthetic data set.
We believe two things will happen there. One, by unlocking this data set and making it available, you get much more adoption and many more folks picking it up and building on top of it. We think that starts the flywheel, the data flywheel we've seen happening in the digital AI space. The scaling law helps drive more data generation through that post-training workflow, and then us making our own data set available should further adoption as well.
VentureBeat: Back to things that are accelerating robots so that they'll be everywhere soon, were there any other big things worth noting there?

Harris: Again, there's a number of mega-trends accelerating the interest and investment in robotics. The first thing was a bit loosely coupled, but I think he connected the dots at the end--it's basically the evolution of reasoning and thinking models. When you think about how dynamic the physical world is, any sort of autonomous machine or robot, whether it's a humanoid or a mover or anything else, needs to be able to spontaneously interact and adapt and think and engage. The advancement of reasoning models, being able to deliver that capability as an AI, both virtually and physically, is going to help create an inflection point for adoption.
Now the AI will become much more intelligent, more likely to be able to interact with all the variables that come up. It'll come to that door and see it's locked. What do I do? Those sorts of reasoning capabilities, you can build them into AI. Let's retrace. Let's go find another location. That's going to be a huge driver for advancing some of the capabilities within physical AI, those reasoning capabilities. That's a lot of what he talked about in the first half, describing why Blackwell is so important, describing why inference is so important in terms of deploying these reasoning capabilities, both in the data center and at the edge.
VentureBeat: I was watching a Waymo at an intersection near GDC the other day. All these people crossed the street, and then even more started jaywalking. The Waymo is politely waiting there. It's never going to move. If it were a human it would start inching forward. Hey, guys, let me through. But a Waymo wouldn't risk that.
Harris: When you think about the real world, it's very chaotic. It doesn't always follow the rules. There are all these spontaneous circumstances where you need to think and reason and infer in real time. That's where, as these models become more intelligent, both virtually and physically, a lot of the physical AI use cases become much more feasible.

VentureBeat: Is there anything else you wanted to cover today?
Harris: The one thing I'd touch on briefly--we were talking about inference and the importance of some of the work we're doing in software. We're known as a hardware company, but he spent a good amount of time describing Dynamo and setting up its importance. It's a very hard problem to solve, and it's why companies will be able to deploy AI at large scale. Right now, as they've been going from proof of concept to production, that's where the rubber is going to hit the road in terms of reaping the value from AI. It's through inference. A lot of the work we've been doing on both hardware and software will unlock many of the digital AI use cases, the agentic AI elements, getting up that curve he was highlighting, and then of course physical AI as well.
Dynamo being open source will help drive adoption. Being able to plug into other inference runtimes, whether it's SGLang or vLLM, is going to allow it to have much broader traction and become the standard layer, the standard operating system for that data center.