When NVIDIA started to move aggressively toward autonomous cars, they also built a foundation for autonomous robots of every kind.
At Dell Technologies World a few years back, there was a session on the future that identified robotics as one of a handful of highly disruptive changes the market would go through this decade.
We now have robots being developed for manufacturing, telepresence, military and law enforcement use, security in places like malls, large buildings, and event centers, and digital helpers for people with disabilities.
The Remaining Robotic Problem
But our road to robotics is rocky. From actuators to cameras, to power, to simply getting the things intelligent enough, there have been huge impediments to advancement. We have worked through most of them, but one problem remains.
That problem is how to train robots effectively without physically walking them through every task. The fix is simulation, where days of training can be compressed into minutes, because a simulation can run at machine speed, well past what any physical construct could do.
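To make that speed advantage concrete, here is a minimal Python sketch (my illustration, not NVIDIA's code; step_physics is a hypothetical stand-in for an engine's physics tick). Because nothing forces the loop to pace itself against the wall clock, a full day of simulated robot time can pass in seconds:

```python
import time

SIM_DT = 1.0 / 60.0           # simulated seconds advanced per physics step
STEPS = 60 * 60 * 60 * 24     # 24 simulated hours of 60 Hz steps

def step_physics():
    # Hypothetical stand-in for one physics tick of a simulator engine.
    pass

start = time.perf_counter()
for _ in range(STEPS):
    step_physics()            # no sleep(): the loop runs at machine speed
elapsed = time.perf_counter() - start

sim_hours = STEPS * SIM_DT / 3600
print(f"Simulated {sim_hours:.0f} h of robot time in {elapsed:.1f} s of wall clock")
```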
But not all simulations are created equal, and while gaming-based simulations looked promising for training, that promise didn't translate well into practice. To work correctly, a simulation has to present, through the virtual cameras to the virtual robot, the full richness of the physical world. Otherwise, the robot's training will be faulty.
For instance, robots trained on gaming-based simulations exhibited weird anomalies once that learning was deployed. They would mistake shadows for solid objects and route around them, dramatically reducing their efficiency and productivity while creating potential hazards as they dodged obstacles that weren't there.
Isaac Sim
The solution is designed to work with NVIDIA's Jetson Xavier robotics platform, which some 800,000 developers have embraced, along with 120 ecosystem partners and 3,000 customers, and it is becoming one of the de facto robotics solutions.
Isaac Sim was developed to solve this problem. It is a simulator built on NVIDIA's Omniverse digital-twin platform, which provides a realistically accurate virtual representation of the natural world. It can import various object types, including CAD files, to generate a simulated environment logically indistinguishable from the real one.
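For readers who want a sense of what driving Isaac Sim looks like, here is a rough sketch of a standalone script using the Python API Isaac Sim ships with (omni.isaac.core). Module paths and arguments vary by release, and the asset path is a hypothetical placeholder, so treat this as illustrative rather than definitive:

```python
# Rough sketch: load a converted CAD asset into an Isaac Sim scene and
# step physics headless, faster than real time. API details vary by release.
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": True})  # must start before other omni imports

from omni.isaac.core import World
from omni.isaac.core.utils.stage import add_reference_to_stage

world = World()
world.scene.add_default_ground_plane()

# A CAD model converted to USD (hypothetical path) becomes a prim in the scene.
add_reference_to_stage(usd_path="/assets/factory_cell.usd",
                       prim_path="/World/FactoryCell")

world.reset()
for _ in range(1000):           # run physics steps without rendering
    world.step(render=False)

simulation_app.close()
```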
You can then alter elements and test for virtually anything you can imagine, from power outages and weather events to alien and zombie invasions if you want. You can adjust the camera and sensor suites, adjust the size and nature of the virtual robot being tested, and even model components that are still theoretical, as long as you can define their attributes.
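As one illustration of that kind of scene variation, Omniverse ships a synthetic-data and randomization toolkit called Replicator. The sketch below is my hedged approximation of its usage; the exact API differs across releases, and the prim paths and attribute names are hypothetical placeholders:

```python
# Hedged sketch of domain randomization with Omniverse Replicator.
# Module paths, attribute names, and prim paths below are illustrative.
import omni.replicator.core as rep

with rep.trigger.on_frame(num_frames=200):      # re-randomize on each of 200 frames
    lights = rep.get.prims(path_pattern="/World/Lights/.*")
    with lights:
        rep.modify.attribute("intensity", rep.distribution.uniform(500.0, 5000.0))

    props = rep.get.prims(path_pattern="/World/Props/.*")
    with props:
        rep.modify.pose(
            position=rep.distribution.uniform((-2, 0, -2), (2, 0, 2)),
            rotation=rep.distribution.uniform((0, -180, 0), (0, 180, 0)),
        )
```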
While this does use synthetic data, which requires a process to ensure that the simulated data is consistent with the world you are emulating, this level of flexibility is critical to creating the next generation of robotics at scale.
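What the simplest version of that consistency check might look like: below is a generic Python sketch (my illustration, not part of Isaac Sim) that compares the pixel-intensity distributions of real and synthetic camera frames, where a large gap warns that the simulation has drifted from the world it is emulating:

```python
import numpy as np

def histogram_gap(real: np.ndarray, synthetic: np.ndarray, bins: int = 64) -> float:
    """Total-variation distance between pixel-intensity histograms of
    real and synthetic camera frames (0 = identical, 1 = disjoint)."""
    h_real, _ = np.histogram(real, bins=bins, range=(0, 255))
    h_syn, _ = np.histogram(synthetic, bins=bins, range=(0, 255))
    p = h_real / h_real.sum()
    q = h_syn / h_syn.sum()
    return 0.5 * float(np.abs(p - q).sum())

# Hypothetical usage with two 8-bit grayscale frames:
real_frame = np.random.randint(0, 256, (480, 640))
syn_frame = np.random.randint(0, 256, (480, 640))
print(f"sim-to-real gap: {histogram_gap(real_frame, syn_frame):.3f}")
```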
Potential Game
One thing that struck me is that this solution could power an exciting game that, in turn, would help people develop skills with the platform.
The process of creating a robot that would fight monsters, zombies, or space aliens is potentially the exact process needed to create a robot that will do more menial tasks. If you can make something fun, more people will pick up the skills needed to use the tool.
Imagine how much fun it would be to take a realistic location, say, your home or office building, and then build a robot, or a team of robots, to defend it autonomously from zombies. You could also stage virtual robot wars and battles while learning the fundamentals needed to design, build, and train real robots.
I think NVIDIA is on the cusp of something exciting, and they have relationships with most of the major game companies that could help make this theoretical game possible.
Wrapping Up
As we have scaled robotics, and AI for that matter, training has emerged as the critical problem.
At least with robotics, a solution was to use the Omniverse platform to create a digital twin of the places you needed to train the robot and then move to a training solution that worked at machine speeds.
That solution is called Isaac Sim, and the next phase is to train the next generation of trainers on this tool to speed the time to market for the robots we've been promised. I think the tool also lends itself to gamified training, and I expect that, eventually, this may be how most of us become familiar with it.
In any case, the creation of Isaac Sim helps confirm that Dell Technologies World talk from years ago, making it more certain that the next big technology wave, at least where hardware is concerned, is likely to be robotics.