Artificial Intelligence News

NVIDIA continues to improve AI development


NVIDIA announced new features and capabilities that support the acceleration and growth of AI-based solutions for robotics and simulation.

NVIDIA Omniverse enables Amazon Robotics engineers to quickly simulate warehouse environments and generate sensor training data. | Credit: NVIDIA


NVIDIA CEO Jensen Huang presented the latest product announcements during his keynote at the GTC 2023 event this morning.

One of the first major announcements was that NVIDIA accelerated computing, together with cuOpt, had solved Li and Lim’s pickup-and-delivery benchmark (a variant of the traveling salesman problem) faster than any other solution to date. This achievement opens up a new world of robotics capabilities for creating real-time solutions to AMR path-planning problems.
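To give a feel for the class of problem cuOpt accelerates, here is a minimal route-planning sketch in plain Python using a greedy nearest-neighbor heuristic. This is a generic illustration of the problem, not NVIDIA's API or algorithm; heuristics like this are fast but suboptimal, which is exactly why GPU-accelerated exact and metaheuristic solvers matter at warehouse scale.

```python
import math

def nearest_neighbor_route(points, start=0):
    """Greedy TSP heuristic: from the current waypoint, always visit
    the closest unvisited waypoint next. Fast but not optimal."""
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        cur = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Hypothetical AMR pick locations on a warehouse floor, in meters.
waypoints = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]
print(nearest_neighbor_route(waypoints))  # → [0, 4, 2, 1, 3]
```

A real AMR fleet adds time windows, pickup/delivery pairing, and vehicle capacities, which is what turns this into the Li and Lim benchmark family rather than plain TSP.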

Isaac Sim in Omniverse

Omniverse Cloud for enterprises is now available as a platform as a service (PaaS) for compute-intensive workloads such as synthetic data generation and CI/CD. This PaaS provides access to the best hardware when you need it for processing-intensive workloads. The service launched first on Microsoft Azure.

Amazon Robotics was an early customer of Isaac Sim on Omniverse. The Proteus warehouse robot development team built a complete digital counterpart of the Proteus AMR and implemented it in Isaac Sim to assist with AMR development and programming.

https://www.youtube.com/watch?v=LUnZXBL_lqA

The team generated hundreds of photo-realistic environments to train and test sensor-processing algorithms and AMR behavior. This allows development teams to accelerate projects without the need to build and test expensive prototypes in the real world.
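The core idea behind generating hundreds of varied environments is domain randomization: sample scene parameters at random so the trained perception model generalizes to the real warehouse. The sketch below is plain Python with made-up parameter ranges, purely to illustrate the concept; Omniverse's actual synthetic-data tooling works differently.

```python
import random

def random_scene_params(seed=None):
    """Sample randomized scene parameters for one synthetic frame.
    All ranges here are illustrative, not Omniverse defaults."""
    rng = random.Random(seed)
    return {
        "light_intensity": rng.uniform(200.0, 1500.0),  # lux
        "shelf_count": rng.randint(4, 20),
        "box_positions": [(rng.uniform(0, 50), rng.uniform(0, 30))
                          for _ in range(rng.randint(5, 40))],
        "camera_height_m": rng.uniform(1.0, 3.0),
    }

# Generate a small batch of scene configurations to hand to a renderer.
batch = [random_scene_params(seed=i) for i in range(100)]
print(len(batch))  # → 100
```

Seeding each scene makes every frame reproducible, so a failing test case in the perception pipeline can be regenerated exactly.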

Isaac Sim enables sensor simulation and interactive design, and it can run on AWS RoboMaker to aid world generation. It can also be deployed with your preferred cloud service provider.

https://www.youtube.com/watch?v=g78YHYXXils

BMW is also using NVIDIA Omniverse to speed up the planning and design of new car assembly plants. BMW has moved virtual factory development to Omniverse, where manufacturing engineers can lay out robotic assembly workcells and virtually modify robots, tools, and programming to optimize workflows. BMW claims that the digital twin development process cuts two years from the planning cycle for a new factory.

Screenshot of a Microsoft Teams meeting with the BMW engineering team.

BMW is an early adopter of NVIDIA Omniverse for the development and programming of future automotive assembly lines and factories. | Credit: NVIDIA

The Isaac ROS DP3 release adds new perception capabilities and open-source modules

There is a new DP3 release for Isaac ROS, which includes a number of new features:

  • New lidar-based grid localizer package
  • New people detection support in the nvblox package
    • GPU-accelerated 3D reconstruction for collision avoidance
  • Updated VSLAM and depth perception GEMs
  • NITROS source release, NVIDIA’s hardware-accelerated transport for ROS 2
  • New Isaac ROS benchmark suite
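To make the lidar grid localizer concrete: the underlying idea in correlation-style grid localization is to score candidate robot poses by how well the lidar scan endpoints line up with occupied cells of a known map. The toy sketch below shows that idea in plain Python over a tiny 2D grid with translation-only search; it is a generic illustration, not the Isaac ROS implementation (which is GPU-accelerated and handles rotation and continuous poses).

```python
def score_pose(grid, endpoints, dx, dy):
    """Count how many scan endpoints, shifted by (dx, dy),
    land on occupied cells (value 1) of the occupancy grid."""
    hits = 0
    for x, y in endpoints:
        gx, gy = x + dx, y + dy
        if 0 <= gx < len(grid[0]) and 0 <= gy < len(grid) and grid[gy][gx]:
            hits += 1
    return hits

def localize(grid, endpoints, search=2):
    """Brute-force search over small translations; return the best (dx, dy)."""
    candidates = [(dx, dy) for dx in range(-search, search + 1)
                           for dy in range(-search, search + 1)]
    return max(candidates, key=lambda c: score_pose(grid, endpoints, *c))

occupancy = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
# Scan endpoints as if the robot were offset by (+1, +1) from the map frame.
scan = [(0, 0), (1, 0), (2, 0), (0, 1), (2, 1), (0, 2), (1, 2), (2, 2)]
print(localize(occupancy, scan))  # → (1, 1)
```

The real package does this matching fast enough to run inside the ROS 2 control loop, which is where the GPU acceleration pays off.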

NVIDIA Omniverse and Isaac Sim allow robotics developers to see the world as the sensors see it. | Credit: NVIDIA

Update for the NVIDIA Jetson Orin Family

The Jetson Orin product line is getting updates, including the new Orin Nano units, rounding out a full range of system-on-modules from hobbyist to commercial platforms:

  • Jetson Orin Nano 8GB/4GB
  • Orin NX 16GB/8GB
  • AGX Orin 64GB/32GB

New entry-level Jetson developer kit for Robotics / Edge AI

NVIDIA introduced the NVIDIA Jetson Orin Nano developer kit, which delivers 80x the performance of the previous-generation Jetson Nano, enabling developers to run advanced transformer and robotics models. It also improves energy efficiency, delivering up to 50x the performance per watt, so developers starting with Jetson Orin Nano modules can build and deploy entry-level, power-efficient AI-powered robots, smart drones, and intelligent vision systems.

  • Available now for preorder for $499
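The two headline numbers above also bound the power budget: 80x the throughput at 50x the performance per watt implies the new module draws at most about 1.6x the power of the original Jetson Nano. The 80x and 50x figures come from the announcement; the quick arithmetic check is ours.

```python
perf_ratio = 80           # throughput vs. original Jetson Nano (claimed)
perf_per_watt_ratio = 50  # efficiency vs. original Jetson Nano (claimed)

# performance = (performance / watt) * watts, so dividing the two
# claimed ratios yields the implied power-draw ratio.
power_ratio = perf_ratio / perf_per_watt_ratio
print(power_ratio)  # → 1.6
```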

In forward-looking statements, NVIDIA believes that building infrastructure will evolve in such a way that every building will be considered a “robot”. Practically speaking, this implies that buildings and other elements of infrastructure will be imbued with the ability to sense, think, and act.

It started with the idea of automating infrastructure with vision-based AI as a platform for things to observe other things in motion. This is the vision the company calls “NVIDIA Metropolis”.

The company announced the latest generation of the TAO Toolkit, version 5.0, along with new versions of DeepStream that use more sensors to help automate machines and solve big computer vision challenges with APIs.

TAO Toolkit 5.0’s additional features include:

  • New transformer-based pre-trained models
  • Deployment on any device: GPU, CPU, or MCU
  • TAO is now open source
  • AI-assisted annotation
  • REST APIs
  • Integration with any cloud: Google Vertex AI, Azure AI, AWS, etc.

NVIDIA also announced Metropolis Microservices to address difficult problems such as multi-camera tracking and human-machine interaction.

DeepStream puts AI to work in a low-code graphing interface as an extension of existing AI services. This includes multi-sensor fusion and deterministic scheduling for devices such as PLCs.

New features of the DeepStream SDK include a new graph execution framework (GXF) that allows developers to extend beyond the open-source GStreamer multimedia framework. This update opens up a number of new applications, including industrial quality control, robotics, and autonomous machines.

Editor’s note: An earlier version of this article referred to Mercedes instead of BMW. It has been updated to accurately reflect Jensen Huang’s remarks.


