
NVIDIA announces new AI perception for ROS developers

NVIDIA and Open Robotics have entered into an agreement to accelerate ROS 2 performance on NVIDIA’s Jetson edge AI platform and GPU-based systems.

The initiatives aim to reduce development time and improve performance for developers incorporating computer vision and AI/machine-learning functionality into their ROS-based applications.

Open Robotics will enhance ROS 2 to enable efficient management of data flow and shared memory across GPU and other processors present on the NVIDIA Jetson edge AI platform. This will help to significantly improve the performance of applications that have to process high-bandwidth data from sensors such as cameras and lidars in real time.
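For context, ROS 2 already offers a CPU-side zero-copy path: with intra-process communication enabled, a publisher can hand a message to the middleware as a unique_ptr so that subscribers in the same process receive it without a copy. The sketch below illustrates that existing mechanism, which the planned work would effectively extend to data held in GPU memory on Jetson; the node name, topic, and image dimensions here are illustrative assumptions, not part of any announced API.

// Minimal sketch of ROS 2's existing intra-process zero-copy publish path.
// Names and dimensions are hypothetical; this shows today's CPU-side
// mechanism, not the GPU-aware transport the collaboration is building.
#include <chrono>
#include <memory>
#include <utility>

#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

class CameraPublisher : public rclcpp::Node
{
public:
  CameraPublisher()
  : Node("camera_publisher",
         rclcpp::NodeOptions().use_intra_process_comms(true))
  {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image_raw", 10);
    timer_ = create_wall_timer(
      std::chrono::milliseconds(33),  // roughly 30 frames per second
      [this]() {
        // Allocate the message once and hand ownership to the middleware;
        // with intra-process comms enabled, local subscribers receive it
        // without an additional copy.
        auto msg = std::make_unique<sensor_msgs::msg::Image>();
        msg->header.stamp = now();
        msg->height = 720;
        msg->width = 1280;
        msg->encoding = "rgb8";
        msg->step = msg->width * 3;
        msg->data.resize(msg->step * msg->height);  // placeholder pixels
        pub_->publish(std::move(msg));
      });
  }

private:
  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<CameraPublisher>());
  rclcpp::shutdown();
  return 0;
}

The enhancements described above would let a pipeline like this keep sensor data resident on the GPU between processing stages, rather than moving ownership of buffers in host memory as this CPU-side example does.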

In addition, Open Robotics and NVIDIA are working to enable seamless simulation interoperability between Open Robotics' Ignition Gazebo and NVIDIA Isaac Sim on Omniverse. Isaac Sim already supports ROS 1 and 2 out of the box and features a rich ecosystem of 3D content through its connections to applications such as Blender and Unreal Engine 4.

Ignition Gazebo has a long track record and is used widely by the robotics community, including in high-profile competition events such as the ongoing DARPA Subterranean Challenge.

“As more ROS developers leverage hardware platforms that contain additional compute capabilities designed to offload the host CPU, ROS is evolving to make it easier to efficiently take advantage of these advanced hardware resources,” said Brian Gerkey, CEO of Open Robotics. “Working with an accelerated computing leader like NVIDIA and its vast experience in AI and robotics innovation will bring significant benefits to the entire ROS community.”

With the two simulators connected, ROS developers will be able to move their robots and environments between Ignition Gazebo and Isaac Sim, run large-scale simulations, and take advantage of each simulator's advanced features, such as high-fidelity dynamics, accurate sensor models and photorealistic rendering, to generate synthetic data for training and testing AI models.

Software resulting from this collaboration is expected to be released in the spring of 2022.

Author
Neil Tyler
