Do robot cars dream of virtual jeeps?


At the Consumer Electronics Show (CES), carmakers decided the way forward for dashboards was to extend them and, like military aircraft, project their graphics onto the windscreen as head-up displays.

Although few manufacturers want to use the term augmented reality, it represents an approach that design teams are using as they try to build greater intelligence into navigation and control systems.

Augmented reality of another kind is extending the range of experiments that researchers can perform on prototype vehicles at the University of Michigan’s 16-acre Mcity testbed in Ann Arbor. It makes it possible to enact scenarios that would be far too dangerous to risk on regular roads. Even under relatively benign conditions, experiments with vehicles on public highways have ended in death and injury.

Tucked away behind the university’s north research complex, the site mocks up a town centre, several roundabouts and a short section of freeway, with little risk of a car running over a pedestrian. A metal canopy simulates the RF-interference properties of obstacles such as overhead bridges, and the track provides researchers and clients with a way of testing sensor rigs and higher-level controls. But it is too quiet to represent the chaotic environment of real-world driving.

To bridge that gap, Professor Henry Liu and assistant research scientist Yiheng Feng of the University of Michigan have built an augmented-reality system for the vehicles. They use messages relayed by RF to the vehicle sensors to make it seem as though other cars are moving around the track with them. This lets them model situations such as another driver running a red light on the real hardware without putting the operators and other researchers on the track in danger.
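The principle is straightforward to sketch: broadcast positions for a car that does not exist so that the test vehicle’s perception and V2X stack must respond to it. The code below is a hypothetical illustration in Python, with an invented publish hook and a message loosely modelled on an SAE J2735 basic safety message; it is not the Michigan team’s implementation.

```python
# Hypothetical sketch: feeding a "ghost" vehicle into a test car's object list
# via periodic V2X-style position broadcasts. Names and message fields are
# illustrative, not the Mcity implementation.
import time
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:          # loosely modelled on an SAE J2735 BSM
    vehicle_id: str
    latitude: float
    longitude: float
    heading_deg: float
    speed_mps: float

def simulate_red_light_runner(publish, duration_s=5.0, rate_hz=10.0):
    """Broadcast positions for a virtual car crossing an intersection
    against the signal, so the real car's ADAS stack must react."""
    lat, lon = 42.2995, -83.6990          # placeholder intersection coordinates
    speed = 14.0                          # roughly 50km/h
    step = 1.0 / rate_hz
    for i in range(int(duration_s * rate_hz)):
        msg = BasicSafetyMessage(
            vehicle_id="ghost-01",
            latitude=lat,
            longitude=lon + i * step * speed * 1e-5,  # crude eastward motion
            heading_deg=90.0,
            speed_mps=speed,
        )
        publish(msg)          # e.g. push onto a DSRC/C-V2X radio or a test bus
        time.sleep(step)

# Usage: simulate_red_light_runner(lambda m: print(m))
```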

Widespread communication is likely to be important to more efficient ADAS and autonomous control. At the analyst conference organised by Siemens Digital Industries last autumn, Alexandra Francois-St-Cyr, portfolio development executive, explained how the company had expanded ADAS testing to include communications. “We have been running validation using our own fleet of vehicles. With vehicle-to-infrastructure, we could increase the speed of a turning vehicle to 16mph. With just vehicle-to-vehicle, it can only turn a corner at 10mph.”

Even with the help of augmented reality, testing with physical hardware remains slow, simply because prototypes are scarce and test tracks are limited in the range of scenarios they can support.

Although there are attempts to improve the efficiency of training AI to cope with road conditions, automakers for the moment are resigned to putting their designs through millions of miles of driving. A report by Tirias Research claimed that estimates for training neural-network models for automated driving range from 10 billion to 20 billion hours of driving. “This alone would require hundreds of years to accomplish in the real world.”

Moving into the virtual domain
Moving much of the process into the virtual domain makes it possible not just to test multiple systems in parallel, given enough server capacity to support it, but also to create many more scenarios. Companies are using a variety of mechanisms to build those scenarios, in some cases turning to gaming technology thanks to its long pursuit of presenting convincing virtual worlds to humans.

In the late 1990s, Epic Games developed its Unreal Engine for use in the first-person shooter Unreal but chose to license the software to competitors. Unity Technologies followed suit in the mid-2000s, and both companies have since pursued applications outside gaming, from surgery simulation to designing city infrastructure.

Around five years ago, while finalising his PhD at the University of Barcelona, Germán Ros started work on an autonomous-vehicle simulator called Synthia. After Intel expressed interest in seeing the work applied more widely, he converted the software to run on Unreal Engine so that it could be distributed as open source; that work evolved into CARLA.

Today, CARLA has close to 2,000 users across academia and industry and is the basis for an annual driving challenge. Participants upload their autonomous-driving agents as Docker images so that they can be run on any cloud server. The agents work through scenarios based on pre-crash information compiled by the US NHTSA, trying to reach a programmed destination without incident.

Last year’s challenge saw competitors run through more than 6,000km of driving across ten Amazon Web Services nodes, each armed with eight Nvidia K80 GPUs. The number of infractions, including several collisions with pedestrians and twenty with other vehicles, underlines the importance of doing this work in a virtual space. Ros sees the open-source nature of CARLA as important because it supports customisation.
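For teams that want to experiment, CARLA exposes a Python API. The snippet below is a minimal sketch, assuming a CARLA server already running on its default port; blueprint identifiers and attribute names can differ between releases.

```python
# Minimal CARLA client sketch: spawn an ego vehicle with a camera and let it
# drive on autopilot. Assumes a CARLA server on localhost:2000.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach a forward-facing RGB camera and save each frame to disk.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))

vehicle.set_autopilot(True)   # hand control to the built-in traffic manager
```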

Yao Zhai, automotive product manager at Unity, says programming interfaces to the company’s gaming engine are vital for the engineering users. They make it possible to integrate real-time data from simulated sensors and ECUs. “The Unity engine is designed in a very open and flexible way so that it is able to accommodate many of these non-gaming requirements through plugins and extensions, without changing the engine. However, the Unity engine is also evolving constantly to adopt requirements if we believe the change may benefit our broader user base across industries.”

Above: The development of autonomous vehicles requires situations to be modelled via simulation

Vehicle simulation
Although engineers such as Ferenc Pintér, product lead for the AIsim tool at AImotive, have worked with game engines, the autonomous-driving specialist decided to build its own simulator to support its vehicle development and to offer it to other teams.

“We know how these game engines work internally but their primary purpose is to generate visuals for the human eye, with effects such as lens flare. In self-driving cars we are giving control to the machine,” he says.

With machine-vision systems, the images presented to the computer look radically different to the typical 3D scenes game engines create. The image sensors use fish-eye lenses to capture as much of the surroundings as possible. They also have far less dynamic range than the human eye, which can make a dramatic difference in whiteout conditions or at night.

“You need to be able to control the parameters that make the simulation look close to what the real sensors would perceive. We also want to support situations where you don’t need the full visuals and work much faster on laptops if they don’t need the complete stack,” says Pintér, who notes that a visual simulation may use on the order of ten GPUs to render everything for a full sensor suite.
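The gap Pintér describes is easy to see with a toy sensor model. The sketch below, which is purely illustrative and unrelated to AImotive’s AIsim pipeline, clips and quantises a wide-dynamic-range scene the way a limited image sensor would, crushing the shadow and highlight detail that a game engine’s HDR render, or the human eye, would preserve.

```python
# Illustrative only: how a limited-dynamic-range sensor model crushes detail
# in a scene that spans many orders of magnitude of radiance.
import numpy as np

def sensor_model(hdr_scene, saturation=4.0, bit_depth=8, noise_std=0.01):
    """Map linear HDR radiance to a clipped, quantised sensor image."""
    rng = np.random.default_rng(0)
    exposed = hdr_scene / saturation                         # exposure scaling
    exposed = exposed + rng.normal(0.0, noise_std, hdr_scene.shape)  # read noise
    clipped = np.clip(exposed, 0.0, 1.0)                     # saturation (whiteout)
    levels = 2 ** bit_depth - 1
    return np.round(clipped * levels) / levels               # quantisation

# A scene spanning deep shadow to bright sky.
scene = np.geomspace(1e-3, 100.0, num=16)
print(sensor_model(scene))   # highlights saturate to 1.0, shadows collapse to 0
```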

The ability to break the simulation down into simpler modules supports what AImotive sees as a vital aspect of simulation for autonomous driving and ADAS: the ability to ship software updates safely. A simulation environment makes it possible to support continuous-integration workflows without having to go through extensive hardware-based testing for each point release.

“You want to make sure it works before it gets to the vehicle. As developers send in their code, the simulation can see if they have broken anything using incremental software validation,” Pintér explains. “We want to make it something you can test again and again, and quickly identify whether the error comes from simulation or the ADAS or self-driving code.”
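In practice, that kind of gate can sit in an ordinary test suite. The sketch below is a hypothetical pytest-style example, with run_scenario standing in for whatever interface a given simulator exposes; it is not AImotive’s tooling.

```python
# Hypothetical CI gate: replay a bank of scenarios against the latest driving-
# stack build and fail the pipeline on any safety infraction.
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    name: str
    collisions: int
    red_light_violations: int
    reached_goal: bool

SCENARIOS = ["unprotected_left_turn", "pedestrian_crossing", "red_light_runner"]

def run_scenario(name: str, build: str) -> ScenarioResult:
    # Stand-in: a real version would launch the simulator with the named
    # scenario and the candidate build, then collect its infraction report.
    return ScenarioResult(name, collisions=0, red_light_violations=0, reached_goal=True)

def test_no_safety_regressions():
    for name in SCENARIOS:
        result = run_scenario(name, build="candidate")
        assert result.collisions == 0, f"{name}: collision detected"
        assert result.red_light_violations == 0, f"{name}: ran a red light"
        assert result.reached_goal, f"{name}: failed to reach its goal"
```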

The PAVE360 environment created by Siemens provides another route to simulating ADAS in lifelike conditions, and the company is using it to help prototype not just software but hardware controllers. Following its acquisition of Mentor Graphics in 2017, the German engineering giant has integrated the emulation hardware used to test chip designs at higher speeds than pure software simulation allows before they make it to the fab. Joe Sawicki, executive vice president of IC EDA at Mentor, says: “We are big believers in building digital twins with emulators to simulate large electronics systems.”

Under pressure to deliver safer systems without hurting development lifecycles, vehicle design teams seem likely to continue to exploit both simulation and hardware acceleration.