Key technologies for autonomous cars (3/6)

If autonomous vehicles are to help us in our day-to-day lives, and reduce the unacceptable number of accidents that still occur on our roads, they clearly need to see what is ahead of them.

More than that, they must have awareness of the environment that surrounds them in its entirety – a 360° view of everything that could potentially affect them.

Achieving that objective will require the integration of multiple sensing mechanisms. Some of these are already used in today’s advanced driver assistance systems (ADAS), but others are still emerging and are specific to the realm of self-driving cars. Without them, vehicle autonomy will not even get out of the slow lane, let alone into the mass market. The different levels taking us from human-driven cars to full autonomy were detailed in the previous blog, and we will now look at supporting sensor technologies.

Three core technologies will be responsible for enabling autonomous vehicles to “see.” These are LiDAR, cameras and radar. Currently, each is at a different stage of its development roadmap.

Perhaps the simplest of those technologies is radar. This is already found in some vehicles – where it supports certain functions, such as adaptive cruise control. However, it has an important role to play in the progression of autonomous vehicles too, especially in low-speed scenarios such as parking, or in slow-moving traffic. It will also have potential use in other tasks executed at higher speeds, like lane changing on motorways, for example.

The latest mmWave automotive radar systems use short-wavelength (millimeter-wave) electromagnetic signals to determine the range, velocity and relative angle of detected objects. Typically operating in the 77GHz frequency band, they are able to distinguish very small movements.
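In a common frequency-modulated continuous-wave (FMCW) radar, range follows from the beat frequency between transmitted and received chirps, and radial velocity from the Doppler shift. A minimal sketch of those two relationships (the chirp slope and frequency values below are illustrative assumptions, not figures from any particular product):

```python
C = 3.0e8  # approximate speed of light, m/s

def fmcw_range(beat_freq_hz, chirp_slope_hz_per_s):
    """Range from the beat frequency of an FMCW chirp: R = c * f_b / (2 * S)."""
    return C * beat_freq_hz / (2.0 * chirp_slope_hz_per_s)

def doppler_velocity(doppler_shift_hz, carrier_freq_hz=77e9):
    """Radial velocity from the Doppler shift: v = c * f_d / (2 * f_c)."""
    return C * doppler_shift_hz / (2.0 * carrier_freq_hz)

# With an assumed 30 MHz/us chirp slope, a 1 MHz beat tone maps to a 5 m target.
print(fmcw_range(1e6, 30e12))
```

The factor of two in both formulas reflects the round trip: the signal travels to the target and back, and the Doppler shift is likewise doubled on reflection.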

Radar has many advantages: it is a proven technology that remains reliable regardless of changes to environmental conditions. The hardware needed is compact and comparatively cheap, since it already benefits from the economies of scale that come with being an established technology. It does have some inherent limitations, though, the most important being the relatively small amount of data it can provide. This is why autonomous vehicles will need to rely on a suite of sensors, rather than just one sensing mechanism in isolation.

LiDAR is a technology that nearly all automobile manufacturers are currently including in their development programs, and will be pivotal in complementing vehicle radar systems. Here pulsed light waves are emitted from a laser source, and subsequently bounce off surrounding objects. From the time it takes for each pulse to return to the source’s accompanying sensor, it is possible to calculate the distance it has traveled. The process is repeated millions of times per second to create a real-time 3D map of the environment. This can indicate shape and depth of vehicles, road infrastructure, cyclists and pedestrians, thereby making it easier to navigate around any obstacles as they appear. A key plus-point of LiDAR, when compared with other sensor options, is that it can produce a “bird’s eye” view – resulting in a more comprehensive perspective.
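The time-of-flight calculation described above can be sketched in a few lines. Because each pulse travels out to the object and back, the one-way distance is half the round-trip time multiplied by the speed of light:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(round_trip_time_s):
    """One-way distance from pulse time-of-flight.
    The pulse travels out and back, so distance = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A pulse returning after 0.8 microseconds corresponds to roughly 120 m,
# around the maximum range quoted for some automotive LiDAR units.
print(lidar_distance(0.8e-6))
```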

Ford has already invested heavily in the technology. The company is using Velodyne’s HDL-64E LiDAR system in current autonomous vehicle development and testing activities, with initial models featuring it expected to arrive in the 2021 timeframe.

The HDL-64E is a 64-channel system with a 360° horizontal field-of-view (FoV) and 26.9° vertical FoV, along with an overall range of up to 120m. The number of channels supported is critical in relation to potential vehicle speed. According to Velodyne, a car equipped with a 32-channel system could only drive autonomously at up to 35mph (56kph), but by doubling the number of channels far higher speeds can be addressed.
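The link between channel count and speed comes down to vertical resolution: fewer channels means larger gaps between scan lines, so small obstacles can slip between beams at long range. A rough sketch, under the simplifying assumption that beams are spread evenly across the vertical FoV:

```python
import math

def beam_spacing_m(range_m, vertical_fov_deg=26.9, channels=64):
    """Approximate vertical gap between adjacent laser beams at a given range.
    Assumes beams are evenly spread across the vertical FoV (a simplification:
    real units often concentrate beams near the horizon)."""
    delta_deg = vertical_fov_deg / (channels - 1)
    return range_m * math.tan(math.radians(delta_deg))

# At the 120 m maximum range, halving the channel count from 64 to 32
# roughly doubles the gap between scan lines.
print(beam_spacing_m(120))               # 64 channels
print(beam_spacing_m(120, channels=32))  # 32 channels
```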

One of the biggest hurdles holding back LiDAR adoption (and why industry luminaries like Elon Musk have deemed it unnecessary) is the high cost involved. Current systems cost several tens of thousands of euros each and, even as unit volumes increase, they will remain expensive items.

It isn’t only the cost, though. As accurate as it is at building a map of a vehicle’s surroundings, LiDAR cannot produce the detail required for other tasks, such as road sign recognition. For that, and numerous other image recognition and classification tasks, autonomous vehicles will need to depend on high-definition camera systems instead.

Placing front-, side- and rear-facing cameras onboard a vehicle, for example, will enable it to stitch together a 360° real-time view of its environment. Through this, blind spots can be minimized, notifications about changes to the speed limit can be given, and lane keeping can be supported. The number of cameras required will depend on the FoV of the system (which can be up to 120°), and on whether automotive manufacturers decide to specify “fish eye” cameras (which contain super-wide lenses that provide panoramic views).
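The relationship between per-camera FoV and the number of cameras needed for full coverage is simple geometry. A minimal sketch, assuming each adjacent pair of cameras needs some overlap for stitching (the 10° overlap value below is an illustrative assumption):

```python
import math

def cameras_needed(fov_deg, overlap_deg=10.0):
    """Minimum number of cameras for a stitched 360 degree view.
    Each camera contributes its FoV minus the overlap reserved for
    stitching with its neighbour (overlap value is an assumption)."""
    effective_deg = fov_deg - overlap_deg
    return math.ceil(360.0 / effective_deg)

# Wide 120-degree cameras need far fewer units than narrow 60-degree ones.
print(cameras_needed(120))  # wide-FoV cameras
print(cameras_needed(60))   # narrower cameras
```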

Like any sensor technology, the benefits have to be balanced against the limitations. Although camera systems can distinguish fine details of the surrounding landscape, depth and distance can prove problematic: additional calculation is needed to establish the exact location of a detected object. Cameras also find it more difficult to identify objects in low-visibility conditions (such as adverse weather, or at night).
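One common way to recover the depth that a single camera lacks is a calibrated stereo pair: the same point appears at slightly different horizontal positions in the two images, and that disparity maps to distance. A minimal sketch (stereo vision is one technique among several, and is an assumption here rather than something the article specifies):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from a calibrated stereo pair: Z = f * B / d.
    focal_px: focal length expressed in pixels.
    baseline_m: separation between the two cameras.
    disparity_px: horizontal pixel shift of the same point between images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or no match found")
    return focal_px * baseline_m / disparity_px

# With a 1000 px focal length and 0.5 m baseline, a 25 px disparity
# places the object 20 m away.
print(stereo_depth_m(1000.0, 0.5, 25.0))
```

Note how depth resolution degrades with distance: far objects produce tiny disparities, so small pixel errors translate into large range errors, which is part of why cameras alone struggle with distance estimation.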

One other issue that system developers are also starting to consider is the impact of sensor systems on other autonomous vehicles. Recently discussions have been taking place in relation to whether LiDAR can impinge on the operation of digital cameras – something that could become a serious (potentially life-threatening) issue when autonomous vehicles come face-to-face.

It must be concluded that the sensor suite that will be incorporated into future autonomous vehicles is still very much a work in progress. What does seem clear, however, is that it will need to be multifaceted – drawing on several different mechanisms (like radar, image sensing and LiDAR) and the beneficial attributes that can be derived from each. When combined, these will provide the breadth of functionality (and also the redundancy) needed to ensure that self-driving cars don’t pose a danger.
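One simple way to picture how combining sensors improves both accuracy and redundancy is inverse-variance weighting: each sensor's distance estimate is weighted by how much it is trusted. This is a minimal stand-in for real sensor-fusion pipelines (which typically use Kalman filters or learned models), not a description of any production system:

```python
def fuse_estimates(estimates):
    """Fuse independent (value, variance) distance estimates by
    inverse-variance weighting: more trusted sensors count for more.
    A toy illustration of sensor fusion, not a production algorithm."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(w * val for w, (val, _) in zip(weights, estimates)) / total

# Radar and LiDAR agree closely and are trusted equally; a noisy camera
# estimate (large variance) barely shifts the fused result.
print(fuse_estimates([(20.1, 0.5), (19.9, 0.5), (25.0, 50.0)]))
```

The same structure also provides redundancy: if one sensor degrades (e.g. a camera at night), its variance rises and the fused estimate leans on the others.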