Transforming the automotive arena (1/6)


Around the world, automobile manufacturers are racing to become leaders in autonomous vehicle technology.

Moving from the advanced driver assistance system (ADAS) implementations we have today to highly sophisticated systems capable of negotiating busy junctions where machine-based and human drivers meet will be far from easy. Nor are the challenges purely technological: numerous far-reaching societal issues will also need to be addressed.

The intention of this six-part blog series from Mouser is to look at the key aspects of autonomous driving. In the upcoming installments, we will describe the stages that must be passed through before full autonomy is reached, the innovations taking place in both sensor technology and the communication infrastructure that will support it, the obstacles that could hamper market acceptance, and finally the ethical dilemmas involved.

Autonomous driving will not just be about the performance of individual vehicles. A major attraction of putting machines in charge of transportation is that it could massively reduce the number of road fatalities (the annual death toll on EU roads currently averages more than 25,000). In principle, self-driving vehicles can take advantage of a variety of sensing modalities to detect risks more quickly and reliably than humans. On top of this, they can communicate with each other to avoid the misunderstandings between human drivers that lead to collisions – but first the systems need to learn about the ways in which human road users actually behave.

Today’s machine learning technologies, such as deep neural networks (DNNs), learn better the more data they have access to. As a result, autonomous driving systems will need to be able to call on data from many different sources.

It is worth noting that machine learning doesn’t just apply to the autonomous vehicles themselves – it is also something that the supporting cloud-based management systems can make use of. Each vehicle is not only going to be a recipient of information but will also provide a constant stream of data to such systems, thus adding to the stored knowledge base. The ability for case experiences to be derived from individual cars and shared across a manufacturer’s entire fleet will create an always-improving “collective intelligence” that can be applied to both future and existing cars (via software updates). For example, Tesla has been gathering data from its cars since the company’s inception, and continues to compile data to feed its various analytical activities.
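To make this feedback loop a little more tangible, here is a minimal sketch of the idea in Python. The names used (DrivingExperience, FleetLearningService and so on) are entirely hypothetical and do not represent any manufacturer’s real pipeline; they simply show vehicles contributing experiences to a shared knowledge base from which an updated model is then published back to the fleet.

```python
# Minimal sketch of a fleet "collective intelligence" loop.
# All names here are hypothetical; this is not any manufacturer's actual pipeline.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DrivingExperience:
    vehicle_id: str
    sensor_summary: dict   # e.g. {"hard_brake": True, "speed_kph": 42}
    outcome: str            # e.g. "near_miss" or "normal"


@dataclass
class FleetLearningService:
    knowledge_base: List[DrivingExperience] = field(default_factory=list)
    model_version: int = 0

    def ingest(self, experience: DrivingExperience) -> None:
        """Each car adds its experiences to the shared knowledge base."""
        self.knowledge_base.append(experience)

    def retrain_and_publish(self) -> int:
        """Retrain on the pooled data (stubbed out here) and bump the model
        version that would be rolled out to new and existing cars alike."""
        self.model_version += 1
        return self.model_version


service = FleetLearningService()
service.ingest(DrivingExperience("car-001", {"hard_brake": True, "speed_kph": 42}, "near_miss"))
version = service.retrain_and_publish()
print(f"Fleet now on model v{version}, trained on {len(service.knowledge_base)} experiences")
```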

This does, however, raise many moral questions. What regulations need to be put in place with regard to capturing and subsequently using acquired data to ensure that a person’s privacy is not violated? Who should be allowed to collect, analyze, store and distribute vehicle data? What should the nature of the data obtained be? Should there be a layer of abstraction applied so that possible trends can be examined, but without allowing access to the identity of individuals? All these issues will need to be decided and addressed at a legislative level.
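To illustrate what such a layer of abstraction might look like in practice, the short sketch below pseudonymizes a hypothetical telemetry record and coarsens its location and time fields before analysis. The field names and the salted-hash scheme are assumptions made purely for illustration, not a prescribed standard.

```python
# Illustrative only: one possible way to abstract vehicle telemetry so that
# trends can be analyzed without exposing the identity of individual occupants.
import hashlib


def anonymize_record(record: dict, salt: str) -> dict:
    """Replace direct identifiers with a salted hash and coarsen location/time."""
    pseudonym = hashlib.sha256((salt + record["vin"]).encode()).hexdigest()[:12]
    return {
        "vehicle_pseudonym": pseudonym,           # unlinkable without the salt
        "region": record["gps"]["region"],        # keep a coarse region, drop exact coordinates
        "hour_of_day": record["timestamp_hour"],  # hour bucket instead of a full timestamp
        "event": record["event"],                 # the behavior of interest, e.g. "hard_brake"
    }


raw = {
    "vin": "WVWZZZ1JZXW000001",
    "gps": {"region": "Bavaria", "lat": 48.137, "lon": 11.575},
    "timestamp_hour": 17,
    "event": "hard_brake",
}
print(anonymize_record(raw, salt="rotate-me-regularly"))
```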

Part of the problem that the engineering teams building autonomous vehicles need to overcome is that everyday driving tends to be fairly uneventful. Accidents, fortunately, are a rare occurrence compared to the enormous number of hours that vehicles spend moving along road networks. However, the systems need to be able to recognize and then rapidly react to these uncommon events. It is clearly impractical – and unethical – to initiate accidents on the road in order to learn from them. This is where simulation-based training comes into the picture. Researchers at Saarland University are among those constructing complex simulation environments designed to train artificial intelligence (AI) systems. These will present unusual and problematic combinations of events for autonomous vehicles’ software to deal with.
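As a highly simplified illustration of the idea (and emphatically not the researchers’ actual tooling), the sketch below enumerates rare, problematic scenario combinations far more frequently than they would ever be encountered on real roads, so that a driving policy can be exercised against them. The scenario parameters and the policy stub are hypothetical.

```python
# Simplified sketch of simulation-based training on rare events.
import random
from itertools import product

WEATHER = ["clear", "heavy_rain", "fog"]
HAZARDS = ["none", "pedestrian_steps_out", "stalled_vehicle", "sensor_glare"]
LAYOUTS = ["straight_road", "busy_junction"]


def rare_event_scenarios():
    """Enumerate combinations, keeping only the uncommon, problematic ones that a
    vehicle might not encounter often enough during ordinary road driving."""
    for weather, hazard, layout in product(WEATHER, HAZARDS, LAYOUTS):
        if hazard != "none" or weather != "clear":
            yield {"weather": weather, "hazard": hazard, "layout": layout}


def evaluate_policy(scenario: dict) -> bool:
    """Stand-in for running the AV software in the simulator and checking the outcome."""
    return random.random() > 0.1  # placeholder: most runs end safely


results = [(s, evaluate_policy(s)) for s in rare_event_scenarios()]
failures = [s for s, ok in results if not ok]
print(f"{len(results)} rare scenarios simulated, {len(failures)} failures to learn from")
```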

Even with apparently comprehensive databases, a key problem with simulation-driven verification is determining how realistic each scenario is. There may be unforeseen sensor-failure modes that convince a real-world system that nothing is in its way, while the simulation, working from more reliable sensor data, produces an entirely different reaction.

Autonomous vehicles must learn from their mistakes – and not just their own mistakes, but those of others on the road. For this reason, vehicles will record their experiences and, whenever they are in range of a wireless base station, upload them to servers in the cloud, where the AI modules are constantly being retrained. Potentially, every night each vehicle could receive a new model for the next day’s journeys that should, in turn, be that little bit safer.

The routine recording of daily journeys, however, has clear implications for privacy. For the reasons outlined above, the content acquired may need to be anonymized to prevent the identification of individuals sitting inside the cabin. This requirement could become even more important if vehicle manufacturers move away from their own distinct and highly secretive approaches to development and begin to work more collaboratively. If they continue to work in separate silos, each will have to come up with its own set of scenarios and hope that it has caught the ones that needed most attention. Alternatively, if they determine that overall safety will improve with more comprehensive data, governments and companies could decide to come together and develop common training packages.
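Returning to the realism gap raised at the start of that passage, one way to probe it is to deliberately inject a sensor-failure mode into a simulated run and compare the system’s reaction with and without the fault. The toy fault model below (randomly dropping detections) is purely illustrative of the concept.

```python
# Purely illustrative fault injection: compare the reaction computed from clean
# simulated sensor data with one computed after a hypothetical failure mode
# silently drops detections, mimicking "nothing is in my way".
import random


def perceive(detections: list) -> str:
    """Toy decision rule: brake if anything is detected ahead, otherwise proceed."""
    return "brake" if detections else "proceed"


def inject_dropout(detections: list, drop_probability: float) -> list:
    """Hypothetical failure mode: each detection is silently lost with some probability."""
    return [d for d in detections if random.random() > drop_probability]


clean = ["pedestrian_ahead"]
faulty = inject_dropout(clean, drop_probability=0.9)

print("reaction with reliable sensor data:", perceive(clean))
print("reaction under injected failure:   ", perceive(faulty))
```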

How the training alters a vehicle’s behavior may depend on more than what the sensors alone tell its AI systems. Higher-level decisions are likely to come into play, and the underlying framework on which those decisions are based is certain to be a source of controversy (in situations where lives seem certain to be lost, whose safety will actually be prioritized?). Creating a universal framework is set to be a philosophical minefield, which we will look at in more detail later in this blog series.