The underlying ethics of autonomous driving (6/6)


When we get behind the wheel of a car, we don’t really think about anything other than where we are going, how long the journey is likely to take, and which will be the best route to avoid traffic congestion.

However, it may be that something unforeseen happens during our journey, in which case we must react accordingly – for example if a car pulls out in front of us, or a child steps into the road. We can then quickly decide (based on the situation, and our prior experience) to take measures that will minimize the seriousness of the outcome, either by hitting the brakes or swerving in order to avoid a collision.

Thanks to access to advanced 3D imaging technology (LiDAR and ToF) discussed in our earlier blog, as well as data received from nearby infrastructure and other road users in the vicinity (via V2V/V2I communication), it will be easier for autonomous vehicles to avert potential dangers than would be the case for human drivers. The errors of judgment that we all can make (regardless of the number of years we have been driving) will be eliminated, as will the possibility of our being distracted when an incident occurs. There are, however, certain situations in which loss of life may be unavoidable. Under such circumstances, deciding which course of action an autonomous vehicle should take will raise difficult moral dilemmas.

Vehicle manufacturers, their tier one suppliers and chip makers have all invested heavily in designing and developing the hardware that will eventually make fully autonomous driving possible. With systems becoming more sophisticated and the degree of autonomy involved being elevated, a whole new set of ethical questions will now need to be addressed. These will have implications for road users, legislators and the industry as a whole.

There is certain to be a great deal of debate about exactly how the artificial intelligence (AI) technology incorporated into the next generation of vehicles will deal with numerous crash scenarios – and in particular, when the vehicle is faced with several critical choices, which of them will be the most appropriate.

Life or Death

There are many potential examples that can be played out here. For instance, imagine a vehicle having to choose between hitting a bus carrying a large number of people that had careered into the oncoming lane, or alternatively mounting the pavement and hitting a mother and child. How should it react? It is an extreme scenario, but one that the vehicle’s algorithms will have to be able to cope with, reaching a decision instantly.

So far Germany seems to have been most proactive in addressing these kinds of issues, and has already drawn up an ethical code of conduct for autonomous vehicles. In 2017, the country’s Federal Ministry of Transport and Digital Infrastructure’s ethics commission compiled a comprehensive report on automated and connected driving. This outlined the approach autonomous vehicles should take when faced with seemingly impossible situations like the one just described. It states that the protection of human life is the top priority in a balancing of legally protected interests. So, the systems must be programmed to accept damage to animals or property in a conflict if it means that personal injury can thereby be prevented.

In the future, this hierarchical approach could well be taken forward as the baseline system for universal implementation, with humans at the top and inanimate objects at the bottom. Under it, greater emphasis would be placed on the vulnerability of road users – pedestrians first, then cyclists, followed by cars carrying passengers and then commercial vehicles, for example.

In the German model, should an unavoidable event occur, no distinction based on age, gender, or physical or mental constitution would be allowed, and it is also forbidden to offset potential victims against one another. Should a scenario arise in which a collision with humans cannot be prevented, the system would have to act to minimize the severity of the outcome (i.e. the number of lives lost).
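The German commission's rules could, in principle, be expressed as a lexicographic ranking over possible outcomes: harm to humans outranks everything, harm to animals outranks property damage, and among outcomes that harm humans, fewer casualties is always preferred. The sketch below is purely illustrative – the class and field names are our own assumptions, not anything from the commission's report – but note that the outcome description deliberately carries no age or gender attributes, mirroring the prohibition on such distinctions.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an evasive maneuver (illustrative only).

    By design there are no age/gender/constitution fields: the German
    ethics code forbids distinguishing between potential victims.
    """
    human_casualties: int    # number of people harmed
    animal_harm: bool        # damage to animals
    property_damage: bool    # damage to objects or other vehicles

def severity_key(o: Outcome) -> tuple:
    # Lexicographic priority reflecting the hierarchy:
    # 1. minimize harm to humans (personal injury outranks everything),
    # 2. prefer property damage over harm to animals,
    # 3. property damage is the least protected interest.
    return (o.human_casualties, o.animal_harm, o.property_damage)

def choose(outcomes: list[Outcome]) -> Outcome:
    # Select the least severe outcome under the hierarchy above.
    return min(outcomes, key=severity_key)
```

In real systems the decision would of course involve probabilistic perception and uncertain predictions rather than a clean list of known outcomes; the point here is only that the commission's hierarchy is a strict ordering, not a trade-off between weighted factors.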

This egalitarian approach may not be favored by other geographic regions though, and there is the prospect that ethical choices could be very different depending on the part of the world in which they are applied. In a survey conducted by MIT, researchers looked at how vehicles in numerous markets around the globe might place greater emphasis on the safety of one group over another. The Moral Machine Experiment included feedback from over 2 million people located in 200 different countries.

The results showed general consensus in that there was a preference (as would be expected) to spare the lives of humans over animals, the lives of many people rather than a few, and the young over the old. Certain regional distinctions arose, though. For instance, while in most of the world there was a strong tendency to protect younger people ahead of the elderly, in Far Eastern countries it was the latter that took precedence. Based on this, the core AI algorithms used in autonomous vehicles may need to be adjusted to meet international cultural/ethical variations.
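One way such adjustment might work in practice is a per-region ethics profile selected at configuration time, with the core decision logic left unchanged. The sketch below is a hypothetical illustration – the region names, parameter name, and weight values are our assumptions, not figures from the Moral Machine study.

```python
# Hypothetical regional ethics profiles. The weights are illustrative
# placeholders, NOT values taken from the Moral Machine Experiment.
REGION_PROFILES = {
    "western": {"spare_young_weight": 0.8},   # stronger preference for the young
    "eastern": {"spare_young_weight": 0.4},   # weaker preference, per the survey trend
}

NEUTRAL_PROFILE = {"spare_young_weight": 0.5}

def load_profile(region: str) -> dict:
    """Return the ethics profile for a deployment region.

    Unmapped regions fall back to a neutral default rather than
    inheriting another region's preferences.
    """
    return REGION_PROFILES.get(region, NEUTRAL_PROFILE)
```

Whether regulators would actually permit such tuning is an open question – as noted above, the German code forbids age-based distinctions outright, so a profile like this would be unlawful there regardless of local survey preferences.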

Who’s at Fault?

One area that will need greater clarity is who bears liability when an accident occurs. It could potentially be the automobile manufacturer, the software company that created the AI algorithms, the telecom service provider responsible for V2V/V2I communication, or any of the other stakeholders that have in some way participated in either the vehicle’s development or the operation of the supporting infrastructure.

Despite the layers of intricacy involved, and the difficulties these will present when investigations are undertaken, autonomous vehicles will have one advantage in answering any question raised: the amount of data they record about their surroundings and their operational parameters in the build-up to an event. Californian authorities already require companies running autonomous test vehicles to provide the Department of Motor Vehicles with data from the onboard sensors for the 30 seconds leading up to any accident. This makes reconstructing incidents far simpler than with today’s vehicles, and it is therefore easier to apportion liability.
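Retaining a fixed pre-incident window like this is naturally implemented as a rolling buffer that discards frames older than the retention period. A minimal sketch, assuming timestamped sensor frames (the class and method names are our own; only the 30-second window comes from the reporting requirement described above):

```python
import collections

class IncidentRecorder:
    """Rolling buffer keeping only the most recent window of sensor frames.

    Sketch only: the 30-second default matches the California reporting
    window mentioned in the text; everything else is an illustrative
    assumption, not a real automotive API.
    """

    def __init__(self, window_s: float = 30.0):
        self.window_s = window_s
        self.frames = collections.deque()  # (timestamp, frame) pairs, oldest first

    def record(self, frame, timestamp: float):
        """Append a frame and evict anything older than the window."""
        self.frames.append((timestamp, frame))
        while self.frames and timestamp - self.frames[0][0] > self.window_s:
            self.frames.popleft()

    def snapshot(self) -> list:
        """The data that would be handed over to investigators after an event."""
        return list(self.frames)
```

In a production system this buffer would live in crash-survivable storage and be frozen the moment an airbag or collision trigger fires, much like an aircraft flight data recorder.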

Of course, the ultimate long-term goal for autonomous technology is to reduce accidents to zero (or at least very close to that figure), which becomes more likely once such vehicles are the only ones on our roads and can all communicate with one another. Nevertheless, bugs in code, service interruptions, the threat of hacking, and all manner of other things could still have implications for road users’ safety.