Can autonomous vehicles ever be smart enough?


It was, perhaps, inevitable that a car under automatic control would be involved in a fatal accident. It happened last week in the US, when Joshua Brown's Tesla was operating under Autopilot and neither the driver nor the system detected a truck in the car's path.

The accident raises the question of just how safe automated vehicles can be. Tesla was quick to point out that this was the first fatality associated with its cars, which, it claimed, have covered 130 million miles on Autopilot. 'Normal' cars, it asserted, are involved in a fatality every 60 million miles.

Earlier this year, one of Google's autonomous vehicles had a low-speed accident when it pulled out in front of a bus; its software had wrongly decided the bus would let it out.

A truly autonomous vehicle needs its software to operate correctly – which implies safely – at all times. There is already a huge amount of software in a modern car, and a driverless car will require far more. So how can that much code be verified to work correctly in all circumstances?

Driverless cars were the subject of a well-attended session at the 2014 Electronics Design Show Conference, where one attendee asked: "Who in this room has written bug-free software?"