It appears from the video that the vehicle, which was in autonomous mode, failed at one of its most basic functions: it did not appear to detect or slow down for the woman, who was clearly visible in front of the car before she was hit.
That suggests not only that the lidar technology being employed by Uber failed, but also that the "safety driver" inside the car was not monitoring the road.
That raises doubts about the testing systems being employed by Uber and by other companies currently looking to develop self-driving vehicles. It also raises questions about the speed at which the technology is being rolled out.
Research has found that humans monitoring an automated system can become bored and disengaged, making semi-autonomous testing particularly dangerous. In this case, the driver does appear to have had sufficient time either to stop the vehicle or to try to avert the collision.
Has Uber been rushing its technology and taking short cuts?
If there has been a complete failure of the technology, should all testing on public roads be halted, at least while this accident is investigated?
In the UK, testing of self-driving cars on public roads is set to continue. However, a Government-backed trial of autonomous vehicles in south London, which has just ended, attracted public unease over both road safety and cybersecurity.
New technology will always involve blips, and that is not to downplay the tragedy of this accident and the impact it will have on the victim's family. But much of the credibility of driverless cars has been built on the promise that they will be a safe technology, much safer than a driver-only vehicle.
Autonomous vehicles provide a great opportunity to make our roads much safer, but the technology needs to mature.
While more accidents are likely, the worry has to be that if there are too many, the public may begin to turn against this technology.