Tesla safety questioned

A former Tesla employee speaking to the BBC has said that the firm’s technology, as used in its self-driving vehicles, is not safe enough to be used on public roads.

According to Lukasz Krupski, who leaked data to the German newspaper Handelsblatt earlier this year, the company had received numerous customer complaints about its braking and self-driving software, but when he raised his concerns he was ignored.

Tesla’s chief executive, Elon Musk, has repeatedly claimed that the company’s AI is the best real-world example of the technology, but in his interview with the BBC, Krupski said he had concerns about the AI being used, suggesting that neither the hardware nor the software was ready.

"It affects all of us because we are essentially experiments in public roads. So even if you don't have a Tesla, your children still walk in the footpath."

According to Krupski, company data suggested that requirements relating to the safe operation of autonomous vehicles had not been followed, and that vehicles were experiencing "phantom braking".

Tesla's own data suggests it has a strong safety record, but those figures aren’t independently verifiable, and the US Department of Justice is investigating Tesla over its claims relating to its assisted driving features.

These claims are worrying and raise broader questions about the use of AI in the real world.

Too much remains hidden when it comes to AI and its development, and, as we’ve seen with too many companies in this field, there is a lack of openness and transparency.

If we’re to have confidence in how AI is being developed, then we need to throw the doors open, or at least keep them ajar, so that we can get a better understanding of how the technology is being conceived, developed, deployed and, in Tesla’s case, tested.