The race to develop self-driving cars reached a tragic and likely inevitable milestone last week when a self-driving car operated by Uber struck and killed a pedestrian in the early morning hours of March 19th. Dash cam footage released after the incident showed that the accident could just as easily have happened with a human behind the wheel, as the pedestrian, Elaine Herzberg, emerged from a shadowy side of the road just a fraction of a second before the vehicle struck her. Still, the accident highlighted many critics’ concerns about self-driving vehicles. Those concerns were amplified this week by a New York Times report claiming that Uber had struggled to control its cars safely in the months leading up to the accident. Should Uber have seen Elaine Herzberg’s death coming?
In the article, the New York Times claims to have obtained 100 pages of Uber internal documents from “two people familiar with the company’s operations in the Phoenix area but not permitted to speak publicly about it.” Sounds like Uber’s got a few disgruntled employees who think this fatal accident should have been avoided. The documents reveal that Uber was struggling to meet its target of having its self-driving vehicles go an average of 13 miles between human interventions. Other self-driving vehicle companies fare far better by this measure, with Google’s Waymo recording a human intervention only once every 5,600 miles.
The report also reveals that Uber was in such a rush to develop its system that it scaled human backup drivers down to one per car, as opposed to the usual pair, and that engineers were pressured above all else to instill confidence in their executives rather than to develop a system that was actually safe.
While critics will likely use this case to argue against the development and deployment of autonomous vehicles, it also serves as one more indication that things aren’t well behind the scenes at Uber. Can Uber turn things around before it’s too late?