Last week, the hilarious Jason Calacanis
excerpted a chapter from Angel, his book about investing in Silicon Valley, explaining why he passed on Theranos, the blood-testing company whose technology was recently revealed to be fraudulent garbage. He describes how investors were tricked and placated by great public relations, and how the whole farce wasn’t exposed until one skeptic went to a Walgreens and found that the machines didn’t work at all. In fact, Walgreens brought Theranos into its stores without even testing the hardware.
Sound familiar?
Two weeks ago, The Drive published “The Human Driving Manifesto,” in which I claimed there was absolutely no evidence self-driving cars were safer than humans—at least not yet—and that we have a moral obligation to improve human driving safety regardless.
Little did I know how prescient that would turn out to be.
Yesterday I wrote “Elaine Herzberg’s Death Isn’t Uber’s Tragedy. It’s Ours,” in which I called out the hypocrisy of a country that tolerates 100 deaths by human drivers a day, but won’t tolerate one by machine. I was referring, of course, to the tragic death of Elaine Herzberg, who was struck and killed by a self-driving Uber test vehicle this past Sunday in Tempe, Arizona, just one of ten pedestrians killed in that state last week.
I was trying to give Uber the benefit of the doubt. I was wrong.
Not only was I wrong, but The Human Driving Manifesto—which I jokingly wrote in response to the ever-increasing storm of self-driving clickbait—was more accurate than I ever could have guessed, because now that the Tempe police have released dashcam footage of the fatal crash, all of the following points are perfectly clear:
- Uber is guilty of killing Elaine Herzberg.
- Uber’s hardware and/or software failed.
- Many people at Uber need to be fired.
- The Arizona officials who greenlit testing need to resign.
- One or more people need to be prosecuted.
- The SAE Automation Classification System is vague and unsafe.
- Uber is the Theranos of self-driving.
- Volvo—one of the few car makers that truly cares about safety—is innocent and shouldn’t be in bed with its craven opposites.
Even if you believe self-driving cars may someday reduce road fatalities—and I do believe that—this dashcam video is an icepick in the face of the argument that anyone at Uber gives a damn about anyone’s safety, including that of their own test drivers.
I’ve long suspected that 99% of claims from self-driving companies were BS, but I didn’t think it was this bad.
This is a catastrophe for Uber. It’s also a catastrophe for the Tempe police, who were wrong when they said Herzberg’s death was “likely unavoidable” because she “abruptly darted out in front of the car.”
A slow-moving pedestrian at night—well beyond human line of sight—is precisely what radar and Lidar sensors are supposed to detect. This is exactly the type of crash self-driving cars are designed to prevent.
Uber blew it.
Almost $80 billion has been invested in self-driving cars, billions of it by Uber (a company that knows no shame), all of it based on the narrative that machines will be safer than people. Self-driving companies like Uber—none of which want to share data on their actual progress—have been flocking to states like Arizona and Nevada, where the risk-friendly and regulation-averse whores in government have welcomed them with open wallets.
If safety is the goal, then one would assume the development of safety technologies would maximize safety at every turn, literally and figuratively.
Not at Uber.
The Fatal Dashcam Video
Based on the video, Uber’s self-driving car—in which a “safety” driver was present—is actually less safe than the average human driver in a stock Volvo fresh off the showroom floor.
The worst part? The crash can’t be blamed on one or even two people. It’s clearly the fault of countless people whose names we don’t yet know, all of whom should go down in the history of self-driving as supporting actors in the most expensive and disastrous show of the self-driving season. In the theater of shame that has been Uber, that’s saying a lot.
Let’s start at the site of the accident, and ask some questions about responsibility. There’s a lot to go around.
The Victim
Elaine Herzberg was jaywalking. It was dark. She was struck and killed by a car at or near the speed limit. Had it been any other car driven by a human, legal responsibility would almost certainly fall on the victim.
But that’s not what happened. Elaine Herzberg was killed by a machine presumed to meet a higher standard, a standard its creators refuse to divulge, and its supporters take on faith.
Sound familiar? Then I’ve got some blood-testing technology to sell you.
Human licensing standards in America are pathetic, but at least they exist. We could raise them tomorrow for a fraction of the $80 billion spent on self-driving cars. We could also invest in the kind of safety automation that works brilliantly in aviation, but I’ve already written that article.
Uber needs to answer for this.
How about that safety driver?
The “Safety” Driver
What is the purpose of a safety driver? To take control—whether that means steering or braking—and prevent an impact the self-driving car cannot avoid on its own. That didn’t happen here. Why not? Partly because it was night and the headlights may not have illuminated Herzberg until it was too late, and partly because the safety driver wasn’t paying attention. She doesn’t appear to have applied the brakes until after the impact, further indicating a lack of readiness. I’m not convinced this particular “safety” driver could have done better even in daylight. Her eyes are glued to whatever device is in her hand.
The safety driver certainly bears some moral responsibility, and depending on the nature of her employment contract, she may bear some legal responsibility as well.
And that’s before we know anything about what kind of training, if any, Uber gives its “safety” drivers.
Oh, did I mention that the driver had a history of traffic violations dating back to 1998? And that Uber claimed she passed all background checks? Uber, you’ve got a minimum-standard problem.
The New York Times suggested the driver might have been at fault because her hands weren’t hovering above the steering wheel, “which is what most backup drivers are instructed to do because it allows them to take control of the car quickly in the case of an emergency.”
I’ve never heard that, although I often do it while using Tesla Autopilot, which is only a semi-autonomous system. If you don’t place your hands on the Tesla’s wheel periodically, Autopilot disengages.
What is supposed to happen in a self-driving Uber test car? I can’t wait for that courtroom revelation.
Uber’s Self-Driving Division
Some days I really feel for new CEO Dara Khosrowshahi, who seems to be a good guy and perhaps deserves better than the deepening pit left to him by Travis Kalanick. On the other hand, Wikipedia says he’s paid $96.5M a year, so one would think Uber can afford a sufficient number of experts, ethicists, and engineers to run a professional self-driving car program. One would also think Dara would know what questions to ask of his own team, and whom to hire.
Think again. If you want ethics, hire Sterling Anderson, co-founder of Aurora.
The more you know about self-driving cars, the more pathetic the video appears, and the worse it looks for Uber’s management, all the way to the top.
If the Uber safety driver’s job is to monitor the road to prevent an impact, what measures did Uber take to ensure they do so? The answer would seem to be none, despite having an interior video camera. Is this video ever reviewed, even after an uneventful drive? If not, why not? Is the video live-streamed back to Uber? If so, is anyone watching it? If not, why not? Uber obviously has some image recognition and machine learning capabilities. Are those being used on the footage to determine safety-driver awareness? If not, why not?
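None of this requires a research lab. Here is a minimal sketch of the kind of automated look-away flagging Uber could have run on its own interior footage, using nothing fancier than off-the-shelf OpenCV. To be clear, this is my own toy heuristic, not Uber’s pipeline and not anyone’s product: the file name, the “two visible eyes means facing forward” proxy, and the two-second threshold are all assumptions for illustration.

```python
import cv2  # pip install opencv-python

# Hypothetical clip from the driver-facing interior camera.
VIDEO_PATH = "interior_cam.mp4"

# Stock Haar cascades that ship with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
look_away_frames = 0
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    facing_forward = False
    # Crude proxy: a frontal face with two visible eyes means the driver
    # is facing the camera, and therefore roughly toward the road.
    # A head bowed over a phone defeats frontal-face detection.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 2:
            facing_forward = True
            break
    look_away_frames = 0 if facing_forward else look_away_frames + 1
    if look_away_frames >= 2 * fps:  # roughly two seconds of look-away
        print(f"ALERT: driver inattentive around frame {frame_idx}")
        look_away_frames = 0
    frame_idx += 1

cap.release()
```

A real driver-monitoring system tracks gaze and head pose rather than just face detection, but even something this crude would have screamed at somebody.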
Software from startups like Affectiva can interpret human facial activity in real time. Autoliv demonstrated similar software at CES three months ago. Cadillac offers a Driver Monitoring System (DMS) on the CT6 as part of their excellent SuperCruise semi-autonomous system. When Dave Maher and I broke the Cannonball Run record cross-country, we had a great analog safety solution: the second person monitored the driver’s condition and looked for objects in the road with a pair of gyro-stabilized binoculars.
Our Cannonball safety system was a second set of human eyes pointed at the road. Judging by the footage, Uber’s was a single set of eyes pointed at a phone.
When lives are at stake, it pays to be prepared.
Any of these might have solved Uber’s safety-driver awareness problem, and might even have saved Herzberg’s life. Maybe. We’ll never know. Did the Uber test vehicle include such a system? If not, why not? It clearly didn’t have a second person.
The Uber Car
Did the Uber brake? It doesn’t appear that it did. If not, why not?
The Volvo XC90 has Automatic Emergency Braking (AEB), which—under optimal conditions—would be triggered by some combination of forward radar and camera. I’ve tested it, and it’s pretty good. It appears to have been deactivated to allow unimpeded testing of Uber’s self-driving suite. If not, why didn’t it work? If so, what measures did Uber take to ensure braking in the event the self-driving system being tested failed?
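For context, the trigger at the heart of any AEB system is not black magic. Here is a deliberately minimal sketch of the time-to-collision logic such systems are built around. The threshold, the numbers, and the function itself are my own illustrative assumptions, not Volvo’s actual implementation, which layers on staged warnings, partial braking, and condition-dependent tuning.

```python
def should_emergency_brake(range_m: float,
                           closing_speed_ms: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    """Toy AEB trigger: brake when time-to-collision drops below a threshold.

    range_m          -- distance to the object (meters), fused from radar + camera
    closing_speed_ms -- rate at which the gap is shrinking (meters/second)
    ttc_threshold_s  -- illustrative cutoff; real systems tune this against
                        braking capability, speed, and road conditions
    """
    if closing_speed_ms <= 0:
        return False  # the object is not getting closer
    return (range_m / closing_speed_ms) < ttc_threshold_s

# The Uber was reportedly traveling at roughly 17 m/s (about 38 mph).
# At 20 meters out, time-to-collision is about 1.2 seconds:
print(should_emergency_brake(range_m=20.0, closing_speed_ms=17.0))  # True
```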
The Uber self-driving system being tested has radar, Lidar, and camera sensors:
- The camera should have seen Herzberg right before impact. If not, why not?
- The Lidar should have seen Herzberg beyond line-of-sight. If not, why not?
- The radar should have seen Herzberg beyond line-of-sight. If not, why not?
- If any of the sensors were defective, did they have backups? If not, why not?
- If none of the sensors saw Herzberg, why doesn’t the vehicle have a FLIR thermal night vision camera? The Cadillac CT6 has a great one. I installed one on my Cannonball BMW M5 12 years ago.
NEWS FLASH: Lidar isn’t “the secret sauce.” FLIR cameras are, and self-driving cars are probably going to need them.
Assuming the radar and Lidar sensors saw Herzberg—which is almost certain, unless one or both were defective—that would have been sufficient to achieve “quorum,” which is when enough sensors in a self-driving car agree that an object lies in its path and a decision must be made.
An object like a pedestrian, for example.
Was there a quorum? If not, why not? If so, why didn’t the vehicle brake or steer away? It certainly would appear to have been able to do one or the other.
But hold on a minute. Every one of those sensors has individual sensitivity settings. In other words, two cars with identical sensors might behave completely differently. Were this vehicle’s sensor sensitivity settings different from those of other cars? If so, why?
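To make the interaction between quorum and sensitivity concrete, here is a toy model. Everything in it (the thresholds, the vote count, the confidence numbers) is invented for illustration; Uber’s actual fusion stack is proprietary and certainly far more complex.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "radar", "lidar", or "camera"
    confidence: float  # 0.0 to 1.0, as reported by that sensor's pipeline
    in_path: bool      # does the tracked object intersect our trajectory?

# Illustrative per-sensor sensitivity thresholds. Nudge any of these up
# and a detection that counted yesterday silently stops counting today,
# which is how two cars with identical hardware can behave differently.
SENSITIVITY = {"radar": 0.6, "lidar": 0.5, "camera": 0.7}
QUORUM = 2  # how many distinct sensors must agree before the planner acts

def has_quorum(detections: list[Detection]) -> bool:
    agreeing = {
        d.sensor
        for d in detections
        if d.in_path and d.confidence >= SENSITIVITY[d.sensor]
    }
    return len(agreeing) >= QUORUM

# A pedestrian crossing at night: strong radar and lidar returns,
# a weak camera detection in the dark.
frame = [
    Detection("radar", 0.82, True),
    Detection("lidar", 0.91, True),
    Detection("camera", 0.35, True),
]
print(has_quorum(frame))  # True: the car should brake or steer away
```

If even this toy version says “act” for a pedestrian crossing at night, Uber owes the public an explanation of what its real one said.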
Are you starting to get the picture? No ONE thing went wrong. A LOT of things went wrong. A LOT of people signed off on a series of decisions, with cascading and terrible consequences.
But wait. There’s more.
The brilliant AV attorney Jim McPherson—or @SafeSelfDrive on Twitter—suggested that the Uber did see Herzberg, and might have made a decision based on the Trolley Problem. In other words, the Uber may have determined that striking Herzberg was less dangerous to the safety driver than attempting an evasive maneuver.
What is going on? Who is in charge at Uber? Has anyone at Uber determined what best practices are supposed to be for testing self-driving cars? If so, is there an actual handbook? If not, why not?
How many other self-driving companies are testing with flawed practices? Or no practices at all?
The City of Tempe
How about those Tempe city planners and their anti-pedestrian designs? Why are the crosswalks so far apart? Why are there so few streetlights? Have they made any attempt to reduce or prevent similar crashes?
The State of Arizona
What about the state of Arizona? Who greenlit any of this? Did they ask the questions I’ve posed here? I’m not an engineer, but I know what questions to ask. If state officials didn’t, why didn’t they? How much are Uber and the rest of the industry investing in states like Arizona?
How much are they donating to the campaigns of the officials allowing them to operate on public roads?
The Department of Transportation
Let’s take it to the top. Why doesn’t Elaine Chao, the head of the US Department of Transportation, know the SAE Automation levels off the top of her head? Why are so many members of Congress buying gold-plated kneepads for their meetings with self-driving lobbyists?
The Future
How many more people do self-driving cars have to kill before we have common sense regulation?
A lot more, I think. A society that tolerates 40,000 deaths a year due to human driving will probably put up with a lot more collateral damage, as long as the dead are carless, or better yet homeless. In this country, that’s practically the same thing.
I hope someone delivers self-driving cars someday. I suspect that day just moved a lot further off than anyone hoped.
All those people who are cool with justifying additional deaths along the way seem to have forgotten history. There’s a reason it’s illegal to perform medical experiments without patient consent. Heard of the Nuremberg Code? It was a response to the medical experiments German doctors performed on prisoners during World War II.
If you want to test on public roads, you need to be transparent about it. We already have millions of humans honing their wretched skills. Keep the machines off the streets until they’re ready. Invest in better simulation. Declare a safety standard, and prove you can meet it.
Uber is the Facebook of transportation and the Theranos of self-driving cars, and I suspect their technological house of cards is a lot weaker than even this video suggests. I also suspect they’re not alone, which is why I’m taking the Human Driving Association out of the hobby phase.
If you want to support technology that can help save lives today, read our Manifesto and join our mailing list. We’re just getting started, but someone has to fight for transparency, safety, and common-sense regulation for cars, whether human- or machine-driven. The industry is clearly incapable of policing itself, and the alternative is more Elaine Herzbergs.
Solutions For Uber’s CEO
Uber bears moral, ethical, civil and potentially criminal responsibility for Elaine Herzberg’s death. So do the politicians who allowed this self-driving theater to unfold.
Dara, cut your losses, cancel your program, and license Aurora’s tech. Or Waymo’s. Then Uber needs to write a HUGE check to a good cause. Start with Elaine Herzberg’s family. How about the homeless of San Francisco you must see every day? Or the homeless of Tempe? Maybe take a trip there between the inevitable trial and the congressional hearings you’ve got coming.
Here’s an idea: professional driving training for all human Uber drivers. Show us your safety record is better than anyone’s. You could save lives within months. You could even raise prices. If the government won’t raise human licensing standards, maybe the private sector should.
Now there’s an idea I can get behind.
Alex Roy—angel investor, Editor-at-Large for The Drive, Host of The Autonocast, co-host of /DRIVE on NBC Sports, author of The Driver and Founder of the Human Driving Association—has set numerous endurance driving records, including the infamous Cannonball Run record. You can follow him on Facebook, Twitter, and Instagram.