If language is destiny, then the language of self-driving cars is broken. People are dying because of it, and whatever benefits we might see—from safety to pollution to traffic—will be delayed unless we solve this problem. Everyone is guilty, from the media to automakers, tech companies, marketers and investors, all of whom—whether they know it or not—muddle terms like autonomy, automation, autopilot, driverless and self-driving to suit their own narratives.
Everyone wants to believe, but no one knows what anyone else is talking about.
There are no self-driving or autonomous cars available today, which means that responsibility for every road death—whether in a Tesla or any car whose driver misunderstood the limits of technologies that aren’t yet remotely autonomous—falls on anyone slinging the current language.
The engineers understand what machines can do, but they’re not in the education business. Get too many of them together and definitions get reduced to the lowest common denominator, which is the primary ingredient in the self-driving word soup we slurp every time we open our news feeds. We can’t invest in, solve, buy or use what we don’t understand, and understanding requires a new set of definitions. Any solution has to be as simple as possible, because the inevitable regulation of self-driving requires not only that regulators understand it, but that everyone does.
I’ve got one potential solution, but to get there we need to deconstruct the language that is the problem.
The Problem With The SAE Automation Levels
The root of all confusion is the Society of Automotive Engineers (SAE) taxonomy of automation, commonly referred to as the SAE levels, which have become the global standard for defining self-driving. The taxonomy was well-intentioned, but intentions don’t solve problems; solutions do. It attempted to classify vehicular automation by defining four (and eventually five) levels, but here’s what happens when you Google “SAE Automation Levels”:
Those are some pretty fancy charts, not one of which solves any problem other than how to keep graphic designers employed.
Let’s deconstruct even the simplest graphical interpretation of the SAE taxonomy:
I see the word automation, but where does self-driving start on this chart? What about autonomy? How about a driverless car? What is the difference between partial and conditional automation? How much is partial? What are the conditions? If Level 2 includes speed-sensitive cruise control, why is “Traffic Jam Chauffeur” Level 3? What’s a Parking Garage Pilot? Lots of cars will park themselves now. The above chart doesn’t reduce confusion; it increases it.
How about this one, courtesy of the National Highway Traffic Safety Administration (NHTSA), which blindly adopted the SAE taxonomy?
A lot more information, none of it helpful. Where do systems like Tesla Autopilot and Cadillac SuperCruise fit? Their manufacturers aren’t saying. I’d say between 1.5 and 2.5, which means the chart isn’t good enough, or the systems don’t work well enough, or maybe they work too well. I recently heard someone talk about a 4+ car, and I have no idea what that is.
What about Advanced Driver Assistance Systems (ADAS)? You know, the packages with Automatic Emergency Braking and the good cruise control? Are those 1 or 2? It depends on how good they are, and let me tell you, some of them really suck. (You know who you are.)
Newsflash: all of this is why no car maker wants to define its cars by SAE levels. Manufacturers will let the media run free, defining their cars’ functionality for them, or attach buzzwords to concept demonstration drives, but they’re terrified of staking ground within the popular taxonomy.
Even a noble simplification effort by Jess Dunietz of SAFE doesn’t solve the problem. Here he reduces the SAE taxonomy to this:
1. Cruise control.
2. Traffic-sensitive cruise control for both steering and speed.
3. Self-driving, but with a human available to take over if needed.
4. No driver needed, under the right conditions.
5. No driver needed, ever.
This distillation doesn’t even line up with NHTSA’s own interpretation. How can a Level 3 car be “self-driving” if a human must be available to take over if needed? Judging by the absence of Level 3 cars on the market, a lot more human input is needed than car makers want to admit. This is why terms like semi-automated and semi-autonomous have become necessary, and yet they’re still mere band-aids.
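Here’s how brittle that forced choice is when you try to write it down. What follows is a minimal sketch, assuming nothing but the level names (paraphrased from the SAE taxonomy); the classifier logic is mine, invented purely to illustrate the bind:

```python
from enum import IntEnum

# A sketch of the SAE hierarchy as code. Level names are paraphrased from
# the taxonomy; the classifier below is hypothetical, written only to show
# where real systems fall through the cracks.

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # cruise control
    PARTIAL_AUTOMATION = 2      # steering + speed, human supervises
    CONDITIONAL_AUTOMATION = 3  # "self-driving," human on standby
    HIGH_AUTOMATION = 4         # no driver needed, under the right conditions
    FULL_AUTOMATION = 5         # no driver needed, ever

def classify(steers: bool, controls_speed: bool, needs_supervision: bool) -> SAELevel:
    """Force a system into exactly one integer level."""
    if not (steers or controls_speed):
        return SAELevel.NO_AUTOMATION
    if steers and controls_speed:
        # Tesla Autopilot and Cadillac SuperCruise steer, control speed,
        # AND demand constant supervision. Level 2? 2.5? 3? An integer
        # can't say.
        return (SAELevel.PARTIAL_AUTOMATION if needs_supervision
                else SAELevel.CONDITIONAL_AUTOMATION)
    return SAELevel.DRIVER_ASSISTANCE

print(classify(steers=True, controls_speed=True, needs_supervision=True).name)
# PARTIAL_AUTOMATION
```

An integer comes out, but it tells you almost nothing about what the car actually does, and it can’t distinguish a system like Autopilot from basic lane-keeping plus cruise control.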
You can’t fix a taxonomy this broken.
It gets worse. I once spoke to a self-driving developer whose car had crashed. “What level is it?” investigators asked him. It’s not that simple, he answered, because what the car is during testing is different from what it is at deployment. His position? If it’s 2 or 3 during testing, it should be judged as such, even if the goal is 4. You can imagine how that went down.
What good are the levels if developers and investigators can’t agree on what they are?
The SAE levels aren’t just functionally vague; they’re also conceptually strict. The taxonomy considers only “series” automation, forcing humans and machines into a zero-sum equation where one is sacrificed for the other on an inexorable path to 100% machine control. The levels ignore forms of automation that don’t fit, like the “parallel” automation systems used in aviation, which you can learn about here and here.
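For the unfamiliar, here’s a minimal sketch of the series/parallel distinction. The function names and the simple weighted blend are my own illustration, not anyone’s production control architecture:

```python
# Illustrative only: "series" vs. "parallel" automation as two control
# patterns. The names and the weighted blend are hypothetical, chosen to
# show the shape of the difference, not any real system.

def series_control(machine_engaged: bool, human_cmd: float, machine_cmd: float) -> float:
    """SAE-style series automation: authority belongs to exactly one party
    at a time, and is handed off rather than shared."""
    return machine_cmd if machine_engaged else human_cmd

def parallel_control(human_cmd: float, machine_cmd: float, authority: float) -> float:
    """Aviation-style parallel automation: human and machine act together,
    the machine continuously blending with (or limiting) the human's input."""
    return (1 - authority) * human_cmd + authority * machine_cmd

# Series: all-or-nothing. The human is either driving or a passenger.
print(series_control(machine_engaged=True, human_cmd=0.0, machine_cmd=0.3))  # 0.3
# Parallel: both act at once. Neither is ever fully out of the loop.
print(parallel_control(human_cmd=0.0, machine_cmd=0.3, authority=0.5))       # 0.15
```

The series model always has a handoff moment where authority jumps from one party to the other; the parallel model never does, because both act at once. The SAE levels can only describe the first.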
Even VC Benedict Evans, Andreessen Horowitz’s entertaining blogger/thinker, stays within the SAE conceptual framework in his very insightful essay “Steps to autonomy”. There he criticizes the SAE hierarchy, pointing out that the same car may be L2 here, L3 over there, and L4 somewhere else, which means the levels (and the language we use to describe them) must be specific to functionality, not to the vehicle itself.
Only one person—technologist Brad Templeton—has successfully deconstructed why the SAE levels don’t work. For those who don’t want to read the definitive essay on the topic, his thesis boils down to this:
- No taxonomy can work if it’s defined by the role of the human within it.
- The SAE charts suggest a progression of automation that doesn’t exist, because…
- Advanced Driver Assistance Systems (ADAS) have nothing to do with self-driving.
Templeton even says, “I believe a case can be made that the [SAE] levels are holding the industry back, and have a possible minor role in the traffic fatalities we have seen with Tesla Autopilot.”
Templeton isn’t just right; if anything, his argument doesn’t go far enough. Not only are the SAE levels holding the industry back by omitting parallel systems (and others yet to be invented); of course the levels have a role in traffic fatalities, especially those involving Tesla Autopilot. SAE is the only organization capable of defining, renouncing and improving the language of self-driving. That Elon Musk has so easily exploited the generic word “autopilot” for Tesla’s branding is as much an indictment of the SAE levels as it is of Tesla’s marketing department.
What’s the Solution?
There is only one solution, and that’s to toss the SAE language and start from scratch. The bar for use of the words “autonomous” and “self-driving” needs to be set so high that no media outlet can exploit them for traffic, no car company can use them in a press release to boost its stock price, and most importantly, no driver thinks they can take their hands off the wheel, even temporarily.
I’ve got an idea. It starts with a new word: Geotonomous.
Want to know more? Stay tuned for my next column: How To Replace The SAE Automation Levels.
Alex Roy—Founder of the Human Driving Association, Editor at The Drive, Host of The Autonocast, co-host of /DRIVE on NBC Sports and author of The Driver—has set numerous endurance driving records, including the infamous Cannonball Run record. You can follow him on Facebook, Twitter and Instagram.