NAVYA Shuttle Incidents Show Risks Of Even A Low-Speed Rush To Autonomy

Low-speed limited-autonomy shuttles are an attractive way to bring self-driving vehicles to city streets today, but the risks are still real.

Ever since Google showed off its “Firefly” autonomous vehicle in 2014, a “race to autonomy” mentality has pushed companies and cities alike into a competition to be the first to bring the driverless future to public streets. As headlines heralding the imminent arrival of this new paradigm have given way to stories about automated vehicle crashes and fatalities, the focus has shifted toward more limited forms of autonomy and a search for applications that are narrow enough to allow the current technology to be both useful and safe. But now, with news that such a low-speed, limited-domain autonomous shuttle collided with a pedestrian in Vienna, there’s more evidence that the race to autonomy still needs rethinking.

The incident in question, first reported by Austria’s ORF, is still under investigation by local authorities, but the rough outlines seem clear enough. A 30-year-old woman was walking across a street when she collided with the side of an autonomous shuttle bus, made by the French company Navya, that was traveling at about 7.5 miles per hour; she suffered a knee injury. The Navya shuttle has been taken off public roads while the investigation proceeds.

For some autonomous vehicle advocates, this incident will likely seem like more evidence for the value of autonomous vehicles: an inattentive human seems to have been at fault here, rather than the autonomous shuttle, and her inattention is precisely the kind of human error that autonomous vehicles were meant to prevent. Though nothing close to the scandal caused when an Uber test vehicle fatally struck Elaine Herzberg last year, this incident still illustrates the need to learn some important lessons about autonomous vehicles.

Navya is one of the companies that has profited most from the “race to autonomy” mentality, offering a purpose-built “driverless” vehicle to cities looking for a shot of futurist prestige on their roads today. Its vehicles are remarkably crude, with a space-age greenhouse barely concealing a simple steel-tube cage, a sparse interior, a clunky ride, and a stripped-down sensor and software stack that stretches the definition of the term “autonomous.” Rather than a true dynamic decision-making autonomous vehicle, Navya’s shuttle follows a defined path from one GPS waypoint to the next and uses lidar and cameras to avoid collisions… or at least to try to.
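In rough terms, that kind of system can be sketched as a simple control loop: follow a pre-surveyed route, and halt whenever the sensors report something too close. The Python below is a purely illustrative sketch of that architecture; the names, thresholds, and logic are assumptions for the sake of explanation, not Navya’s actual software.

```python
from dataclasses import dataclass
from typing import List, Tuple
import math

# Illustrative sketch of a fixed-route shuttle control loop.
# Thresholds and structure are assumptions, not Navya's actual software.

STOP_DISTANCE_M = 3.0    # assumed clearance at which the shuttle halts
CRUISE_SPEED_MPS = 3.4   # roughly 7.5 mph

@dataclass
class Obstacle:
    distance_m: float    # range reported by lidar/camera perception

def control_step(position: Tuple[float, float],
                 waypoints: List[Tuple[float, float]],
                 obstacles: List[Obstacle]) -> Tuple[float, float]:
    """Return (speed, heading) for one control cycle along a fixed route."""
    # Follow the pre-surveyed route: steer toward the nearest waypoint.
    target = min(waypoints, key=lambda w: math.dist(position, w))
    heading = math.atan2(target[1] - position[1], target[0] - position[0])

    # Collision "avoidance" here is just a stop: if anything is inside
    # the stop envelope, halt in place and wait for it to clear.
    if any(o.distance_m < STOP_DISTANCE_M for o in obstacles):
        return 0.0, heading

    return CRUISE_SPEED_MPS, heading
```

Note what is missing from a loop like this: there is no branch for reversing, warning, or otherwise negotiating with surrounding traffic, a gap the Las Vegas incident made painfully clear.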

The Vienna incident isn’t the first time that one of Navya’s shuttles has failed to avoid a collision. In one of the vehicle’s first deployments, on a 0.6-mile loop in Las Vegas, a semi truck backed into an alley and struck the shuttle at low speed, causing minor damage and no injuries. Despite the minor nature of the incident and the fact that the human driver struck the shuttle rather than the other way around, it received considerable media attention, if for no other reason than that the shuttle had members of the media on board that day.

Officially, the National Transportation Safety Board report [PDF] blamed the truck driver for the incident, specifically his assumption that the shuttle would “stop at a sufficient distance from his vehicle to allow him to complete his backup maneuver.” According to Jeff Zurschmeide of Digital Trends, who was on the shuttle at the time of the crash, 

The shuttle bus very obediently stopped a reasonable distance from the truck and waited for it to move. That’s where things went wrong.

What the autonomous shuttle bus didn’t expect was that the truck would back up towards it. As the driver was swinging the trailer into the alley, the tractor portion of the truck was coming right at us – very slowly. We had plenty of time to watch it happen…

…the shuttle did exactly what it was programmed to do, and that’s a critical point. The self-driving program didn’t account for the vehicle in front unexpectedly backing up. We had about 20 feet of empty street behind us (I looked) and most human drivers would have thrown the car into reverse and used some of that space to get away from the truck. Or at least leaned on the horn and made our presence harder to miss. The shuttle didn’t have those responses in its program.

Jeff Zurschmeide, Digital Trends

This eyewitness account makes an important point: Navya’s shuttle may have been on the receiving end of the collision, but it also failed to act in a way that a truck driver engaged in a tricky maneuver could reasonably predict. It couldn’t back up autonomously even though it had plenty of room to do so, nor could it recognize the need to signal its presence in a way that might have alerted the truck driver. According to the NTSB report, the human operator was not able to access the Xbox controller used to manually drive the shuttle in time to move out of harm’s way, and in any case the manual mode was not intended as an emergency measure.

The Las Vegas incident revealed several important issues with autonomous vehicles generally and the Navya shuttle specifically, some of which bear directly on what we know about the Vienna incident. First, it showed that despite the low speed and limited operating domain of the Navya shuttle, its relatively unsophisticated autonomous drive “stack” has fundamental shortcomings. Though both the Las Vegas and Vienna routes were doubtless chosen carefully to minimize complexity and risk, both shuttles ultimately encountered “edge cases” that they were not equipped to navigate properly. And though it’s tempting to blame Navya’s crude technology alone, the city governments that were so anxious to put autonomous vehicles on their streets that they underestimated the situations those vehicles might encounter must share in the blame.

In the Vienna incident, the Navya shuttle’s lack of side-sensing capability, and in particular its lack of pedestrian-focused sensors, does seem to have been a real technological shortcoming. It’s not yet clear what sensors the Vienna shuttle used, but in the Las Vegas configuration the Navya shuttle’s cameras and lidar sensors are mounted at the front and rear, with no dedicated side-facing camera or lidar (though one lidar at each end of the vehicle is capable of 360-degree sensing). A low-speed urban shuttle seems like an ideal application for short-range ultrasonic “parking sensors” or even thermal or infrared cameras, which could have detected a pedestrian approaching from the side and at least prevented her from colliding with a moving vehicle.

But the most fundamental problem with both incidents seems to be that Navya’s shuttles have only one “failure mode”: stopping dead in their tracks. That response might keep a shuttle from causing an at-fault collision, but it doesn’t mimic naturalistic human driving or take advantage of opportunities to interact with traffic in the safest possible ways. The ability to back up opportunistically would have allowed Navya’s shuttle to avoid the Las Vegas collision, while the ability to sense potential side impacts and alert approaching vehicles and pedestrians to its presence could have prevented both incidents. The fact that the Las Vegas incident 18 months ago didn’t lead to a modification that (for example) let Navya’s shuttle sense potential incoming collisions and set off a warning system of flashing lights and alert sounds is not an encouraging sign about the company’s iterative development process.
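To make that contrast concrete, here is a hedged sketch of how a stop-only response policy differs from a slightly richer one that can reverse or warn. Again, the names, thresholds, and structure are hypothetical, introduced only for illustration, and do not describe Navya’s actual design.

```python
from enum import Enum, auto

# Hypothetical comparison of a stop-only policy with a slightly richer one.
# Names and thresholds are illustrative assumptions, not Navya's design.

class Response(Enum):
    CONTINUE = auto()
    STOP = auto()
    REVERSE = auto()   # back away if there is clear space behind
    WARN = auto()      # flash lights / sound an alert at an approaching object

def stop_only_policy(threat_detected: bool) -> Response:
    # The behavior both incidents suggest: halt and wait, nothing else.
    return Response.STOP if threat_detected else Response.CONTINUE

def richer_policy(threat_detected: bool,
                  threat_closing: bool,
                  clear_space_behind_m: float) -> Response:
    if not threat_detected:
        return Response.CONTINUE
    if threat_closing and clear_space_behind_m > 5.0:
        return Response.REVERSE   # e.g. use the ~20 feet of empty street behind
    if threat_closing:
        return Response.WARN      # make the shuttle harder to miss
    return Response.STOP
```

Even a policy this simple would have given the Las Vegas shuttle a reverse option and the Vienna shuttle a warning option, assuming the sensors needed to trigger those branches were on board.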

Ultimately, though, all these issues flow out of a much broader problem: the “race to autonomy” mentality itself. Navya has positioned itself as one of the leading vendors of the low-speed “driverless” shuttles that cities have been rushing to adopt, not to solve specific problems or save money (the shuttles all require human attendants) but to boost their images. Navya’s aggressive expansion hasn’t worked out as planned, resulting in a 72% valuation haircut and the firing of its CEO last December, and in order to build its business it needs to keep costs low and get shuttles on the road quickly. Having already IPO’d last year, and with production facilities already in place, Navya would face serious challenges if it had to slow down, do further development work, significantly alter its system design, and add to its unit costs.

Low-speed, limited-domain shuttles are a great opportunity to build a business around autonomous drive technology in the short term, but Navya’s two traffic incidents show that pitfalls abound. They show the risks of underestimating the necessary technology and overestimating the simplicity of even limited public-road domains, the pressures of rapid expansion and a public listing, and the need for constant iteration. The reality is that any short-term autonomous vehicle business is going to be part vehicle-supplier business and part development and validation program, and the ability to learn from problems encountered on the road and improve vehicles based on those learnings should be baked in.

One would hope that this kind of iterative development is implied in the term “pilot” used to describe Navya’s shuttle deployments, but given the similarities between the Las Vegas and Vienna incidents, that may not be the case. And if Navya isn’t necessarily learning from these experiences and adapting its vehicles, one hopes that at least the cities considering an autonomous shuttle deployment are. After all, being on the cutting edge of autonomous technology will inevitably bring risks with it, and if you’re blinded by the cool factor, you’ll end up as an example of what not to do.

Just because a company is building a business doesn’t mean autonomous vehicles aren’t still big, expensive science experiments, and just because an autonomous vehicle wasn’t strictly speaking at fault doesn’t mean it didn’t fail. If AVs are going to live up to their promise, lessons must be learned from minor low-speed incidents as well as more dramatic failures.