Tesla Uses Autopilot Data to Defend Itself, But What About Driver Privacy?

Tesla only releases data when it suits its purposes, a new report claims. The carmaker says it's just setting the record straight.

When a Tesla electric car using the automaker’s Autopilot system crashes, Tesla likes to turn to in-car data to dismiss claims that its tech is at fault. But what about the customers who generate that data?

While Tesla is quick to use Autopilot data to counter claims of faults or glitches, it is less eager to release that data to its customers—or even to seek their permission before releasing it, according to The Guardian. The newspaper said it could not find a single case in which Tesla sought permission before releasing data to the media when Autopilot was suspected to be at fault in a crash.

The Guardian also discovered one case in which Tesla explicitly denied an owner’s request to see data from his own car. A Swiss driver, who spoke on condition of anonymity, wanted access to his car's data after his Model S collided with a van on the highway. While he considers himself a “Tesla fanboy,” the driver said he was concerned about being denied data that he could use to defend himself in court.

The Swiss Model S owner had requested full data logs from his car; Tesla has never released anything quite that extensive publicly. The carmaker typically discloses specific pieces of information to counter what it views as unfair or inaccurate claims about Autopilot made by owners. Those disclosures have included revealing that a Montana Tesla driver did not have his or her hands on the wheel during a July 2016 crash, and that a California driver deactivated Autopilot by pressing the brake pedal, resulting in a collision the driver blamed on an Autopilot fault.

“Autopilot has been shown to save lives and reduce accident rates, and we believe it is important that the public have a factual understanding of our technology,” Tesla said in a statement defending its practices.

“In unusual cases in which claims have already been made publicly about our vehicles by our customers, authorities, or other individuals, we have released information based on the data to either corroborate or disprove these claims,” the automaker said. “The privacy of our customers is extremely important and something we take very seriously, and in such cases, Tesla discloses only the minimum amount of information necessary.”

What’s clear is that Tesla considers setting the record straight on Autopilot to be vitally important. Despite its name, Autopilot is not a truly autonomous system; it’s closer to the bundles of driver-assist features offered by other automakers. But the name “Autopilot” has led to some confusion among customers about the system’s actual capabilities. Autopilot was widely criticized after a fatal May 2016 crash involving a Model S running the system, but a National Highway Traffic Safety Administration (NHTSA) investigation cleared Tesla of any wrongdoing.

Tesla’s privacy policy states that the company has the right to “transfer and disclose information, including personal and non-personally identifiable information… to protect the rights, property, safety, or security of the Services, Tesla, third parties, visitors to our Services, or the public as determined by us in our sole discretion.”

UPDATE: Tesla told The Drive that, in the case of the Swiss driver mentioned by The Guardian, it provided all information necessary under the Swiss Data Protection Act, and did not release any information from that incident to the press.