How Tesla and Elon Musk Exaggerated Safety Claims About Autopilot and Cars

Tesla’s semi-autonomous system isn’t meant for most types of driving, and the automaker compares its new luxury vehicles to older, cheaper cars.

After years of hype about its autonomous driving system, the facts are coming out about how safe Tesla and Autopilot really are.

Elon Musk’s company admits three of its vehicles have crashed while Autopilot was engaged, including one fatal accident in which Joshua Brown’s vehicle ran directly into a semi truck. In the wake of Brown’s death, Musk claimed Autopilot would save thousands of lives if it were deployed universally today. That bold claim is based on insufficient data and flawed comparisons between Tesla vehicles and all other cars that stack the deck in favor of the company.

 

Musk’s fame as a self-described “applied physicist” and serial entrepreneur has generated a seemingly inexhaustible public faith in his intelligence and leadership, but by promoting such a flawed statistical comparison of his firm’s controversial system, he calls that confidence into question.

Autonomous drive technology was never mentioned in Musk’s ambitious 2006 “Top Secret Master Plan” for Tesla, and appears to have been tacked onto its world-changing mission in response to Google’s massively hyped (but still not commercially deployed) self-driving car program. The reveal of Google’s radical, zero-human-control autonomous concept car in 2014 suddenly made Tesla’s cars-of-the-future look decidedly passé. And in contrast to Google, whose communication about autonomous drive focused entirely on the technology’s long-term safety benefits, Musk’s fixation on beating the competition to market looks more like a rush to protect Tesla’s high-tech image than the pure pursuit of safer roads. When Morgan Stanley in 2015 boosted Tesla’s stock price target by 90 percent based on the projection that Musk would lead the auto industry into an “autonomous utopia,” it showed that even a perceived advantage in autonomous drive could help Tesla raise the huge amounts of capital it needs to continue growing.

 

Musk set about creating this perception in 2013, when he said that Autopilot would be capable of handling “90 percent of miles driven” by 2016. By mid-2014, Musk was promising investors that the system could handle all freeway driving, “from onramp to exit,” within 12 months of deployment. Tesla has yet to make good on those bold claims, and competitors argue that Tesla’s relatively simple sensor hardware will never be capable of safely performing at such a high level of autonomy.

 

In the wake of Brown’s fatal crash, Tesla’s sensor supplier Mobileye clarified that its current technology is not designed to prevent a crash with laterally moving traffic like the turning semi truck that Brown’s Model S struck. This week, Tesla revealed another Autopilot accident, in which a Model X traveling 55 mph swerved into wooden stakes on a canyon road.

Experts have understood Autopilot’s hardware limitations for some time, but Tesla owners and investors clearly believed that Autopilot was either an autonomous drive system or something very close to it. Brown clearly believed that Autopilot was “autonomous” and described it as such in the description of a video that Musk shared on Twitter. So great was his apparent faith in Autopilot’s autonomous capabilities that he was reportedly watching a DVD at the time of his fatal crash. The extent of Autopilot’s true abilities, which wax and wane with each over-the-air software and firmware update Tesla pushes to the car, is hotly debated on Tesla forums, where even Musk’s most devout acolytes waver between extolling its miraculous powers and blaming drivers for their inattentiveness, depending on the circumstances.

This ambiguity, and the overconfidence it breeds in semi-autonomous systems, is why Google refuses to develop anything less than a fully autonomous system that requires no driver input, a level of performance the search giant insists requires the extensive testing and expensive LIDAR sensors that Musk has often dismissed. It’s also why major automakers are developing driver-alertness monitoring systems that they say will keep drivers in semi-autonomous vehicles from relying too heavily on their vehicles’ limited capabilities.

 

Rather than waiting for LIDAR costs to come down or building in a complex driver alertness monitoring system, Tesla has chosen to blame its faithful beta testers for any problems that pop up in testing. One Tesla owner describes this Catch-22, after being told that a crash was her fault because she turned off Autopilot by hitting the brakes: “So if you don’t brake, it’s your fault because you weren’t paying attention,” she told The Wall Street Journal. “And if you do brake, it’s your fault because you were driving.”

Confusion about Autopilot’s actual abilities has persisted even after the first fatal crash was reported, with Musk dismissing questions about the wreck by claiming that road fatalities would be cut in half “if the Tesla Autopilot was universally available.” The shaky statistical basis for these claims is just the latest in a long line of confusing and contradictory statements about Autopilot’s abilities.

 

Musk first claimed that Autopilot is twice as safe as a human driver before any crashes involving Autopilot had been reported, asserting that the average distance driven before an airbag deployment was twice as long for vehicles with Autopilot. That position ignores the fact that airbag deployments are not the same as fatalities or injuries. In fact, about 3,400 Americans (about 10 percent of total annual road deaths) die each year in frontal crashes where airbags do not deploy. Moreover, Musk was drawing on just 47 million miles driven in Tesla vehicles, or roughly 0.0016 percent of the more than 3 trillion miles driven by Americans in 2014.
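Those figures put the sample-size problem in perspective; a minimal back-of-the-envelope check, using only the numbers cited above:

```python
# Sketch: how large is Tesla's Autopilot sample relative to total U.S. driving?
# Uses only the figures cited in the text above.
tesla_autopilot_miles = 47e6   # miles Musk cited for Autopilot
us_miles_2014 = 3e12           # approximate total U.S. vehicle miles traveled in 2014

share = tesla_autopilot_miles / us_miles_2014
print(f"Autopilot share of annual U.S. miles: {share:.4%}")  # ~0.0016%
```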

 

The fact that Autopilot is only supposed to be activated on divided freeways, without cross-traffic, cyclists, or pedestrians, skews the statistics so far in its favor that any comparison with broader traffic safety statistics “has no meaning,” according to Princeton automotive engineering professor Alain Kornhauser. And Tesla’s practice of blaming drivers for wrecks in which they turned off Autopilot features by attempting to steer or brake just before a crash may also be limiting the number of incidents the company reports as involving Autopilot.

 

In Tesla’s response to the recent fatality, the company emphasized that Autopilot is responsible for fewer fatalities (one per 130 million miles driven) than the overall U.S. fleet average (one per 94 million miles driven). The accuracy of the latter figure has been called into question by Sam Abuelsamid, who points out that vehicle-occupant deaths in the U.S. occur only once every 135.8 million miles, making the average U.S. vehicle slightly safer, by that measure, than Tesla’s one death per 130 million miles.
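To see why these intervals are hard to compare directly, it helps to put each quoted figure on a common footing of deaths per 100 million miles; a quick sketch using only the numbers above:

```python
# Convert the quoted fatality intervals to deaths per 100 million miles.
quoted_miles_per_death = {
    "Tesla, Autopilot engaged": 130e6,             # Tesla's figure
    "U.S. fleet, all road deaths": 94e6,           # Tesla's comparison figure
    "U.S. vehicle-occupant deaths only": 135.8e6,  # Abuelsamid's figure
}

for label, miles in quoted_miles_per_death.items():
    print(f"{label}: {100e6 / miles:.2f} deaths per 100 million miles")
# Tesla ~0.77, overall U.S. fleet ~1.06, occupants only ~0.74
```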

 

In addition to being potentially inaccurate, the company’s statistics are also fundamentally lacking in comparability because they fail to account for significant differences in vehicle age and vehicle cost—two attributes that significantly affect vehicle safety.

 

Tesla compares its fleet of (on average) 2-year-old cars to the U.S. fleet and its average age of 11.5 years, almost as old as the automaker itself. Data from the Insurance Institute for Highway Safety show that even when vehicles from 2004 were new, they were much less safe than modern vehicles are. At the time of manufacture, these vehicles were responsible for 79 fatalities per million registered-vehicle years—by comparison, the 2011 model-year vehicles cut the fatality rate by nearly two-thirds, to just 28 per million vehicle years. This yawning gap in safety is even wider now, since the 2004 vehicles are now over a decade old, and have likely racked up over 100,000 miles. Tesla holds up this aging U.S. fleet (including motorcycles, which are 26 times as likely to be involved in a fatal accident as passenger vehicles) as a reasonable safety comparison for its fleet.
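The “nearly two-thirds” reduction follows directly from the IIHS figures quoted above; a quick check:

```python
# Fatalities per million registered-vehicle years (IIHS figures cited above).
deaths_2004_models = 79
deaths_2011_models = 28

reduction = (deaths_2004_models - deaths_2011_models) / deaths_2004_models
print(f"Reduction from 2004 to 2011 model years: {reduction:.0%}")  # ~65%
```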

Tesla’s comparison also overlooks the dramatic cost difference between its luxury cars and an “average” vehicle on the road in the U.S. today.

The lowest-priced Model S starts at $66,000, about the same as an entry-level Audi A7, and climbs to around $135,000 with options, a price point similar to that of a fully optioned Lexus LS 600h hybrid. The Model X starts around $83,000 and tops out around $145,000, a price range bracketed by the Porsche Cayenne SUV on the low end and a Porsche 911 Turbo on the high end. Vehicles in this price range are engineered to be some of the safest vehicles on the road, and many come with a panoply of advanced driver-assistance safety features, like automatic emergency braking and adaptive cruise control, making them the most direct competition for Tesla’s vehicles. Comparing a vehicle in this rarefied segment to the overall market, where the average new vehicle costs less than $35,000, is beyond disingenuous.

 

Given Tesla’s sophistication and resources—and its strong incentive to make a robust case for the safety of its vehicles—it is surprising that the company wasn’t able to put together a more compelling comparison.

If federal safety regulators find Musk’s public statements about Autopilot’s abilities similarly misleading, his bold “public beta test” could set back more than just his image and Tesla’s; it could raise suspicions that compromise the development of autonomous drive technology more broadly.


Tesla’s Cheap Approach To Autopilot Might Not Lead Anywhere

Summary

  • More reports of Tesla autopilot users getting into accidents are emerging.
  • While many have already commented on Tesla’s handling of these incidents, I comment on the technical side of the autopilot technology Tesla is using.
  • Aside from product management issues, Tesla’s approach might not lead to fully autonomous driving in principle.

A self-driving car has killed someone. A Tesla car (NASDAQ: TSLA). No matter whether it was a product in beta, not a fully autonomous car, or whether there was negligence on the driver’s end – this is the headline. News of more serious accidents keeps emerging. Many contributors have already commented on how Tesla has handled the incident, in particular given the timing with regard to the company’s recent equity offering. For now, I am not interested in these issues. I am interested in Tesla’s technical product management regarding the autopilot. I will begin by discussing the narrative on the autopilot, both from a marketing and from a technical perspective.


The marketing perspective

To understand how Tesla got into this mess, the first question investors should ask themselves is why Tesla ever decided it needed an autopilot. After all, Tesla has had enough production problems on its plate, from delayed launches and reliability problems with the Model X to the Gigafactory.

The simple answer: branding. Tesla views itself as a technology company that solves humanity’s problems. It does not sell high-end luxury electric vehicles; it sells a 21st-century vision of technology bringing salvation. Its customers and shareholders buy participation in this vision first, products second. It’s about being on the right side of history.

Green energy is one aspect of that. Artificial intelligence is another big part of the narrative, and a feature like the autopilot helps brand Tesla as being at the forefront of both. Don’t just take it from me, though. A quote from Jon McNeill (highlight mine):

(The autopilot feature) is one of the core stories of what’s going on here at Tesla.

That’s right, the autopilot is a story first, a product second. From the autopilot release:

The release of Tesla Version 7.0 software is the next step for Tesla Autopilot. We will continue to develop new capabilities and deliver them through over-the-air software updates, keeping our customers at the forefront of driving technology in the years ahead.

Now turning to the technical perspective, we have to keep in mind that this is the purpose of the autopilot. Tesla desperately wanted to be first to market here. How could it achieve that with substantially fewer resources, manpower and infrastructure than traditional car makers?

The technical narrative

Most readers here are familiar with the different levels of autonomous driving. Systems like Tesla’s autopilot are classified according to which tasks they take away from a human driver and whether the human driver is still in control. There are multiple approaches to achieving different levels, and they depend on the end goal. For Tesla, that end goal is to deliver on the vision of cars that do not require a human driver at all (as stated by Elon Musk in recent conference calls). Google (NASDAQ:GOOG) (NASDAQ:GOOGL) has the same end goal and has been working on its autonomous car project for much longer than Tesla.
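For readers who want the levels spelled out, here is a rough sketch of the SAE J3016 taxonomy commonly used to classify such systems (my paraphrase, not the author’s wording; the characterization of specific products is a general one, not an official rating):

```python
# SAE J3016 driving-automation levels, paraphrased for quick reference.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: steering OR speed is assisted; driver supervises",
    2: "Partial automation: steering AND speed are assisted; driver supervises",
    3: "Conditional automation: system drives; human must take over on request",
    4: "High automation: system drives within a limited domain; no takeover needed",
    5: "Full automation: system drives anywhere a human driver could",
}

# Tesla's autopilot (as shipped in 2016) is generally described as Level 2;
# Google's stated end goal is Level 4/5 operation with no driver input.
print(SAE_LEVELS[2])
```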

The two companies’ approaches are fundamentally different, and this is where we get back to the narrative. Both Google’s and Tesla’s cars use a wide array of sensors and radars to detect obstacles and map their environment. A high-level overview of Tesla’s autopilot functionality is given here.

The main difference is that Tesla is relying on relatively cheap off-the-shelf technology from its partner Mobileye (NYSE:MBLY) and other OEMs, combined with in-house software. Google, on the other hand, has spent years collecting data from fewer cars with considerably more expensive sensor technology, notably relying on LIDAR. LIDAR (light detection and ranging) surveys the environment by scanning it with laser light and measuring distances from the reflections. This is the device always seen on top of Google’s self-driving cars.
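The ranging principle itself is straightforward time-of-flight geometry: the sensor times a laser pulse’s round trip and halves the distance light travels in that time. A minimal sketch (the 200-nanosecond return time is invented purely for illustration):

```python
# Time-of-flight ranging: range = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to a reflecting object, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_s / 2

# A hypothetical return after 200 ns corresponds to roughly 30 m.
print(f"{lidar_range_m(200e-9):.1f} m")  # ~30.0 m
```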


LIDAR is expensive – articles from last year cite $80,000 for a single sensor, and over $150,000 in sensor cost for every Google car.

LIDAR is, however, believed to be the best sensing solution if cost is not an issue. In the infamous accident, the white truck would likely have been detected. Tesla instead relies on a passive, optical approach – Mobileye’s approach: a single forward camera relying on deep learning to discriminate objects in the environment.

Last year, Elon Musk said the following on the issue:

“I don’t think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR,” Musk said at a press conference in October. “I think that completely solves it without the use of LIDAR. I’m not a big fan of LIDAR, I don’t think it makes sense in this context.”

Elon Musk’s statement here frankly does not make sense from a technical perspective – LIDAR is superior technology, period. It would simply have been economically infeasible to put LIDAR on Tesla’s cars, and, further, it would not have allowed Tesla to remotely enable the autopilot so easily on existing cars.

Hence, Tesla came up with a narrative that sounds consistent, logical and like it could be technically sound. That narrative is that while Tesla uses cheaper hardware, Tesla’s superior software, continuously learning from live data from tens of thousands of Tesla cars, will achieve the same outcome.

The New Yorker wrote a nice piece on this narrative. It frankly sounds like it is copied right out of marketing material:

Autopilot also gave Tesla access to tens of thousands of “expert trainers,” as Musk called them. When these de-facto test drivers overrode the system, Tesla’s sensors and learning algorithms took special note. The company has used its growing data set to continually improve the autonomous-driving experience for Tesla’s entire fleet. By late 2015, Tesla was gathering about a million miles’ worth of driving data every day.

A similarly superficial piece regurgitating this narrative has been blogged by Peter Diamandis, world-renowned AI researcher.

The underdog has shown up Google, of all technology companies, in much less time, with far fewer resources, and with cheaper hardware on its cars to boot. Like most things that sound too good to be true, this one is too. Here is why:

Computer vision is just not there

Readers might have noticed that many other car companies have assisted driving, e.g. Mercedes (Intelligent Drive) and BMW (Driver assistance plus). Why do they not call it autopilot, but use rather more modest feature descriptions? Likely because they realize that there is a massive disconnect between these features and truly autonomous driving.

Now, I am not an expert in computer vision, but I do research in another subfield of deep learning. What I can tell you is this: the model accuracy in deep learning is simply not what you want in life-or-death situations. It’s just not there. It is maybe 99% even in very clinical settings. Maybe even 99.9%. This means that even in situations where you think (and Tesla said) it would work, it sometimes will not, e.g. because of unlucky combinations of weather, shadow/occlusion, reflection, or color.
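To make that concrete, consider how a per-decision accuracy of 99% or 99.9% compounds over many decisions. The numbers below are purely illustrative assumptions (real perception errors are neither independent nor equally dangerous), but they show why two or three nines is not reassuring for safety-critical perception:

```python
# If each perception decision is independently correct with probability p,
# the chance of at least one error across n decisions is 1 - p**n.
# Illustrative only: real errors are correlated and vary in severity.
def prob_at_least_one_error(p_correct: float, n_decisions: int) -> float:
    return 1 - p_correct ** n_decisions

for p in (0.99, 0.999):
    chance = prob_at_least_one_error(p, 1_000)
    print(f"p = {p}: {chance:.3f} chance of at least one error in 1,000 decisions")
# p = 0.99  -> ~1.000 (essentially certain)
# p = 0.999 -> ~0.632
```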

Releasing a deep learning-based system for chat bots, virtual assistants or a very narrow set of assisted-driving tasks is probably alright. Calling it an autopilot is irresponsible. The whole narrative of ‘the systems continuously get better from all the data’ is simply misleading, because deep learning-based computer vision can only get so good right now. It creates a dangerous expectation among customers who do not understand the nature of the probabilistic models used to make life-or-death decisions. This is entirely separate from the question of whether assisted driving makes sense – it does. The point is that these models are fundamentally different from the control-theory-based approaches used in other technology, i.e. predictable analytical models that can be verified to give real-time guarantees.

More data does not fix the fundamental gap between the kind of accuracy required in critical systems and the state of the art in computer vision. I am not alone in this opinion. From one of the most renowned deep learning experts in the world, back in May:

It’s irresponsible to ship driving system that works 1,000 times and lulls false sense of safety, then… BAM!

The German car makers have not called their systems autopilot because they know this has the potential to mislead customers. Google has not released its car because it understands the problem with humans in the loop.

Google has substantially more expertise, resources and manpower in deep learning, software and computer vision than Tesla. This alone should have given journalists and analysts a sliver of doubt about the narrative of Tesla leapfrogging Google through intelligent use of software.

Funnily enough, a Tesla test vehicle with LIDAR has been spotted in Palo Alto recently.


In my opinion, Tesla may have to admit that its current autopilot technology will never reach reliable, fully autonomous driving. Tesla might have to turn to LIDAR as well. This would completely destroy the narrative of having the autopilot in anything but high-end models. The Model 3 would never be able to be autonomous at the promised price point.

Summary

Tesla might be forced to completely reverse its autopilot narrative. By trying to push a marketing story of being at the forefront of innovation, the company has endangered customers. Its off-the-shelf technology might be good enough for a particular set of situations. That set will certainly expand in the future, but it might never reach safe full autonomy unless Tesla changes its hardware stack.

Tesla has not leapfrogged Google. Ultimately, I expect Google to come out on top with fully autonomous driving. Good things take time, and Google is taking the time to get this right.


The Science of Automated Cars and an Impatient Business

Deadly Tesla Crash Exposes Confusion over Automated Driving

Amid a federal investigation, ignorance of the technology’s limitations comes into focus

How much do we really know about what so-called self-driving vehicles can and cannot do? The fatal traffic accident involving a Tesla Motors car that crashed while using its Autopilot feature offers a stark reminder that drivers of such vehicles are in uncharted territory—and of the steep cost of that uncertainty.

The sensor systems that enable Tesla’s hands-free driving are the result of decades of advances in computer vision and machine learning. Yet the failure of Autopilot—built into 70,000 Tesla vehicles worldwide since October 2014—to help avoid the May 7 collision that killed the car’s sole occupant demonstrates how far the technology has to go before fully autonomous vehicles can truly arrive.

The crash occurred on a Florida highway when an 18-wheel tractor-trailer made a left turn in front of a 2015 Tesla Model S that was in Autopilot mode and the car failed to apply the brakes, the National Highway Traffic Safety Administration (NHTSA)—which is investigating—said in a preliminary report. “Neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied,” according to a statement Tesla issued last week when news of the crash was widely reported. Tesla says Autopilot is disabled by default in the cars, and that before they engage the feature drivers are cautioned that the technology is still in the testing phase. Drivers are also warned that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” the company says.

In addition to investigating exactly what happened in Florida, Tesla is looking into a Pennsylvania crash that took place July 1—the day after the NHTSA announced its probe—involving a 2016 Tesla Model X that may have been using Autopilot at the time of the accident, according to the Detroit Free Press. Tesla says there is no evidence Autopilot was in use during the mishap, although the Pennsylvania State Police contend the driver said the car was using the self-driving feature.

FAULTY VISION

Tesla’s description of the Florida accident suggests the car’s computer vision system was likely the crux of the problem, says Ragunathan Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University’s CyLab and veteran of the university’s efforts to develop autonomous vehicles—including the Boss SUV that won the Defense Advanced Research Projects Agency (DARPA) 2007 Urban Challenge. Computer vision allows machines to detect, interpret and classify objects recorded by a camera, but the technology is known to be imperfect “by a very good margin,” Rajkumar says.

The paradox of computer vision systems is that in order to classify an object quickly, they generally use low-resolution cameras that do not gather large amounts of data—typically two megapixels, much less than the average smartphone camera. “The only way you can get high reliability is for [a self-driving technology] to combine data from two or more sensors,” Rajkumar says. Automobiles with self-driving features typically include cameras and radar as well as light detection and ranging (LiDAR).
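As a minimal illustration of why combining sensors raises reliability, consider inverse-variance fusion of two noisy, independent range estimates: the fused estimate has lower variance than either sensor alone. This is a textbook sketch with invented noise figures, not a description of any production system:

```python
# Fuse two independent range measurements by inverse-variance weighting.
# Sensor noise levels below are invented for illustration.
def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Return the fused estimate and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

# Hypothetical readings: camera 31.0 m (sigma = 2.0 m), radar 29.5 m (sigma = 0.5 m).
estimate, variance = fuse(31.0, 2.0**2, 29.5, 0.5**2)
print(f"Fused range ~{estimate:.2f} m, variance {variance:.3f} m^2")
# The fused variance (~0.235) is smaller than either sensor's alone (4.0 and 0.25).
```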

Tesla vehicles rely on artificial vision technology provided by Mobileye, Inc. The company’s cameras act as sensors to help warn drivers when they are in danger of rear-ending another vehicle, and in some instances can trigger an emergency braking system. The company noted in a statement last week that the Tesla incident “involved a laterally crossing vehicle,” a situation to which the company’s current automatic emergency braking systems are not designed to respond. Features that could detect this type of lateral turn across a vehicle’s path will not be available from Mobileye until 2018, according to the company. Mobileye co-founder Amnon Shashua acknowledged the accident last week during a press conference called to introduce a partnership with automaker BMW and chipmaker Intel that promised to deliver an autonomous vehicle known as the iNEXT by 2021. “It’s not enough to tell the driver you need to be alert,” Shashua said. “You need to tell the driver why you need to be alert.” He provided no details on how that should be done, however.

BUYER BEWARE

Consumers must be made aware of any self-driving technology’s capabilities and limitations, says Steven Shladover, program manager for mobility at California Partners for Advanced Transportation Technology (PATH), a University of California, Berkeley, intelligent transportation systems research and development program. “My first reaction [to the crash] was that it was inevitable because the technology is limited in its capabilities and in many cases the users are really not aware of what those limitations are,” says Shladover, who wrote about the challenges of building so-called self-driving vehicles in the June 2016 Scientific American. “By calling something an ‘autopilot’ or using terms like ‘self-driving,’ they sort of encourage people to think that the systems are more capable than they really are, and that is a serious problem.”

Vehicles need better software, maps, sensors and communication, as well as programming to deal with ethical issues, before they can truly be considered “self-driving,” according to Shladover. These improvements will come, but there will always be scenarios that they are not ready to handle properly, he says. “Nobody has a software engineering methodology today that can ensure systems perform safely in complex applications, particularly in systems with a really low tolerance for faults, such as driving,” he adds.

Vehicles with increasingly advanced self-driving features are emerging as a significant force in the automobile industry for several reasons: Some major carmakers see the technology as a way to differentiate their brands and tout new safety features in their higher-end models. Also, there is demand for systems that monitor driver alertness at the wheel as well as software that issues warnings when a vehicle strays from its lane and takes over braking systems when a vehicle is cut off; the market will only increase for such features as they become more affordable. Further, motor vehicle deaths were up by 7.7 percent in 2015, and 94 percent of crashes can be tied back to human choice or error, according to an NHTSA report issued July 1.

The quest to roll out new autonomous driving features will unfold rapidly over the next five years, according to a number of companies working on the technology. In addition to the BMW iNEXT, GM’s hands-free, semiautonomous cruise control is expected in 2017. The next Mercedes E-Class will come with several autonomous features, including active lane-change assist that uses a radar- and camera-based system. Much of the progress beyond those core self-driving capabilities will depend on federal government guidance. In March the U.S. Department of Transportation’s National Transportation Systems Center reviewed federal motor vehicle safety standards and concluded that increasing levels of automation for parking, lane changing, collision avoidance and other maneuvers is acceptable—provided that the vehicle also has a driver’s seat, steering wheel, brake pedal and other features commonly found in today’s automobiles.

Google had started down a similar road toward offering self-driving features about six years ago—but it abruptly switched direction in 2013 to focus on fully autonomous vehicles, for reasons similar to the circumstances surrounding the Tesla accident. “Developing a car that can shoulder the entire burden of driving is crucial to safety,” Chris Urmson, director of Google parent corporation Alphabet, Inc.’s self-driving car project, told Congress at a hearing in March. “We saw in our own testing that the human drivers can’t always be trusted to dip in and out of the task of driving when the car is encouraging them to sit back and relax.”

Just as Google’s experiments caused the company to rethink its efforts to automate driving, Tesla’s accident, although not necessarily a setback, “will justifiably lead to more caution,” Rajkumar says.
