Smitten Tesla Owners Do Not Realize They Are Lab Rats For The Company’s Autonomous Technology


  • Musk’s latest tweets indicate, contrary to prior claims, that the Autopilot dataset is not statistically significant.
  • Musk’s tweets attempt to redefine “beta,” supporting our contention that the product may be alpha or even pre-alpha.
  • Empirical data suggests Autopilot is much less safe than not having Autopilot; we challenge Mr. Musk to present comparative statistical data to customers and investors.

In a series of tweets over the weekend, Tesla Motors (NASDAQ:TSLA) Chairman Elon Musk tried hard to convey to his loyal followers the merits of Tesla’s Autopilot and the thinking, or more appropriately the lack thereof, behind the SolarCity (NASDAQ:SCTY) acquisition.

While Mr. Musk was busy tweeting, there was yet another accident in which a customer attributed fault to the Autopilot feature. While the credibility of the customer’s claim has yet to be confirmed, debate continues to rage on the internet over the safety of Autopilot.

As is common whenever such a topic is discussed, Tesla fans routinely find the hapless drivers at fault for not knowing Autopilot’s limitations. Here is a telling quote from the comment section of the linked article:

“Model S owner chiming in. I think we’re seeing people trust it too much. If I let autopilot have its way I’d be dead a dozen+ different times. You can’t be stupid about it – you have to judge road quality, lighting conditions, etc before turning it on. You also need to pay attention because sometimes it’ll lose the line and just veer suddenly.

The rash of accidents we’re seeing is almost certainly due to people putting too much trust in it. Notice, also, it’s almost always new vehicles and not older ones? New drivers unfamiliar with the limits of the technology trusting it too much.

This driver, on an undivided country road without cell phone reception? He/she had zero business using autopilot in such a circumstance. Inattention and too much trust caused this one, plain and simple.”

Note that this comes from a person who, by his own account, would have been dead a dozen-plus times using this “safe” system if not for his interventions. It is telling that Tesla has such a loyal fan base for this apparent death trap. Unfortunately for Tesla, this type of fanatical customer base will not be the norm as the car reaches the mass market. As such, evolution ensures that this type of thinking does not prosper, as the human race is known to pick proper survival strategies over time.

The real question for Musk here is this: Is Autopilot saving drivers or are fanatic customers saving Autopilot?

In the context of this question, note that the perspective of the user quoted above is not unique. There is an abundance of stories on Tesla Motors Club forums and YouTube about drivers trying to stay safe in spite of Autopilot. The overwhelming number of such stories shows that customers are saving Autopilot, not the other way around.

Against this background comes Mr. Musk’s vacuous tweeting. We believe these tweets show that Mr. Musk is at risk of losing whatever little credibility he has left on the subject.

Several things are clear from these tweets:

– Mr. Musk is trying to redefine what “beta” means, which we believe is ill-advised.

– Mr. Musk is tacitly accepting that the current dataset is not statistically significant – an argument we have made in the past.

– Mr. Musk is not even sure if this beta will be “sufficient” after the system hits 1 billion miles in about 6 months.

It is mind-boggling that the chief executive of a multibillion-dollar auto company claims his company’s products are safe based on a dataset that he himself admits is not statistically significant, all while conceding that the sufficiency of testing will remain unknown even after the next six months.
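To see why a handful of fatal events cannot settle the safety question, consider how wide an exact Poisson confidence interval is around a single observed fatality. The mileage figure below is an illustrative assumption for the sketch, not a Tesla disclosure:

```python
import math

def poisson_ci(k, alpha=0.05):
    """Exact (Garwood) confidence interval for a Poisson count k, found by bisection."""
    def tail_at_most(lam):   # P(X <= k | lam), decreasing in lam
        return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))
    def tail_at_least(lam):  # P(X >= k | lam), increasing in lam
        return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    def solve(f, target, lo, hi, increasing):
        for _ in range(200):
            mid = (lo + hi) / 2
            if (f(mid) < target) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(tail_at_least, alpha / 2, 0.0, float(k), True)
    upper = solve(tail_at_most, alpha / 2, float(k), 10.0 * k + 20.0, False)
    return lower, upper

# Illustrative assumption: 1 fatality observed over 130 million Autopilot miles
lower, upper = poisson_ci(1)
miles = 130e6
rate_lo, rate_hi = lower / miles * 1e8, upper / miles * 1e8
# The 95% interval spans roughly 0.02 to 4.3 fatalities per 100M miles,
# straddling the ~1.1 per 100M U.S. average: one event decides nothing either way.
```

With an interval that wide, the same single data point is consistent with Autopilot being dramatically safer or dramatically more dangerous than human driving, which is precisely what "not statistically significant" means here.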

In the interest of not misleading the customers and investors who believed the Company’s claims of safety based on statistically insignificant data, can Tesla put out a press release that more accurately depicts the technology’s status?

We will not hold our breath waiting for Tesla to do the right thing here and will instead focus on what is wrong with this new set of tweets from Mr. Musk.

Firstly, we need to clarify the definition of Autopilot. Autopilot is a set of technologies that come together to give the customer a sense that the car can drive itself in certain situations. The key pieces of Tesla’s Autopilot technology are: automatic emergency braking, adaptive cruise control, Autosteer, Autopark, and Summon.

Secondly, we want to address what it means to be in “beta.” In our recent article about the Tesla Autopilot situation, we said, “Forget beta, this system is likely not even ready for alpha.” Below, we discuss the nature of alpha versus beta and share our view on where Tesla likely is in its product evolution. While there is wide latitude in what companies call an alpha test or a beta test, the following is a reasonable approximation of the differences:

| | Alpha Test | Beta Test | Tesla-Specific Remark |
|---|---|---|---|
| What is the main goal? | Identify and resolve key technical challenges | Identify and remove smaller annoyances | “Public beta” is used by many companies to accelerate bug finding |
| When does it happen? | When the product is functional but not ready for release | When the product is almost ready for launch and has minor bugs | Autopilot may be pre-alpha, as it appears the hardware may not be capable of the intended functionality without LIDAR |
| Who does it? | Typically experts and in-house testers | Experts and knowledgeable or professional customers | Many Tesla customers are hard-core, but their competence to conduct beta testing is questionable |
| How long does it take? | Several months or years for complex products | Usually weeks or months before the final product is released | Tesla has been running the test for years, and it is doubtful it will reach production with the deployed hardware |
| What does it cost the customer? | Alpha systems are almost never sold to customers, as they are not considered customer-ready | Beta systems are typically deployed free, with the customer obliged to provide feedback | Tesla charges for a product that is, by definition, not ready; this considerably increases its liability, as courts may find it increases Tesla’s responsibility |

The most telling aspect of the above discussion is that Tesla Autopilot is not in beta by the traditional definition. Note that Tesla has been claiming “beta” status for almost two years now, and even by Mr. Musk’s own tweets the end point is not in sight. In reality, Mr. Musk’s tweets only confirm that the system may not even be in alpha, as there is considerable doubt about its sufficiency.

To call the current Autopilot system a beta and foist it on unsuspecting customers is irresponsible and reckless at best. From an investor’s viewpoint, the shortcomings of the current system are being somewhat hidden by the cult-like nature of the vocal customer base.

Investors should also take the commentary of Tesla drivers with a grain of salt. When it comes to driving, most people think their skills are above average. This illusory superiority, we can be certain, extends to Tesla beta testers. A look at the comment sections of Tesla-related Seeking Alpha articles and the Tesla Motors Club forums indicates that most Tesla drivers seem to think they understand the limitations of the Autopilot system. The irony is that this thinking persists even after a driver reports a new, never-before-seen problem on their $100K Tesla.

The reality is that, given the immaturity of the system, Tesla’s documentation is likely well short of what it should be. Many of the system’s limitations and problematic corner cases are likely undocumented. Consequently, the vast majority of drivers who think they understand the system likely have significant gaps in knowledge that they are not aware of.

Nothing demonstrates this better than the recent PR duel between Tesla and Mobileye, in which Mobileye claimed its system is not designed to prevent the recent Florida accident, whereas Tesla felt its IP should have detected it but did not, for various reasons. The bottom line is that Tesla itself does not know the limitations of its half-baked system. We are not talking about small bugs but big ones that a truck can literally drive through.

Based on information taken from internet forums, we strongly believe that by the time statistically significant data is available, it will be abundantly clear that Autopilot is unsafe.

By glibly going on a PR offensive in defense of a half-baked technology, Tesla is not only putting lives at risk but also doing considerable damage to its brand.

If Tesla really has the data to stand behind its claims, Elon Musk must stop blogging and put out one or more detailed white papers showing the math and statistics behind why he deems Autopilot safer than human driving.

The key to presenting statistically significant data is to find elements for which such data is likely to be available. We cannot wait for fatalities to provide statistically significant data, because fatalities are rare in modern cars, especially in cars like the Model S and Model X with 5-star crash ratings.

In the current context, instead of waiting for fatalities, it would be much better for Tesla to present comparative accident or incident data. For example, a good way to show this would be comparative incident and accident data between Teslas with Autopilot and Teslas without (shown with proper controls).
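As a sketch of what evaluating such a controlled comparison would involve, a standard two-proportion z-test could be applied to incident counts from matched Autopilot and non-Autopilot fleets. All counts below are hypothetical placeholders, not Tesla data:

```python
import math

def two_proportion_ztest(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference in incident rates between two groups."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided normal tail
    return z, p_value

# Hypothetical: 120 incidents among 25,000 Autopilot cars vs. 60 among 25,000 without
z, p = two_proportion_ztest(120, 25_000, 60, 25_000)
# z is about 4.5 and p is far below 0.05: a gap of this size, with proper
# controls, would be statistically significant evidence of a real difference
```

The point of the sketch is that accidents, unlike fatalities, occur often enough for a fleet of this size to yield a decisive answer; Tesla has the telemetry to run exactly this kind of controlled comparison.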

Anecdotal data we have seen on this subject from internet forums indicates that accident and incident reports for Teslas with Autopilot are off the charts. There are hundreds of reported incidents on roughly 25K cars. Our very rough estimate, based on this empirical evidence, is that Teslas using Autopilot have a MUCH WORSE safety record than those without. We believe Autopilot is not just marginally bad but plain bad when it comes to safety.

We are willing to change our view on the subject if Tesla can provide comparative data that can demonstrate otherwise. We challenge Mr. Musk to present controlled comparative statistical data to customers and investors.


The Science of Automated Cars and an Impatient Business

Deadly Tesla Crash Exposes Confusion over Automated Driving

Amid a federal investigation, ignorance of the technology’s limitations comes into focus

How much do we really know about what so-called self-driving vehicles can and cannot do? The fatal traffic accident involving a Tesla Motors car that crashed while using its Autopilot feature offers a stark reminder that such drivers are in uncharted territory—and of the steep cost of that uncertainty.

The sensor systems that enable Tesla’s hands-free driving are the result of decades of advances in computer vision and machine learning. Yet the failure of Autopilot—built into 70,000 Tesla vehicles worldwide since October 2014—to help avoid the May 7 collision that killed the car’s sole occupant demonstrates how far the technology has to go before fully autonomous vehicles can truly arrive.

The crash occurred on a Florida highway when an 18-wheel tractor-trailer made a left turn in front of a 2015 Tesla Model S that was in Autopilot mode and the car failed to apply the brakes, the National Highway Traffic Safety Administration (NHTSA)—which is investigating—said in a preliminary report. “Neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied,” according to a statement Tesla issued last week when news of the crash was widely reported. Tesla says Autopilot is disabled by default in the cars, and that before they engage the feature drivers are cautioned that the technology is still in the testing phase. Drivers are also warned that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” the company says.

In addition to investigating exactly what happened in Florida, Tesla is looking into a Pennsylvania crash that took place July 1—the day after the NHTSA announced its probe—involving a 2016 Tesla Model X that may have been using Autopilot at the time of the accident, according to the Detroit Free Press. Tesla says there is no evidence Autopilot was in use during the mishap, although the Pennsylvania State Police contend the driver said the car was using the self-driving feature.


Tesla’s description of the Florida accident suggests the car’s computer vision system was likely the crux of the problem, says Ragunathan Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University’s CyLab and veteran of the university’s efforts to develop autonomous vehicles—including the Boss SUV that won the Defense Advanced Research Projects Agency (DARPA) 2007 Urban Challenge. Computer vision allows machines to detect, interpret and classify objects recorded by a camera, but the technology is known to be imperfect “by a very good margin,” Rajkumar says.

The paradox of computer vision systems is that in order to classify an object quickly, they generally use low-resolution cameras that do not gather large amounts of data—typically two megapixels, much less than the average smartphone camera. “The only way you can get high reliability is for [a self-driving technology] to combine data from two or more sensors,” Rajkumar says. Automobiles with self-driving features typically include cameras and radar as well as light detection and ranging (LiDAR).
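Rajkumar’s point about combining sensors can be illustrated with a toy fusion rule (purely a sketch of the principle, not how any production system works): fusing independent detection estimates in log-odds space, two individually mediocre sensors yield a far more confident joint answer than either alone.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def fuse(prior, *sensor_probs):
    """Naive-Bayes fusion: sum each sensor's log-odds evidence relative to the prior."""
    log_odds = logit(prior) + sum(logit(p) - logit(prior) for p in sensor_probs)
    return 1 / (1 + math.exp(-log_odds))

# Camera and radar each independently 80% sure an obstacle is present (50% prior):
combined = fuse(0.5, 0.8, 0.8)
# combined ≈ 0.94: the fused estimate is more confident than either sensor alone
```

This toy rule assumes the sensors err independently, which is exactly why engineers pair a camera with radar or LiDAR rather than a second camera: sensors with different failure modes (a white trailer against a bright sky blinds a camera but not a radar) contribute nearly independent evidence.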

Tesla vehicles rely on artificial vision technology provided by Mobileye, Inc. The company’s cameras act as sensors to help warn drivers when they are in danger of rear-ending another vehicle, and in some instances can trigger an emergency braking system. The company noted in a statement last week that the Tesla incident “involved a laterally crossing vehicle,” a situation to which the company’s current automatic emergency braking systems are not designed to respond. Features that could detect this type of lateral turn across a vehicle’s path will not be available from Mobileye until 2018, according to the company. Mobileye co-founder Amnon Shashua acknowledged the accident last week during a press conference called to introduce a partnership with automaker BMW and chipmaker Intel that promised to deliver an autonomous vehicle known as the iNEXT by 2021. “It’s not enough to tell the driver you need to be alert,” Shashua said. “You need to tell the driver why you need to be alert.” He provided no details on how that should be done, however.


Consumers must be made aware of any self-driving technology’s capabilities and limitations, says Steven Shladover, program manager for mobility at California Partners for Advanced Transportation Technology (PATH), a University of California, Berkeley, intelligent transportation systems research and development program. “My first reaction [to the crash] was that it was inevitable because the technology is limited in its capabilities and in many cases the users are really not aware of what those limitations are,” says Shladover, who wrote about the challenges of building so-called self-driving vehicles in the June 2016 Scientific American. “By calling something an ‘autopilot’ or using terms like ‘self-driving,’ they sort of encourage people to think that the systems are more capable than they really are, and that is a serious problem.”

Vehicles need better software, maps, sensors and communication as well as programming to deal with ethical issues before they can truly be considered “self-driving,” according to Shladover. These improvements will come, but there will always be scenarios that they are not ready to handle properly, he says. “Nobody has a software engineering methodology today that can ensure systems perform safely in complex applications, particularly in systems with a really low tolerance for faults, such as driving,” he adds.

Vehicles with increasingly advanced self-driving features are emerging as a significant force in the automobile industry for several reasons: Some major carmakers see the technology as a way to differentiate their brands and tout new safety features in their higher-end models. Also, there is demand for systems that monitor driver alertness at the wheel as well as software that issues warnings when a vehicle strays from its lane and takes over braking systems when a vehicle is cut off; the market will only increase for such features as they become more affordable. Further, motor vehicle deaths were up by 7.7 percent in 2015, and 94 percent of crashes can be tied back to human choice or error, according to an NHTSA report issued July 1.

The quest to roll out new autonomous driving features will unfold rapidly over the next five years, according to a number of companies working on the technology. In addition to the BMW iNEXT, GM’s hands-free, semiautonomous cruise control is expected in 2017. The next Mercedes E-Class will come with several autonomous features, including active lane-change assist that uses a radar- and camera-based system. Much of the progress beyond those core self-driving capabilities will depend on federal government guidance. In March the U.S. Department of Transportation’s National Transportation Systems Center reviewed federal motor vehicle safety standards and concluded that increasing levels of automation for parking, lane changing, collision avoidance and other maneuvers is acceptable—provided that the vehicle also has a driver’s seat, steering wheel, brake pedal and other features commonly found in today’s automobiles.

Google had started down a similar road toward offering self-driving features about six years ago—but it abruptly switched direction in 2013 to focus on fully autonomous vehicles, for reasons similar to the circumstances surrounding the Tesla accident. “Developing a car that can shoulder the entire burden of driving is crucial to safety,” Chris Urmson, director of Google parent corporation Alphabet, Inc.’s self-driving car project, told Congress at a hearing in March. “We saw in our own testing that the human drivers can’t always be trusted to dip in and out of the task of driving when the car is encouraging them to sit back and relax.”

Just as Google’s experiments caused the company to rethink its efforts to automate driving, Tesla’s accident, although not necessarily a setback, “will justifiably lead to more caution,” Rajkumar says.


Secret Masterplan and a Reluctant Wall Street

Part 2, that is.

Tesla Motors CEO Elon Musk on Sunday tweeted his intention to soon publish part two of his “top secret Tesla masterplan” following an embattled several weeks for the Silicon Valley heavyweight.

The tweet, which read “Working on Top Secret Tesla Masterplan, Part 2. Hoping to publish later this week,” comes amid inquiries into two crashes of Tesla cars, as well as ongoing questions regarding Musk’s plans to combine his electric vehicle company, Tesla (TSLA), with his solar panel maker, SolarCity (SCTY).

On July 1, a driver of a Tesla Model X in Pennsylvania crashed into a turnpike guard rail. The National Highway Traffic Safety Administration (NHTSA) announced last week it is investigating the crash “to determine whether automated functions were in use at the time of the crash.”

Tesla has said, “Based on the information we have now, we have no reason to believe that Autopilot had anything to do with this accident.”

The crash came on the heels of NHTSA’s disclosure it was investigating a May 7 crash in Florida that killed the driver of a Tesla Model S that was operating in Autopilot mode.

The Tesla Autopilot system allows the car to keep itself in a lane, maintain speed and operate for a limited time without a driver doing the steering.

Last month, Tesla proposed to buy SolarCity, of which Musk serves as chairman and principal shareholder. Musk expects the deal will help Tesla get into the market for sustainable energy for homes and businesses.

Jim Chanos of Kynikos Associates has called the proposed acquisition a “shameful example of corporate governance at its worst.”


U.S. Safety Agency Investigates Another Tesla Crash Involving Autopilot

The nation’s top auto safety regulator said on Wednesday that it had begun an investigation of a second crash involving a Tesla Motors car equipped with Autopilot technology, a system designed to let vehicles drive themselves for brief periods.

In the nonfatal crash, a Tesla sport utility vehicle rolled over last Friday on the Pennsylvania Turnpike after hitting barriers on both sides of the highway. Safety officials continue to investigate a fatal Florida accident in May. The driver of the Pennsylvania vehicle told the Pennsylvania State Police that he was operating it in Autopilot mode.

The accidents have put new scrutiny on Tesla’s Autopilot system and raised questions about whether the technology, which the company describes as only an experimental “beta” test, lulls drivers into a false sense of security.

Although Tesla drivers have posted YouTube videos of themselves operating the vehicles completely hands-free — even climbing into the back seat — the company has cautioned that Autopilot is meant only as an “auto-assist” feature that requires drivers to keep their hands on or near the steering wheel at all times.

In the Florida crash, the first known fatality involving an autonomous driving system, the driver was killed when his Tesla Model S sedan struck a tractor-trailer that was crossing the roadway.

An account given on Wednesday by a witness to the Florida accident seemed to indicate that the Autopilot system continued operating the car at highway speed, even after the vehicle’s top was sheared off by the impact and the Tesla went under the trailer and continued down the road.

“The car came from underneath the trailer,” said the witness, Terence Mulligan, who was named in the Florida Highway Patrol’s accident report. Mr. Mulligan, who was driving behind the tractor-trailer at the time, said: “The top was gone. It went right by me.”

Mr. Mulligan, in a telephone interview, said he turned and followed the Tesla, which did not slow down until it had left the road, crashed through two fences and hit a utility pole. His account jibed with the accident report by the Florida Highway Patrol, which said the car was traveling at 65 miles per hour when it hit the tractor-trailer.

Tesla has declined to comment on the details of the Florida crash, which is still under investigation by state and federal officials.

In a statement on Wednesday about the Pennsylvania crash, Tesla said it had “no reason to believe that Autopilot had anything to do with this accident” based on the information it had collected so far.

The Pennsylvania crash involved a Model X S.U.V. heading east on the Pennsylvania Turnpike about 100 miles east of Pittsburgh. The car scraped a guardrail on the right side of the road, crossed the roadway and hit the concrete median. It then rolled over onto its roof and came to a stop in the middle of the road.

Tesla vehicles have the ability to send data back to the company about their condition and operation. In a statement, the company said it received an automated alert from the Model X in Pennsylvania on July 1 showing that its airbags had deployed. But the company said more detailed information about the car’s operation was not received, a situation that could happen if the car’s antenna was damaged in the crash.

Details of the Pennsylvania crash were first reported by The Detroit Free Press. The Pennsylvania State Police declined to release additional details because an investigation is in progress.

The Pennsylvania driver, Albert Scaglione, said by phone on Wednesday that he had just been released from the hospital and declined to comment on the accident. “My attorneys will be releasing a statement shortly,” he said.

A passenger in the car, Tim Yanke, was reportedly not seriously injured.

The National Highway Traffic Safety Administration said on Wednesday that it was collecting information from the Pennsylvania State Police, Tesla and the driver to find out whether automated functions were in use at the time of the crash.

The federal safety agency has also sent a crash investigation team to Florida to determine if the Tesla Autopilot system was at fault in the accident on May 7, which killed Joshua Brown, a 40-year-old from Canton, Ohio.

In the Florida crash, charges are pending against Frank Baressi, the driver of the tractor-trailer that was hit by Mr. Brown’s Tesla. But no final determination on charges will be made until the inquiry is complete, Sgt. Kim Montes, a spokeswoman for the Florida Highway Patrol, said on Wednesday.

“We know the truck made a left turn, and the person going straight has the right of way,” she said, referring to Mr. Brown’s vehicle.

Mr. Baressi, reached by phone, declined to comment.

In an interview with The Associated Press last week, Mr. Baressi said he had heard a Harry Potter movie playing from Mr. Brown’s vehicle, but also acknowledged, “He went so fast through my trailer, I didn’t see him.”

Sergeant Montes said, “We don’t know if that’s accurate,” adding, “We may never know, obviously, given the damage of the vehicle. In a very violent crash, there’s not going to be a lot left inside a car that could be playing.”

A DVD player and a laptop computer were recovered from Mr. Brown’s vehicle after the crash.

Questions have been raised about why neither Tesla nor the federal safety agency notified the public sooner about the May 7 accident, if only to caution other drivers about using Tesla’s Autopilot feature.

When the federal investigation of Mr. Brown’s accident was disclosed last week, Tesla released a statement saying it had informed the agency of the crash “immediately after it occurred.”

But in a statement on Tuesday, Tesla said it did not tell the federal agency about the accident until nine days later.

The Florida Highway Patrol contacted Tesla, seeking help in downloading data from the car’s so-called black-box recorder, seven to 10 days after the crash.

The company said in a statement that it was obligated to notify the National Highway Traffic Safety Administration on a quarterly basis when it became aware of a fatal accident involving a Tesla vehicle.

“As part of its regular ongoing communication and not as part of any formal process, Tesla told N.H.T.S.A. about the accident while it was still in the process of conducting its investigation,” Tesla said. “This happened on May 16.”
