Tesla faces lawsuit over Model X crash as driver's family blames Autopilot

Discussion in 'Model X' started by simonalvarez0987, Apr 10, 2018.

  1. simonalvarez0987

    simonalvarez0987 Active Member

    Joined:
    Dec 21, 2017
    Messages:
    717
    #1 simonalvarez0987, Apr 10, 2018
    Last edited by a moderator: Apr 11, 2018

    Tesla is facing a new lawsuit from the family of the driver of the Model X that crashed into a highway barrier near Mountain View, CA last month. According to the family’s lawyer, the fatal accident would not have happened if Autopilot had never been turned on.

    The latest developments in the story were reported by ABC7 News. In a recent interview with the station, Sevonne Huang, the wife of the Model X’s driver, reiterated that her husband had complained about Autopilot’s behavior prior to the accident.

    According to Sevonne, her husband had observed that the electric SUV would head towards the barrier when Autopilot was engaged. She also noted that her husband wanted to show her how the car’s driver-assist feature behaved in that specific area. She stated, however, that “a lot of time (sic), it doesn’t happen.” Nevertheless, when she heard news of the crash and saw the blue Model X, she knew that it was her husband’s vehicle.

    “Yeah, that’s why (when) I saw the news, I knew that’s him,” she said.

    Mike Fong, the family’s lawyer, stated that he does not expect to file a complaint against Tesla until the NTSB’s investigation is complete. Nevertheless, Fong believes that Tesla’s response to the incident places the blame on the driver of the Model X. This, according to the attorney, distracts from the issues with Autopilot.

    “Its sensors misread the painted lane lines on the road, and its braking system failed to detect a stationary object ahead. Unfortunately, it appears that Tesla has tried to blame the victim here. It took him out of the lane that he was driving in, then it failed to brake, then it drove him into this fixed concrete barrier. We believe this would’ve never happened had this Autopilot never been turned on,” Fong said.

    According to the local news agency, Tesla has released a new statement about the Model X crash, as follows.

    We are very sorry for the family’s loss.

    According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location. The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so.

    The fundamental premise of both moral and legal liability is a broken promise, and there was none here. Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel. This reminder is made every single time Autopilot is engaged. If the system detects that hands are not on, it provides visual and auditory alerts. This happened several times on Mr. Huang’s drive that day.

    We empathize with Mr. Huang’s family, who are understandably facing loss and grief, but the false impression that Autopilot is unsafe will cause harm to others on the road. NHTSA found that even the early version of Tesla Autopilot resulted in 40% fewer crashes and it has improved substantially since then. The reason that other families are not on TV is because their loved ones are still alive.

    The NTSB’s investigation into the fatal Model X accident is ongoing. This past weekend, Tesla CEO Elon Musk and NTSB Chairman Robert Sumwalt talked about the agency’s investigative processes and Tesla’s initiatives to address safety recommendations it had previously received. According to an NTSB spokesman, Musk and Sumwalt had a “very constructive conversation.”



    Article: Tesla faces lawsuit over Model X crash as driver's family blames Autopilot
     
  2. mcdanielp

    mcdanielp New Member

    Joined:
    Nov 8, 2017
    Messages:
    7
    The accident would never have happened if he had not been driving, either. What a fucking stupid thing to say.
     
  3. mcdanielp

    mcdanielp New Member

    Joined:
    Nov 8, 2017
    Messages:
    7
    And if he did not know how to use AP or he misused it, it is Tesla's fault? Wow. Ambulance chaser level 100
     
  4. Vakil92

    Vakil92 New Member

    Joined:
    Oct 31, 2017
    Messages:
    2
    Location:
    Missouri
    Autopilot is a driver-assist feature ... it's not full self-driving ...
     
  5. I wonder why they don't question the missing markings on a lane you can't drive in. Was it a Tesla that crashed there the last time, the crash after which they never replaced the barrier, which is what killed this driver? Probably not. Bad maintenance and poor markings are to blame; both cars and humans can make mistakes there.
     
  6. dgodfrey69

    dgodfrey69 New Member

    Joined:
    Apr 11, 2018
    Messages:
    3
    Suing the wrong people. There was a crash at that same spot a week before involving a car with no driver-assist feature, and the safety barrier that was never replaced after it is what caused the death.
     
  7. Johnny04

    Johnny04 New Member

    Joined:
    Mar 20, 2017
    Messages:
    38
    Location:
    Washington, DC
    They have no case, especially when they already admitted that he knew Autopilot couldn't handle that spot. So there is no surprise that it failed.

    If Autopilot worked well everywhere else but unexpectedly failed there and took the driver by surprise so that he couldn't react fast enough, then we could argue that Autopilot is harmful because we can't predict when it will fail. But in this case, he already knew it would fail there. It was no surprise to him at all, and the responsibility is all on him, which unfortunately he paid for with his life.
     
  8. Rick Duva

    Rick Duva New Member

    Joined:
    Apr 11, 2018
    Messages:
    1
    Location:
    Florida
    This is why you won’t see any more advancement, because of douchebags like this! He constantly ignored warnings to put his hands on the wheel.
     
  9. Roy_H

    Roy_H Member

    Joined:
    Jan 12, 2018
    Messages:
    113
    Location:
    Ontario, Canada
    I am sure that, legally, Tesla is on firm ground. Huang signed an agreement acknowledging that he was testing in-development hardware and software and was required to maintain vigilance at all times.

    However, Tesla also promotes Autopilot as a refined system, and people being what they are, they become complacent when something works well in normal conditions. I do not agree with Tesla concentrating on providing good lane-keeping ability while at the same time providing poor crash avoidance. Tesla needs to get its priorities straight and put safety before convenience features.

    For this reason, I hope Tesla loses this suit and adjusts its priorities.
     
  10. dgodfrey69

    dgodfrey69 New Member

    Joined:
    Apr 11, 2018
    Messages:
    3
    Putting safety first will not be a benefit of losing the suit. All it will do is set a precedent that the human driver cannot be faulted if a driver-assist feature is provided by the manufacturer. For instance, if your Subaru has emergency braking and you hit a wall doing 100 mph, it's Subaru's fault. Any feature would have to be 100% safe to be used, which I'm pretty sure is not possible.
     
  11. DT2018

    DT2018 New Member

    Joined:
    Apr 14, 2018
    Messages:
    1
    Location:
    PA
    #11 DT2018, Apr 14, 2018
    Last edited: Apr 14, 2018
    People drive sort of OK with two eyes and a brain. An AI solution might end up with more sensors than we have, for a couple of general reasons: 1) economics might select for such solutions, and 2) it might be weirdly difficult to come up with an AI that is highly reliable, maintainable, accountable, and saleable while using ONLY real-time 2D/3D image data plus any pre-canned object and world-mapping data.

    One school of thought starts with the idea that if you had a sensor package that fed the AI real-time information describing ONLY boundaries, and maybe materials (penetrating radar?), the AI could be rather simple and insect-like. Such a system could possibly skip cameras altogether and keep things like street-sign data only in a database rather than generating them in real time. Uber, Apple, and Google seem to have one level or another of affinity for this approach.

    For Tesla, the apparent hope is that starting with image data is better. From image data you can get a real-time, intelligent understanding of the world (way more than just space and materials), exactly as well as humans have already proven possible. But again, people are actually at best inconsistently good drivers, right? Some of this can be explained away for various reasons, but not necessarily all of it. Even with the best AI, will we understand the AI we "made" well enough to maintain it, account for it, and sell it?

    So really, even Elon (I guess) wants the AI in Teslas to be somewhat less generally intelligent than humans; the cars can use RADAR and ultrasound data (insect-brain data, so to speak). He really just doesn't like LIDAR. But it wasn't just cost he complained about, right? Something about passive versus active sensing and the frequencies being used struck him as fundamentally silly, I believe. I guess it might be that cameras passively pick up lots of different frequencies of light, probably overlapping the active laser light of LIDAR, so maybe Elon is saying the active sensors (RADAR and ultrasound) need to operate in frequencies no camera can reach to be truly valuable. For one thing, LIDAR didn't help the pedestrian with the strange object in the recent Uber accident. Something tells me it might have helped Tesla, and more importantly the now-deceased driver, in this Model X crash into the concrete divider.

    But certainly the larger debate is still simmering. Should the car have made a somewhat "human-like" decision to follow the pre-traveled path of the cars in front of it when it got confused about the lane markings? That is at least a good example of the big questions here, as far as I can tell. On the one hand, it might have saved the guy who was killed. On the other, can you sell such behavior? Can you account for it? Can you maintain it? In other words, the cars in front could theoretically just be the blind leading the stupid. Humans usually have a good sense of when they are being led off a cliff; the equivalent in AI may not even be what we really want. Or maybe it is.

    I guess a likely alternative in this case, and in many similar cases, might be to improve the road standards for line painting (probably both legally and in practice) just enough to make insect-brain autonomous vehicles work reliably, by changing their environment to be more suitable for insects.
     
  12. Peter Locke

    Peter Locke Guest

    I have a new car with many sensor features. I like to TEST them. Pushed too far, that could lead to a crash or death! No, you don't head for a concrete wall and then engage Autopilot to see if it will successfully avoid ALL objects. It just might not have the time/distance necessary even if everything works. I'm not ready to fault Autopilot here.
     
