Tesla Autopilot and artificial intelligence: The unfair advantage

Discussion in 'In the News' started by charlottehu8, Jul 5, 2017.

  1. charlottehu8

    charlottehu8 New Member

    Joined:
    Jun 22, 2017
    Messages:
    3
    #1 charlottehu8, Jul 5, 2017
    Last edited by a moderator: Jul 6, 2017

    Serial tech entrepreneur and Tesla CEO Elon Musk has long feared artificial intelligence, but his company’s investments in the field have been characterized as an effort to keep track of its developments. In an April 2017 interview with Vanity Fair, he expressed his concerns with AI outright and claimed that one reason for founding SpaceX was to give humanity an interplanetary escape route if artificial intelligence goes rogue. Even Musk, however, recognizes the importance of AI in real-world applications, particularly self-driving cars. At the end of June, he hired Andrej Karpathy as Tesla’s new Director of Artificial Intelligence, and MIT Technology Review claims it is the start of a plan to rethink automated driving at Tesla.

    Karpathy comes from OpenAI, a non-profit founded by Musk that focuses on “discovering and enacting the path to safe artificial general intelligence.” Before that, he interned at DeepMind, a company known for spotlighting reinforcement learning. Karpathy’s previous research focuses on image understanding and recognition, which translates directly into applying proven image recognition algorithms in Tesla’s Autopilot.

    Recently, the popular question of morality was brought up in the context of AI learning in Autopilot cars. It is interesting to consider how to teach technology to respond to an innately human moral problem. The Moral Machine, hosted by the Massachusetts Institute of Technology, is a platform built to “gather human perspectives on moral decisions made by machine intelligence, such as self-driving cars.” It asks how a machine would act in human decisions, such as whether to crash and endanger the driver or keep driving into a pedestrian crossing a street with no traffic signals. How exactly do you teach a logical machine the mechanisms of ethical decision-making?

    Although Musk and Tesla are the leaders in the self-driving field, a number of other companies are entering the competition. Google, Uber, and Intel’s Mobileye have all been exploring reinforcement learning for self-driving cars. Uber, Waymo, GM (Cruise Automation), Mobileye (camera supplier), Mercedes, and Velodyne (LiDAR supplier) could all be competitors in the realm of self-driving vehicles. However, most of their technology does not encompass full self-driving, which is Musk’s aim. While other companies are investing heavily in autonomous fleets, Tesla far outpaces them in data collection and in shipping finished product.

    What are the differentiators for Tesla in the growing field of AI directed driverless cars?

    Historically, Musk has focused on “narrow AI,” which enables the car to make decisions without driver interference. The vehicles increasingly rely on radar and ultrasonic sensors for data-gathering, which forms the basis for Tesla’s Autopilot algorithms. Rather than relying on LiDAR, Tesla uses a combination of radar and cameras, a system said to outperform LiDAR especially in adverse weather conditions such as fog.

    With the introduction of Autopilot 2.0 and Tesla’s “Vision” system, and billions of miles of real-world driving data collected by Model S and Model X drivers, Tesla continues to build a detailed 3D map of the world that gains finer resolution as more vehicles are purchased, delivered, and placed on roadways. The addition of GPS allows Tesla to assemble a visual driving map for AI vehicles to follow, paving the path for newer and more advanced vehicles.

    The addition of Karpathy will be a notable asset for Tesla’s Autopilot team. Specifically, the team will be able to apply his deep knowledge of reinforcement learning systems. Reinforcement learning for AI is similar to teaching animals by repeating a behavior until it yields a positive outcome. This type of machine learning will allow Tesla Autopilot to navigate complex and challenging scenarios: for example, determining in real time how to handle a four-way stop, a busy intersection, or other difficult situations on city streets. By making cars smarter about how they navigate, Tesla will put itself ahead of the curve with a fully thinking, fully self-driving car.
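
    To make the idea concrete, here is a minimal, hypothetical Q-learning sketch in Python: an agent learns, purely by trial, error, and reward, to brake when it reaches a stop line. This illustrates the reinforcement-learning principle described above and is not anything resembling Tesla’s actual Autopilot software; the states, actions, and rewards are invented for the example.

```python
import random

# Toy reinforcement-learning sketch (illustrative only, NOT Tesla's system).
# States 0..4 are positions approaching a stop line; the agent chooses to
# "drive" or "brake" at each step. Arriving at the stop line while braking
# (a safe stop) earns a positive reward; arriving at speed is penalized.

N_STATES = 5
ACTIONS = ["drive", "brake"]

def step(state, action):
    """Environment: advance one position; reward a braked arrival."""
    next_state = min(state + 1, N_STATES - 1)
    if next_state == N_STATES - 1:
        reward = 1.0 if action == "brake" else -1.0
        return next_state, reward, True
    return next_state, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)      # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update toward reward + discounted future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, braking should dominate just before the stop line.
policy_at_line = max(ACTIONS, key=lambda a: q[(N_STATES - 2, a)])
print(policy_at_line)
```

    The agent is never told that braking is correct; it simply repeats the episode until the rewarded behavior wins out, which is the repetition-until-positive-outcome dynamic described above.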

    Tesla is expected to demonstrate a fully autonomous cross-country drive from California to New York by the end of this year as a showcase for its upcoming Full Self-Driving Capability. Whether you are buying a Tesla Model 3 or already own a Model S or Model X, just know that you are contributing to a self-driving future, mile by mile.

    Article: Tesla Autopilot and artificial intelligence: The unfair advantage
     
  2. imipsgh

    imipsgh New Member

    Joined:
    Jun 26, 2017
    Messages:
    5
    Location:
    US
    Autonomous Levels 4 and 5 will never be reached through public shadow driving for AI; simulation is required


    Every AV or SDC maker using public shadow drivers for AI will have to drive each vehicle type ONE TRILLION miles, at an expense of at least $300B, and will put those shadow drivers and the public at risk. That risk will produce injuries and fatalities as the scenarios progress from the currently benign to progressively more complicated and dangerous ones, such as accident scenarios that will have to be driven thousands of times each to train the AI. Factor in that shadow driving itself leads to 17-24 second response times, especially in critical situations, and you can see the whole approach is untenable. Autonomous Levels 4 and 5 will NEVER be reached this way. (To date, no children or families have been killed, because the scenarios are benign. When testing moves to actual accident scenarios, this will change, and governments, insurance companies, and litigators will ensure there are consequences.)


    Regarding the cost: one trillion public shadow-driving miles in 10 years works out to 228k vehicles driving 24x7. (Since most vehicles cannot take 400k miles a year, you will wind up needing more than 228k.) Driving 24x7 takes 3 drivers per vehicle per day, which is 684k drivers (drivers skilled enough to get every action right, or the AI does not learn the right thing). My very conservative $300B estimate covers only the vehicles, gas, sensors, and drivers (at a rate of $20k for each vehicle and its sensors). Beyond that will be the cost of litigation for the accidents, injuries, and loss of life that will occur. You cannot drive and re-drive dangerous, complicated scenarios thousands of times or more and not have accidents, especially scenarios meant to teach the AI what to do in actual crashes. Are you going to have shadow drivers drive accident scenarios in bad weather, on bad roads, with dozens of other vehicles and pedestrians around? And keep driving billions of miles re-encountering them to train the AI? Just counting the known or anticipated accident scenarios, you would cause thousands, hundreds of thousands, or more accidents until you got it right. (And that does not count cases where the AI gets it right but another driver overrides that and causes an unavoidable accident.) That means thousands, if not tens or hundreds of thousands, of injuries and losses of life and property.
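
    As a sanity check on the arithmetic above, here is a short Python calculation. The 50 mph round-the-clock average speed is my assumption (the post does not state one), chosen because it is the figure that reproduces the quoted numbers of roughly 228k vehicles and 684k drivers.

```python
# Sanity check of the shadow-driving fleet arithmetic quoted above.
# ASSUMPTION: a 50 mph average speed for 24x7 driving (not stated in
# the post); it is what makes the 228k-vehicle figure work out.

TOTAL_MILES = 1_000_000_000_000      # one trillion miles
YEARS = 10
AVG_SPEED_MPH = 50                   # assumed average speed
HOURS_PER_YEAR = 365 * 24

# Miles one vehicle can cover per year driving around the clock (~438k,
# consistent with the post's remark that 400k miles/year is a stretch).
miles_per_vehicle_year = AVG_SPEED_MPH * HOURS_PER_YEAR

vehicles = TOTAL_MILES / (YEARS * miles_per_vehicle_year)   # ~228k
drivers = vehicles * 3               # three shifts per vehicle per day

# $20k per vehicle plus sensors, per the post; driver wages would have
# to make up the bulk of the $300B estimate.
vehicle_and_sensor_cost = vehicles * 20_000

print(f"vehicles: {vehicles:,.0f}")
print(f"drivers:  {drivers:,.0f}")
print(f"vehicle+sensor cost: ${vehicle_and_sensor_cost:,.0f}")
```

    Under these assumptions the vehicle-plus-sensor outlay is only a few billion dollars, so the post's $300B figure implicitly rests almost entirely on a decade of wages for roughly 685k drivers.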


    Now let’s factor in litigation and government intervention. So far, no children have died in a public shadow-driving accident, even in benign conditions (the current non-complicated, non-dangerous situations, where streets are well marked, mapped, and learned, with good weather and road conditions and little complexity). A single such death could shut the whole thing down for quite a while, starting tomorrow. Let’s say that doesn’t happen and some time goes by until the dangerous scenarios are run. There is absolutely no way the various levels of government, insurance companies, lawyers, and individuals will let you turn public roads into accident-scenario beta-test or guinea-pig sites. When this happens, everything will come to a grinding halt, leaving most of the complicated, dangerous, or accident scenarios unlearned. That will stop L4 and L5 progress. (Tesla, Comma.ai, PolySync, etc. are on this path now, with the public, not paid drivers, as the beta testers.)


    So what is the answer? Simulation for AI data gathering, engineering, and testing, augmented with test tracks and other sources where simulation cannot meet the burden, and to ensure the accuracy of the simulation.


    Let me address the first thing folks usually say at this point: that simulation is not up to this. YES, it is. Why don’t most folks know that? Because most of them come from commercial IT or even the automakers, and they have no exposure to or experience with simulation in the aerospace industry, which has had most of the needed capabilities for 20 years. Is this more complicated than that? Yes. But the technology is there. Beyond that, I believe some of the current simulation products are not that far away. The problem is that part of the industry is disjointed: not everyone knows what is available, what the capability gaps are, or how to close them. That is why I am proposing that an international association and trade study exhibit be created. (We have recently decided to add test tracks and all non-public AI and testing entities to the association.)


    Update 7-7-2017: Chris Urmson declares L3 cannot be reached with public shadow driving. This confirms it cannot lead to L4 or L5 either.

    Car companies' vision of a gradual transition to self-driving cars has a big problem

    https://www.vox.com/new-money/2017/7/5/15840860/tesla-waymo-audi-self-driving

    This is EXACTLY what happens in shadow driving. If you can’t expect a human being who owns the car to handle Level 3, due to computer-to-human handover issues, how can you expect ONE TRILLION MILES of public shadow driving to teach, engineer, and test the AI? You cannot have it both ways. And keep in mind these folks are figuring this out in benign, easy conditions. Wait until bad weather, slippery roads, complex situations, or actual accident scenarios hit.


    For more detail on all of this, as well as references for the information I cited, please see the articles below.

    Stop relying on AI to make Autonomous Vehicles - You are wasting time, $80B and risking lives
    https://www.linkedin.com/pulse/stop-relying-ai-make-autonomous-vehicles-you-wasting-michael-dekort


    Autonomous Vehicle and Mobility Simulation Association and Trade Study
    https://www.linkedin.com/pulse/autonomous-vehicle-mobility-simulation-association-trade-dekort


    Who will get to Autonomous Level 5 First and Why.
    https://www.linkedin.com/pulse/who-get-autonomous-level-5-first-why-michael-dekort
     
  3. J.Taylor

    J.Taylor Active Member

    Joined:
    Feb 13, 2017
    Messages:
    196
    Location:
    Canada
    A.I. self-driving cars only have to be slightly better than the average human driver in order to be useful. Remember, the average human driver is not very good at driving.
    An insurance company will be quite happy to insure a self-driving car based on its driving competency, just as insurers now do for human drivers. As self-driving cars become ever more reliable, their insurance rates will go down while the cost of insuring a human driver continues to rise.

    We can expect self driving cars to take over. The only real question is how soon this will happen.
     
