AI machines will lose interest in humans once they surpass their intelligence, says industry pioneer

Discussion in 'In the News' started by gene, May 16, 2018.

  1. gene

    gene Moderator

    May 19, 2016


    AI pioneer Jurgen Schmidhuber, co-founder and chief scientist of artificial intelligence startup NNAISENSE, recently stated that while machines will eventually be smarter than humans, there is no reason to fear the emerging technology.

    Jurgen Schmidhuber has been involved in the AI field since the 1980s. In 1997, he co-authored a paper on Long Short-Term Memory (LSTM), one of the concepts that ultimately became the roots of memory functions in modern AI. Speaking during the Global Machine Intelligence Summit (GMIS) last year, the AI pioneer stated that he has had big ambitions for the technology since he first began studying the field. According to Schmidhuber, he wanted to build machines that can teach themselves.

    The AI pioneer has carried his vision for advanced AI into the present day. In a recent statement to CNBC News, Schmidhuber noted that machines will likely eventually surpass humans in intelligence.

    “I’ve been working on AI for several decades, since the eighties basically, and I still believe it will be possible to witness that AIs are going to be much smarter than myself, such that I can retire,” he said.

    [Image credit: Wort & Bild Verlag]

    Unlike other tech leaders such as Elon Musk and the late Stephen Hawking, Schmidhuber has adopted a more optimistic outlook on AI. Musk, for one, has frequently mentioned the dangers of hyper-intelligent computer systems, to the point of stating that AI could be more dangerous than nuclear warheads.

    Schmidhuber, however, disagrees, stating that once AI surpasses humans’ intelligence, machines would likely just lose interest. The AI pioneer added that he and Musk had already spoken about the matter.

    “I’ve talked to him for hours, and I’ve tried to allay his fears on that, pointing out that even once AIs are smarter than we are, at some point they are just going to lose interest in humans,” he said.

    Schmidhuber does, however, believe there are still concerns about the emergence of hyper-advanced computer systems. According to the AI pioneer, the real dangers of artificial intelligence lie not with machines, but with people themselves.

    “If there are any concerns, it’s that humans should be worried about beings that are similar to yourself and share goals. Cooperation could result, or it could go to an extreme form of competition, which would be war,” he said.

    Nevertheless, considering the pace and direction of AI research today, Schmidhuber remains optimistic. While the pioneer admitted that a portion of AI research is dedicated to making intelligent weapons, the vast majority of studies in the artificial intelligence field are geared towards helping people.

    “About 95 percent of all AI research is about enhancing the human life by making humans live longer, healthier and happier,” he said.

    In many ways, Schmidhuber’s statements about human-friendly AI research and AI-based weapons ring true. While the Pentagon and countries like South Korea are exploring the concept of weaponized AI, several initiatives, including Elon Musk’s own OpenAI and Schmidhuber’s NNAISENSE, are actively engaged in the development of artificial intelligence designed to benefit people.

    Article: AI machines will lose interest in humans once they surpass their intelligence, says industry pioneer
  2. mail2larryh

    mail2larryh Guest

    Sorry, but this is getting a little silly. Unless someone finds a way to give machines a soul, they will remain just machines controlled by their inventors and makers, "humans".
  3. Theo Jones

    Theo Jones New Member

    May 16, 2018
    The "Silly" thing is pretending humans have a soul. There is no such thing. the article is flawed, however, because it ignore the obvious example - do humans lose interest in cats or dogs because we surpass them in intelligence? no.
    • Agree Agree x 1
  4. The name 'Artificial Intelligence' is misleading; there is no such thing. What appears to be 'Artificial Intelligence' is in fact human intelligence, from software programmers controlling hardware. Hardware is unable to think by itself. Let's reason from first principles and stop repeating false notions.
  5. DanP

    DanP New Member

    Mar 25, 2017
    Bellevue, Switzerland
    So one expert claims his faith, the other claims an opposite faith. Conclusion: claiming a faith is worthless.

    What is a fact, though, is that there is a risk that intelligent machines may lead to situations that harm people, such as humans becoming slaves of machines or of human dictators. Conclusion: we should try to prevent this from happening by reinforcing democratic controls over those in power.
  6. robinrhaney

    robinrhaney Member

    Oct 26, 2017
    The key here is not human intelligence, or human souls (the latter may not exist, the former definitely does not in my experience), but our animal drives. Humans want things, fear things, hope for things, are interested in things. Machines do not, no matter how intelligent, experience any of these things. They may be programmed to fake it, but they are not feeling it. AIs will not lose interest in humans. They are not capable of interest now, so cannot lose it.
    The danger to humans comes from humans. "Computer says No" syndrome is ruining lives right now. Businesses are voluntarily handing control to machines, some as simple as shop tills. As AIs get more capable, that will happen more and more. Businesses will force staff and customers to obey the decisions of glorified typewriters, because it's cheap and easy, no better reason than that.
  7. J.Taylor

    J.Taylor Active Member

    Feb 13, 2017
    Souls are like Santa's flying reindeer, fun to believe in, but they have a zero chance of being real.

    What will make AI very powerful, but also very scary, is self-awareness. As near as we can tell, this sort of self-awareness in animals was an emergent byproduct of learning to navigate an environment. This means that the first truly capable self-driving cars will have some measure of "self-awareness" built into them by the time they can navigate themselves to a chosen location.

Share This Page