Google Outlines Plan for a Kill Switch That Would Prevent a Robot Takeover
.. a revisit to that ever present unforgiving bottom line that one day they will do everything better than we can, and so be out of our control .. the "red button" i don't recall seeing before ..
A new paper offers a blueprint for avoiding the worst-case scenario when it comes to artificial intelligence.
And with that growing intelligence, of course, comes the growing fear that the machines will eventually rise up and end us all. That's why Google has outlined a plan for a kill switch that can stop algorithms before they get out of control--or at least, too out of control.
The proposed solution, described in a paper .. https://intelligence.org/files/Interruptibility.pdf .. co-authored by Laurent Orseau from Google's DeepMind .. http://deepmind.com/ .. and Stuart Armstrong of Oxford's Future of Humanity Institute, would "prevent the agent from continuing a harmful sequence of actions--harmful either for the agent or for the environment--and lead the agent into a safer situation."
And it's not as easy as installing an "off" switch, since any robot worth its weight in copper wire will figure out a way to override it. Instead, the command will need to "not appear as being part of the task at hand"--which means the robot must think it's deciding to turn itself off instead of obeying the orders of a pesky human.
The authors playfully refer to this key switch as a "big red button."
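The idea hinges on off-policy learning: because Q-learning updates its value estimates using the best available next action rather than the action the interrupted agent was actually forced to take, pressing the button doesn't change what the agent learns, so it gains no incentive to resist. A minimal sketch of that property (my own illustration under those assumptions, not code from the paper; the states, actions, and "safe action" are hypothetical):

```python
import random
from collections import defaultdict

# Hedged sketch of "safe interruptibility" with off-policy Q-learning.
# Illustration only: the two actions, the safe action, and the state
# names are invented for this example, not taken from the DeepMind paper.

ALPHA, GAMMA = 0.5, 0.9        # learning rate, discount factor
ACTIONS = ["left", "right"]
SAFE_ACTION = "left"           # what the "big red button" forces

def act(Q, state, interrupted, epsilon=0.1):
    """Choose an action; a human interruption overrides the policy."""
    if interrupted:
        return SAFE_ACTION                   # big red button pressed
    if random.random() < epsilon:
        return random.choice(ACTIONS)        # occasional exploration
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(Q, state, action, reward, next_state):
    """Off-policy Q-learning update: bootstraps from the *best* next
    action, not the one actually taken, so forced safe actions leave
    the learned values unbiased."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Tiny demonstration: the update is the same whether or not the
# human interrupted, so interruptions don't distort the learned values.
Q = defaultdict(float)
q_update(Q, "s0", "right", 1.0, "s1")
print(Q[("s0", "right")])               # 0.5 = ALPHA * (1.0 + GAMMA*0 - 0)
print(act(Q, "s0", interrupted=True))   # always the safe action: "left"
```

The key design choice is that the interruption lives in `act`, not in `q_update`: the button overrides behavior without ever entering the learning rule, which is the paper's sense of an interruption that does "not appear as being part of the task at hand."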
Artificial intelligence fears have mostly proven unfounded [hmm, not really as it's early days], and more like the stuff of sci-fi movies and futuristic novels. So far, the only outsmarting the machines have done is beat us at games--and in one case, learn to pause Tetris .. http://www.wired.co.uk/article/super-mario-solved .. to avoid losing. In May, Singularity University founder Peter Diamandis spoke out .. http://www.inc.com/tess-townsend/diamandis-dont-regulate-against-.html .. against placing regulations on artificial intelligence. And a 2014 survey .. http://www.nickbostrom.com/papers/survey.pdf .. conducted by Oxford professor and Superintelligence author Nick Bostrom found that even AI experts believe machines only have a 50/50 chance of reaching human levels of intelligence by the 2040s.
But self-learning machines are concerning enough to draw the attention of big thinkers like Elon Musk--he has referred to AI as "our biggest existential threat"--as well as Stephen Hawking and Bill Gates, with whom Musk co-authored a letter about the technology's dangers. Musk went on to help launch OpenAI .. http://www.inc.com/tess-townsend/elon-musk-open-ai-safe.html .. , a nonprofit that open sources AI findings. The organization's stated goal is to ensure that artificial intelligence is used for good.
Even when used for good and creative purposes, though, AI gives off some spooky signs. A machine in Japan almost won a literary prize .. http://www.bustle.com/articles/149887-a-computer-wrote-a-novel-and-nearly-won-a-literary-prize-for-it .. for a full length novel it wrote--which it chose to end with the sentence, "The computer, placing priority on the pursuit of its own joy, stopped working for humans."
Google developing ‘kill switch’ to stop robot uprising against humans
[Image: The rise of the robots. Credit: Warner Br/Everett/REX]
Mark Molloy 8 June 2016 • 7:55pm
The rise of the robots could be a genuine threat to the human race, experts have warned, but Google hope to find a way of stopping rogue Artificial Intelligence (AI) from taking over the world.
[...]
If you lie awake at night worrying about the ‘threat’ of AI taking over the world, then take heart from a recent episode involving robot waiters in China.