Alphabet’s Eric Schmidt: The design of AI should “avoid undesirable outcomes”
"Robots should only be this big." (AP Photo/Gemunu Amarasinghe)
F6, your trick of where possible changing w=1600 to w=800 works well, lol, i was able to do it with the image above, but can't figure if it's possible to make many others smaller.
Before Skynet can become self-aware, before the robots can rise up .. http://qz.com/481479 .. , we need a system in place to safely pursue research into artificial intelligence. Or so argue Eric Schmidt, the chairman of Google's parent company, Alphabet, and Jared Cohen, the head of its tech-minded think tank, Google Ideas.
Schmidt has long been bullish on the technology's prospects, backing experimental projects such as Alphabet's self-driving car program and Google's DeepMind AI research lab. He has suggested AI will revolutionize how we work and live, even going so far as to tell us not to fear .. http://www.wired.com/2014/12/eric-schmidt-ai/ .. living in a world full of AI.
Life-altering technology, Schmidt and Cohen argue, should benefit everyone, not just businesses. “As a society, we should make use of this potential and ensure that AI always aims for the common good,” they wrote.
AI research “should be open, responsible and socially engaged.”
“[T]hose who design AI should establish best practices to avoid undesirable outcomes.”
Researchers need to ask themselves, while systems are still being developed, whether the data they're using to train AI systems are sound, whether there are side effects of their research they need to consider, and whether adequate failsafes are in place within the system. “There should be verification systems that evaluate whether an AI system is doing what it was built to do,” Schmidt and Cohen wrote.
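To make the verification idea concrete, here is a minimal sketch of the kind of check Schmidt and Cohen describe: run a model against a suite of behavioral test cases before deployment, and trip a failsafe if it falls below an acceptable pass rate. All names here (`run_verification`, `FailsafeTripped`, the threshold value) are hypothetical illustrations, not any real Google or Alphabet API.

```python
class FailsafeTripped(Exception):
    """Raised when the model fails too many behavioral checks."""


def run_verification(model, test_cases, min_pass_rate=0.99):
    """Evaluate whether a model 'is doing what it was built to do'.

    model:       a callable mapping an input to an output
    test_cases:  a list of (input, expected_output) pairs
    Returns the pass rate, or raises FailsafeTripped if it is too low.
    """
    passed = sum(1 for x, expected in test_cases if model(x) == expected)
    rate = passed / len(test_cases)
    if rate < min_pass_rate:
        # Failsafe: refuse to sign off on a system that misbehaves.
        raise FailsafeTripped(
            f"pass rate {rate:.2%} below required {min_pass_rate:.2%}"
        )
    return rate


# Toy example: a "model" that is supposed to double its input.
model = lambda x: x * 2
cases = [(1, 2), (2, 4), (3, 6)]
print(run_verification(model, cases))  # 1.0
```

Real verification of a learned system is of course far harder than exact-match testing, but the shape is the same: an explicit, automated gate between "built" and "deployed."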
Artificial intelligence is quickly moving from the realm of science fiction to reality. While, thankfully, we haven’t had to worry about computer systems triggering Armageddon ..
If scientists and deep thinkers are to be believed, once we’ve cracked AI systems that can truly think and act on their own, with their own agency, it won’t be long before they blow past us .. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html .. in terms of intelligence. To keep that outcome in check, we should be shaping the development of this intelligence to benefit humanity, rather than disrupt it.
F6, i totally acknowledge and accept i will always be a relative outside observer in the AI/ASI area .. still, i don't understand the certainty of your
"they completely fail to even acknowledge, let alone address, in any way or sense, the brutal and absolute inevitability that any AI/ASI (far) more intelligent and capable than we are, including whatever such may come of their effort, will, quickly, go from doing (just) what we want/tell it to do, to doing what it decides to do (whether we like it or not) -- we're already deep into building autonomous situationally-aware/analytical decision-making into the already emergent systems which already exist (in particular but without limitation in the development of autonomous weapons systems)"
.. again, i agree, that logic is strong and it's not easy to see it not as a certainty .. seems your "brutal inevitability" is the pivotal point .. seems their public position anyway is that they don't agree with the certainty of yours, rather they believe there is a chance that AI/ASI can be managed .. lol, seriously it's hard for me, when i really don't know, to believe that Musk and all of them privately believe it is already too late to avert the takeover which you think they all - if i get that right - must privately accept .. again, it would be interesting to see what Musk's, and the others', reply would be to your thinking on that ..