
fuagf

12/22/15 6:22 PM

#242032 RE: F6 #242005

Alphabet’s Eric Schmidt: The design of AI should “avoid undesirable outcomes”


"Robots should only be this big." (AP Photo/Gemunu Amarasinghe)

F6, your trick of changing w=1600 to w=800 where possible works well, lol, i was able to
do it with the image above, but can't figure out whether it's possible to make many of the others smaller.
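(side note for anyone following along: the resize trick just rewrites the w= query parameter in the image URL, so it only works where the image host actually honours a w=<pixels> parameter .. a rough Python sketch of the idea, with the parameter name "w" and the 800px target as assumptions, not anything the site documents:)

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    def shrink_image_url(url, width=800):
        # rewrite the w= query parameter so the host serves a smaller image
        parts = urlsplit(url)
        query = dict(parse_qsl(parts.query))
        if "w" not in query:
            return url  # no width parameter to rewrite, leave the URL alone
        query["w"] = str(width)
        return urlunsplit(parts._replace(query=urlencode(query)))

    # e.g. shrink_image_url("https://example.com/pic.jpg?w=1600&h=900")
    #      -> "https://example.com/pic.jpg?w=800&h=900"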


Written by Mike Murphy

Obsession
Machines with Brains - http://qz.com/on/machines-with-brains/

7 hours ago

Before Skynet can become self-aware, before the robots can rise up .. http://qz.com/481479 , we need a system in place to safely pursue research into artificial intelligence. Or so argue Eric Schmidt, the chairman of Google’s parent company, and Jared Cohen, the head of its tech-minded think tank, Google Ideas.

Schmidt has long been bullish on the prospects for the technology, backing experimental projects like Alphabet’s self-driving car program and Google’s DeepMind artificial-intelligence lab, suggesting AI will revolutionize how we work and live, even going as far as to tell us not to fear .. http://www.wired.com/2014/12/eric-schmidt-ai/ .. living in a world full of AI.

But it seems even Schmidt acknowledges that a degree of caution is required in AI research, much as other tech luminaries, such as physicist Stephen Hawking .. http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html .. and Tesla CEO Elon Musk have called for. (Musk has gone as far as to pledge $1 billion with a group of scientists and technologists, calling themselves OpenAI, to promote AI research .. http://qz.com/572419/ .. that has a “positive human impact.”)

In an op-ed .. http://time.com/4154126/technology-essay-eric-schmidt-jared-cohen/ .. in Time magazine, Schmidt and Cohen outlined three principles they believe developers, researchers, and companies should follow when exploring AI:

“First, AI should benefit the many, not the few.”

Life-altering technology, Schmidt and Cohen argue, should benefit everyone, not just businesses. “As a society, we should make use of this potential and ensure that AI always aims for the common good,” they wrote.

AI research “should be open, responsible and socially engaged.”

Both Google and Facebook have recently made overtures toward bringing greater transparency to their AI research. Facebook recently revealed the designs .. http://www.wired.com/2015/12/facebook-open-source-ai-big-sur/ .. for the servers it uses for AI research, while Google open-sourced the code .. http://www.wired.com/2015/11/google-open-sourcing-tensorflow-shows-ais-future-is-data-not-code/ .. behind its AI engine, TensorFlow. Critically, though, neither company gave away the data it uses to train, test, and strengthen its AI algorithms, which could be the determining factor in their success.

“[T]hose who design AI should establish best practices to avoid undesirable outcomes.”

Researchers need to ask themselves, while systems are still being developed, whether the data they’re using to train AI systems are right, whether there are any side-effects of their research they need to consider, and whether there are adequate failsafes in place within the system. “There should be verification systems that evaluate whether an AI system is doing what it was built to do,” Schmidt and Cohen wrote.

Artificial intelligence is quickly moving from the realm of science fiction to reality. While, thankfully, we haven’t had to worry about computer systems triggering armageddon just yet, we do have smart systems that can diagnose cancer .. http://qz.com/567658 , handle our appointments .. http://qz.com/488963 .. for us, and clean our floors .. http://qz.com/503850 .. on their own.

If scientists and deep thinkers are to be believed, once we’ve cracked AI systems that can truly think and act on their own, with their own agency, it won’t be long before they blow past us .. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html .. in terms of intelligence. To control this, we should be shaping the development of this intelligence to benefit humanity, rather than disrupt it.

http://qz.com/579741/alphabets-eric-schmidt-the-design-of-ai-should-avoid-undesirable-outcomes/

F6, i totally acknowledge and accept i will always be a relative outside observer in the AI/ASI area .. still, i don't understand the certainty of your

"they completely fail to even acknowledge, let alone address, in any way or sense, the brutal and absolute inevitability that any AI/ASI (far) more intelligent and capable than we are, including whatever such may come of their effort, will, quickly, go from doing (just) what we want/tell it to do, to doing what it decides to do (whether we like it or not) -- we're already deep into building autonomous situationally-aware/analytical decision-making into the already emergent systems which already exist (in particular but without limitation in the development of autonomous weapons systems)"

.. again, i agree, that logic is strong and it's not easy to see it as anything other than a certainty .. seems your "brutal inevitability" is the pivotal point .. seems their public position, anyway, is that they don't share your certainty, rather they believe there is a chance that AI/ASI can be managed .. lol, seriously it's hard for me, when i really don't know, to believe that Musk and all of them privately believe it is already too late to avert the takeover which you think they all - if i get that right - must privately accept .. again, it would be interesting to see how Musk, and the others, would reply to your thinking on that ..

Repeat from the one you replied to

my position is meant to be fair to you and the other[s] named in [all these]
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=119192626