
Re: F6 post# 240912

Tuesday, 12/15/2015 7:43:35 PM

Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World

F6, been thinking off-'n-on since last on this subject that though Musk, with others, is worried about AI taking over, i'm boosted by the fact that he and some others do not see it as a fait accompli .. so am
back, because woan (still feel 'man' in the context of meaning/standing for/representing humanity sucks, lol, surely all would see that as a word 'woan' is more inclusive and fairer than 'man', chuckle)
has over time overcome many dangers to our existence, i'm sticking to (back to) being optimistic this will be another danger/hurdle overcome .. after wavering, i'm back to hoping so anyway .. :)


Cade Metz, 12.15.15, 7:00 am



Elon Musk. Nathaniel Wood for WIRED

Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.

At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI .. http://www.wired.com/2015/12/elon-musk-snags-top-google-researcher-for-new-ai-non-profit/ . In an interview with Steven Levy of Backchannel .. https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.xqr8zlwdv , timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”

--
If OpenAI stays true to its mission, it will act as a check
on powerful companies like Google and Facebook.
--


Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, if they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”

It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “This idea has a lot of intuitive appeal,” Miles Brundage, a PhD student at Arizona State University who studies the human and social dimensions of science and technology, says of OpenAI. “But it’s not yet an open-and-shut argument. At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

But in the creation of OpenAI, there are more forces at work than just the possibility of super-human intelligence achieving world domination. In the shorter term, OpenAI can directly benefit Musk and Altman and their companies (Y Combinator backed such unicorns as Airbnb, Dropbox, and Stripe). After luring top AI researchers from companies like Google and setting them up at OpenAI, the two entrepreneurs can access ideas they couldn’t get their hands on before. And in pooling online data from their respective companies as they’ve promised to, they’ll have the means to realize those ideas. Nowadays, one key to advancing AI is engineering talent, and the other is data .. http://www.wired.com/2015/11/google-open-sourcing-tensorflow-shows-ais-future-is-data-not-code/ .

If OpenAI stays true to its mission of giving everyone access to new ideas, it will at least serve as a check on powerful companies like Google and Facebook. With Musk, Altman, and others pumping more than a billion dollars into the venture, OpenAI is showing how the very notion of competition has changed in recent years. Increasingly, companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive.

The Advantages of Open

OpenAI is the culmination of an extremely magnanimous month in the world of artificial intelligence. In early November, Google open sourced (part of) the software engine that drives its AI services .. http://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/ —deep learning technologies that have proven enormously adept at identifying images, recognizing spoken words, translating languages, and understanding natural language. And just before the unveiling of OpenAI, Facebook open sourced the designs for the computer server it built to run its own deep learning services .. http://www.wired.com/2015/12/facebook-open-source-ai-big-sur/ , which tackle many of the same tasks as Google’s tech. Now, OpenAI is vowing to share everything it builds—and a big focus seems to be, well, deep learning.

Yes, such sharing is a way of competing. If a company like Google or Facebook openly shares software or hardware designs, it can accelerate the progress of AI as a whole. And that, ultimately, advances their own interests as well. For one, as the larger community improves these open source technologies, Google and Facebook can push the improvements back into their own businesses. But open sourcing is also a way of recruiting and retaining talent. In the field of deep learning in particular, researchers—many of whom come from academia—are very much attracted to the idea of openly sharing their work, of benefiting as many people as possible. “It is certainly a competitive advantage when it comes to hiring researchers,” Altman tells WIRED. “The people we hired … love the fact that [OpenAI is] open and they can share their work.”

--
'Thinking about AI is the cocaine of technologists: it makes
us excited, and needlessly paranoid.' Chris Nicholson, Skymind
--


This competition may be more direct than it might seem. We can’t help but think that Google open sourced its AI engine, TensorFlow, because it knew OpenAI was on the way—and that Facebook shared its Big Sur server design as an answer to both Google and OpenAI. Facebook says this was not the case. Google didn’t immediately respond to a request for comment. And Altman declines to speculate. But he does say that Google knew OpenAI was coming. How could it not? The project nabbed Ilya Sutskever, one of its top AI researchers.

That doesn’t diminish the value of Google’s open source project. Whatever the company’s motives, the code is available to everyone to use as they see fit. But it’s worth remembering that, in today’s world, giving away tech is about more than magnanimity. The deep learning community is relatively small, and all of these companies are vying for the talent that can help them take advantage of this extremely powerful technology. They want to share, but they also want to win. They may release some of their secret sauce, but not all. Open source will accelerate the progress of AI, but as this happens, it’s important that no one company or technology becomes too powerful. That’s why OpenAI is such a meaningful idea.

His Own Apollo Program

You can also bet that, on some level, Musk too sees sharing as a way of winning. “As you know, I’ve had some concerns about AI for some time,” he told Backchannel. And certainly, his public fretting over the threat of an AI apocalypse .. http://www.wired.com/2015/01/ai-arrived-really-worries-worlds-brightest-minds/ .. is well known. But he also runs Tesla, which stands to benefit from the sort of technology OpenAI will develop. Like Google, Tesla is building self-driving cars, which can benefit from deep learning in enormous ways.

Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.
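
To make that "feed it enough examples" idea concrete, here is a minimal toy sketch, my own illustration in plain Python/NumPy rather than code from OpenAI, Google, or Tesla; the data, layer sizes, and learning rate are all invented for illustration. A tiny two-layer neural network learns, by gradient descent, to label 2-D points as inside or outside a circle:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1,000 random points in the plane, labeled 1 if inside the unit circle.
X = rng.uniform(-2, 2, size=(1000, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# Two-layer network: 2 inputs -> 16 hidden units (tanh) -> 1 sigmoid output.
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: compute the network's prediction for every point.
    h = np.tanh(X @ W1 + b1)            # hidden-layer activations
    p = sigmoid(h @ W2 + b2)            # predicted probability of "inside"

    # Backward pass: gradients of the mean cross-entropy loss.
    n = len(X)
    dlogits = (p - y) / n               # error at the output pre-activation
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h**2)  # back through the tanh nonlinearity
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient-descent update: nudge every weight to reduce the error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy after 2000 steps: {accuracy:.3f}")

Scaled up by many orders of magnitude, the same loop (make a prediction, compare it to the label, nudge the weights) is what turns labeled photos into cat recognizers and driving logs into driving systems.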

--
More AI [ out one of three inside ]

Google Made a Chatbot That Debates the Meaning of Life
http://www.wired.com/2015/06/google-made-chatbot-debates-meaning-life/
--

Yes, Musk could just hire AI researchers to work at Tesla. And he is. But with OpenAI, he can hire better researchers (because it’s open, and because it’s not constrained by any one company’s business model or short-term interest). He can even lure researchers away from Google. Plus, he can create a far more powerful pool of data that can help feed the work of these researchers. Altman says that Y Combinator companies will share their data with OpenAI, and that’s no small thing. Pair their data with Tesla’s, and you start to rival Google—at least in some ways.

“It’s probably better in some dimensions and worse in others,” says Chris Nicholson, the CEO of a deep learning startup called Skymind, which was recently accepted into the Y Combinator program. “I’m sure Airbnb has great housing data that Google can’t touch.”

Musk was an early investor in a company called DeepMind—

[ $11M AI safety research program launched
Aha, this is what we've been waiting for! .. :)
.. a couple back ..
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=118661846 ]

a UK-based outfit that describes itself as “an Apollo program for AI.” And this investment gave him a window into how this remarkable technology was developing. But then Google bought DeepMind, and that window closed. Now, Musk has started his own Apollo program. He once again has the inside track. And OpenAI’s other investors are in a similar position, including Amazon, an Internet giant that trails Google and Facebook in the race to AI.

Pessimistic Optimists

But, no, this doesn’t diminish the value of Musk’s open source project. He may have selfish as well as altruistic motives. But the end result is still enormously beneficial to the wider world of AI. In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do so as well—if it hasn’t already. That’s good for Tesla and all those Y Combinator companies. But it’s also good for everyone that’s interested in using AI.

Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook. And Dr. Evil, wherever he may lurk. He can feed anything OpenAI builds back into his own systems. But the biggest concern isn’t necessarily that Dr. Evil will turn this tech loose on the world. It’s that the tech will turn itself loose on the world. Deep learning won’t stop at self-driving cars and natural language understanding. Top researchers believe that, given the right mix of data and algorithms, its understanding can extend to what humans call common sense. It could even extend to super-human intelligence.

“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”

[ YUPPERS, :) we all agree on that! ]

This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.

How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.

Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”

http://www.wired.com/2015/12/elon-musks-billion-dollar-ai-plan-is-about-far-more-than-saving-the-world/

...love reading those sorts of articles which recognize dangers yet
also hold some hope that the good guys can avert apocalypse ..

See also:

How Elon Musk Thinks: The First Principles Method
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=118763112

Animated map shows how religion spread around the world


http://investorshub.advfn.com/boards/read_msg.aspx?message_id=115590575


It was Plato who said, “He, O men, is the wisest, who like Socrates, knows that his wisdom is in truth worth nothing”
