LOL, computer beats GO champ, read about that just an hour or so ago .. 'there are more options in GO than there are atoms in the universe' stuck in the cranium
They are going ahead in leaps and bounds .. thanks, you got me by heaps with that one.
Yoshua Bengio leads one of the world’s preëminent research groups developing a powerful AI technique known as deep learning. The startling capabilities that deep learning has given computers in recent years, from human-level voice recognition and image classification to basic conversational skills, have prompted warnings about the progress AI is making toward matching, or perhaps surpassing, human intelligence. Prominent figures such as Stephen Hawking and Elon Musk have even cautioned that artificial intelligence could pose an existential threat to humanity. Musk and others are investing millions of dollars in researching the potential dangers of AI, as well as possible solutions. But the direst statements sound overblown to many of the people who are actually developing the technology. Bengio, a professor of computer science at the University of Montreal, put things in perspective in an interview with MIT Technology Review’s senior editor for AI and robotics, Will Knight.
Should we worry about how quickly artificial intelligence is advancing?
There are people who are grossly overestimating the progress that has been made. There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power. The hype isn’t about whether the stuff we’re doing is useful or not—it is. But people underestimate how much more science needs to be done. And it’s difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical.
Is there a risk that AI researchers might accidentally “unleash the demon,” as Musk has put it?
It’s not like somebody found some magical recipe suddenly. Things are much more complicated than the simple story some people would like to tell. Journalists would sometimes like to tell the story that someone in their garage will have this amazing idea, and then we have a breakthrough and have AI. Similarly, companies want to tell a nice little story that “Oh, we have this revolutionary technology that’s going to change the world—AI is almost here, and we are the company that’s going to deliver it.” That’s not at all how it works.
-- “We’re missing something big. We’ve been making pretty fast progress, but it’s still not at the level where we would say the machine understands. We are still far from that.” --
What about the idea, central to these concerns, that AI could somehow start improving itself and then become difficult to control?
It’s not how AI is built these days. Machine learning means you have a painstaking, slow process of acquiring information through millions of examples. A machine improves itself, yes, but very, very slowly, and in very specialized ways. And the kind of algorithms we play with are not at all like little virus things that are self-programming. That’s not what we’re doing.
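Bengio's point about slow, specialized improvement through millions of examples can be made concrete with a toy sketch of how today's learning algorithms actually work: each example nudges the model's parameters by a tiny amount. The data, learning rate, and target value below are hypothetical, chosen only for illustration.

```python
import random

random.seed(0)
# Toy illustration of "learning through millions of examples":
# fit a line y = w * x to noisy data, nudging w slightly per example.
true_w = 3.0
xs = [random.uniform(-1, 1) for _ in range(10_000)]
examples = [(x, true_w * x + random.gauss(0, 0.1)) for x in xs]

w = 0.0            # the machine starts out knowing nothing
lr = 0.01          # each example moves the estimate only a little
for x, y in examples:
    error = w * x - y
    w -= lr * error * x   # gradient step for squared error

print(w)  # ends up close to 3.0 only after thousands of examples
```

Nothing here "self-programs": the update rule is fixed and narrow, and improvement is gradual, which is the contrast Bengio is drawing with the runaway-AI scenario.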
What are some of the big unsolved problems with AI?
Unsupervised learning is really, really important. Right now, the way we’re teaching machines to be intelligent is that we have to tell the computer what is an image, even at the pixel level. For autonomous driving, humans label huge numbers of images of cars to show which parts are pedestrians or roads. It’s not at all how humans learn, and it’s not how animals learn. We’re missing something big. This is one of the main things we’re doing in my lab, but there are no short-term applications—it’s probably not going to be useful to build a product tomorrow.
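The supervised/unsupervised distinction Bengio describes can be shown in a few lines: instead of a human labeling every point, an unsupervised algorithm such as k-means discovers groups in raw data on its own. This is a minimal 1-D sketch with made-up data, not anything from Bengio's lab.

```python
import random

random.seed(42)
# Unlabeled data drawn from two hidden groups; no human tells the
# algorithm which point belongs to which group.
data = ([random.gauss(0.0, 0.5) for _ in range(200)] +
        [random.gauss(5.0, 0.5) for _ in range(200)])
random.shuffle(data)

centers = [min(data), max(data)]      # crude starting guesses
for _ in range(20):                   # alternate assign / update steps
    groups = ([], [])
    for x in data:
        # send each point to its nearest center
        groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
    centers = [sum(g) / len(g) if g else c
               for g, c in zip(groups, centers)]

print(sorted(centers))  # centers settle near 0.0 and 5.0, unsupervised
```

Supervised learning of the kind used for autonomous driving would instead need a correct label attached to every one of those points in advance, which is exactly the labeling burden Bengio says humans and animals somehow avoid.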
Another big challenge is natural language understanding. We’ve been making pretty fast progress in the past few years, so it’s very encouraging. But it’s still not at the level where we would say the machine understands. That would be when we could read a paragraph and then ask any question about it, and the machine would basically answer in a reasonable way, as a human would. We are still far from that.
What approaches beyond deep learning will be needed to create a true machine intelligence?
Traditional endeavors, including reasoning and logic—we need to marry these things with deep learning in order to move toward AI. I’m one of the few people who think that machine learning people, and especially deep learning people, should pay more attention to neuroscience. Brains work, and we still don’t know why in many ways. Improving that understanding has a great potential to help AI research.
And I think that neuroscience people would gain a lot from keeping track of what we do and trying to fit what they observe of the brain with the kinds of concepts we are developing in machine learning.
Did you ever think you’d have to explain to people that AI isn’t about to take over the world? That must be odd.
It’s certainly a new concern. For so many years, AI has been a disappointment. As researchers we fight to make machines slightly more intelligent, but they are still so stupid. I used to think we shouldn’t call the field artificial intelligence but artificial stupidity. Really, our machines are dumb, and we’re just trying to make them less dumb.
Now, because of these advances that people can see with demos, we can say, “Oh, gosh, it can actually say things in English, it can understand the contents of an image.” Well, now we connect these things with all the science fiction we’ve seen and it’s like, “Oh, I’m afraid!”
Okay, but surely it’s still important to think now about the eventual consequences of AI.
Absolutely. We ought to be talking about these things. The thing I’m more worried about, in the foreseeable future, is not computers taking over the world. I’m more worried about misuse of AI. Things like bad military uses, or manipulating people through really smart advertising; also the social impact, like many people losing their jobs. Society needs to get together and come up with a collective response, and not leave it to the law of the jungle to sort things out.
They are all excellent reads which most all here would love!
The Sadness and Beauty of Watching Google’s AI Play Go
Cade Metz, Business, 03.11.16, 7:00 am
Geordie Wood for WIRED
SEOUL, SOUTH KOREA — At first, Fan Hui thought the move was rather odd. But then he saw its beauty.
“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.
The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo,...
[...]
Losing Control
Rather unexpectedly, I felt this sadness as the match ended and I walked towards the post-game press conference. I was soon stopped by a Chinese reporter named Fred Zhou, whose home country has so closely followed this match. According to Google, 60 million Chinese watched the first game on Wednesday afternoon. Zhou said he was so happy to talk with another technology reporter, lamenting how many journalists were treating the match like sport and hailing the power of Google’s machine learning. But then his tone changed. He said that although he was so very excited to see AlphaGo triumph after Game One on Wednesday, he now felt a certain despair. In the first game, Lee Sedol was caught off-guard. In the second, he was powerless.
Oh-hyoung Kwon, a Korean who helps run a startup incubator in Seoul, later told me that he experienced that same sadness—not because Lee Sedol was a fellow Korean but because he was a fellow human. Kwon even went so far as to say that he is now more aware of the potential for machines to break free from the control of humans, echoing words we’ve long heard from people like Elon Musk and Sam Altman. “There was an inflection point for all human beings,” he said of AlphaGo’s win. “It made us realize that AI is really near us—and realize the dangers of it too.” http://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/?mbid=social_fb
--
Match 3 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo
China launches 'hack-proof' communications satellite
World's first quantum satellite is launched in Jiuquan, Gansu Province, China, August 16, 2016. China Daily via Reuters
By Ben Blanchard Tue Aug 16, 2016 1:32am EDT
China on Tuesday launched the world's first quantum satellite, which will help it establish "hack-proof" communications between space and the ground, state media said, the latest advance in an ambitious space program.
The program is a priority as President Xi Jinping has urged China to establish itself as a space power, and apart from its civilian ambitions, it has tested anti-satellite missiles.
The Quantum Experiments at Space Scale, or QUESS, satellite was launched from the Jiuquan Satellite Launch Centre in the remote northwestern province of Gansu in the early hours of Tuesday, the official Xinhua news agency said.
"In its two-year mission, QUESS is designed to establish 'hack-proof' quantum communications by transmitting uncrackable keys from space to the ground," it said.
"Quantum communication boasts ultra-high security as a quantum photon can neither be separated nor duplicated," it added. "It is hence impossible to wiretap, intercept or crack the information transmitted through it."
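The article doesn't name the protocol, but links like this are generally built on quantum key distribution schemes such as BB84, where the key's security rests on the fact that measuring a photon in the wrong basis disturbs it. The following is a purely classical toy simulation of the basis-matching step (no real quantum mechanics, and the protocol choice is my assumption, not Xinhua's):

```python
import random

random.seed(7)
# Toy classical sketch of BB84-style key distribution. Bits and
# bases are plain ints; a real link would encode them in photon
# polarizations beamed from the satellite.
N = 32
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.randint(0, 1) for _ in range(N)]  # 0 = +, 1 = x
bob_bases   = [random.randint(0, 1) for _ in range(N)]

# Bob's measurement: right basis -> correct bit, wrong basis -> random.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Over a public channel they compare bases (never the bits) and keep
# only the positions where the bases matched: that is the shared key.
key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
         if ab == bb]
key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
         if ab == bb]
assert key_a == key_b  # identical keys, never transmitted in full
```

An eavesdropper who intercepts and re-sends photons must guess bases too, introducing detectable errors in the sifted key, which is the sense in which the quoted "impossible to wiretap" claim is meant.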
The satellite will enable secure communications between Beijing and Urumqi, Xinhua said, referring to the capital of China's violence-prone far western region of Xinjiang, where the government says it is battling an Islamist insurgency.
"The newly-launched satellite marks a transition in China's role - from a follower in classic information technology development to one of the leaders guiding future achievements," Pan Jianwei, the project's chief scientist, told the agency.
Quantum communication holds "enormous prospects" in the field of defense, it added.
China insists its space program is for peaceful purposes, but the U.S. Defense Department has highlighted its increasing space capabilities, saying it was pursuing activities aimed at preventing adversaries from using space-based assets in a crisis.