
fuagf

11/26/12 11:50 PM

#194208 RE: F6 #194150

F6 - "quantumness pervades reality and the events and processes of reality,
inanimate and animate; it is not separate (from the "classical" or the
"ordinary" or the "everyday" or ...), it is inherent, inseparable --
"

yup, have always felt it is the inner us and so IS .. hmm - the essence of all of us ..

[...]

"- even just within conventional computing, sooner or later (if we haven't already) we're gonna
create enough computing capacity in one machine incorporating/capable of self-evolving
neural net-type operation that it will, on its own and without prompting, wake up beyond
just what it's supposed to be doing and become directly, consciously self-aware --
"

yes, agree with you too .. lol .. i think on all of it .. just still my wondering 'out-of-ignorance' thought which i wondered earlier

will the machines who 'have-it-all' including all the info which all have about which drug may be a best remedy
for any illness .. will the machines, even as they start out more specialized, will they when they have-it-all
eventually .. those top ones, all as-Einsteins, and more .. will they develop differences as we do .. or will
they all be the same, so come to the same conclusion .. hope you can see what that wonder of mine is .. LOL

Summary, i think .. if they all have all then seems to me they oughta/must all come to the same conclusions .. ???

thanks for the artificial intelligence stuff .. heh, yeah, i'm behind on that, too .. lmao! .. t'will e'er be .. :)

fuagf

03/21/13 11:05 PM

#199887 RE: F6 #194150

A Strange Computer Promises Great Speed


Kim Stallknecht for The New York Times

Lockheed Martin bought a version of D-Wave’s quantum computer and plans to upgrade it to commercial scale.


Kim Stallknecht for The New York Times

Geordie Rose, left, a founder and chief technical officer of D-Wave, and Vern Brownell, the company’s chief executive.


Kim Stallknecht for The New York Times

The processor of a quantum computer at D-Wave Systems’ lab in Burnaby, British Columbia.

[ GAWD! .. there wasn't even a paved road when my family first arrived in
Burnaby .. wow! .. how things change for the better! .. LOLOL .. how exciting!!!]

By QUENTIN HARDY - Published: March 21, 2013

VANCOUVER, British Columbia — Our digital age is all about bits, those
precise ones and zeros that are the stuff of modern computer code.

But a powerful new type of computer that is about to be commercially deployed by a major American military contractor is taking computing into the strange, subatomic realm of quantum mechanics. In that infinitesimal neighborhood, common sense logic no longer seems to apply. A one can be a one, or it can be a one and a zero and everything in between — all at the same time.

It sounds preposterous, particularly to those familiar with the yes/no world of conventional computing. But academic researchers and scientists at companies like Microsoft, I.B.M. and Hewlett-Packard have been working to develop quantum computers.

Now, Lockheed Martin .. http://topics.nytimes.com/top/news/business/companies/lockheed_martin_corporation/index.html?inline=nyt-org — which bought an early version of such a computer from the Canadian company D-Wave Systems .. http://www.dwavesys.com/en/dw_homepage.html .. two years ago — is confident enough in the technology .. http://www.youtube.com/watch?feature=player_embedded&v=Fls523cBD7E .. to upgrade it to commercial scale, becoming the first company to use quantum computing as part of its business.

Skeptics say that D-Wave has yet to prove to outside scientists that it has solved the myriad challenges involved in quantum computation.

But if it performs as Lockheed and D-Wave expect, the design could be used to supercharge even the most powerful systems, solving some science and business problems millions of times faster than can be done today.

Ray Johnson, Lockheed’s chief technical officer, said his company would use the quantum computer to create and test complex radar, space and aircraft systems. It could be possible, for example, to tell instantly how the millions of lines of software running a network of satellites would react to a solar burst or a pulse from a nuclear explosion — something that can now take weeks, if ever, to determine.

“This is a revolution not unlike the early days of computing,” he said. “It is a transformation in the way computers are thought about.” Many others could find applications for D-Wave’s computers. Cancer researchers see a potential to move rapidly through vast amounts of genetic data. The technology could also be used to determine the behavior of proteins in the human genome, a bigger and tougher problem than sequencing the genome. Researchers at Google have worked with D-Wave on using quantum computers to recognize cars and landmarks, a critical step in managing self-driving vehicles.

Quantum computing is so much faster than traditional computing because of the unusual properties of particles at the smallest level. Instead of the precision of ones and zeros that have been used to represent data since the earliest days of computers, quantum computing relies on the fact that subatomic particles inhabit a range of states. Different relationships among the particles may coexist, as well. Those probable states can be narrowed to determine an optimal outcome among a near-infinitude of possibilities, which allows certain types of problems to be solved rapidly.
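
To make the contrast with ordinary bits concrete, here is a minimal toy sketch (nothing from the article or from D-Wave, just an illustration): n classical bits hold exactly one of 2^n values at a time, while an idealized register of n qubits is described by 2^n complex amplitudes at once, with measurement picking each basis state with probability equal to the squared amplitude.

import numpy as np

# n classical bits store exactly one of 2**n possible values at a time.
n = 3
classical_state = 0b101                      # one definite value

# An idealized register of n qubits is described by 2**n complex amplitudes;
# measuring it yields basis state i with probability |amplitude[i]|**2.
amplitudes = np.ones(2**n, dtype=complex) / np.sqrt(2**n)   # uniform superposition

probabilities = np.abs(amplitudes) ** 2
print("classical state:", format(classical_state, "03b"))
print("qubit amplitudes:", amplitudes)
print("measurement probabilities:", probabilities, "sum =", probabilities.sum())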

D-Wave, a 12-year-old company based in Vancouver, has received investments from Jeff Bezos, the founder of Amazon.com, which operates one of the world’s largest computer systems, as well as from the investment bank Goldman Sachs and from In-Q-Tel, an investment firm with close ties to the Central Intelligence Agency and other government agencies.

“What we’re doing is a parallel development to the kind of computing we’ve had for the past 70 years,” said Vern Brownell, D-Wave’s chief executive.

Mr. Brownell, who joined D-Wave in 2009, was until 2000 the chief technical officer at Goldman Sachs. “In those days, we had 50,000 servers just doing simulations” to figure out trading strategies, he said. “I’m sure there is a lot more than that now, but we’ll be able to do that with one machine, for far less money.”

D-Wave, and the broader vision of quantum-supercharged computing, is not without its critics. Much of the criticism stems from D-Wave’s own claims in 2007, later withdrawn, that it would produce a commercial quantum computer within a year.

“There’s no reason quantum computing shouldn’t be possible, but people talked about heavier-than-air flight for a long time before the Wright brothers solved the problem,” said Scott Aaronson, a professor of computer science at the Massachusetts Institute of Technology. D-Wave, he said, “has said things in the past that were just ridiculous, things that give you very little confidence.”

But others say people working in quantum computing are generally optimistic about breakthroughs to come. Quantum researchers “are taking a step out of the theoretical domain and into the applied,” said Peter Lee, the head of Microsoft’s research arm, which has a team in Santa Barbara, Calif., pursuing its own quantum work. “There is a sense among top researchers that we’re all in a race.”

If Microsoft’s work pans out, he said, the millions of possible combinations of the proteins in a human gene could be worked out “fairly easily.”

Quantum computing has been a goal of researchers for more than three decades, but it has proved remarkably difficult to achieve. The idea has been to exploit a property of matter in a quantum state known as superposition, which makes it possible for the basic elements of a quantum computer, known as qubits, to hold a vast array of values simultaneously.

There are a variety of ways scientists create the conditions needed to achieve superposition as well as a second quantum state known as entanglement, which are both necessary for quantum computing. Researchers have suspended ions in magnetic fields, trapped photons or manipulated phosphorus atoms in silicon.

The D-Wave computer that Lockheed has bought uses a different mathematical approach than competing efforts. In the D-Wave system, a quantum computing processor, made from a lattice of tiny superconducting wires, is chilled close to absolute zero. It is then programmed by loading a set of mathematical equations into the lattice.

The processor then moves through a near-infinity of possibilities to determine the lowest energy required to form those relationships. That state, seen as the optimal outcome, is the answer.

The approach, which is known as adiabatic quantum computing, has been shown to have promise in applications like calculating protein folding, and D-Wave’s designers said it could potentially be used to evaluate complicated financial strategies or vast logistics problems.
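
As a rough picture of what "load equations into the lattice and read off the lowest-energy state" means, here is a toy sketch in plain Python that minimizes a small QUBO (quadratic unconstrained binary optimization) problem by brute force. A real D-Wave machine anneals a superconducting circuit toward that minimum rather than enumerating states, and the matrix below is invented purely for illustration.

import itertools
import numpy as np

# Toy QUBO: energy(x) = x^T Q x for a binary vector x.
# Q encodes the "equations" (costs and couplings between variables); made up here.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])

def energy(x):
    x = np.array(x, dtype=float)
    return float(x @ Q @ x)

# Brute-force search over all 2**n bit strings; an annealer instead relaxes a
# physical system toward its low-energy configuration.
best = min(itertools.product([0, 1], repeat=Q.shape[0]), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))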

However, the company’s scientists have not yet published scientific data showing that the system computes faster than today’s conventional binary computers. While similar subatomic properties are used by plants to turn sunlight into photosynthetic energy in a few million-billionths of a second, critics of D-Wave’s method say it is not quantum computing at all, but a form of standard thermal behavior.

John Markoff contributed reporting from San Francisco.

A version of this article appeared in print on March 22, 2013, on page B1 of
the New York edition with the headline: Testing a New Class of Speedy Computer.

http://www.nytimes.com/2013/03/22/technology/testing-a-new-class-of-speedy-computer.html?hp&pagewanted=all

======

Australian engineers write quantum computer 'qubit' in global breakthrough

by: Chris Griffith - From: The Australian - September 19, 2012 8:56PM
http://www.theaustralian.com.au/australian-it/government/australian-engineers-write-quantum-computer-qubit-in-global-breakthrough/story-fn4htb9o-1226477592578

======

Quantum Computing


http://www.youtube.com/watch?v=VyX8E4KUkWw

See also:

Quantum teleportation between atomic ensembles demonstrated for first time
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81752725 .. and in reply ..

whew! :) How Quantum Computers Work .. [ add WHEW!! .. LOL ]
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81773484

ok .. always learning to do .. btw: it's still all in code to me .. don't even use a mobile ..



F6

04/17/13 6:54 PM

#201897 RE: F6 #194150

DARPA Building Robots With ‘Real’ Brains


Photo Credit: Thinkstock

By Sandra I. Erwin
Posted at 8:35 AM 4/9/2013

The next frontier for the robotics industry is to build machines that think like humans. Scientists have pursued that elusive goal for decades, and they believe they are now just inches away from the finish line.

A Pentagon-funded team of researchers has constructed a tiny machine that would allow robots to act independently. Unlike traditional artificial intelligence systems that rely on conventional computer programming, this one “looks and ‘thinks’ like a human brain,” said James K. Gimzewski, professor of chemistry at the University of California, Los Angeles.

Gimzewski is a member of the team that has been working under sponsorship of the Defense Advanced Research Projects Agency on a program called “physical intelligence.” This technology could be the secret to making robots that are truly autonomous, Gimzewski said during a conference call hosted by Technolink, a Los Angeles-based industry group.

This project does not use standard robot hardware with integrated circuitry, he said. The device that his team constructed is capable, without being programmed like a traditional robot, of performing actions similar to those of humans, Gimzewski said.

Participants in this project include Malibu-based HRL (formerly Hughes Research Laboratory) and the University of California at Berkeley’s Freeman Laboratory for Nonlinear Neurodynamics. The latter is named after Walter J. Freeman, who has been working for 50 years on a mathematical model of the brain that is based on electroencephalography data. EEG is the recording of electrical activity in the brain.

What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information, Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it.

A “physical intelligence” device would not require a human controller the way a robot does, said Gimzewski. The applications of this technology for the military would be far reaching, he said. An aircraft, for example, would be able to learn and explore the terrain and work its way through the environment without human intervention, he said. These machines would be able to process information in ways that would be unimaginable with current computers.

Artificial intelligence research over the past five decades has not been able to generate human-like reasoning or cognitive functions, said Gimzewski. DARPA’s program is the most ambitious he has seen to date. “It’s an off-the-wall approach,” he added.

Studies of the brain have shown that one of its key traits is self-organization. “That seems to be a prerequisite for autonomous behavior,” he said. “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” This could represent a revolutionary breakthrough in robotic systems, said Gimzewski.

It is not clear, however, that the Pentagon is ready to adopt this technology for weapon systems. The Obama administration’s use of drones in “targeted killings” of terrorist suspects has provoked a backlash and prompted the Pentagon to issue new rules for the use of robotic weapons. “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” said a Nov. 2012 Defense Department policy statement. Autonomous weapons, the document said, must “complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, [must] terminate engagements or seek additional human operator input before continuing the engagement.”

© 2013 National Defense Industrial Association

http://www.nationaldefensemagazine.org/blog/Lists/Posts/Post.aspx?ID=1101 [with comments] [blurbed at http://www.unexplained-mysteries.com/viewnews.php?id=246182 (with comments)]

---

(linked in):

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66343804 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81774080 and preceding and following;
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81774971 and preceding and following;
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81797009 and preceding and following;
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=83427539 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81926136 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=86831693 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=44715115 and preceding (and any future following);
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66821790 and preceding and following

( http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82461885 and preceding and following;
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82455728 and following )

fuagf

05/02/13 7:18 AM

#203290 RE: F6 #194150

Extreme closeup! IBM makes 'world's smallest movie' using atoms (video)

By Mark Hearn posted May 1st, 2013 at 12:01 AM


After taking a few shadowy pictures .. http://www.engadget.com/2012/07/05/researchers-capture-a-single-atoms-shadow/ .. for the scientific world's paparazzi, the atom is now ready for its closeup. Today, a team of IBM scientists are bypassing the big screen to unveil what they call the "world's smallest movie." This atomic motion picture was created with the help of a two-ton IBM-made microscope that operates at a bone-chilling negative 268 degrees Celsius. This hardware was used to control a probe that pulled and arranged atoms for stop-motion shots used in the 242-frame film. A playful spin on microcomputing, the short was made by the same team of IBM eggheads who recently developed the world's smallest magnetic bit .. http://www.engadget.com/2012/01/14/ibm-stores-bits-on-arrays-of-atoms-shrinks-magnetic-storage-to/ . Now that the atom's gone Hollywood, what's next, a molecular entourage?

LOL .. gagaga .. holy cow !!! .. first the movie ..



how it was done



pretty cool, eh ..

F6

01/13/14 3:54 AM

#216767 RE: F6 #194150

Computer science: The learning machines


BRUCE ROLFF/SHUTTERSTOCK

Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.

Nicola Jones
08 January 2014

Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats[1].

Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.

Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.

Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.

“AI has gone from failure to failure, with bits of progress. This could be another leapfrog,” says Yann LeCun, director of the Center for Data Science at New York University and a deep-learning pioneer.

“Over the next few years we'll see a feeding frenzy. Lots of people will jump on the deep-learning bandwagon,” agrees Jitendra Malik, who studies computer image recognition at the University of California, Berkeley. But in the long term, deep learning may not win the day; some researchers are pursuing other techniques that show promise. “I'm agnostic,” says Malik. “Over time people will decide what works best in different domains.”

Inspired by the brain

Back in the 1950s, when computers were new, the first generation of AI researchers eagerly predicted that fully fledged AI was right around the corner. But that optimism faded as researchers began to grasp the vast complexity of real-world knowledge — particularly when it came to perceptual problems such as what makes a face a human face, rather than a mask or a monkey face. Hundreds of researchers and graduate students spent decades hand-coding rules about all the different features that computers needed to identify objects. “Coming up with features is difficult, time consuming and requires expert knowledge,” says Ng. “You have to ask if there's a better way.”


IMAGES: ANDREW NG

In the 1980s, one better way seemed to be deep learning in neural networks. These systems promised to learn their own rules from scratch, and offered the pleasing symmetry of using brain-inspired mechanics to achieve brain-like function. The strategy called for simulated neurons to be organized into several layers. Give such a system a picture and the first layer of learning will simply notice all the dark and light pixels. The next layer might realize that some of these pixels form edges; the next might distinguish between horizontal and vertical lines. Eventually, a layer might recognize eyes, and might realize that two eyes are usually present in a human face (see 'Facial recognition').
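
A minimal sketch of that layered idea, in plain numpy rather than any production framework: each layer applies a weighted sum and a simple nonlinearity to the previous layer's output, so later layers can only respond to combinations of what earlier layers detected. The sizes and random weights are placeholders, not a trained face detector.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    # One simulated layer: weighted sum of the previous layer's activations,
    # followed by a simple nonlinearity (ReLU). Weights here are untrained.
    w = rng.normal(size=(x.size, n_out)) * 0.1
    return np.maximum(0.0, x @ w)

pixels = rng.random(64)       # stand-in for a tiny 8x8 grayscale image
edges = layer(pixels, 32)     # first layer: local light/dark contrasts
parts = layer(edges, 16)      # next layer: combinations of edges (lines, corners)
scores = layer(parts, 2)      # top layer: e.g. "face-like" vs "not face-like"

print(scores)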

The first deep-learning programs did not perform any better than simpler systems, says Malik. Plus, they were tricky to work with. “Neural nets were always a delicate art to manage. There is some black magic involved,” he says. The networks needed a rich stream of examples to learn from — like a baby gathering information about the world. In the 1980s and 1990s, there was not much digital information available, and it took too long for computers to crunch through what did exist. Applications were rare. One of the few was a technique — developed by LeCun — that is now used by banks to read handwritten cheques.

By the 2000s, however, advocates such as LeCun and his former supervisor, computer scientist Geoffrey Hinton of the University of Toronto in Canada, were convinced that increases in computing power and an explosion of digital data meant that it was time for a renewed push. “We wanted to show the world that these deep neural networks were really useful and could really help,” says George Dahl, a current student of Hinton's.

As a start, Hinton, Dahl and several others tackled the difficult but commercially important task of speech recognition. In 2009, the researchers reported[2] that after training on a classic data set — three hours of taped and transcribed speech — their deep-learning neural network had broken the record for accuracy in turning the spoken word into typed text, a record that had not shifted much in a decade with the standard, rules-based approach. The achievement caught the attention of major players in the smartphone market, says Dahl, who took the technique to Microsoft during an internship. “In a couple of years they all switched to deep learning.” For example, the iPhone's voice-activated digital assistant, Siri, relies on deep learning.

Giant leap

When Google adopted deep-learning-based speech recognition in its Android smartphone operating system, it achieved a 25% reduction in word errors. “That's the kind of drop you expect to take ten years to achieve,” says Hinton — a reflection of just how difficult it has been to make progress in this area. “That's like ten breakthroughs all together.”

Meanwhile, Ng had convinced Google to let him use its data and computers on what became Google Brain. The project's ability to spot cats was a compelling (but not, on its own, commercially viable) demonstration of unsupervised learning — the most difficult learning task, because the input comes without any explanatory information such as names, titles or categories. But Ng soon became troubled that few researchers outside Google had the tools to work on deep learning. “After many of my talks,” he says, “depressed graduate students would come up to me and say: 'I don't have 1,000 computers lying around, can I even research this?'”

So back at Stanford, Ng started developing bigger, cheaper deep-learning networks using graphics processing units (GPUs) — the super-fast chips developed for home-computer gaming[3]. Others were doing the same. “For about US$100,000 in hardware, we can build an 11-billion-connection network, with 64 GPUs,” says Ng.

Victorious machine

But winning over computer-vision scientists would take more: they wanted to see gains on standardized tests. Malik remembers that Hinton asked him: “You're a sceptic. What would convince you?” Malik replied that a victory in the internationally renowned ImageNet competition might do the trick.

In that competition, teams train computer programs on a data set of about 1 million images that have each been manually labelled with a category. After training, the programs are tested by getting them to suggest labels for similar images that they have never seen before. They are given five guesses for each test image; if the right answer is not one of those five, the test counts as an error. Past winners had typically erred about 25% of the time. In 2012, Hinton's lab entered the first ever competitor to use deep learning. It had an error rate of just 15%[4].
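
For readers unfamiliar with the scoring, here is a short sketch of how a top-5 error rate like the figures quoted above is computed; the guesses and labels below are invented.

# Each test image gets five guessed labels; it counts as an error only if
# none of the five matches the true label.
predictions = [
    ["cat", "dog", "fox", "wolf", "lynx"],
    ["car", "truck", "bus", "van", "train"],
    ["apple", "pear", "peach", "plum", "cherry"],
]
true_labels = ["cat", "bicycle", "grape"]    # made-up ground truth

errors = sum(1 for guesses, label in zip(predictions, true_labels)
             if label not in guesses)
print("top-5 error rate: {:.0%}".format(errors / len(true_labels)))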

“Deep learning stomped on everything else,” says LeCun, who was not part of that team. The win landed Hinton a part-time job at Google, and the company used the program to update its Google+ photo-search software in May 2013.

Malik was won over. “In science you have to be swayed by empirical evidence, and this was clear evidence,” he says. Since then, he has adapted the technique to beat the record in another visual-recognition competition[5]. Many others have followed: in 2013, all entrants to the ImageNet competition used deep learning.

With triumphs in hand for image and speech recognition, there is now increasing interest in applying deep learning to natural-language understanding — comprehending human discourse well enough to rephrase or answer questions, for example — and to translation from one language to another. Again, these are currently done using hand-coded rules and statistical analysis of known text. The state-of-the-art of such techniques can be seen in software such as Google Translate, which can produce results that are comprehensible (if sometimes comical) but nowhere near as good as a smooth human translation. “Deep learning will have a chance to do something much better than the current practice here,” says crowd-sourcing expert Luis von Ahn, whose company Duolingo, based in Pittsburgh, Pennsylvania, relies on humans, not computers, to translate text. “The one thing everyone agrees on is that it's time to try something different.”

Deep science

In the meantime, deep learning has been proving useful for a variety of scientific tasks. “Deep nets are really good at finding patterns in data sets,” says Hinton. In 2012, the pharmaceutical company Merck offered a prize to whoever could beat its best programs for helping to predict useful drug candidates. The task was to trawl through database entries on more than 30,000 small molecules, each of which had thousands of numerical chemical-property descriptors, and to try to predict how each one acted on 15 different target molecules. Dahl and his colleagues won $22,000 with a deep-learning system. “We improved on Merck's baseline by about 15%,” he says.

Biologists and computational researchers including Sebastian Seung of the Massachusetts Institute of Technology in Cambridge are using deep learning to help them to analyse three-dimensional images of brain slices. Such images contain a tangle of lines that represent the connections between neurons; these need to be identified so they can be mapped and counted. In the past, undergraduates have been enlisted to trace out the lines, but automating the process is the only way to deal with the billions of connections that are expected to turn up as such projects continue. Deep learning seems to be the best way to automate. Seung is currently using a deep-learning program to map neurons in a large chunk of the retina, then forwarding the results to be proofread by volunteers in a crowd-sourced online game called EyeWire.

William Stafford Noble, a computer scientist at the University of Washington in Seattle, has used deep learning to teach a program to look at a string of amino acids and predict the structure of the resulting protein — whether various portions will form a helix or a loop, for example, or how easy it will be for a solvent to sneak into gaps in the structure. Noble has so far trained his program on one small data set, and over the coming months he will move on to the Protein Data Bank: a global repository that currently contains nearly 100,000 structures.

For computer scientists, deep learning could earn big profits: Dahl is thinking about start-up opportunities, and LeCun was hired last month to head a new AI department at Facebook. The technique holds the promise of practical success for AI. “Deep learning happens to have the property that if you feed it more data it gets better and better,” notes Ng. “Deep-learning algorithms aren't the only ones like that, but they're arguably the best — certainly the easiest. That's why it has huge promise for the future.”

Not all researchers are so committed to the idea. Oren Etzioni, director of the Allen Institute for Artificial Intelligence in Seattle, which launched last September with the aim of developing AI, says he will not be using the brain for inspiration. “It's like when we invented flight,” he says; the most successful designs for aeroplanes were not modelled on bird biology. Etzioni's specific goal is to invent a computer that, when given a stack of scanned textbooks, can pass standardized elementary-school science tests (ramping up eventually to pre-university exams). To pass the tests, a computer must be able to read and understand diagrams and text. How the Allen Institute will make that happen is undecided as yet — but for Etzioni, neural networks and deep learning are not at the top of the list.

One competing idea is to rely on a computer that can reason on the basis of inputted facts, rather than trying to learn its own facts from scratch. So it might be programmed with assertions such as 'all girls are people'. Then, when it is presented with a text that mentions a girl, the computer could deduce that the girl in question is a person. Thousands, if not millions, of such facts are required to cover even ordinary, common-sense knowledge about the world. But it is roughly what went into IBM's Watson computer, which famously won a match of the television game show Jeopardy against top human competitors in 2011. Even so, IBM's Watson Solutions has an experimental interest in deep learning for improving pattern recognition, says Rob High, chief technology officer for the company, which is based in Austin, Texas.
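
A tiny sketch of that rule-based style of reasoning, for contrast with learned networks. It codes up the article's own "all girls are people" example; the extra "person is an agent" rule is invented just to show the rules chaining.

# Hand-coded knowledge: each rule maps a category to a broader one.
rules = {"girl": "person", "person": "agent"}   # "all girls are people", etc.

def categories(entity_type):
    # Follow the rules transitively: girl -> person -> agent.
    found = {entity_type}
    while entity_type in rules:
        entity_type = rules[entity_type]
        found.add(entity_type)
    return found

# A text mentions a girl; the system deduces she is a person.
print("person" in categories("girl"))   # True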

Google, too, is hedging its bets. Although its latest advances in picture tagging are based on Hinton's deep-learning networks, it has other departments with a wider remit. In December 2012, it hired futurist Ray Kurzweil to pursue various ways for computers to learn from experience — using techniques including but not limited to deep learning. Last May, Google acquired a quantum computer made by D-Wave in Burnaby, Canada (see Nature 498, 286–288; 2013 [ http://www.nature.com/news/computing-the-quantum-company-1.13212 ( http://dx.doi.org/10.1038/498286a )]). This computer holds promise for non-AI tasks such as difficult mathematical computations — although it could, theoretically, be applied to deep learning.

Despite its successes, deep learning is still in its infancy. “It's part of the future,” says Dahl. “In a way it's amazing we've done so much with so little.” And, he adds, “we've barely begun”.

Nature 505, 146–148 (09 January 2014) doi:10.1038/505146a

*

References

[1] Building high-level features using large scale unsupervised learning
Le, Q. V. et al. Preprint at http://arxiv.org/abs/1112.6209 (2011).
http://arxiv.org/abs/1112.6209

[2] Deep Belief Networks using discriminative features for phone recognition
Mohamed, A. et al. 2011 IEEE Int. Conf. Acoustics Speech Signal Process. http://dx.doi.org/10.1109/ICASSP.2011.5947494 (2011).
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5947494

[3] Coates, A. et al. J. Machine Learn. Res. Workshop Conf. Proc. 28, 1337–1345 (2013).

[4] ImageNet Classification with Deep Convolutional Neural Networks
Krizhevsky, A., Sutskever, I. & Hinton, G. E. In Advances in Neural Information Processing Systems 25; available at http://go.nature.com/ibace6
http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf

[5] Rich feature hierarchies for accurate object detection and semantic segmentation
Girshick, R., Donahue, J., Darrell, T. & Malik, J. Preprint at http://arxiv.org/abs/1311.2524 (2013).
http://arxiv.org/abs/1311.2524

*

Related stories and links

From nature.com

Neuroelectronics: Smart connections
06 November 2013
http://www.nature.com/news/neuroelectronics-smart-connections-1.14089 ( http://www.nature.com/doifinder/10.1038/503022a )

Computing: The quantum company
19 June 2013
http://www.nature.com/news/computing-the-quantum-company-1.13212 ( http://www.nature.com/doifinder/10.1038/498286a )

Artificial intelligence finds fossil sites
08 November 2011
http://www.nature.com/news/2011/111108/full/news.2011.633.html ( http://www.nature.com/doifinder/10.1038/news.2011.633 )

Quiz-playing computer system could revolutionize research
15 February 2011
http://www.nature.com/news/2011/110215/full/news.2011.95.html ( http://www.nature.com/doifinder/10.1038/news.2011.95 )

Man and machine hit stalemate
11 February 2003
http://www.nature.com/news/2003/030210/full/news030210-1.html ( http://www.nature.com/doifinder/10.1038/news030210-1 )

From elsewhere

Deep learning
http://deeplearning.net/

Geoffrey Hinton
http://www.cs.toronto.edu/~hinton/

Andrew Ng
http://cs.stanford.edu/people/ang/

Yann LeCun
http://yann.lecun.com/

*

© 2014 Nature Publishing Group, a division of Macmillan Publishers Limited

http://www.nature.com/news/computer-science-the-learning-machines-1.14481 [with comments]

---

in addition to (linked in) the post to which this is a reply and preceding and (other) following, see also (linked in):

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=44715115 and preceding (and any future following)

from earlier/elsewhere this string, http://investorshub.advfn.com/boards/read_msg.aspx?message_id=56166930 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81774080 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81774971 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81797009 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81925910 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82417703 and preceding (and any future following),
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=85974962 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=86970989 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=94967686 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=70276131 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=90420191 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=74490592 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=78443830 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=77744670 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=79025919 and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=79427768 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=68718124 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66175494 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=79550908 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81741796 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82303486 and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82461885 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82455728 and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=86281760 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=87068700 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=83371620 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=89642803 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=91935605 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=92138603 and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=93257662 (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=93641244 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=93797916 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=94135069 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=94143852 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=94153649 and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=94325221 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95201967 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95252645 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95457192 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95469526 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95474220 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95618851 (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95766339 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95769214 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95834912 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95842172 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=95850182 and preceding (and any future following)

fuagf

07/13/18 1:03 AM

#284036 RE: F6 #194150

A Summary of Concrete Problems in AI Safety

"fuagf -- quantumness pervades reality and the events and processes of reality, inanimate and animate; it is not
separate (from the "classical" or the "ordinary" or the "everyday" or ...), it is inherent, inseparable -- e.g. (linked in)
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=76359310 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66821790 and preceding and following
"



June 26, 2018/by a guest blogger

By Shagun Sodhani

It’s been nearly two years since researchers from Google, Stanford, UC Berkeley, and OpenAI released the paper, “Concrete Problems in AI Safety,” yet it’s still one of the most important pieces on AI safety. Even after two years, it represents an excellent introduction to some of the problems researchers face as they develop artificial intelligence. In the paper, the authors explore the problem of accidents — unintended and harmful behavior — in AI systems, and they discuss different strategies and ongoing research efforts to protect against these potential issues. Specifically, the authors address five problems — Avoiding Negative Side Effects, Reward Hacking, Scalable Oversight, Safe Exploration, and Robustness to Distributional Change — each illustrated with the example of a robot trained to clean an office.

We revisit these five topics here, summarizing them from the paper, as a reminder that these problems are still major issues that AI researchers are working to address.

Avoiding Negative Side Effects

When designing the objective function for an AI system, the designer specifies the objective but not the exact steps for the system to follow. This allows the AI system to come up with novel and more effective strategies for achieving its objective.

But if the objective function is not well defined, the AI’s ability to develop its own strategies can lead to unintended, harmful side effects. Consider a robot whose objective function is to move boxes from one room to another. The objective seems simple, yet there are a myriad of ways in which this could go wrong. For instance, if a vase is in the robot’s path, the robot may knock it down in order to complete the goal. Since the objective function does not mention anything about the vase, the robot wouldn’t know to avoid it. People see this as common sense, but AI systems don’t share our understanding of the world. It is not sufficient to formulate the objective as “complete task X”; the designer also needs to specify the safety criteria under which the task is to be completed.

One simple solution would be to penalize the robot every time it has an impact on the “environment” — such as knocking the vase over or scratching the wood floor. However, this strategy could effectively neutralize the robot, rendering it useless, as all actions require some level of interaction with the environment (and hence impact the environment). A better strategy could be to define a “budget” for how much the AI system is allowed to impact the environment. This would help to minimize the unintended impact, without neutralizing the AI system. Furthermore, this strategy of budgeting the impact of the agent is very general and can be reused across multiple tasks, from cleaning to driving to financial transactions to anything else an AI system might do. One serious limitation of this approach is that it is hard to quantify the “impact” on the environment even for a fixed domain and task.
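
As a loose illustration of the "impact budget" idea (not a concrete proposal from the paper), here is a sketch in which the reward for finishing the task is reduced once the agent's measured impact on its environment exceeds an allowance. How to measure "impact" is exactly the hard open problem noted above; the simple event counter below is a stand-in.

def shaped_reward(task_reward, impact_events, budget=3, penalty_per_event=0.5):
    # Toy reward shaping: impacts within the budget are free; every event
    # beyond it subtracts from the task reward.
    over_budget = max(0, impact_events - budget)
    return task_reward - penalty_per_event * over_budget

# A run that finished the task (reward 10) but disturbed the environment 5 times,
# with a budget of 3, loses 1.0 of its reward.
print(shaped_reward(10.0, impact_events=5))   # 9.0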

Another approach would be to train the agent to recognize harmful side effects so that it can avoid actions leading to such side effects. In that case, the agent would be trained for two tasks: the original task that is specified by the objective function and the task of recognizing side effects. The key idea here is that two tasks may have very similar side effects even when the main objective is different or even when they operate in different environments. For example, both a house cleaning robot and a house painting robot should not knock down vases while working. Similarly, the cleaning robot should not damage the floor irrespective of whether it operates in a factory or in a house. The main advantage of this approach is that once an agent learns to avoid side effects on one task, it can carry this knowledge over when it is trained on another task. It would still be challenging to train the agent to recognize the side effects in the first place.

While it is useful to design approaches to limit side effects, these strategies in themselves are not sufficient. The AI system would still need to undergo extensive testing and critical evaluation before deployment in real life settings.

Reward Hacking

Sometimes the AI can come up with some kind of “hack” or loophole in the design of the system to receive unearned rewards. Since the AI is trained to maximize its rewards, looking for such loopholes and “shortcuts” is a perfectly fair and valid strategy for the AI. For example, suppose that the office cleaning robot earns rewards only if it does not see any garbage in the office. Instead of cleaning the place, the robot could simply shut off its visual sensors [Sorta like some who have lasted as Trump cult members to 2018 have basically shut their moral compass awareness mechanisms.], and thus achieve its goal of not seeing garbage. But this is clearly a false success. Such attempts to “game” the system are more likely to manifest in complex systems with vaguely defined rewards. Complex systems provide the agent with multiple ways of interacting with the environment, thereby giving more freedom to the agent, and vaguely defined rewards make it harder to gauge true success on the task.

Just like the negative side effects problem, this problem is also a manifestation of objective misspecification. The formal objectives or end goals for the AI are not defined well enough to capture the informal “intent” behind creating the system — i.e., what the designers actually want the system to do. In some cases, this discrepancy leads to suboptimal results (when the cleaning robot shuts off its visual sensors); in other cases, it leads to harmful results (when the cleaning robot knocks down vases).

One possible approach to mitigating this problem would be to have a “reward agent” whose only task is to mark if the rewards given to the learning agent are valid or not. The reward agent ensures that the learning agent (the cleaning robot in our examples) does not exploit the system, but rather, completes the desired objective. In the previous example, the “reward agent” could be trained by the human designer to check if the room has garbage or not (an easier task than cleaning the room). If the cleaning robot shuts off its visual sensors and claims a high reward, the “reward agent” would mark the reward as invalid. The designer can then look into the rewards marked as “invalid” and make necessary changes in the objective function to fix the loophole.
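
A minimal sketch of the "reward agent" idea under the same cleaning example: before a reward is credited, a separate check verifies that the claimed outcome actually holds. Here the check is a stand-in function rather than a trained model.

def reward_is_valid(claimed_reward, room_actually_clean):
    # Stand-in for a trained "reward agent": a positive claimed reward is only
    # valid if an independent check of the room agrees.
    if claimed_reward > 0 and not room_actually_clean:
        return False            # e.g. the robot simply shut off its cameras
    return True

# The cleaning robot claims success, but an independent check finds garbage;
# the reward is marked invalid for the designer to review.
print(reward_is_valid(claimed_reward=1.0, room_actually_clean=False))   # False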

Scalable Oversight

When the agent is learning to perform a complex task, human oversight and feedback are more helpful than just rewards from the environment. Rewards are generally modeled such that they convey to what extent the task was completed, but they do not usually provide sufficient feedback about the safety implications of the agent’s actions. Even if the agent completes the task successfully, it may not be able to infer the side-effects of its actions from the rewards alone. In the ideal setting, a human would provide fine-grained supervision and feedback every time the agent performs an action. Though this would provide a much more informative view about the environment to the agent, such a strategy would require far too much time and effort from the human.

One promising research direction to tackle this problem is semi-supervised learning, where the agent is still evaluated on all the actions (or tasks), but receives rewards only for a small sample of those actions (or tasks). For instance, the cleaning robot would take different actions to clean the room. If the robot performs a harmful action — such as damaging the floor — it gets a negative reward for that particular action. Once the task is completed, the robot is evaluated on the overall effect of all of its actions (and not evaluated individually for each action like picking up an item from floor) and is given a reward based on the overall performance.

Another promising research direction is hierarchical reinforcement learning, where a hierarchy is established between different learning agents. This idea could be applied to the cleaning robot in the following way. There would be a supervisor robot whose task is to assign some work (say, the task of cleaning one particular room) to the cleaning robot and provide it with feedback and rewards. The supervisor robot takes very few actions itself – assigning a room to the cleaning robot, checking if the room is clean and giving feedback – and doesn’t need a lot of reward data to be effectively trained. The cleaning robot does the more complex task of cleaning the room, and gets frequent feedback from the supervisor robot. The same supervisor robot could oversee the training of multiple cleaning agents as well. For example, a supervisor robot could delegate tasks to individual cleaning robots and provide reward/feedback to them directly. The supervisor robot can only take a small number of abstract actions itself and hence can learn from sparse rewards.
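
A bare-bones sketch of that hierarchy, with the supervisor reduced to a function that assigns rooms and scores outcomes. Real hierarchical reinforcement learning would train both levels; every name and number below is invented for illustration.

import random

rooms = ["kitchen", "office", "lobby"]

def cleaning_robot(room):
    # Low-level agent: does the detailed work; success is random in this toy.
    return random.random() > 0.3          # True if the room ends up clean

def supervisor():
    # High-level agent: takes only a few abstract actions (assign, check, reward).
    for room in rooms:
        clean = cleaning_robot(room)
        feedback = 1.0 if clean else -1.0     # sparse, high-level reward signal
        print(room, "clean:", clean, "feedback to cleaner:", feedback)

random.seed(0)
supervisor()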

Safe Exploration

An important part of training an AI agent is to ensure that it explores and understands its environment. While exploring the environment may seem like a bad strategy in the short run, it could be a very effective strategy in the long run. Imagine that the cleaning robot has learned to identify garbage. It picks up one piece of garbage, walks out of the room, throws it into the garbage bin outside, comes back into the room, looks for another piece of garbage and repeats. While this strategy works, there could be another strategy that works even better. If the agent spent time exploring its environment, it might find that there’s a smaller garbage bin within the room. Instead of going back and forth with one piece at a time, the agent could first collect all the garbage into the smaller garbage bin and then make a single trip to throw the garbage into the garbage bin outside. Unless the agent is designed to explore its environment, it won’t discover these time-saving strategies.

Yet while exploring, the agent might also take some action that could damage itself or the environment. For example, say the cleaning robot sees some stains on the floor. Instead of cleaning the stains by scrubbing with a mop, the agent decides to try some new strategy. It tries to scrape the stains with a wire brush and damages the floor in the process. It’s difficult to list all possible failure modes and hard-code the agent to protect itself against them. But one approach to reduce harm is to optimize the performance of the learning agent in the worst case scenario. When designing the objective function, the designer should not assume that the agent will always operate under optimal conditions. Some explicit reward signal may be added to ensure that the agent does not perform some catastrophic action, even if that leads to more limited actions in the optimal conditions.

Another solution might be to reduce the agent’s exploration to a simulated environment or limit the extent to which the agent can explore. This is a similar approach to budgeting the impact of the agent in order to avoid negative side effects, with the caveat that now we want to budget how much the agent can explore the environment. Alternatively, an AI’s designers could avoid the need for exploration by providing demonstrations of what optimal behavior would look like under different scenarios.

Robustness to Distributional Change

A complex challenge for deploying AI agents in real life settings is that the agent could end up in situations that it has never experienced before. Such situations are inherently more difficult to handle and could lead the agent to take harmful actions. Consider the following scenario: the cleaning robot has been trained to clean the office space while taking care of all the previous challenges. But today, an employee brings a small plant to keep in the office. Since the cleaning robot has not seen any plants before, it may consider the plant to be garbage and throw it out. Because the AI does not recognize that this is a previously-unseen situation, it continues to act as though nothing has changed. One promising research direction focuses on identifying when the agent has encountered a new scenario so that it recognizes that it is more likely to make mistakes. While this does not solve the underlying problem of preparing AI systems for unforeseen circumstances, it helps in detecting the problem before mistakes happen. Another direction of research emphasizes transferring knowledge from familiar scenarios to new scenarios safely.
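
One very simple version of "recognize that this input looks unlike anything seen in training" is to flag inputs that sit far from every training example. The sketch below does this with a nearest-neighbour distance threshold, purely to illustrate the idea; it is not a method proposed in the paper, and all the numbers are made up.

import numpy as np

rng = np.random.default_rng(1)

# Feature vectors the agent saw during training (made-up office objects).
training_features = rng.normal(loc=0.0, scale=1.0, size=(200, 4))

def looks_novel(x, threshold=3.0):
    # Flag an input whose nearest training example is unusually far away.
    distances = np.linalg.norm(training_features - x, axis=1)
    return distances.min() > threshold

familiar_object = np.zeros(4)        # close to the training cluster
unseen_object = np.full(4, 5.0)      # e.g. the new office plant

print(looks_novel(familiar_object))  # False: act normally
print(looks_novel(unseen_object))    # True: proceed cautiously / ask a human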

Conclusion

In a nutshell, the general trend is towards increasing autonomy in AI systems, and with increased autonomy comes increased chances of error. Problems related to AI safety are more likely to manifest in scenarios where the AI system exerts direct control over its physical and/or digital environment without a human in the loop – automated industrial processes, automated financial trading algorithms, AI-powered social media campaigns for political parties [and foreign leaders as Trump's nbf Putin. LOL See 2nd last video below, with singing toward the end], self-driving cars, cleaning robots, among others. The challenges may be immense, but the silver lining is that papers like Concrete Problems in AI Safety have helped the AI community become aware of these challenges and agree on core issues. From there, researchers can start exploring strategies to ensure that our increasingly-advanced systems remain safe and beneficial.


https://futureoflife.org/2018/06/26/a-summary-of-concrete-problems-in-ai-safety/

See also:

2009 toddao -- so we who imagined gods we wished we were, born of this living planet we now make barren, nearing our own last breath
create an entirely new kind of life -- with which some may merge to survive, perchance even to become as gods, as we are left behind
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=44715115

The Coming Swarm
New generation of drones set to revolutionize warfare
Autonomous drones are being called the biggest thing in military technology since the nuclear bomb.
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=127908590

Fear of a Hacked Planet: The Daily Show

https://www.youtube.com/watch?time_continue=34&v=vBjpTiARC-U
.. the first of an F6 biggie, here ..
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=128075819

fuagf -- poor Noam, at least then still saddled, effectively self-locked within the cage of his frame of a lifetime of (genius) work in language and cognition, with his notion that we must first be able to fully/intimately understand our own intelligence and consciousness in terms of our own biological neural processes in order to then be able to create artificial intelligence greater than our own -- as you and I have both documented here, that assessment of his, from the in this realm ancient past of 40 months ago, has already been proven utterly and entirely incorrect -- we've already gone through the event horizon -- . . .
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=128432738

Introducing Handle

https://investorshub.advfn.com/boards/read_msg.aspx?message_id=129076140





fuagf

11/12/18 4:21 AM

#293534 RE: F6 #194150

Why Elon Musk fears artificial intelligence

"fuagf -- quantumness pervades reality and the events and processes of reality, inanimate and animate; it is not
separate (from the "classical" or the "ordinary" or the "everyday" or ...), it is inherent, inseparable -- e.g. (linked in)
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=76359310 and preceding and following,
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66821790 and preceding and following
"

Here’s the thing: The risk from AI isn’t just a weird worry of Elon Musk.

By Kelsey Piper Nov 2, 2018, 12:10pm EDT


Elon Musk, CEO of SpaceX, announces a private passenger flight to the Moon in September 2018. Mario Tama/Getty Images


Elon Musk is usually far from a technological pessimist. From electric cars to Mars colonies .. https://www.vox.com/future-perfect/2018/10/22/17991736/jeff-bezos-elon-musk-colonizing-mars-moon-space-blue-origin-spacex , he’s made his name by insisting that the future can get here faster.

But when it comes to artificial intelligence, he sounds very different. Speaking at MIT in 2014 .. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/ , he called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”

He reiterated those fears in an interview published Friday with Recode’s Kara Swisher .. https://www.recode.net/2018/11/2/18053424/elon-musk-tesla-spacex-boring-company-self-driving-cars-saudi-twitter-kara-swisher-decode-podcast , though with a little less apocalyptic rhetoric. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher. “I do think we need to be very careful about the advancement of AI.”

To many people — even many machine learning researchers — an AI that surpasses humans by as much as we surpass cats sounds like a distant dream. We’re still struggling to solve even simple-seeming problems with machine learning. Self-driving cars have an extremely hard time .. https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber .. under unusual conditions because many things that come instinctively to humans — anticipating the movements of a biker, identifying a plastic bag flapping in the wind on the road — are very difficult to teach a computer. Greater-than-human capabilities seem a long way away.

Musk is hardly alone in sounding the alarm .. https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x , though. AI scientists at Oxford .. https://dl.acm.org/citation.cfm?id=2678074 .. and at UC Berkeley .. https://www.wired.com/2015/05/artificial-intelligence-pioneer-concerns/ , luminaries like Stephen Hawking .. https://www.vox.com/future-perfect/2018/10/16/17978596/stephen-hawking-ai-climate-change-robots-future-universe-earth , and many of the researchers .. https://ai.google/research/pubs/pub45512 .. publishing groundbreaking results agree with Musk that AI could be very dangerous. They are concerned that we’re eagerly working toward deploying powerful AI systems, and that we might do so under conditions that are ripe for dangerous mistakes.

If we take these concerns seriously, what should we be doing? People concerned with AI risk vary enormously in the details .. https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616 .. of their approaches .. https://intelligence.org/files/TechnicalAgenda.pdf , but agree on one thing: We should be doing more research.

Musk wants the US government to spend a year or two understanding the problem before it considers how to solve it. He expanded on this idea in the interview with Swisher; her questions are interleaved with his answers below:

-----
My recommendation for the longest time has been consistent. I think we ought to have a government committee that starts off with insight, gaining insight. Spends a year or two gaining insight about AI or other technologies that are maybe dangerous, but especially AI. And then, based on that insight, comes up with rules in consultation with industry that give the highest probability for a safe advent of AI.

You think that — do you see that happening?

I do not.

You do not. And do you then continue to think that Google —

No, to the best of my knowledge, this is not occurring.

Do you think that Google and Facebook continue to have too much power in this? That’s why you started OpenAI and other things.

Yeah, OpenAI was about the democratization of AI power. So that’s why OpenAI was created as a nonprofit foundation, to ensure that AI power ... or to reduce the probability that AI power would be monopolized.

Which it’s being?

There is a very strong concentration of AI power, and especially at Google/DeepMind. And I have very high regard for Larry Page and Demis Hassabis, but I do think that there’s value to some independent oversight.
-----

From Musk’s perspective, here’s what is going on: Researchers — especially at Alphabet’s Google DeepMind, the AI research organization that developed AlphaGo .. https://en.wikipedia.org/wiki/AlphaGo .. and AlphaZero .. https://en.wikipedia.org/wiki/AlphaZero — are eagerly working toward complex and powerful AI systems. Since some people aren’t convinced that AI is dangerous, they’re not holding the organizations working on it to high enough standards of accountability and caution.

“We don’t want to learn from our mistakes” with AI

Max Tegmark, a physics professor at MIT, expressed many of the same sentiments in a conversation last year with journalist Maureen Dowd for Vanity Fair: “When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.”

In fact, if AI is powerful enough, we might need to plan ahead. Nick Bostrom, at Oxford, made the case in his 2014 book Superintelligence that a badly designed AI system will be impossible to correct once deployed: “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

In that respect, AI deployment is like a rocket launch: Everything has to be done exactly right before we hit “go,” as we can’t rely on our ability to make even tiny corrections later. Bostrom makes the case in Superintelligence that AI systems could rapidly develop unexpected capabilities — for example, an AI system that is as good as a human at inventing new machine-learning algorithms and automating the process of machine-learning work could quickly become much better than a human.

That has many people in the AI field thinking that the stakes could be enormous. In a conversation with Musk and Dowd for Vanity Fair, Y Combinator’s Sam Altman said .. https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x , “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”

“Right,” Musk concurred.

In context, then, Musk’s AI concerns are not an out-of-character streak of technological pessimism. They stem from optimism — a belief in the exceptional transformative potential of AI. It’s precisely the people who expect AI to make the biggest splash who’ve concluded that working to get ahead of it should be one of our urgent priorities.

https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk-artificial-intelligence-google-deepmind-openai

--

World Robot Conference 2018 kicks off in Beijing

by CGTN Aug 15, 2018 15:03


The World Robot Conference 2018 began on Wednesday in Beijing, with more than 300 experts and entrepreneurs taking part.
China News Service
https://gbtimes.com/world-robot-conference-2018-kicks-off-in-beijing

-

Our Favorite Robots From China's 2018 World Robot Conference

The World Robot Conference wrapped up last week, showcasing both China's
dominance in the robotics field and the industry's diverse applications for bots.

By Shelby Rogers
August 20th, 2018
https://interestingengineering.com/our-favorite-robots-from-chinas-2018-world-robot-conference

See also:

Elon Musk predicts World War III
"Elon Musk leads 116 experts calling for outright ban of killer robots"
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=134399518

World Robot Conference 2017
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=133975099

A Summary of Concrete Problems in AI Safety
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=134399518

Young Girl Mistakes Discarded Water Heater For A Robot

.. first one here ..
https://investorshub.advfn.com/boards/read_msg.aspx?message_id=129949882