
Re: F6 post# 264505

Sunday, February 05, 2017 12:02:34 AM

The Technological Singularity

By Murray Shanahan

Overview

The idea that human history is approaching a “singularity”—that ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence, or both—has moved from the realm of science fiction to serious debate. Some singularity theorists predict that if the field of artificial intelligence (AI) continues to develop at its current dizzying rate, the singularity could come about in the middle of the present century. Murray Shanahan offers an introduction to the idea of the singularity and considers the ramifications of such a potentially seismic event.

Shanahan’s aim is not to make predictions but rather to investigate a range of scenarios. Whether we believe that the singularity is near or far, likely or impossible, apocalypse or utopia, the very idea raises crucial philosophical and pragmatic questions, forcing us to think seriously about what we want as a species.

Shanahan describes technological advances in AI, both biologically inspired and engineered from scratch. Once human-level AI—theoretically possible, but difficult to accomplish—has been achieved, he explains, the transition to superintelligent AI could be very rapid. Shanahan considers what the existence of superintelligent machines could mean for such matters as personhood, responsibility, rights, and identity. Some superhuman AI agents might be created to benefit humankind; some might go rogue. (Is Siri the template, or HAL?) The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations. Shanahan makes it clear that we need to imagine both possibilities if we want to bring about the better outcome.
About the Author

Murray Shanahan is Professor of Cognitive Robotics in the Department of Computing at Imperial College London. He is the author of Solving the Frame Problem (MIT Press) and Embodiment and the Inner Life.

Reviews

“This evenhanded primer on a topic whose significance is becoming increasingly recognized ought, as per its inclusion in this series, to receive wide exposure. It will naturally appeal more to those with lay interests in philosophy, ethics, anthropology, computer science, cognitive science, engineering, sf, and world issues.”—Library Journal

“Shanahan's presentation is succinct, comprehensive and commendably accessible for such a complex subject.”—E & T: Engineering & Technology Magazine

https://mitpress.mit.edu/books/technological-singularity

==

Forget ideology, liberal democracy’s newest threats come from technology and bioscience

John Naughton

A groundbreaking book by historian Yuval Harari claims that artificial intelligence and genetic enhancements will usher in a world of inequality and powerful elites. How real is the threat?


What price humanity when consciousness is no longer required? Photograph: Alamy

Sunday 28 August 2016 00.15 EDT

The BBC Reith Lectures in 1967 were given by Edmund Leach, a Cambridge social anthropologist. “Men have become like gods,” Leach began. “Isn’t it about time that we understood our divinity? Science offers us total mastery over our environment and over our destiny, yet instead of rejoicing we feel deeply afraid.”

That was nearly half a century ago, and yet Leach’s opening lines could easily apply to today. He was speaking before the internet had been built and long before the human genome had been decoded, and so his claim about men becoming “like gods” seems relatively modest compared with the capabilities that molecular biology and computing have subsequently bestowed upon us. Our science-based culture is the most powerful in history, and it is ceaselessly researching, exploring, developing and growing. But in recent times it seems to have also become plagued with existential angst as the implications of human ingenuity begin to be (dimly) glimpsed.


The title that Leach chose for his Reith Lecture – A Runaway World .. http://www.bbc.co.uk/programmes/p00h3xy8 – captures our zeitgeist too. At any rate, we are also increasingly fretful about a world that seems to be running out of control, largely (but not solely) because of information technology and what the life sciences are making possible. But we seek consolation in the thought that “it was always thus”: people felt alarmed about steam in George Eliot’s time and got worked up about electricity, the telegraph and the telephone as they arrived on the scene. The reassuring implication is that we weathered those technological storms, and so we will weather this one too. Humankind will muddle through.

But in the last five years or so even that cautious, pragmatic optimism has begun to erode. There are several reasons for this loss of confidence. One is the sheer vertiginous pace of technological change. Another is that the new forces at loose in our society – particularly information technology and the life sciences – are potentially more far-reaching in their implications than steam or electricity ever were. And, thirdly, we have begun to see startling advances in these fields that have forced us to recalibrate our expectations.

A classic example is the field of artificial intelligence (AI), defined as the quest to enable machines to do things that would require intelligence if performed by a human. For as long as most of us can remember, AI in that sense was always 20 years away from the date of prediction. Maybe it still is. But in the last few years we have seen that the combination of machine learning, powerful algorithms, vast processing power and so-called “Big Data” can enable machines to do very impressive things – real-time language translation, for example, or driving cars safely through complex urban environments – that seemed implausible even a decade ago.

And this, in turn, has led to a renewal of excited speculation about the possibility – and the existential risks –

[...]

In a way, it’s a story about the evolution and nature of modernity. For most of human history, Harari argues, humans believed in a cosmic order. Their world was ruled by omnipotent gods who exercised their power in capricious and incomprehensible ways. The best one could do was to try to placate these terrifying powers and obey (and pay taxes to) the priesthoods who claimed to be the anointed intermediaries between mere humans and gods. It may have been a dog’s life but at least you knew where you stood, and in that sense belief in a transcendental order gave meaning to human lives.

But then came science. Harari argues that the history of modernity is best told as a struggle between science and religion. In theory, both were interested in truth – but in different kinds of truth. Religion was primarily interested in order, whereas science, as it evolved, was primarily interested in power – the power that comes from understanding why and how things happen, and enables us to cure diseases, fight wars and produce food, among other things.

[...]

The answer, he argues, was in a new kind of religion: humanism – a belief system that “sanctifies the life, happiness and power of Homo sapiens”. So the deal that defined modern society was a covenant between humanism and science in which the latter provided the means for achieving the ends specified by the former.

And our looming existential crisis, as Harari sees it, comes from the fact that this covenant is destined to fall apart in this century.

[...]

Personally, I’m not convinced by his dataism idea: the technocratic ideology underpinning our current obsession with “Big Data” will eventually collapse under the weight of its own absurdity. But in two other areas, Harari is exceedingly perceptive. The first is that our confident belief that we cannot be superseded by machines – because we have consciousness and they cannot have it – may be naive. Not because machine consciousness will necessarily become possible, but because for Harari’s dystopia to arrive, consciousness is not required. All that is required is machines that are super-intelligent: intelligence is necessary; consciousness is an optional extra which in most cases would simply be a nuisance, and its absence is therefore no showstopper for AI development.

The second is his reading of the potential of bioscience, which I’m sure is accurate. Even the Economist magazine recently ran a cover story entitled: “Cheating death: the science that can extend your lifespan.” But the exciting new possibilities offered by genetic technology will be expensive and available only to elites. So the long century in which medicine had a “levelling up” effect on human populations, bringing good healthcare within the reach of most people, has come to an end. Even today, rich people live longer and healthier lives. In a couple of decades, that gap will widen into a chasm.

https://www.theguardian.com/commentisfree/2016/aug/28/ideology-liberal-democracy-technology-bioscience-yuval-harari-artificial-intelligence

.. the last bit is almost bottom .. F6, it was good to feel, in watching from at least say 21:50 of that video, .. http://investorshub.advfn.com/boards/read_msg.aspx?message_id=128427323 .. that the most basic disagreement with Chomsky i could pick from the video is the one on the singularity question .. his views on progress and regression, specifically on the latter, that our destruction of the environment is a serious existential threat, i know you agree with .. one of my yuge 'opes is that we all here are still around in say 2030 - to see how his position now, that the environmental destruction we engage in is more of an immediate existential threat than the technological singularity situation, pans out.

Thanks, going back over stuff on such a variety of topics (not ugh, lol, just politics) takes one fluck of a lot of time .. yet is always a gas as new stuff inevitably flicks in, too .. :)




It was Plato who said, “He, O men, is the wisest, who like Socrates, knows that his wisdom is in truth worth nothing”
