
F6

11/25/12 2:16 AM

#194150 RE: fuagf #194126

fuagf -- quantumness pervades reality and all of its events and processes, inanimate and animate alike; it is not something separate from the "classical" or the "ordinary" or the "everyday," it is inherent and inseparable -- e.g. (linked in) http://investorshub.advfn.com/boards/read_msg.aspx?message_id=76359310 and preceding and following, and http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66821790 and preceding and following

the key is when they become self-organizing -- on their own accessing data/sensor inputs/each other, analyzing what they've accessed, creating their own new data, then testing and learning, and so forth -- whether that happens within purely conventional computing (which of course already relies essentially entirely on quantum properties and interactions), or only within sufficiently-scaled quantum computing once that's here. conventional computers are already doing such things in specific ways at our bidding; we're already developing and giving them basic tools of thinking/conscious process, both on the software side (my last comment at http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81774971 , and the [open literature] article below) and on the hardware side (e.g. [linked in] http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66343804 and preceding and following). even just within conventional computing, sooner or later (if we haven't already) we're going to create enough computing capacity in one machine incorporating/capable of self-evolving neural-net-type operation that it will, on its own and without prompting, wake up beyond just what it's supposed to be doing and become directly, consciously self-aware -- and if that hasn't by then happened within conventional computing, it will happen very quickly once the first serious quantum computers thus set up/enabled are fired up

my take, anyway


--


Scientists See Promise in Deep-Learning Programs


A voice recognition program translated a speech given by Richard F. Rashid, Microsoft’s top scientist, into Mandarin Chinese.
Hao Zhang/The New York Times



A student team led by the computer scientist Geoffrey E. Hinton used deep-learning technology to design software.
Keith Penner


By JOHN MARKOFF
Published: November 23, 2012

Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.

The advances have led to widespread enthusiasm among researchers who design software to perform human activities like seeing, listening and thinking. They offer the promise of machines that converse with humans and perform tasks like driving cars and working in factories, raising the specter of automated robots that could replace human workers [ http://www.nytimes.com/2012/08/19/business/new-wave-of-adept-robots-is-changing-global-industry.html?pagewanted=all (about a third of the way down at {linked in} http://investorshub.advfn.com/boards/read_msg.aspx?message_id=78944571 and preceding and following)].

The technology, called deep learning, has already been put to use in services like Apple’s Siri virtual personal assistant, which is based on Nuance Communications’ speech recognition service, and in Google’s Street View, which uses machine vision to identify specific addresses.

But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just “neural nets” for their resemblance to the neural connections in the brain.

“There has been a number of stunning new results with deep-learning methods,” said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. “The kind of jump we are seeing in the accuracy of these systems is very rare indeed.”

Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. Their field has long been plagued by outbursts of misplaced enthusiasm followed by equally striking declines.

In the 1960s, some computer scientists believed that a workable artificial intelligence system was just 10 years away. In the 1980s, a wave of commercial start-ups collapsed, leading to what some people called the “A.I. winter.”

But recent achievements have impressed a wide spectrum of computer experts. In October, for example, a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton [ http://www.cs.toronto.edu/~hinton/ ] won the top prize in a contest sponsored by Merck to design software to help find molecules that might lead to new drugs.

From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent.

The achievement was particularly impressive because the team decided to enter the contest at the last minute and designed its software with no specific knowledge about how the molecules bind to their targets. The students were also working with a relatively small set of data; neural nets typically perform well only with very large ones.

“This is a really breathtaking result because it is the first time that deep learning won, and more significantly it won on a data set that it wouldn’t have been expected to win at,” said Anthony Goldbloom, chief executive and founder of Kaggle, a company that organizes data science competitions, including the Merck contest.

Advances in pattern recognition hold implications not just for drug development but for an array of applications, including marketing and law enforcement. With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise information on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more commonplace.

Artificial neural networks, an idea going back to the 1950s, seek to mimic the way the brain absorbs information and learns from it. In recent decades, Dr. Hinton, 64 (a great-great-grandson of the 19th-century mathematician George Boole [ http://plato.stanford.edu/entries/boole/ ], whose work in logic is the foundation for modern digital computers), has pioneered powerful new techniques for helping the artificial networks recognize patterns.

Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.
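To make the "inputs, hidden layers and outputs" structure and the idea of training by repeated exposure concrete, here is a minimal sketch in Python. Everything in it (the network size, the toy data, the learning rate) is an illustrative assumption, not anything taken from the article:

```python
# Minimal sketch of an "inputs -> hidden layer -> outputs" network, trained by
# repeated exposure to a toy pattern (XOR). All sizes, data, and the learning
# rate are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "patterns" to recognize: 4 input examples and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units between the inputs and a single output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Training by repeated exposure": show the same patterns many times,
# nudging the weights to reduce the prediction error each time.
for _ in range(10000):
    hidden = sigmoid(X @ W1)          # input layer -> hidden layer
    output = sigmoid(hidden @ W2)     # hidden layer -> output layer
    error = output - y
    # Backpropagate the error and take a small gradient step.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]] after training
```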

These techniques, aided by the growing speed and power of modern computers, have led to rapid improvements in speech recognition, drug discovery and computer vision.

Deep-learning systems have recently outperformed humans in certain limited recognition tests.

Last year, for example, a program created by scientists at the Swiss A. I. Lab [ http://www.idsia.ch/ ] at the University of Lugano won a pattern recognition contest by outperforming both competing software systems and a human expert in identifying images in a database of German traffic signs.

The winning program accurately identified 99.46 percent of the images in a set of 50,000; the top score in a group of 32 human participants was 99.22 percent, and the average for the humans was 98.84 percent.

This summer, Jeff Dean, a Google technical fellow, and Andrew Y. Ng, a Stanford computer scientist, programmed a cluster of 16,000 computers to train itself to automatically recognize images in a library of 14 million pictures of 20,000 different objects. Although the accuracy rate was low — 15.8 percent — the system did 70 percent better than the most advanced previous one.

Deep learning was given a particularly audacious display at a conference last month in Tianjin, China, when Richard F. Rashid [ http://academic.research.microsoft.com/Author/282205/ ], Microsoft’s top scientist, gave a lecture in a cavernous auditorium while a computer program recognized his words and simultaneously displayed them in English on a large screen above his head.

Then, in a demonstration that led to stunned applause, he paused after each sentence and the words were translated into Mandarin Chinese characters, accompanied by a simulation of his own voice in that language, which Dr. Rashid has never spoken.

The feat was made possible, in part, by deep-learning techniques that have spurred improvements in the accuracy of speech recognition.

Dr. Rashid, who oversees Microsoft’s worldwide research organization, acknowledged that while his company’s new speech recognition software made 30 percent fewer errors than previous models, it was “still far from perfect.”

“Rather than having one word in four or five incorrect, now the error rate is one word in seven or eight,” he wrote on Microsoft’s Web site. Still, he added that this was “the most dramatic change in accuracy” since 1979, “and as we add more data to the training we believe that we will get even better results.”

One of the most striking aspects of the research led by Dr. Hinton is that it has taken place largely without the patent restrictions and bitter infighting over intellectual property that characterize high-technology fields.

“We decided early on not to make money out of this, but just to sort of spread it to infect everybody,” he said. “These companies are terribly pleased with this.”

Referring to the rapid deep-learning advances made possible by greater computing power, and especially the rise of graphics processors, he added:

“The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There’s no looking back now.”

© 2012 The New York Times Company

http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html [ http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html?pagewanted=all ]


F6

11/26/12 11:00 PM

#194205 RE: fuagf #194126

Scientists Discuss What Happens To The 'Soul' After Death (VIDEO)
By Jahnabi Barooah
Posted: 11/25/2012 7:16 pm EST Updated: 11/26/2012 11:21 am EST

What happens when human beings die? Is there a final destination for the soul? These were the questions discussed among four scientists on a video that recently aired on “Through the Wormhole” [ http://science.discovery.com/tv-shows/through-the-wormhole/videos/through-the-wormhole-2-tracking-souls-to-the-afterlife.htm ] hosted by Morgan Freeman on the Science channel.

A number of scientists who have studied consciousness and near-death experiences extensively believe they are close to solving the puzzle, but they vehemently disagree with each other about the solution.

Christof Koch [ http://www.klab.caltech.edu/~koch/ ], the Chief Scientific Officer of the Allen Institute for Brain Science and Lois and Victor Troendle Professor of Cognitive and Behavioral Biology at California Institute of Technology, argued that the soul dies and everything is lost when human beings lose consciousness. “You lose everything. The world does not exist anymore for you. Your friends don’t exist anymore. You don’t exist. Everything is lost,” he said.

Bruce Greyson [ http://en.wikipedia.org/wiki/Bruce_Greyson ], Professor of Psychiatry at the University of Virginia, challenged Koch’s view of consciousness. He said that “if you take these near-death experiences at face value, then they suggest that the mind or the consciousness seems to function without the physical body.”

Stuart Hameroff [ http://www.quantumconsciousness.org/ ], who proposed the highly controversial Orch-OR (orchestrated objective reduction) theory of consciousness in 1996 along with Roger Penrose, told the Science channel, “I think the quantum approach to consciousness [ http://www.huffingtonpost.com/2012/10/28/soul-after-death-hameroff-penrose_n_2034711.html ] can, in principle, explain why we’re here and what our purpose is, and also the possibility of life after death, reincarnation and persistence of consciousness after our bodies give up.”

Finally, Eben Alexander [ http://www.lifebeyonddeath.net/ ], who wrote the widely circulated and criticized cover story for Newsweek, 'Proof of Heaven [ http://www.thedailybeast.com/newsweek/2012/10/07/proof-of-heaven-a-doctor-s-experience-with-the-afterlife.html ]', said, “I have great belief and knowledge that there is a wonderful existence for our souls outside of this earthly realm and that is our true reality, and we all find that out when we leave this earth.”

Whose argument do you find most compelling? Do you believe that human beings have souls? If so, what happens to the soul when a human being dies? Share your thoughts in the comments below.

Copyright © 2012 TheHuffingtonPost.com, Inc.

http://www.huffingtonpost.com/2012/11/25/soul-after-death-koch-greyson-hameroff-alexander_n_2189187.html [with embedded slide show "Top Scientists On God: Who Believes, Who Doesn't", and comments; the video above, as embedded, at http://www.youtube.com/watch?v=5X5nUg_z9xk ]

---

(linked in):

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=71430107 and preceding (and any future following)

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=79896399 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=80506176 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=81624509 and preceding (and any future following)

F6

01/05/13 5:02 PM

#196319 RE: fuagf #194126

Awakening


Max Aguilera-Hellweg

Since its introduction in 1846, anesthesia has allowed for medical miracles. Limbs can be removed, tumors examined, organs replaced—and a patient will feel and remember nothing. Or so we choose to believe. In reality, tens of thousands of patients each year in the United States alone wake up at some point during surgery. Since their eyes are taped shut and their bodies are usually paralyzed, they cannot alert anyone to their condition. In efforts to eradicate this phenomenon, medicine has been forced to confront how little we really know about anesthesia’s effects on the brain. The doctor who may be closest to a solution may also answer a question that has confounded centuries’ worth of scientists and philosophers: What does it mean to be conscious?

By Joshua Lang
January/February 2013 ATLANTIC MAGAZINE

Linda Campbell was not quite 4 years old when her appendix burst, spilling its bacteria-rich contents throughout her abdomen. She was in severe pain, had a high fever, and wouldn’t stop crying. Her parents, in a state of panic, brought her to the emergency room in Atlanta, where they lived. Knowing that Campbell’s organs were beginning to fail and her heart was on the brink of shutting down, doctors rushed her into surgery.

Today, removing an appendix leaves only a few droplet-size scars. But back then, in the 1960s, the procedure was much more involved. As Campbell recalls, an anesthesiologist told her to count backward from 10 while he flooded her lungs with anesthetic ether gas, allowing a surgeon to slice into her torso, cut out her earthworm-size appendix, and drain her abdomen of infectious slop, leaving behind a lengthy, longitudinal scar.

The operation was successful, but not long after Campbell returned home, her mother sensed that something was wrong. The calm, precocious girl who went into the surgery was not the same one who emerged. Campbell began flinging food from her high chair. She suffered random episodes of uncontrollable vomiting. She threw violent temper tantrums during the day and had disturbing dreams at night. “They were about people being cut open, lots of blood, lots of violence,” Campbell remembers. She refused to be alone, but avoided anyone outside her immediate circle. Her parents took her to physicians and therapists. None could determine the cause of her distress. When she was in eighth grade, her parents pulled her from school for rehabilitation.

Over time, Campbell’s most severe symptoms subsided, and she learned how to cope with those that remained. She managed to move on, become an accountant, and start a family of her own, but she wasn’t cured. Her nightmares continued, and nearly anything could trigger a panic attack: car horns, sudden bright lights, wearing tight-fitting pants or snug collars, even lying flat in a bed. She explored the possibility of post-traumatic stress disorder with her therapists, but could not identify a triggering event. One clue that did eventually surface, though, hinted at a possibly traumatic experience. During a session with a hypnotherapist, Campbell remembered an image, accompanied by an acute feeling of fear, of a man looming over her.

Then, one fall afternoon in 2006, four decades after her symptoms began, Campbell met an anesthesiologist at a hypnotherapy workshop. Over lunch, she found herself telling the anesthesiologist about her condition. She mentioned the appendectomy she’d had not long before everything changed.

The anesthesiologist was intrigued. He told her about a phenomenon that had sometimes accompanied early gas anesthetics, particularly ether, in which patients reacted to the gas by coughing and choking, as if they were suffocating.

The comment sparked something in Campbell. “I started having all these flashes,” she remembers. “The flashes were me being on the table. The flashes were of the room. The flashes were of the bright lights over me.” A man—the same one from her memory?—was there. At some point, the room went black. “And then I got to the place where I was on the table, and I just remember feeling terror,” she says. “That’s all I remember. I don’t see anything. I don’t feel anything. It’s absolute, abject terror. And the feeling that I am dying.” At that moment, Campbell realized that something had happened to her during her appendectomy, something that changed her forever. After several years of investigation, she figured it out: she had woken up on the table.

This experience is called “intraoperative recall” or “anesthesia awareness,” and it’s more common than you might think. Although studies diverge, most experts estimate that for every 1,000 patients who undergo general anesthesia each year in the United States, one to two will experience awareness. Patients who awake hear surgeons’ small talk, the swish and stretch of organs, the suctioning of blood; they feel the probing of fingers, the yanks and tugs on innards; they smell cauterized flesh and singed hair. But because one of the first steps of surgery is to tape patients’ eyes shut, they can’t see. And because another common step is to paralyze patients to prevent muscle twitching, they have no way to alert doctors that they are awake.

Many of these cases are benign: vague, hazy flashbacks. But up to 70 percent of patients who experience awareness suffer long-term psychological distress, including PTSD—a rate five times higher than that of soldiers returning from Iraq and Afghanistan. Campbell now understands that this is what happened to her, although she didn’t believe it at first. “The whole idea of anesthesia awareness seemed over-the-top,” she told me. “It took years to begin to say, ‘I think this is what happened to me.’” She describes her memories of the surgery like those from a car accident: the moments before and after are clear, but the actual event is a shadowy blur of emotion. She searched online for people with similar experiences, found a coalition of victims, and eventually traveled up the East Coast to speak with some of them. They all shared a constellation of symptoms: nightmares, fear of confinement, the inability to lie flat (many sleep in chairs), and a sense of having died and returned to life. Campbell (whose name and certain other identifying details have been changed) struggles especially with the knowledge that there is no way for her to prove that she woke up, and that many, if not most, people might not believe her. “Anesthesia awareness is an intrapersonal event,” she says. “No one else sees it. No one else knows it. You’re the only one.”

In most cases of awareness, patients are awake but still dulled to pain. But that was not the case for Sherman Sizemore Jr., a Baptist minister and former coal miner who was 73 when he underwent an exploratory laparotomy in early 2006 to pinpoint the cause of recurring abdominal pain. In this type of procedure, surgeons methodically explore a patient’s viscera for evidence of abnormalities. Although there are no official accounts of Sizemore’s experience, his family maintained in a lawsuit that he was awake—and feeling pain—throughout the surgery. (The suit was settled in 2008.) He reportedly emerged from the operation behaving strangely. He was afraid to be left alone. He complained of being unable to breathe and claimed that people were trying to bury him alive. He refused to be around his grandchildren. He suffered from insomnia; when he could sleep, he had vivid nightmares.

The lawsuit claimed that Sizemore was tormented by doubt, wondering whether he had imagined the horrific pain. No one advised Sizemore to seek psychiatric help, his family alleged, and no one mentioned the fact that many patients who experience awareness suffer from PTSD. On February 2, 2006, two weeks after his surgery, Sizemore shot himself. He had no history of psychiatric illness.

*

Before the introduction of ether in the mid-19th century, surgery was a rare and gruesome business. One of the most common operations was amputation. Surgeons used saws and knives to remove the offending appendage, and boiling oil and scalding irons to cauterize the wound. They resorted to a variety of methods, some more dangerous than others, to manage patients’ pain. James Wardrop, a surgeon to the British royal family in the 19th century, wrote of a procedure called deliquium animi, in which he bled patients into quiescence. Others used alcohol, opiates, ice, tourniquets, or simple distraction.

The promise of painless surgery remained a preposterous idea in mainstream medicine until October 16, 1846. On that day, at the Harvard-affiliated Massachusetts General Hospital, a dentist named William Thomas Green Morton gave the first public demonstration of ether gas, administering it to a patient whose neck tumor was then removed by a surgeon. The event took place in a domed amphitheater now known as the “ether dome,” and earned Harvard Medical School a truly international reputation. Oliver Wendell Holmes Sr., who coined the term anesthesia (from the Greek word anaisthesia, meaning “lack of sensation”), rejoiced that “the fierce extremity of suffering has been steeped in the waters of forgetfulness, and the deepest furrow in the knotted brow of agony has been smoothed for ever.” In 2007, when the British Medical Journal asked subscribers to name the most-significant medical developments since 1840, anesthesia was among the top three, along with antibiotics and modern sanitation.

The miracle of anesthesia transcends pain. Painkillers—mainly opiates and alcohol—existed before ether, but they weren’t sufficient to quell the nightmare of surgery. Ether accomplished something altogether different: it eliminated both experience and memory. When the drug wore off and patients woke up, their bodies stitched together and their minds intact, it was almost as though the intervening hours hadn’t happened. The field that emerged from that historic moment in the ether dome was less concerned with the broad goal of curing disease than with a single task: the mastery of consciousness.

Anesthesia is often taken for granted in the daily routine of medicine today, both by health professionals and by the tens of millions of Americans who undergo surgery each year. Anesthesiologists are imbued with an almost heavenly power: with a mere push of their thumb on a clear plastic syringe, you go under. But in the past decade or so, several highly publicized cases, including Sherman Sizemore Jr.’s, have brought anesthesia awareness into the public forum. In 1998, a woman named Carol Weihrer, who claimed to have suffered awareness while having her eye removed, founded the Anesthesia Awareness Campaign, an advocacy group and resource for victims, and made the talk-show rounds. In 2007, the Hollywood thriller Awake intended, according to a producer, to “do to surgery what Jaws did to swimming in the ocean.” Fearful of malpractice lawsuits, the profession grew defensive. The American Society of Anesthesiologists promised to find the cause of and solution for awareness. “Even one case is one too many,” wrote the society’s president in 2007.

This promise, however, is not so easily fulfilled. Despite 167 years of research, anesthesiologists still have little idea how their drugs unlock the mind. Which gears turn and unwind to produce oblivion? How do they turn back into place? These questions, as important as they are for preventing anesthesia awareness, are dwarfed by a central riddle that has puzzled scientists and philosophers—not to mention most mildly introspective people—for hundreds, if not thousands, of years: What does it mean to be conscious?

*

Doctors began investigating how anesthesia affects consciousness during the 1960s, shortly after the first reports of awareness. One South African researcher was especially curious about whether and how one might recall memories from a surgery. Perhaps a near-death experience? Pushing well beyond the limits of what would today be considered ethical, he collected 10 volunteers undergoing dental surgery. The procedures went along as normal until, midway through, the room went silent and the medical staff reached for scripts.

“Just a moment!” the anesthesiologist would say. “I don’t like the patient’s color. Much too blue. His (or her) lips are very blue. I’m going to give a little more oxygen.”

The anesthesiologist would then act out a medical emergency, rushing to the patient’s bedside to ventilate his or her lungs, as if this action were necessary to save the patient’s life. After several moments, the team would breathe a collective sigh of relief.

“There, that’s better now,” the anesthesiologist would affirm. “You can carry on with the operation.”

A month later, the patients were hypnotized and asked to remember the day of the surgery. One female patient said she could hear someone talking in the operating theater.

“Who is it who’s talking?” the interviewer asked.

“Dr. Viljoen,” she said, referring to the anesthesiologist. “He’s saying my color is gray.”

“Yes?”

“He’s going to give me some oxygen.”

“What are his words?”

A long pause followed.

“He said that I will be all right now.”

“Yes?”

“They’re going to start again now. I can feel him bending close to me.”

Of the 10 volunteers, four remembered the words accurately; four retained vague memories; and two had no recollection of the surgery. The eight patients who did remember it displayed anxiety during the interview, many of them bursting from hypnosis, unable to continue. But when out of hypnosis, it was as though nothing had happened. They had no memory of the incident. The terror and anxiety seemed permanently buried in their subconscious.

This experiment revealed a fundamental problem for the study of awareness, the frequency of which can be measured only through reported accounts. For some victims, it can take weeks for memories to surface. For Linda Campbell, it took 40 years. But what if no memory remains? Did awareness happen? Does it matter?

An anesthesiologist’s job is surprisingly subjective. The same patient could be put under general anesthesia a number of different ways, all to accomplish the same fundamental goal: to render him unconscious and immune to pain. Many methods also induce paralysis and prevent the formation of memory. Getting the patient under, and quickly, is almost always accomplished with propofol, a drug now famous for killing Michael Jackson. It is milky and viscous, almost like yogurt in a fat syringe. When injected, it has a nearly instant hypnotic effect: blood pressure falls, heart rate increases, and breathing stops. (Anesthesiologists use additional drugs, as well as ventilation, to immediately correct for these effects.)

Other drugs in the anesthetic arsenal include fentanyl, which kills pain, and midazolam, which does little for pain but induces sleepiness, relieves anxiety, and interrupts memory formation. Rocuronium disconnects the brain from the muscles, creating a neuromuscular blockade, also known as paralysis. Sevoflurane is a multipurpose gaseous wonder, making it one of the most commonly used general anesthetics in the United States today—even though anesthesiologists are still relatively clueless as to how it produces unconsciousness. It crosses from the lungs into the blood, and from the blood to the brain, but … then what?

Other mysteries have been untangled. Redheads are known to feel pain especially acutely. This confused researchers, until someone realized that the same genetic mutation that causes red hair also increases sensitivity to pain. One study found that redheaded patients require about 20 percent more general anesthesia than brunettes. Like redheads, children also require stronger anesthesia; their youthful livers clear drugs from the system much more quickly than adults’ livers do. Patients with drug or alcohol problems, on the other hand, may be desensitized to anesthesia and require more—unless the patient is intoxicated at that moment, in which case less drug is needed.

After delivering the appropriate cocktail, anesthesiologists carefully monitor a patient’s reactions. One way they do this is by tracking vital signs: blood pressure, heart rate, and temperature; fluid intake and urine output; oxygen saturation in arteries. They also observe muscles, pupils, breathing, and pallor, among many other indicators.

One organ, however, has remained stubbornly beyond their watch. Even though anesthesiologists are not entirely sure how their drugs work, they do know where they go: the brain. All changes in your vital signs are only the peripheral reverberations of anesthetic drugs’ hammering on the soft mass inside your skull. Determining consciousness by measuring anything besides brain activity is like trying to decide whether a friend is angry by studying his or her facial expressions instead of asking directly, “Are you mad?”

In lamenting how little we know about the anesthetized brain, Gregory Crosby, a professor of anesthesiology at Harvard, wrote in The New England Journal of Medicine in 2011, “The astonishing thing is not that awareness occurs, but that it occurs so infrequently.”

*

This ignorance gap seems almost absurd in the context of today’s dazzling array of medical technologies. Doctors can parse your brain with innumerable X-ray slices and then collate them into a three-dimensional grayscale image in a process called computed tomography, or CT. They can send you into a tube where powerful magnets flip the spin of protons on water molecules in your brain; when the protons flip back into position, they emit radio waves, and from that information a computer can generate a comprehensive image known as an MRI (for magnetic resonance imaging). Positron-emission-tomography, or PET, scans provide detailed maps of metabolic activity. Yet we are in the dark ages when it comes to determining whether the brain is conscious or not? We can’t figure out whether patients are awake, or what being awake even means?

Due to their hulking size, CT scanners and MRI machines are rarely, if ever, brought into an operating room. But other technologies are more mobile. For example, doctors can measure electrical activity in the brain using a machine that, with the help of a few electrodes attached to your scalp, generates what’s known as an electroencephalogram (EEG)—essentially a snapshot of your brain waves. An EEG is printed in undulating, longitudinal lines, like the scribbled outline of a mountain range: sometimes smooth and regular like the Appalachians, at other times rough and craggy like the Rockies; and in death or deep coma more like the Great Plains.

This technology is regularly used in sleep studies and to diagnose epilepsy and encephalitis, as well as to monitor the brain during certain specialized surgeries. But the problem with an EEG is interpreting it. The data come at a constant, unforgiving pace, with lines stacked one on top of another like on a page of sheet music (high-density versions can have up to 256 lines). Before digitization, EEG printers disgorged paper at 30 millimeters per second, resulting in 324 meters of print for just three hours of surgery. And even today’s machines provide data that are next to impossible to analyze on the fly, at least with any sort of detail or depth.
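As a quick arithmetic check of the paper-consumption figure quoted above (30 millimeters of chart paper per second over a three-hour surgery):

```python
# Sanity check of the pre-digitization EEG paper figure quoted in the text.
mm_per_second = 30
seconds = 3 * 60 * 60             # three hours of surgery
total_mm = mm_per_second * seconds
print(total_mm / 1000, "meters")  # 324.0 meters, matching the figure in the text
```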

In 1985, when a 23-year-old doctoral student named Nassib Chamoun first looked at the sheet music of an EEG, he saw a symphony—albeit one he could not yet read. Chamoun was then an electrical engineer doing a research fellowship at the Harvard School of Public Health, working on decoding the circuitry of the human heart. When an anesthesiologist he worked with argued that the brain was much more electrically interesting than the heart, Chamoun agreed to attend a demonstration of EEG, which was then a relatively new technology in the operating room, during a surgery at a Harvard hospital. The EEG printer was an old model, spilling reams of paper that piled near the head of the gurney. As the anesthesiologist injected the patient with drugs, the machine’s pen danced wildly, ink splattering off the page. Chamoun was entranced by the complexity of the patterns he saw that day. He couldn’t stop thinking about how to engineer these data into something that would be more useful for surgeons and anesthesiologists. He left his doctorate program and embarked on a 25-year quest to decode the brain—and, ultimately, to quantify and measure consciousness.

As a child in Lebanon, Chamoun had been fascinated by taking things apart and putting them back together. During the 1970s, as ethnic tensions there boiled into civil war, Chamoun spent a lot of time cooped up at home when school was canceled, or when it was unsafe for him to venture outside. The soldering iron and circuit board became his playground. Family members asked him to fix televisions, tape recorders, and radios. His parents gave him a microscope as a birthday present. He made his way to the United States for college, eager to expand his study beyond home electronics.

As it happened, the mid-’80s were an auspicious time for a young technologist with a promising idea. When Chamoun began working with EEG, he had early access to mainframe computers at Harvard and Boston University. More important, he had access to surgeries. He wheeled his digital EEG machine into Harvard operating rooms, fixed electrodes to patients, and recorded millions of data points. Then, using computers, he began to sift through the oceans of information, searching for a unifying pattern. Meanwhile, he was enlisting his old mentor, a Nobel Prize–winning Harvard professor, and courting venture-capital firms for seed money. The result was Aspect Medical Systems, a biotech firm he founded in 1987 with a singular goal: to build a monitor that anesthesiologists could use to discern their patients’ level of awareness.

Chamoun turned out to have a pivotal ally in a family friend, Charlie Zraket, the CEO of a big defense contractor called Mitre. In the 1960s, mathematicians had developed a statistical method called bispectral analysis, which breaks down waveforms to find underlying patterns. This method was originally used for studying waves in the ocean, but Mitre applied it to voice-recognition software, and later to sonar on war submarines and radar on airplanes. If bispectral analysis could be used to interpret patterns in ocean, radio, and sound waves, Zraket and Chamoun reasoned, why couldn’t it be applied to brain waves?
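As a rough illustration of the idea, a bispectrum estimate can be computed by averaging triple products of Fourier coefficients across segments of a signal. The sketch below is a generic textbook-style estimator with an invented test signal; it is not Aspect's algorithm, and the segment length and sampling rate are assumptions:

```python
# Generic bispectrum estimate: split the signal into segments, Fourier-transform
# each, and average X(f1) * X(f2) * conj(X(f1 + f2)) across segments. Phase
# coupling between frequency components is what this triple product highlights.
import numpy as np

def bispectrum(signal, seg_len=256):
    segments = [signal[i:i + seg_len]
                for i in range(0, len(signal) - seg_len + 1, seg_len)]
    n_freq = seg_len // 2
    acc = np.zeros((n_freq, n_freq), dtype=complex)
    for seg in segments:
        X = np.fft.fft(seg * np.hanning(seg_len))
        for f1 in range(n_freq):
            for f2 in range(n_freq - f1):
                acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(acc) / len(segments)

# A toy "EEG-like" signal sampled at 256 Hz: two oscillations plus a component
# at their sum frequency, plus noise -- exactly the kind of hidden structure
# the bispectrum is meant to expose.
t = np.arange(0, 8, 1 / 256)
sig = (np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 25 * t)
       + 0.5 * np.sin(2 * np.pi * 35 * t) + 0.1 * np.random.randn(len(t)))
B = bispectrum(sig)
print(np.unravel_index(np.argmax(B), B.shape))  # frequency pair with the strongest coupling
```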

Chamoun ended up banking everything on the belief that if he collected enough EEG data, he could hack the patterns using bispectral analysis. But by 1995, eight years in, the entire project was collapsing. Chamoun had gathered more than $18 million from investors, credit lines, and friends, and had spent it all, but still his algorithms could not reliably predict a patient’s level of awareness. Just as he was confronting bankruptcy, he secured a $4 million investment from a well-known venture capitalist. This bought the time Chamoun needed. From that point, he and his team achieved a series of breakthroughs that caused them to fundamentally reframe the way they thought about consciousness. Chamoun had never believed that the brain was something with a simple on/off switch, but he had been looking for one master equation—a sort of electrical fingerprint of consciousness—that would connect all the dots. Only when he let go of the idea of a single equation did a new, more viable model come into view: consciousness as a spectrum of discrete phases that flowed one into the next, each marked by a different electrical fingerprint. Fully conscious to lightly sedated was one phase; lightly to moderately sedated was another; and so on. Once he realized this, Chamoun was able to identify at least five separate equations and arrange them in order, like snapping a series of photos and compiling them into a broad panorama shot.

The end product was a shoebox-size blue machine that used EEG data to rank a patient’s level of awareness on an index of zero to 100, from coma to fully awake. Chamoun called it the Bispectral Index, or BIS, monitor. To use the BIS, all that anesthesiologists had to do was connect a pair of disposable electrode sensors to the machine, apply them firmly to a patient’s forehead, and wait for a number to appear on the box’s green-and-black digital display. They would then administer anesthesia and watch the number drop from a waking average of 97 to somewhere in the ideal “depth of anesthesia” range—between 40 and 60—at which point they could declare the patient ready for surgery.
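The article does not describe the actual (proprietary) BIS equations, but a purely hypothetical sketch of a piecewise, phase-by-phase mapping from an EEG-derived feature onto a 0-to-100 index might look like this:

```python
# Hypothetical illustration only: the real BIS algorithm is not described in the
# article beyond "several equations, one per phase, arranged in order on a
# 0-100 scale." This toy version stitches a different linear 'equation' onto
# each phase of a made-up EEG feature in [0, 1] (1 = fully awake).
def toy_awareness_index(feature):
    # (phase lower bound, index at lower bound, index at upper bound)
    phases = [
        (0.00, 0, 20),    # deep suppression
        (0.25, 20, 40),   # deep anesthesia
        (0.50, 40, 60),   # ideal "depth of anesthesia" range
        (0.75, 60, 97),   # sedated up to the waking average of 97
    ]
    for low, idx_low, idx_high in reversed(phases):
        if feature >= low:
            return idx_low + (feature - low) / 0.25 * (idx_high - idx_low)
    return 0

for f in (0.95, 0.55, 0.30, 0.05):
    print(f, "->", round(toy_awareness_index(f)))
```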

The FDA cleared the BIS monitor in 1997. When Time interviewed Chamoun about the revolutionary device, he called it anesthesia’s “Holy Grail.” Two years later, Aspect Medical’s quarterly revenue surpassed $8 million; the company soon went public. In 2000, Ernst & Young named Chamoun the Healthcare and Life Sciences Entrepreneur of the Year for the New England region.

Enthusiasm for the BIS monitor grew in 2004, when The Lancet published a groundbreaking study reporting that the device could reduce the incidence of anesthesia awareness by more than 80 percent. This nearly pushed the BIS into the realm of medical best practices. By July 2007, half of all American operating rooms had a BIS monitor. By 2010, the device had been used almost 40 million times worldwide. At his home in the suburbs of Boston, Chamoun has the 10 millionth sensor memorialized in a sealed plastic case.

The BIS monitor fundamentally changed the way scientists thought about consciousness. It compressed an enigmatic idea that had long mystified researchers into a medical indicator that could be quantified and measured, like blood pressure or body temperature. One effect of the accessibility of Chamoun’s invention was that it was occasionally used outside the operating room, for purposes he had not foreseen. In a 2006 injunction involving a North Carolina death-row inmate named Willie Brown, a federal judge ruled that performing a lethal injection on a conscious prisoner could cause excessive pain. North Carolina requires prisons to anesthetize inmates before killing them, but the judge worried about the possibility of anesthesia awareness. Only when prison officials purchased a BIS monitor did he allow them to proceed with Brown’s execution. So on April 21, 2006, attendants hooked Brown up to the monitor, injected him with a sedative, and watched his BIS value drop. At approximately 2 o’clock in the morning, once the number had fallen below 60, an attendant administered a lethal dose of pancuronium bromide and potassium chloride.

*

In the centuries before EEG and computers, the most-active contemplators of consciousness were not doctors but philosophers. The 17th-century French thinker René Descartes proposed an influential theory that leaned on neuroanatomy as well as philosophical inference. He declared the pineal gland, a pea-size glob just behind the thalamus, the seat of consciousness, “the place in which all our thoughts are formed.” But Descartes was a dualist: he believed that body and mind are separate and distinct. Within the physical matter of the pineal gland, he reasoned, something inexplicable must lie, something intangible—something that he identified as the soul.

This idea has been rejected by reductionist thinkers, who believe that consciousness is a scientific phenomenon that can be explained by the physiology of the brain. In an attempt to understand various sensory functions, a 19th-century cohort of reductionist biologists burned, cut, and excised lumps of the brain in rabbits, dogs, and monkeys, eventually pinpointing centers for hearing, vision, smell, touch, and memory. But even the most-extreme experiments of the period failed to identify a center for consciousness. In 1892, a German scientist named Friedrich Goltz, who rejected this notion of cerebral localization and hypothesized instead that the brain operated as a cohesive unit, cut out the majority of a dog’s cerebral cortex over the course of three operations. The animal managed to survive for 18 months; it even remained active, walking its cage and curling up to sleep, and reacted to noises and light by flipping its ears and shutting its eyes. Yet other things had changed. The dog required assistance with eating, and its memory seemed to have been destroyed. “The condition was that of idiocy but not of unconsciousness,” wrote one scientist.

Today’s neuroscientists, most of whom are reductionists, have offered multiple hypotheses about where consciousness resides, from the anterior cingulate cortex, a region also associated with motivation, to some parts of the visual cortex, to the cytoskeleton structure of neurons. Some theories peg consciousness not to a particular part of the brain but to a particular process, such as the rhythmic activation of neurons between the thalamus and the cortex.

David Chalmers, an Australian philosopher who has written extensively about consciousness, would refer to this neurobiological hunt as the “easy problem.” With enough time and money, scientists could ostensibly succeed in locating a consciousness center of the brain. But at that point, Chalmers argues, an even bigger mystery would still remain, one that he calls the “hard problem.” Say you and a friend are looking at a sunset. Your body is processing a huge variety of sensory inputs: a spectrum of electromagnetic waves—red, orange, and yellow light—which focus on your retina; the vibrations of your friend’s voice, which bounce along the bones of your inner ear and transform into a series of electrical signals that travel from neuron to neuron; memories of past sunsets, which spark a surge of dopamine in your mesolimbic pathway. These effects coalesce into one cohesive, indivisible experience of the sunset, one that differs from your friend’s. The hard problem, or what the philosopher Joseph Levine called the “explanatory gap,” is determining how physical and biological processes—all of them understood easily enough on their own—translate into the singular mystery of subjective experience. If this gap cannot be bridged, then consciousness must be informed by some sort of inexplicable, intangible element. And all of a sudden we are back to Descartes.

*

In 2004, a 60-year-old man checked in for open gastric-bypass surgery and a gall-bladder removal at Virginia Mason Medical Center in Seattle. Simon, as I’ll call him, stood 5 feet 9 inches tall and weighed approximately 300 pounds. In an open gastric bypass, the surgeon penetrates mounds of flesh and fat before finding the peritoneum, the glossy membrane that holds the abdominal cavity intact. Many surgeons use a space-age device called a Harmonic Scalpel, which cuts tissue while simultaneously blasting it with ultrasound waves to stop the bleeding. Once the surgeon uncovers the stomach and yards of folded, tubular intestines, she uses metal retractors to pull the skin apart and clear away slippery membranes, juicy organs, and fatty layers of tissue. Then, to business: cut, suture, cut, suture, cauterize, cut.

No surgeon could have imagined a procedure of this magnitude 167 years ago, in the days before anesthesia. It would have been impossible to endure, both for the patient and the surgeon. Simon’s anesthesiologist, Michael Mulroy, was particularly worried about him because of his hypotension and reliance on painkillers, both of which increased his risk of awareness. To make sure that Simon didn’t drift into consciousness, Mulroy decided to use a BIS monitor.

Surgery records show that throughout the three-hour procedure, Simon’s BIS value hovered between 37 and 51, well below the threshold for sedation. Mulroy had given Simon a relatively light dosage, reluctant to risk further deflating his patient’s dangerously low blood pressure, but he took comfort in the fact that the BIS told him that Simon was unconscious and unaware.

After the surgery, in the postoperative recovery room, nurses asked Simon whether he was in pain. “Not now,” he said, “but I was during surgery.” Simon reported memories that began after intubation, including “unimaginable pain” and “the sensation that people were tearing at me.” According to a clinical report, he heard voices around him and “wished he were dead,” but when he tried to alert the surgical team, his body did not respond to his brain’s commands.

The news of Simon’s experience devastated Mulroy. He explained to his patient that he had used the BIS monitor and that it had confirmed Simon’s unconscious state throughout the procedure. In the end, Mulroy says, all he could do was apologize and arrange for a psychiatrist. He hasn’t seen Simon since, but he published the case in a 2005 issue of the journal Anesthesia & Analgesia. Mulroy felt that the BIS monitor had betrayed him; he might have done more to deepen Simon’s sedation if the BIS had not reassured him that everything was fine.

Mulroy was one of the first to question the BIS, but his concern was soon echoed in other corners of the medical community. In 2008, The New England Journal of Medicine published a study comparing nearly 2,000 surgery patients at high risk of awareness: 967 patients were monitored by the BIS, and 974 via attention to changes in the amount of anesthetic gas they exhaled throughout a procedure. The author, a researcher at the Washington University School of Medicine in St. Louis named Michael Avidan, found that both groups of patients experienced awareness at similar rates. In other words, the BIS was no more effective than a much cheaper and more standard method. After questions were raised about his methodology, Avidan repeated the experiment with a broader sample and found the same thing. Chamoun’s window to the brain, it turned out, was not especially enlightening.
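For readers curious how such a two-group comparison is typically made, here is a hedged sketch; the awareness case counts below are invented placeholders, since the article gives only the group sizes, so this illustrates the method rather than the study's actual data:

```python
# Compare awareness rates in two monitoring groups of roughly 1,000 patients
# each. The case counts are assumptions for illustration, not the study's data.
from scipy.stats import fisher_exact

bis_cases, bis_patients = 2, 967     # BIS-monitored group (assumed case count)
gas_cases, gas_patients = 2, 974     # anesthetic-gas-monitored group (assumed case count)

table = [
    [bis_cases, bis_patients - bis_cases],
    [gas_cases, gas_patients - gas_cases],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# A large p-value would mean no detectable difference between the two
# monitoring methods, which is the conclusion the article describes.
```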

Avidan, who seems to have a singular zeal for highlighting the BIS monitor’s weaknesses, has also published a study showing that in many cases, two monitors on the same patient display different values. In a YouTube video, he applies BIS electrodes to a volunteer’s forehead, cuts them with scissors, and waits a full 40 seconds for the device’s value to change. Today, the BIS monitor has become the most controversial medical device in anesthesiology, if not all of surgery. Aspect’s stock plummeted and the board of directors sold the company in 2009. Chamoun temporarily accepted a high-paying position at the new parent company, Covidien, but resigned not long after. His heart wasn’t in it anymore.

The BIS monitor is not obsolete: it may still be clinically useful, may still prevent some cases of awareness. “It’s important to take into consideration the collective scientific evidence and clinical experience of millions of patients,” says Chamoun. “The BIS can help reduce the risk of awareness, but it will not completely eliminate that risk.”

Even after the Avidan studies, many anesthesiologists around the world still choose to rely on the BIS to guide them through surgery. But guarantee that a patient is unconscious? That, it cannot do. Chamoun is an engineer: he was never interested in providing a holistic assessment of what it means to be conscious. For that, medicine had to hold out for someone who could see beyond the data—someone whose fascination with the mind was as much humanistic as scientific.

*

On a warm afternoon in Madison, Wisconsin, last spring, a psychiatrist was pointing an electromagnetic gun at my brain.

“Put your arm in your lap,” he said.

I obeyed. My head was dressed in a 60-electrode, high-density EEG-recording device. The doctor stood behind my chair, eyeing a digitized MRI of my brain and gliding the gun over my scalp until he found his target: my motor cortex.

“Relax.”

I tried.

The gun clicked. My forehead muscles twitched. My arm leapt out of my lap, straight into the air, as if yanked by invisible puppet strings. “Do it again,” I said.

This process is called transcranial magnetic stimulation, or TMS. It is the key to a device that Giulio Tononi, one of the most-talked-about figures in anesthesiology since Nassib Chamoun, hopes will provide a truly comprehensive assessment of consciousness. If successful, Tononi’s device could reliably prevent anesthesia awareness. But his ambitions are much grander than that. Tononi is unraveling the mystery of consciousness: how it works, how to measure it, how to control it, and, possibly, how to create it.

At the heart of Tononi’s work is his integrated-information theory, which is based on two distinct principles, as intuitive as they are scientific. First, consciousness is informative. Every waking moment of your life provides a nearly infinite reservoir of possible experiences, each one different from the next. Second, consciousness is integrated: you can’t process this information in parts. When you see a red ball, you can’t experience the color red separately from the shape of the ball. When you hear a word, you can’t experience the sound of it separately from its meaning. According to this theory, a more conscious brain is both more informative (it has a deeper reservoir of experiences and stimuli) and more integrated (its perception of these experiences is more unified) than a less conscious one.

Compare the brain to New York City: just as cars navigate the city’s neighborhoods via a patchwork of streets, bridges, tunnels, and highways, electrical signals traverse the brain via a meshwork of neurons. Tononi’s theory predicts that in a fully conscious brain, traffic in one neighborhood will affect traffic in other neighborhoods, but that as consciousness fades—for instance, during sleep or anesthesia—this ripple effect will decrease or disappear.

In 2008, in one of several experiments demonstrating this effect, Tononi pulsed the brains of 10 fully conscious subjects with his electromagnetic gun—the equivalent of, say, injecting a flood of new cars into SoHo. The traffic (the electromagnetic waves) rippled across Manhattan (the brain): things jammed up in Tribeca and Greenwich Village, even in Chelsea. Tononi’s EEG electrodes captured ripples and reverberations that were different for every subject and for every region of the brain, patterns as complex and varied as the traffic in Manhattan on any given day.

Tononi then put the same subjects under anesthesia. Before he pulsed his gun again, the subjects’ brain traffic seemed as busy as when they were conscious: cars still circulated in SoHo and Tribeca, in Greenwich Village and Chelsea. But the pulse had a drastically different effect: This time, the traffic jam was confined to SoHo. No more ripples. “It’s as if [the brain] has fragmented into pieces,” Tononi told me. He published these findings in 2010, and also used them to file a patent for “a method for assessing anesthetization.”
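A toy simulation captures the intuition: pulse one node of a densely connected network and the disturbance spreads everywhere; pulse the same node in a network fragmented into isolated islands and it stays local. The connectivity matrices below are invented for illustration and are not Tononi's actual phi calculation:

```python
# Toy version of the "ripple" experiment: inject a pulse at one node, propagate
# activity linearly, and count how many nodes ever respond above a threshold.
import numpy as np

def ripple_extent(coupling, pulsed_node=0, steps=20, threshold=0.01):
    n = coupling.shape[0]
    activity = np.zeros(n)
    activity[pulsed_node] = 1.0            # the TMS "pulse"
    touched = activity > threshold
    for _ in range(steps):
        activity = 0.5 * coupling @ activity   # damped propagation
        touched |= activity > threshold
    return int(touched.sum())

n = 8
rng = np.random.default_rng(1)

# "Awake": every region weakly coupled to every other region.
integrated = rng.uniform(0.1, 0.3, size=(n, n))

# "Anesthetized": the same nodes, but split into isolated 2-node islands.
fragmented = np.zeros((n, n))
for i in range(0, n, 2):
    fragmented[i:i + 2, i:i + 2] = rng.uniform(0.1, 0.3, size=(2, 2))

print("integrated network, nodes reached:", ripple_extent(integrated))  # all 8
print("fragmented network, nodes reached:", ripple_extent(fragmented))  # only 2
```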

I first encountered Giulio Tononi in 2011, at an American Society of Anesthesiologists conference, where he gave the final lecture. His voice—with an erudite Italian accent suitable for narrating the audio tour at the Sistine Chapel—echoed throughout the auditorium. His blond hair was parted in a zigzag across his head. He wore a brown suit with silver studs on the lapels, a white shirt, and a bolo tie. Here, speaking to a rapt audience of mostly American anesthesiologists, was an Italian neuroscientist dressed as if he were from Wyoming.

“Anesthesia: the merciful annihilation of consciousness,” Tononi said, a PowerPoint presentation projected behind him. “The one we devoutly wish for in the proper circumstances. Now, just like sleep takes consciousness away and gives it back, so does the anesthesiologist. Every day. He taketh and giveth.”

On the next slide, Tononi projected Michelangelo’s The Creation of Eve, which was captioned: And the LORD God caused a deep sleep to fall upon Adam, and he slept: and he took one of his ribs, and closed up the flesh instead thereof. “A quote from the very first surgical procedure that was done with anesthesia,” Tononi said. “The operation was reasonably successful, it seems.”

This tendency toward grandiloquence dates back at least to adolescence. As a teenager in Trento, a city in northern Italy, where his father was mayor, Tononi wrote a letter to Karl Popper, a famous European philosopher, asking him whether he should devote his life to studying consciousness. Popper wrote back with encouragement and sent an inscribed copy of one of his books. Tononi considered approaching the subject through mathematics or philosophy, but ultimately decided that medicine would provide the best foundation. So he attended medical school, became a psychiatrist, and moved to New York for a fellowship under the physician Gerald Edelman. Although Edelman had won a Nobel Prize for his work in immunology, he had by that point pivoted to neuroscience. With Edelman, Tononi began publishing extensively on consciousness. He moved to Madison in 2001, and is now the Distinguished Chair in Consciousness Science at the University of Wisconsin.

When I visited Tononi in June to participate in one of his consciousness studies, he invited me to a dinner party at his home, 15 minutes outside Madison. These dinners are legendary among his research fellows, many of them Ph.D.s and M.D.s from Italy or Switzerland. I arrived at a luxurious log cabin replete with a tractor, an indoor swimming pool, and an outdoor pizza oven. Hanging over the oven was a wire sculpture in the shape of the Greek letter phi, which Tononi has chosen as the symbol for his consciousness theory and as the title of his latest book. The letter was also engraved on his bolo tie and commemorated on the license plate of his car.

He served a multicourse dinner featuring pasta made from scratch, pizzas cooked in the outdoor oven, a well-paired rosé, and absinthe. Midway through the second course, he asked each of his guests a question: whether we believed in free will, and why. I said that I didn’t. I argued that if we are made of atoms based on physical laws, which form molecules ruled by chemical laws, which compose cells that abide by biological laws, how could there be free will? Tononi only smiled. If his theory of integration is correct, my logic is flawed, and free will can exist.

Tononi is to his neuroscientist peers as the 18th-century philosopher Immanuel Kant was to his empiricist counterpart David Hume. Like most modern neuroscientists, Hume saw only the “easy problem.” He proposed that consciousness was nothing more and nothing less than the bundling of various bits of experiential knowledge, or, as he called them, “perceptions.” Using this logic, my physiological argument against free will could stand.

Kant, however, believed that the mind is more than an accumulation of experiences of the physical world. Like Descartes 150 years earlier and David Chalmers 200 years later, Kant focused on the “hard problem,” making the logical argument that something beyond sensory inputs must account for the subjectivity of conscious experience—what Kant called “transcendental” consciousness. Tononi’s theory hinges on a similar conception of consciousness as something more than the sum of its experiential parts—leaving room, then, for the possibility of free will.

The amount of integrated information in the brain—the quantity of consciousness—is what Tononi calls phi. He believes he can approximate it with a combination of his TMS-EEG technology and mathematical models. Many well-known philosophers and neuroscientists, however, remain skeptical. Chalmers has praised Tononi for his bold attempt to quantify consciousness, but he doesn’t think Tononi has come any closer to solving the hard problem. And even Tononi admits that, in scientific-research time, his theory is still in its infancy. What Tononi has made progress on is neither the easy nor the hard problem: it’s the practical problem. He is currently developing a machine that has the potential to end anesthesia awareness once and for all. Like the BIS monitor, the device would provide a numerical assessment of a patient’s awareness, and would be simple and compact enough to become a regular fixture in operating rooms. Unlike the BIS monitor, it would also be relevant outside the operating room. Whereas the BIS is rooted in data specific to surgery, Tononi has developed a comprehensive theory of consciousness that could, with appropriate technological tweaks, be applied in any number of medical, scientific, or social settings.

My experience with the electrode cap and TMS gun in Tononi’s lab offers a rough guide to how his awareness monitor might work. First, an anesthesiologist would dress your head with electrodes, which would transmit EEG data to a processor. After putting you under, she would monitor the drugs’ effects by using a paddle attached to a high-voltage generator to repeatedly blast your brain with electromagnetic waves. The EEG processor would monitor your brain’s reaction to each blast, calculating the complexity of patterns and the degree of integration and ultimately displaying a numerical phi value—your level of consciousness. Returning to the New York metaphor: if the traffic jam stayed in SoHo, the machine would display a low value, and the anesthesiologist could relax; if it spread to other neighborhoods, the machine would display a high value, and she might want to administer more drugs.
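In very rough pseudocode terms, the imagined monitor might loop like this. The pulse, recording, and phi-estimation functions are placeholders for hardware and analysis the article does not specify, and the threshold is an arbitrary illustrative number:

```python
# Speculative sketch of the monitoring loop described above. None of the
# callables or numbers here come from the article; they stand in for the
# TMS paddle, the EEG processor, and whatever phi approximation is used.
import time

AWARENESS_THRESHOLD = 0.3   # made-up cutoff between a "quiet" and a "rippling" response

def monitor_depth_of_anesthesia(deliver_tms_pulse, record_eeg_response, estimate_phi):
    while True:
        deliver_tms_pulse()                    # "inject traffic into SoHo"
        response = record_eeg_response()       # high-density EEG after the pulse
        phi_estimate = estimate_phi(response)  # complexity and integration of the ripple
        print(f"phi estimate: {phi_estimate:.2f}")
        if phi_estimate > AWARENESS_THRESHOLD:
            print("response is spreading -- consider deepening anesthesia")
        time.sleep(10)                         # repeat every few seconds
```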

While this device is still millions of dollars and many trial hours away from implementation, Tononi and his Italian colleague Marcello Massimini have tested and validated an approximation of phi in multiple settings, and are preparing to publish their findings. The method has already been used by some clinics in Europe—not on anesthetized patients, but on vegetative ones. What if Terri Schiavo’s family had been able to ascertain that she was, in fact, completely unconscious, more so than she would have been even under heavy anesthesia? These clinics are calculating phi to assess whether comatose patients experience consciousness, and if so, how much. Of course, this application has troubling risks; a flaw in Tononi’s theory could lead families to turn off life support for a still-conscious person. But if the theory holds up—if Tononi has successfully managed to quantify consciousness—it could deliver these families from uncertainty. It could also change the way we think about animal rights, upend the abortion debate—possibly even revolutionize the way we think about artificial intelligence. But before any of that, it would fulfill the promise first offered in the ether dome more than a century and a half ago: an end to the nightmare of waking surgery.

*

In his recently published book, Phi, Tononi narrates a literary tour of his theory of consciousness through a fictionalized protagonist: Galileo. In one of the last chapters, Galileo encounters a diabolical machine that surgically manipulates the brain to produce pure sensations of pain. Tononi calls it “the only real and eternal hell.” The creator of the machine asks: “What is the perfect pain? Can pain be made to last forever? Did pain exist, if it leaves no memory? And is there something worse than pain itself?”

For George Wilson, a Scottish chemist who had his foot amputated in 1843, before the dissemination of anesthesia, pain gave way to something seemingly beyond physical sensation, something articulable only in spiritual, nearly existential terms. Wilson described his experience in a letter several years after his surgery:

Of the agony it occasioned I will say nothing. Suffering so great as I underwent cannot be expressed in words, and fortunately cannot be recalled. The particular pangs are now forgotten, but the blank whirlwind of emotion, the horror of great darkness, and the sense of desertion by God and man, bordering close upon despair, which swept through my mind and overwhelmed my heart, I can never forget, however gladly I would do so.

While subduing consciousness is the most urgent aspect of Tononi’s work, he is especially animated when discussing consciousness in its fullest, brightest state. In his office in Madison, he described a hypothetical device called a “qualiascope” that could visualize consciousness the same way telescopes visualize light waves, or thermal goggles visualize heat. The more integrated the information—that is, the more conscious the brain—the brighter the qualiascope would glow. Using the device in an operating room, you would watch a patient’s consciousness fade to a dull pulse. If he woke up mid-operation, you might see a flicker.

But if you turned your gaze away from the operating room, you would gain an astonishing perspective on the universe. “The galaxy would look like dust,” Tononi told me. “Within this empty, dusty universe, there would be true stars. And guess what? These stars would be every living consciousness. It’s really true. It’s not just a poetic image. The big things, like the sun, would be nothing compared to what we have.”

*

VISUALIZING CONSCIOUSNESS

In an experiment on vegetative patients, researchers pulsed one subject’s brain with electromagnetic waves on three different days as the subject emerged from a coma. The resulting EEG patterns reflected Giulio Tononi’s theory of consciousness: they became more complex and widespread as the patient became more conscious.



*

Joshua Lang is a medical student at the UC Berkeley–UCSF Joint Medical Program.

Copyright © 2013 by The Atlantic Monthly Group (emphasis in original)

http://www.theatlantic.com/magazine/archive/2013/01/awakening/309188/ [ http://www.theatlantic.com/magazine/archive/2013/01/awakening/309188/?single_page=true ] [with comments]

---

(linked in) http://investorshub.advfn.com/boards/read_msg.aspx?message_id=82222573 and preceding and following