
F6

04/19/10 5:49 AM

#97316 RE: F6 #97314

Albert Einstein Quotes

http://www.youtube.com/watch?v=wod0UoOHhvo [with comments]

---

also (items linked in) http://investorshub.advfn.com/boards/read_msg.aspx?message_id=33948658 (and preceding)


F6

08/18/11 2:16 AM

#151943 RE: F6 #97314

Virtual and Artificial, but 58,000 Want Course


The teachers, from left, Peter Norvig and Sebastian Thrun. (Noah Berger for The New York Times)


By JOHN MARKOFF
Published: August 15, 2011

PALO ALTO, Calif. — A free online course at Stanford University on artificial intelligence, to be taught this fall by two leading experts from Silicon Valley, has attracted more than 58,000 students around the globe — a class nearly four times the size of Stanford’s entire student body.

The course [ http://www.ai-class.com/ ] is one of three being offered experimentally by the Stanford computer science department to extend technology knowledge and skills beyond this elite campus to the entire world, the university is announcing on Tuesday.

The online students will not get Stanford grades or credit, but they will be ranked in comparison to the work of other online students and will receive a “statement of accomplishment.”

For the artificial intelligence course, students may need some higher math, like linear algebra and probability theory, but there are no restrictions on online participation. So far, the age range is from high school to retirees, and the course has attracted interest from more than 175 countries.

The instructors are Sebastian Thrun and Peter Norvig, two of the world’s best-known artificial intelligence experts. In 2005 Dr. Thrun led a team of Stanford students and professors in building a robotic car that won a Pentagon-sponsored challenge by driving 132 miles over unpaved roads in a California desert. More recently he has led a secret Google project [ http://www.nytimes.com/2010/10/10/science/10google.html ] to develop autonomous vehicles that have driven more than 100,000 miles on California public roads.

Dr. Norvig is a former NASA scientist who is now Google’s director of research and the author of a leading textbook on artificial intelligence.

The computer scientists said they were uncertain about why the A.I. class had drawn such a large audience. Dr. Thrun said he had tried to advertise the course this summer by distributing notices at an academic conference in Spain, but had gotten only 80 registrants.

Then, several weeks ago he e-mailed an announcement to Carol Hamilton, the executive director of the Association for the Advancement of Artificial Intelligence. She forwarded the e-mail widely, and the announcement spread virally.

The two scientists said they had been inspired by the recent work of Salman Khan, an M.I.T.-educated electrical engineer who in 2006 established a nonprofit organization to provide video tutorials to students around the world on a variety of subjects via YouTube.

“The vision is: change the world by bringing education to places that can’t be reached today,” said Dr. Thrun.

The rapid increase in the availability of high-bandwidth Internet service, coupled with a wide array of interactive software, has touched off a new wave of experimentation in education.

For example, the Khan Academy [ http://www.khanacademy.org/about ], which focuses on high school and middle school, intentionally turns the relationship of the classroom and homework upside down. Students watch lectures at home, then work on problem sets in class, where the teacher can assist them one on one.

The Stanford scientists said they were focused on going beyond early Internet education efforts, which frequently involved uploading online videos of lectures given by professors and did little to motivate students to do the coursework required to master subjects.

The three online courses, which will employ both streaming Internet video and interactive technologies for quizzes and grading, have in the past been taught to smaller groups of Stanford students in campus lecture halls. Last year, for example, Introduction to Artificial Intelligence drew 177 students.

The two additional courses will be an introductory course on database software, taught by Jennifer Widom, chairwoman of the computer science department, and an introduction to machine learning, taught by Andrew Ng.

Dr. Widom said she had recorded her video lectures during the summer and would use classroom sessions to work with smaller groups of students on projects that might be competitive and to bring in people from the industry to give special lectures. Unlike the A.I. course, this one will compare online students with one another and not with the Stanford students.

How will the artificial intelligence instructors grade 58,000 students? The scientists said they would make extensive use of technology. “We have a system running on the Amazon cloud, so we think it will hold up,” Dr. Norvig said.

In place of office hours, they will use the Google moderator service, software that will allow students to vote on the best questions for the professors to respond to in an online chat and possibly video format. They are considering ways to personalize the exams to minimize cheating. Part of the instructional software was developed by Know Labs, a company Dr. Thrun helped start.

Although the three courses are described as an experiment, the researchers say they expect university classes to be made more widely accessible via the Internet.

“I personally would like to see the equivalent of a Stanford computer science degree on the Web,” Dr. Ng said.

Dr. Widom said that having Stanford courses freely available could both assist and compete with other colleges and universities. A small college might not have the faculty members to offer a particular course, but could supplement its offerings with the Stanford lectures.

There has also been some discussion at Stanford about whether making the courses freely available would prove to be a threat to the university, which charges high fees for tuition. Dr. Thrun dismissed that idea.

“I’m much more interested in bringing Stanford to the world,” he said. “I see the developing world having colossal educational needs.”

Hal Abelson, a computer scientist at M.I.T. who helped develop an earlier generation of educational offerings that began in 2002, said the Stanford course showed how rapidly the online world was evolving.

“The idea that you could put up open content at all was risky 10 years ago, and we decided to be very conservative,” he said. “Now the question is how do you move into something that is more interactive and collaborative, and we will see lots and lots of models over the next four or five years.”

© 2011 The New York Times Company

http://www.nytimes.com/2011/08/16/science/16stanford.html [comments at http://community.nytimes.com/comments/www.nytimes.com/2011/08/16/science/16stanford.html ]

---

further in particular to both of the prior posts in this string

also (linked in) http://investorshub.advfn.com/boards/read_msg.aspx?message_id=66175494 and preceding and following


F6

08/05/14 7:10 AM

#226613 RE: F6 #97314

Is One of the Most Popular Psychology Experiments Worthless?


A trolley is careening toward an unsuspecting group of workers. You have the power to derail the trolley onto a track with just one worker. Do you do it? It might not matter. (Noel Pennington/Flickr)

Olga Khazan
Jul 24 2014, 11:23 AM ET

Harvard University justice professor Michael J. Sandel stood [ http://www.youtube.com/watch?v=kBdfcR-8hEY (next below; the second, and first still-working, YouTube linked in the post to which this is a reply, and preceding and other following) ] before a lecture hall filled with students recently and presented them with an age-old moral quandary:

"Suppose you're the driver of a trolley car, and your trolley car is hurtling down the track at 60 miles an hour. You notice five workers working on the track. You try to stop, but you can't, because your brakes don't work. You know that if you crash into these five workers, they will all die. You feel helpless until you notice that off to the side, there's a side track. And there's one worker on the side track."

The question: Do you send the trolley onto the side track, thus killing the one worker but sparing the five, or do you let events unfold as they will and allow the deaths of all five? (Most people, for what it's worth, say they would turn.)

Then Sandel asked about a popular variation on the same problem. The same trolley is careening toward unsuspecting innocents, but this time, you're an onlooker on a footbridge, and, "you notice that standing next to you, leaning over the bridge, is a very fat man."

A ripple of laughter rises from the packed auditorium.

"You could give him a shove," he continues. "He would fall over onto the track, right in the way of the trolley car. He would die, but he would spare the five. How many would push the fat man over the bridge?"

A few hands go up, but most of the students just erupt in giggles.

And that's exactly why, some scientists argue, this well-known "trolley dilemma" shouldn't be used for psychology experiments as much as it is.

Number of Studies Discussing the Trolley Problem: published psychology papers that discuss the "trolley dilemma," by year. (Bauman, McGraw, et al.)

Over the past few decades, the trolley dilemma has been at the center of dozens of experiments designed to gauge subjects' moral compasses. Some people think it can help answer Big Questions about everything from the use of drones to self-driving cars.

One recent paper [ http://www.ncbi.nlm.nih.gov/pubmed/11557895 ] by Harvard's Joshua Greene and others, which involved MRI scans of people contemplating the trolley, has been cited more than 2,000 times. In 2007 [ http://www.researchgate.net/publication/51522563_Patterns_of_moral_judgment_derive_from_nonmoral_psychological_representations ], the psychologists Fiery Cushman and Liane Young and the biologist Marc Hauser administered the test to thousands of web users and found that while 89 percent would flip the track switch, only about 11 percent would push the fat man.

That contradiction—that people find giving the man a fatal prod just too disturbing, even though the end result would be the same—is supposed to show how emotions can sometimes color our ethical judgments.

But one group of researchers thinks it might be time to retire the trolley. In an upcoming paper that will be published in Social and Personality Psychology Compass, Christopher Bauman of the University of California, Irvine, Peter McGraw of the University of Colorado, Boulder, and others argue that the dilemma is too silly and unrealistic to be applicable to real-life moral problems. Therefore, they contend, it doesn't tell us as much about the human condition as we might hope.

In a survey of undergraduates, Bauman and McGraw found that 63 percent laughed "at least a little bit" in the fat-man scenario and 33 percent did so in the track-switching scenario. And that's an issue, because "humor may alter the decision-making processes people normally use to evaluate moral situations," they note. "A large body of research shows how positivity is less motivating than negativity."

"If you study moral judgement and people are laughing about the experimental materials you're giving them, that might be a problem," McGraw, who also studies humor [ http://www.theatlantic.com/health/archive/2014/02/the-dark-psychology-of-being-a-good-comedian/284104/ ], told me in an interview.

McGraw also says the trolley problem isn't a realistic representation of actual crises of conscience. (When's the last time you even rode in a trolley?)

The dilemma was originally devised not by psychologists, after all, but by philosophers. Beginning in the 1960s, Philippa Foot and Judith Jarvis Thomson used it as a thought experiment, a way of laying bare the difference between people's convictions and their justifications.

In another survey, McGraw and his co-authors found that people "rated the trolley problems to be much less realistic than short scenarios about contemporary social issues."

Most real-life moral dilemmas, McGraw points out, are not of the sacrificial variety. We don't go through life shoving people in front of locomotives in order to rescue other people. More likely, you'd face something like, "should I lay off this department, or this other one?"

"In that case, use a scenario where you're laying people off, not pushing a fat man off a bridge," McGraw said.

To be fair, there's something to be said for what's become the "fruit fly" of a traditionally less-precise science. It can be helpful for different experiments to have the same starting point. The trolley problem may not be perfect, but it's the best common denominator we have.

Charles Millar, a PhD candidate at Ontario's University of Waterloo, agrees that the footbridge variant, in particular, is far-fetched. "There is no way there is someone who is so large that they can be pushed in front of a train and stop it," he said. "And if there was, you wouldn't be able to push them."

But he says that there are ways to modify the story to be less over-the-top. One version, for example, stars a spilled container of bleach headed for a collection of precious tapestries. Would you throw one tapestry over the bleach, thereby soaking it up but destroying that one tapestry, or would you try to stem the flow of the bleach through some other, indirect means?

Others argue that the trolley problem isn't supposed to be representative of a real-world problem in the first place. Even if it doesn't accurately capture the nature of human sacrifice, it tells us interesting things about the brain. Joshua Greene, the Harvard researcher, told me that the trolley dilemma has been used to answer questions like, "Can visual imagery actually influence a moral judgment? Can a neurotransmitter have a directional influence on moral judgment? Could the language in which you read about a moral question influence your answer?" across dozens of published papers.

The trolley problems don't tell us what we'd do if we actually faced an out-of-control streetcar, he argues; they just highlight subtle quirks of our internal moral GPS systems.

"It's not 'Let's study trolley problems because they're representative of problems we face in everyday life,'" Greene told me. "It's 'Here's an interesting puzzle. If we follow it, we might learn something important.'"

Copyright © 2014 by The Atlantic Monthly Group

http://www.theatlantic.com/health/archive/2014/07/what-if-one-of-the-most-popular-experiments-in-psychology-is-worthless/374931/ [with comments]

---

(linked in):

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=96759860 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=104020025 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=104834973 and preceding and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=104923076 and preceding and following