“When She Looked Under Her Couch Cushions And Saw THIS… I Was SHOCKED!”
“He Put Garlic In His Shoes Before Going To Bed And What Happens Next Is Hard To Believe”
“The Dog Barked At The Deliveryman And His Reaction Was Priceless.”
Facebook is a vast platform, so its employees can’t identify every clickbait headline that lurks there one at a time. Instead, they have developed a system that identifies them automatically. An algorithm recognizes certain phrases that usually indicate clickbait and demotes posts that use them. “Links posted from or shared from Pages […] that consistently post clickbait headlines will appear lower in News Feed,” they write.
This could be very good for readers and for the journalists who work hard to inform them. But at the same time, we don’t really have a choice about whether to comply. Along with Google, Facebook wields incredible power in the marketplace: By one estimate, the two companies direct 80 percent of all traffic [ http://www.niemanlab.org/2016/04/twitter-has-outsized-influence-but-it-doesnt-drive-much-traffic-for-most-news-orgs-a-new-report-says/ ] to news websites. Two years ago, the late media columnist David Carr fretted that news publishers would soon become “serfs in a kingdom that Facebook owns [ http://www.nytimes.com/2014/10/27/business/media/facebook-offers-life-raft-but-publishers-are-wary.html ]”; today, not only do we find ourselves ensconced in Castle Facebook, but the local lord’s laws and guidelines are not written on paper — they are executed impersonally and algorithmically. Journalists are left guessing at whether we’re doing anything wrong.
So in the spirit of loving fealty, we’ve rewritten some famous headlines to practice for this new age of Facebook. The last thing we would want to do is leave out crucial information.
Machines Can Improve Society if They’re Built in a Way That Values Information Sharing and Connectedness (and if You Read This Again in 50 Years It Will Blow Your Mind Because of How Much Technology I Accurately Predicted) [ http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ ]
A look at the surprisingly quarrelsome field of artificial intelligence
"Google Outlines Plan for a Kill Switch That Would Prevent a Robot Takeover"
Kaveh Waddell Nov 28
Illustration: Sarah Grillo/Axios
Machines as intelligent as humans will be invented by 2029, predicts technologist Ray Kurzweil. "Nonsense," retorts roboticist Rodney Brooks. By that time, he says, machines will only be as smart as a mouse. As for humanlike intelligence — that may arrive by 2200.
Between these two forecasts — machines with human intelligence in 11 or 182 years — lies much of the rest of the artificial intelligence community, a disputatious lot who disagree on nearly everything about their field, Martin Ford, author of "Architects of Intelligence [ https://www.packtpub.com/big-data-and-business-intelligence/architects-intelligence ]," tells Axios.
His main takeaway: "This is an unsettled field. It's not like physics."
---- * AI may seem to be a smooth-running assembly line of startups, products and research projects. The reality, however, is a landscape clouded by uncertainty.
---- * Ford's interviewees could not agree on where their field stands, how to push it forward or when it will reach its ultimate goal: a machine with humanlike intelligence.
Why it matters: The embryonic state in which Ford found AI — so early in its development more than a half-century after its birth that the basics are still up for grabs — suggests how far it has to go before reaching maturity. On his blog, Brooks has said [ https://rodneybrooks.com/my-dated-predictions/ ] that AI is only 1% of the way toward human intelligence.
The big picture: Research in the field has progressed in fits and starts since the term "artificial intelligence" was coined in the 1950s by American computer scientist John McCarthy, alternating between periods of hibernation and feverish activity.
---- * The current frenzy is propelled by the wild success of deep learning, an AI technique that excels at finding patterns and identifying objects in photographs.
---- * Few of Ford's subjects said deep learning will arrive at humanlike intelligence on its own — they said something new must be developed to get there. But deep learning aficionados ridicule the alternatives suggested by others.
Kurzweil, Google’s director of engineering, stands out on many fronts, Ford says. For one, he has very little patience for colleagues suffering from "engineer’s pessimism."
---- * Humanlike intelligence will emerge in an exponential burst of innovation, Kurzweil said, not in linear fashion, as so many others seem to think.
---- * "What Ray says is correct," says Ford. "Engineer's pessimism is what's at play; the question is who's right" — the pessimist or the optimist.
Ford's follow-up book is a collection of interviews he conducted with the West's most illustrious artificial intelligence hands.
What's next: Not all was discord. There was a remarkable confluence, for instance, on the most promising coming step.
---- * Many called for an exploration of "unsupervised learning": AI that, like a child, can wander around, poking and prodding, getting into trouble, and all the while learning a lot of important things about the world.
---- * That’s a stark departure from current AI training methods, which require reams of labeled data: think photos of cats that are explicitly labeled as cats.
"Today, in order to teach a computer what a coffee mug is, we show it thousands of coffee mugs. But no child’s parents, no matter how patient and loving, ever pointed out thousands of coffee mugs to that child." — Andrew Ng, former AI lead at Google and Baidu