
BullNBear52

10/20/18 12:12 PM

#291910 RE: fuagf #291904

The Poison on Facebook and Twitter Is Still Spreading
Social platforms have a responsibility to address misinformation as a systemic problem, instead of reacting to case after case.


By The Editorial Board
The editorial board represents the opinions of the board, its editor and the publisher. It is separate from the newsroom and the Op-Ed section.

Oct. 19, 2018

A network of Facebook troll accounts operated by the Myanmar military parrots hateful rhetoric against Rohingya Muslims. Viral misinformation runs rampant on WhatsApp in Brazil, even as marketing firms there buy databases of phone numbers in order to spam voters with right-wing messaging. Homegrown campaigns spread partisan lies in the United States.

The public knows about each of these incidents because of reporting by news organizations. Social media misinformation is becoming a newsroom beat in and of itself, as journalists find themselves acting as unpaid content moderators for these platforms.

It’s not just reporters, either. Academic researchers and self-taught vigilantes alike scour networks of misinformation on social media platforms, their findings prompting — or sometimes failing to prompt — the takedown of propaganda.

It’s the latest iteration of a journalistic cottage industry that started out by simply comparing and contrasting questionable moderation decisions — the censorship of a legitimate news article, perhaps, or an example of terrorist propaganda left untouched. Over time, the stakes have become greater and greater. Once upon a time, the big Facebook censorship controversy was the banning of female nipples in photos. That feels like an idyllic bygone era never to return.

The internet platforms will always make some mistakes, and it’s not fair to expect otherwise. And the task before Facebook, YouTube, Twitter, Instagram and others is admittedly herculean. No one can screen everything in the fire hose of content produced by users. Even if a platform makes the right call on 99 percent of its content, the remaining 1 percent can still be millions upon millions of postings. The platforms are due some forgiveness in this respect.

It’s increasingly clear, however, that at this stage of the internet’s evolution, content moderation can no longer be reduced to individual postings viewed in isolation and out of context. The problem is systemic, currently manifested in the form of coordinated campaigns both foreign and homegrown. While Facebook and Twitter have been making strides toward proactively staving off dubious influence campaigns, a tired old pattern is re-emerging: journalists and researchers find a problem, the platform reacts and the merry-go-round spins yet again.

This week, a question from The New York Times prompted Facebook to take down a network of accounts linked to the Myanmar military. Although Facebook was already aware of the problem in general, the request for comment from The Times flagged specific instances of “seemingly independent entertainment, beauty and informational pages” that were tied to a military operation that sowed the internet with anti-Rohingya sentiment.

The week before, The Times found a number of suspicious pages spreading viral misinformation about Christine Blasey Ford, the woman who has accused Brett Kavanaugh of assault. After The Times showed Facebook some of those pages, the company said it had already been looking into the issue. Facebook took down the pages flagged by The Times, but similar pages that hadn’t yet been shown to the company stayed up.

It’s not just The Times, and it’s not just Facebook. Again and again, the act of reporting out a story gets reduced to outsourced content moderation.

“We all know that feeling,” says Charlie Warzel, a reporter at BuzzFeed who’s written about everything from viral misinformation on Twitter to exploitative child content on YouTube. “You flag a flagrant violation of terms of service and send out a request for comment. And you’re just sitting there refreshing, and then you see it come down — and afterward you get this boilerplate reply via email.” Mr. Warzel says his inbox is full of messages from people begging him to intercede with the platforms on their behalf — sometimes because they have been censored unfairly, sometimes because they want to point to disturbing content they believe should be taken offline.

Journalists are not in the business of resolving disputes for Facebook and Twitter. But disgruntled users might feel that they have a better chance of being listened to by a reporter than by someone who is actually paid to resolve user complaints.

Of course, it would be far worse if a company refused to patch a problem that journalists have uncovered. But at the same time, muckraking isn’t meant to fix the system one isolated instance at a time. Imagine if Nellie Bly had to infiltrate the same asylum over and over again, with each investigation prompting a single incremental change, like the removal of one abusive nurse.

The work of journalists is taken for granted, both implicitly and explicitly. In August, the Twitter CEO, Jack Dorsey, took to his own platform to defend his company’s decision to keep Alex Jones online. “Accounts like Jones’ can often sensationalize issues and spread unsubstantiated rumors, so it’s critical journalists document, validate, and refute such information directly so people can form their own opinions,” he said. “This is what serves the public conversation best.” But journalists and outside researchers do not have access to the wealth of data available internally to companies like Twitter and Facebook.

The companies have all the tools at their disposal and a profound responsibility to find exactly what journalists find — and yet, clearly, they don’t. The role that outsiders currently play, as consumer advocates and content screeners, can easily be filled in-house. And unlike journalists, companies have the power to change the very incentives that keep producing these troubling online phenomena.

The reliance on journalists’ time is particularly paradoxical given the damage that the tech companies are doing to the media industry. Small changes to how Facebook organizes its News Feed can radically change a news organization’s bottom line — layoffs and hiring sprees are spurred on by the whims of the algorithm. Even as the companies draw on journalistic resources to make their products better, the hegemony of Google and Facebook over digital advertising — estimated by some to be a combined 85 percent of the market — is strangling journalism.

But throwing light on the coordinated misinformation campaigns flaring up all around us is a matter much bigger than the death of print — it’s essential to democracy. Misinformation can change the course of elections and fuel genocide. Social media platforms are doing society no favors by relying on journalists to leach the poison from their sites. None of this is sustainable — and we definitely don’t want to find out what happens when the merry-go-round stops working.

https://www.nytimes.com/2018/10/19/opinion/facebook-twitter-journalism-misinformation.html

sortagreen

10/20/18 1:08 PM

#291913 RE: fuagf #291904

Sorry ... Missed that