Facebook’s strategy for handling fake news smells like surrender

By Mark Sullivan

July 24, 2018

Twenty-one months after the 2016 presidential election, which many believe was manipulated by Russian propaganda farms, and with the midterm elections just around the corner, Facebook is finally assembling a holistic approach to fighting political disinformation on its platform. It's an imperfect strategy, but it may be the best the company can do. Facebook held a conference call with journalists on Tuesday to explain the strategy and detail how it's being implemented.

Regarding the false news that was rampant on the platform in 2016, Facebook says that it’s now partnering with fact checkers to find bogus content, dial down the visibility of hoaxes in news feeds, and provide more contextual information about questionable news stories.

What it won't do is remove false news entirely, even content that fact checkers have debunked. And for that, the company has invited a boatload of criticism. As long as a piece of disinformation does not incite violence or violate other existing community guidelines, Facebook explains, it will stay on the platform for all to see.

"If you are who you say you are, we don't believe we should stop you from posting content," said Facebook product manager Tessa Lyons during the conference call. In short, Facebook doesn't want to be in the position of deciding whether a piece of content is mostly true or mostly false. "Not everybody agrees where the line is," Lyons said. "But just because something is allowed on Facebook doesn't mean it should get distribution," she added.

Lyons explained that each piece of content Facebook users see in their news feeds is given a score reflecting how closely it matches the user's identity and interests. If a news post has been shown to be false, points are deducted in a uniform way so that fewer people are exposed to the BS; Facebook has said the distribution of such posts drops by 80% on average. The company may also demote everything published by pages that routinely post false news, not just the individual offending posts.
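Facebook hasn't published its ranking formula, but the mechanics Lyons describes can be sketched as a uniform multiplier applied to a post's relevance score. The sketch below is an illustration only: the Post structure, the ranking_score function, and the 0.2 demotion factor (inferred from the reported 80% average reduction) are assumptions, not Facebook's actual implementation.

```python
# Hypothetical sketch of score-based demotion. Facebook has not published
# its ranking formula; every name and number here is illustrative only.

from dataclasses import dataclass

# A reported ~80% average reduction in distribution implies a ~0.2
# multiplier on the post's score (assumption, inferred from that figure).
FALSE_NEWS_DEMOTION = 0.2

@dataclass
class Post:
    post_id: str
    relevance_score: float      # how well the post matches a user's interests
    rated_false: bool = False   # set when fact checkers debunk the post

def ranking_score(post: Post) -> float:
    """Return a feed-ranking score, uniformly demoting debunked posts."""
    score = post.relevance_score
    if post.rated_false:
        score *= FALSE_NEWS_DEMOTION  # uniform deduction for false news
    return score

# Example: a highly "relevant" hoax drops below a mildly relevant true story.
posts = [
    Post("hoax", relevance_score=0.9, rated_false=True),
    Post("news", relevance_score=0.4),
]
feed = sorted(posts, key=ranking_score, reverse=True)
print([p.post_id for p in feed])  # ['news', 'hoax']
```

The key design point Lyons describes is that the demotion is uniform: the same multiplier applies to every debunked post, so Facebook isn't making case-by-case calls about how false something is, only whether fact checkers rated it false.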

Fake accounts, fake news

Facebook believes most of the disinformation on its platform comes from fake accounts, accounts run by people or groups that aren't who they say they are. If it finds a page that claims to be run by Americans but is actually operated out of Macedonia, for example, Facebook says it will take the page down. The company has said that it removed 583 million fake accounts in the first quarter of 2018.

“These threat actors run repeated and coordinated actions to spread false information for a political goal,” Lyons said. The Russian operatives who interfered with the 2016 election, for example, masqueraded as groups with a range of causes, conservative and liberal alike, to sow division in the electorate. Facebook now uses a combination of manual investigations and automated monitoring to track down such bad actors and zap them from the platform.

But habitual spreaders of false news such as InfoWars would not be touched by such investigations. The outlet does not misrepresent itself, and yet the false news it spreads is harmful, and arguably corrosive to the democratic process.

And given the sheer number of new accounts (real and fake) created on Facebook every day, it seems impossible that the company could reliably weed out all the ones that intend to spew propaganda. If one of the Russian content farms wanted to use Facebook to influence the U.S. midterms, it would likely work through a front operation in the U.S. that wouldn’t need to lie to Facebook about its identity. As Facebook concedes, it’s truly a needle-in-a-haystack problem.

Finally, Facebook says it has taken several steps to increase transparency around political ads on its platform. It's now labeling political ads with the names of the people or groups who paid for them, and allowing journalists and academics to study an archive of Facebook political ads after they've run. The company says revenue from political ads represents only a small part of its overall ad revenue.

The company acknowledges that Facebook users will still encounter various forms of untrue or half-true political content on its platform. It doesn't seem to have a solution for propagandists and provocateurs who operate under their real identities, and who therefore can't be unmasked and removed as fakes.

It's an uneasy compromise. Facebook says it doesn't want to stand in judgment of the truthiness of content. It refuses to expel false news and propaganda from its platform, yet it's willing to punish content it believes to be false by limiting its distribution. Facebook is making judgments; it's just being wishy-washy about what it does once it has made them.
