2017 Was So Fake

By Cale Guthrie Weissman

Disinformation was a major topic of discussion during the 2016 presidential campaign, but it became more weaponized than ever in 2017—a way to evade facts and massage reality to suit whatever political and ideological aims people wanted. The politically loaded term for this is “fake news,” and in true fake news fashion, the very meaning of that term has become an obfuscation. At the heart of the problem are the current state of media, the distribution systems that make spreading misinformation so easy, and the actors behind these campaigns.

In some tellings, the issue of fake news hit the mainstream when Donald Trump popularized the term on the campaign trail, but disinformation has been rampant online since well before that; it’s only in this last year that we’ve seen the real power it wields. The issue has come up in myriad stories and forms, which does not bode well for how our realities will be shaped in the coming year.

What’s scariest is that the biggest online platforms are also the biggest fake-news breeding grounds, and despite a lot of talk over the last 12 months, they have yet to make real strides in fixing the problem.

Social Media Platforms

When tragedy struck this year, platforms became riddled with seemingly journalistic content that peddled fabricated stories. YouTube, for instance, was instantly awash in conspiracy theories, especially in the wake of mass shootings. In October, a lone shooter in Las Vegas opened fire on a crowd, killing more than 50 people. In a matter of days, conspiracy theorists flocked to platforms like YouTube and disseminated false information. Many deemed the attack a “false flag.” Others said the shooting was staged and the victims were “crisis actors.”

Even as I write this, the first recommended search term on YouTube after “Las Vegas shooting” is “Las Vegas shooting conspiracy.” This isn’t an isolated issue, either: every highly publicized attack has generated endless posts on YouTube claiming the attack was staged or the work of some highly organized conspiracy. And these videos raked in hundreds of thousands of views.

Facebook and Google were similarly caught promoting fake content in the wake of the Las Vegas tragedy. As the news broke, both platforms began to promote content from unverified blogs that misidentified both the victims and the shooter.

Both companies apologized for the mishaps, yet their platforms are still predicated on promoting the most shareable content. This is ultimately the genesis of fake news: clickable content from lord knows where, intended to be shared before being verified. Sometimes it’s the most sensational content, not the most factual, that rises to the top of their pages. Indeed, fact-checking sites like Snopes have existed for years because fake news has plagued social media since well before the election.

Foreign Interference

Beyond conspiracy theories and content farms, foreign trolls became a front-page issue this year. At first, the question was whether or not Russia attempted to interfere with the 2016 campaign. Then, it became not a question of if, but how. Over the last year, details have emerged about international agencies trying to wreak havoc on the American internet consciousness.

It’s unclear how far-reaching these campaigns were, but we now know that Russian organizations purchased digital political ads on platforms like Facebook. Some of these ads were simply meant to drum up unrest; others seemed designed to sway public opinion.

Facebook was the focus of much of this disinformation, but it wasn’t the only place. These foreign actors published a series of posts on Instagram and other platforms, too. On Instagram, the posts came in the form of memes, often shareable content meant to galvanize a certain demographic.

“For sowing division and finding wedge issues, Instagram is an ideal visual meme broadcast factory,” Jonathan Albright, research director of Columbia University’s Tow Center for Digital Journalism, told Fast Company earlier this year. In a sense, these posts–which were put online in 2016–were attempts to create fake virality and mislead certain groups of the U.S. electorate.

Citizen “Journalism”

On the opposite end of this spectrum are people who claim their sole purpose is to inform the online masses, or to write about things the mainstream media won’t. Since the election, a group of disillusioned Twitter users has gained notoriety by crafting long tweet storms that purport to prove political malfeasance at the highest levels.

The biggest players are Eric Garland, Seth Abramson, and Louise Mensch. They all claim to be part of a new class of citizens—digitally snooping and connecting the dots that others can’t—but often their claims are either thin or simply make no sense. Mensch has been known to cite “anonymous sources” asserting outrageous things, like the idea that Steve Bannon would be sentenced to death for espionage. Garland continually tweeted about vast, paranoia-laden Russian conspiracies while providing no reporting of his own. Abramson operates similarly to Garland, making grand generalizations about news that has already been reported but misconstruing even the easiest-to-understand parts in the name of an ideological goal.

Despite often sharing incorrect information, some of these actors remain verified on Twitter, a designation that gives their rants even more perceived validity.

These sorts of Twitter users peddle a particularly insidious kind of fake news, one that gives ammunition to the very people they’re trying to expose. By misconstruing public facts or citing the fanciful claims of anonymous sources, these Twitter personalities create a false reality for many everyday people who use social media as a way to gather facts. It may feel satisfying to read these yarns, but they are nonetheless dangerous.

What Happens Now?

Nearly all of the big tech platforms have vowed this year to fight fake news. Some have taken concrete steps to address the problem, but their efforts often feel like a game of whack-a-mole.

Twitter, for instance, is in the process of revamping its verification system, and it has implemented new rules about what conduct is permitted on the platform. Facebook has tried to figure out a way to fact-check posts, but those attempts have so far proven ineffective. YouTube, meanwhile, has been trying to curb the spread of conspiracy videos by hitting their creators in the pocketbook: It has made it harder for accounts to monetize content, changed its algorithm so it doesn’t surface fake content, and altered its “news” section so that it promotes more verified journalistic content.

The problem with all these fixes is that they feel like Band-Aids on a much bigger wound. Social media platforms are inherently set up to favor the most shareable content. This system, by virtue of how it was built, leans toward content like fake news, and tech platforms profit heavily from the engagement. Instead of figuring out a way to uproot the system that has made them some of the most powerful companies on the planet, platforms like Facebook and Google are more likely to continue with business as usual, implementing small fixes.

So while we enter 2018 hoping to turn over a new leaf, it’s important to keep in mind the mirages around us. They’ve been rearing their heads for some time, and will surely continue doing so in new and subtle ways.
