What Facebook Considers Hate Speech Depends On Who Is Posting It

Leaked documents show who the social media giant protects and who it doesn’t.

By Cale Guthrie Weissman, June 29, 2017

Earlier this week Facebook announced that it hit 2 billion users, which is no small feat. For context, that’s over a quarter of the world’s population, making it without a doubt the most popular social network in the world. And founder and CEO Mark Zuckerberg is quick to tell us, the world, that he’s only getting started. “We still have a long way to go to connect everyone,” he wrote in a Facebook comment.

Left unsaid was that Facebook is also still trying to figure out its role and its responsibility for how those billions of users connect with each other. Once again, that challenge was made all too clear by a blockbuster ProPublica exposé divulging a secret handbook used by the company’s content moderators. According to the leaked documents, Facebook had a very strict system for determining what sort of speech is and isn’t permissible on the site, but that system had the unintended effect of harming the most vulnerable groups.

White Men, Not Black Children, Are Protected

These rules included almost legalistic specifications about what is written in a flagged post, advising content moderators to look out for posts that attack “protected” categories. According to ProPublica, these “PCs” are people who are grouped by “race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease.” But when one of these categories is combined with a non-protected category, the group loses its “PC” status. It’s an algebra of cultural control, and it leads to perverse results. For example, one training slide asks which of the following three groups is protected from hate speech on Facebook: female drivers, black children, or white men. “The correct answer: white men,” reports ProPublica. Why? Because both descriptors are considered “protected,” whereas the other two groups include non-protected descriptors like “drivers” and “children.”
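
The logic ProPublica describes reduces to a simple conjunction: a group keeps protected status only if every descriptor attached to it falls into a protected category. Here is a minimal sketch of that rule in Python, assuming the leaked logic works the way the article describes; the function name, the category set, and the descriptor labels are illustrative reconstructions, not Facebook’s actual code.

```python
# Minimal reconstruction of the protected-category ("PC") rule as
# ProPublica describes it. Illustrative only: the names and category
# set below are assumptions based on the article, not Facebook's code.

PROTECTED_CATEGORIES = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_protected_group(descriptor_categories):
    """A group keeps protected status only if every descriptor falls
    into a protected category; a single non-protected modifier
    (occupation, age, etc.) strips protection from the whole group."""
    return all(c in PROTECTED_CATEGORIES for c in descriptor_categories)

# The training-slide quiz: which of these groups is protected?
print(is_protected_group({"race", "sex"}))         # "white men" -> True
print(is_protected_group({"race", "age"}))         # "black children" -> False
print(is_protected_group({"sex", "occupation"}))   # "female drivers" -> False
```

Under such a rule, the quiz answer follows mechanically: “white men” combines two protected categories, while “children” (an age group) and “drivers” (an occupation) each introduce a non-protected modifier that voids protection for the whole group.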

These rules are meant to create an orderly and expedited process when dealing with the millions of posts uploaded to Facebook each day. But in their color-blindness, they remain strictly theoretical with no sense of real-world dynamics, protecting the powerful and harming the powerless.

The New Yorker’s Adrian Chen tweeted, in response to the ProPublica story, that “concerns about Facebook ‘hyper-democratizing’ media tend to ignore all the ways it’s biased towards the powerful.” He was specifically highlighting how, despite headlines about globalized and interconnected social media helping oppressed voices during the Arab Spring, Facebook’s rules actually ended up favoring “elites and governments over grassroots activists and racial minorities.”

Facebook tells ProPublica that many of these rules are no longer in effect. That may be beside the point. Facebook, for all its posturing about being the open and connected platform for everyone, is essentially a black box. The workings of its internal mechanics are not visible to the public. When people question its algorithm and ask how it surfaces content, the company says as little as possible. We only ever learn more about its internal processes via tiny announcements (such as the recent press release saying that the company is going to hire more content moderators) or unsanctioned leaks like this one.

A Company “Run By A Bunch Of Ivy League Graduates”

Before this news came to light, I spoke with privacy activist Jillian York of the Electronic Frontier Foundation. According to her, one of the most alarming problems with Facebook is how little we know about it. “I want them to be more transparent about how content moderation works,” she says.

With every change the company makes, York says, Facebook is implicitly admitting that it plays a huge role in what sort of speech is acceptable. And every time it tweaks an algorithm or moderation system, it has huge implications for digital culture and how speech is disseminated. Despite all this, says York, “they don’t admit they have a huge problem.”

Only when the public sees that Facebook trains moderators to protect white men instead of black children do we get a sense of its definition of “free speech.” By creating a one-size-fits-all system for censoring content from billions of people representing countless cultures, Facebook necessarily creates two distinct buckets: those whom it protects and those whom it doesn’t. And vulnerable groups are the ones who suffer.

One thing Facebook should do, says York, is finally open up. A company that is “run by a bunch of Ivy League graduates” is only going to perpetuate this problem “unless they bring in real experts.” If a group of white men is making the decisions for Facebook, it becomes pretty obvious who will be considered a protected group. Creating an algorithm to comb through content won’t cut it. “This is not a technological problem,” says York.

Facebook, with a user base now larger than the population of any single country, wants to be the social network for everyone. As things stand now, that’s a hubristic mission. Mark Zuckerberg wants to sign up the roughly 5.5 billion people who aren’t yet on the platform. Yet will Facebook look out for them and protect them from harm?

If you ask the company, it will point to its new mission to be a “global community.” Phrases like “free speech” and “connectivity” will likely be thrown in. The slides from ProPublica, however, tell us a different story by showing the real-world consequences of its approach.
