AI helped Facebook crack down on 68% more hate posts in Q1

By Mark Sullivan

Facebook says it took down 68% more hate posts in the first quarter of 2020 than it did in the last quarter of 2019, an increase it attributes mainly to marked improvements in the machine learning systems it uses to track down hateful content on its network.

The result is part of Facebook’s Community Standards Enforcement Report, released today, which details the company’s tactics and success rates in enforcing its community guidelines.

On a call with reporters on Tuesday, CEO Mark Zuckerberg said that the company has been relying more heavily on its AI detection systems since early March, when it sent most of its human content moderators home to self-quarantine. The AI systems, he said, now detect 90% of hate speech posts on the platform before they’re reported by a user.

Zuckerberg said Facebook has been working to get its contract moderators set up to securely review user content at home. But he acknowledged that the company still has a lot of work to do: “Our effectiveness has certainly been impacted by having less human review during COVID-19, and we do expect to make more mistakes until we’re able to ramp back up.”

The company has also been battling misinformation and fraudulent content related to COVID-19. Facebook and Instagram have had to contend with people advertising false cures and fake tests, advocating for the spread of the virus, bullying others for virus-related reasons, and opportunistically selling masks and other PPE, said VP of Content Policy Monika Bickert.

Facebook says it’s developed new computer vision systems designed to sweep the network for that kind of content and either label it as misinformation, downrank it to keep it from going viral, or delete it outright.

Ten types of objectionable posts

The report covers ten categories of harmful content on Facebook, including nudity, bullying, fake accounts, hate speech, and terrorism, and eight similar categories on Instagram. The company uses three main metrics to measure its effectiveness at removing hate speech and other harmful content from its platforms.

The first is “prevalence,” or the number of times users were exposed to content that violates Facebook’s rules. However, CTO Mike Schroepfer said Facebook has yet to gather enough data on hate speech as a category to produce a reliable prevalence number.

The second is “actions taken,” meaning the labeling, downranking, or deleting of false or harmful content. The company says that, via human moderators or AI systems, it took action on 9.6 million pieces of hateful content in the first quarter of 2020, up from 5.7 million in the final quarter of 2019.

The third major metric is the rate at which the company’s AI systems proactively detect a piece of hate content and either refer it to a human moderator or remove the content automatically. Schroepfer said the systems’ success rate in that area improved by 8 percentage points during the past two quarters, and by 20 percentage points over the past year.
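For readers who want to see the arithmetic behind these figures, here is a minimal Python sketch built from the numbers reported above. The helper functions are purely illustrative and are not Facebook’s internal tooling.

```python
# Illustrative arithmetic behind two of the report's metrics.
# The figures come from the report; the functions are hypothetical.

def pct_increase(current: float, previous: float) -> float:
    """Quarter-over-quarter growth in 'actions taken', as a percentage."""
    return (current - previous) / previous * 100

def proactive_rate(ai_flagged: int, total_actioned: int) -> float:
    """Share of actioned content that AI flagged before any user report."""
    return ai_flagged / total_actioned * 100

# Actions taken on hateful content on Facebook:
q4_2019 = 5_700_000
q1_2020 = 9_600_000
print(f"Increase in actions taken: {pct_increase(q1_2020, q4_2019):.0f}%")  # ~68%

# Proactive detection rate: if 90 of every 100 actioned posts were flagged
# by AI before a user reported them, the rate is 90%.
print(f"Proactive rate: {proactive_rate(90, 100):.0f}%")
```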

On Instagram, Facebook says it took action on just under 806,000 pieces of hate content in the first quarter of 2020, down from about 843,000 in Q4 of 2019. The company says it’s been relying on new computer vision AI systems to detect what it calls “multimodal” harmful content, or content whose harmful message is conveyed using a combination of text and imagery (as in memes), or text and video. Schroepfer said the technology has increased its proactive detection rate on Instagram from 43.1% in Q4 2019 to 44.3% in Q1 2020.
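Facebook has not published the architecture of these multimodal systems, but the general idea of fusing text and image signals into a single classifier can be sketched in a few lines of PyTorch. Everything below, from the layer sizes to the fusion-by-concatenation step, is an assumption made for illustration only.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Toy fusion model: encode text and image separately, concatenate,
    then classify. Illustrative only; not Facebook's actual architecture."""

    def __init__(self, vocab_size: int = 10_000, embed_dim: int = 128,
                 img_feat_dim: int = 512, num_classes: int = 2):
        super().__init__()
        # Text branch: token embeddings mean-pooled into one vector.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Image branch: assumes precomputed image features (e.g. from a CNN).
        self.img_proj = nn.Linear(img_feat_dim, embed_dim)
        # Fusion head: concatenated text + image vectors -> class scores.
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim * 2, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, token_ids: torch.Tensor, img_feats: torch.Tensor):
        text_vec = self.token_embed(token_ids).mean(dim=1)  # (batch, embed_dim)
        img_vec = self.img_proj(img_feats)                  # (batch, embed_dim)
        fused = torch.cat([text_vec, img_vec], dim=-1)      # (batch, 2*embed_dim)
        return self.classifier(fused)                       # (batch, num_classes)

# Example: a batch of 4 "memes", each with 16 tokens and a 512-d image feature.
model = MultimodalClassifier()
tokens = torch.randint(0, 10_000, (4, 16))
images = torch.randn(4, 512)
print(model(tokens, images).shape)  # torch.Size([4, 2])
```

In a production system the image branch would typically come from a large pretrained vision model and the text branch from a pretrained language model, but the fusion step works the same way.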

“In a crisis, there’s a tendency to put your head down and turn inward, and we’re not going to do that,” said Zuckerberg. “We’re going to keep sharing these reports.” The truth is that Facebook only really got going with building AI and infrastructure for battling hate and misinformation a few years ago. All those years of growth at any cost could have been spent getting ready for the massive problem the company is struggling to deal with now.

Fast Company