A humbler Zuckerberg acknowledges Facebook’s struggle with the “ugliness of humanity”

By Steven Melendez

Even Mark Zuckerberg is acknowledging that social media hasn’t been all good.

“The past two years have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation, and incite violence,” he wrote in a blog post Thursday. “One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.”

Speaking to the media on Thursday, the Facebook CEO and other executives struck an increasingly humble tone, conceding that the social network may always face challenges with undesirable and misleading posts (an issue Zuckerberg compared to real-world crime) and even pledging to let an “independent body” review the site’s rulings on content.

“There’s no perfect solutions here and these really aren’t problems that you ever fully fix,” said Zuckerberg. “No one expects crime to be eliminated completely, but you expect that things will get better over time.”

In contrast to Facebook’s past pledges to simply “make the world more open and connected,” and to the tech industry’s long-held assumption that good speech will triumph over bad in the marketplace of ideas, Zuckerberg agreed that sensationalist, damaging posts can engage more users than positive content. He also acknowledged that the company must take a role in shaping what spreads across its network. And despite past insistence that Facebook is “not a media company,” Zuckerberg compared the sensational content on the site to cable news and tabloid newspapers.

“One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content,” he wrote. “This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization.”

In addition to issues with U.S. election manipulation and misinformation, Facebook has faced criticism that posts on the site ignite already-raw tensions in many other countries, fanning the flames of genocidal violence in Myanmar, for instance. The company recently agreed to let French regulators study its approach to stopping hate speech.

Zuckerberg said Facebook is continuing to develop artificial intelligence tools to spot content that violates its rules, from nudity to violence and hate speech, and either automatically take it down or bring it to a human’s attention.

“In the case of a post where someone is expressing thoughts of suicide, this could even mean the difference between life and death,” said Guy Rosen, a Facebook vice president for product management. In the past year, the company has helped first responders reach about 3,500 people who needed help, according to Zuckerberg’s post.

An independent body

The company plans to create an independent body to review appeals of decisions to take down content, after a “consultation period” to determine how it should work. Facebook, Twitter, and other social media platforms have long faced criticism over which content they take down and which they allow to stand.

“In the next year, we’re planning to create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding,” Zuckerberg wrote. “The purpose of this body would be to uphold the principle of giving people a voice while also recognizing the reality of keeping people safe.”

Facebook says it’s spotting more objectionable posts automatically, before users complain about them. More than half of hate speech posts are now taken down before complaints come in, along with 97% of posts removed for “violent and graphic content,” according to a newly released Facebook Community Standards Enforcement Report. And 99% of the 8.7 million posts taken down for child nudity or sexual exploitation were removed before complaints were received.

In the third quarter of this year, Facebook “took action” on 15.4 million pieces of graphic and violent content, more than 10 times the amount in the final quarter of last year. Those actions included removing content, placing warnings on it before it’s shown, disabling accounts, and contacting law enforcement. Facebook also took down 800 million fake accounts in the second quarter and 754 million in the third, according to the company.

“We will continue making progress as we increase the effectiveness of our proactive enforcement and develop a more open, independent, and rigorous policy-making process,” Zuckerberg wrote. “And we will continue working to ensure that our services are a positive force for bringing people closer together.”
