Internal Facebook documents highlight its moderation and misinformation issues

The ‘Facebook Papers’ show a company in turmoil.

Daniel Cooper
October 25th, 2021

The Facebook Papers, a vast trove of documents supplied by whistleblower Frances Haugen to a consortium of news organizations, have been released. The reporting, by Reuters, Bloomberg, The Washington Post and others, paints a picture of a company that repeatedly prioritized dominance and profit over user safety, even as a large number of employees warned that its focus on engagement put users at risk of real-world violence.

The Washington Post, for instance, claims that while Facebook CEO Mark Zuckerberg played down reports of the site amplifying hate speech in his testimony to Congress, he was aware that the problem was far broader than publicly declared. Internal documents seen by the Post claim that the social network removed less than five percent of hate speech, and that executives — including Zuckerberg — were well aware that Facebook was polarizing people. Facebook has already disputed the claims, saying that the documents have been misrepresented.

Zuckerberg is also accused of quashing a plan to run a Spanish-language voter-registration drive in the US before the 2020 elections. He reportedly said the plan may have appeared “partisan,” with WhatsApp staffers subsequently offering a watered-down version that partnered with outside agencies. The CEO was also reportedly behind the decision not to clamp down on COVID-19 misinformation in the early stages of the pandemic because doing so might involve a “material tradeoff with MSI [Meaningful Social Interaction — an internal Facebook metric] impact.” Facebook has disputed this claim as well, saying that the documents have been mischaracterized.

Reuters reported that Facebook has serially neglected a number of developing nations, allowing hate speech and extremism to flourish. That includes not hiring enough staffers who speak the local languages, appreciate the cultural context and can otherwise moderate effectively. The result is that the company has unjustified faith in its automatic moderation systems, which are ineffective in non-English-speaking countries. Again, Facebook has disputed the accusation that it is neglecting its users in those territories.

One specific region singled out for concern is Myanmar, where Facebook has been held responsible for amplifying local tensions. A 2020 document suggests that the company’s automatic moderation system could not flag problematic terms in Burmese, the local language. (It should be noted that, two years earlier, Facebook’s failure to act to prevent civil unrest in Myanmar was highlighted in a report from Business for Social Responsibility.)

Similarly, Facebook reportedly did not have the tools in place to detect hate speech in the Ethiopian languages of Oromo or Amharic. Facebook has said that it is working to expand its content moderation team and, in the last two years, has recruited speakers of Oromo, Amharic, Burmese and a number of other languages.

Elsewhere, The New York Times reports that Facebook’s internal researchers were well aware that the Like and Share functions — core elements of how the platform works — had accelerated the spread of hate speech. A document titled “What Is Collateral Damage” says that Facebook’s failure to remedy these issues would see the company “actively (if not necessarily consciously) promoting these types of activities.” Facebook says that, again, these statements are based on incorrect premises, and that it would be illogical for the company to actively try to harm its users.

Bloomberg, meanwhile, has focused on the supposed collapse in Facebook’s engagement metrics. Young people, a key target market for advertisers, are spending less time on Facebook’s platform, with fewer teens opting to sign up. At the same time, the number of users in these age groups may be artificially inflated, with users creating multiple accounts — “Finstas” — to separate their online personas and cater to different audiences. Haugen alleges that Facebook “has misrepresented core metrics to investors and advertisers,” and that duplicate accounts are leading to “extensive fraud” against advertisers. Facebook says that it already notifies advertisers in its Help Center of the risk that purchases will reach duplicate accounts, and that it lists the issue in its SEC filings.

Wired focuses on the valedictory notes that Facebook employees regularly post when they leave the company, and on how these missives have become increasingly gloomy. One departing employee wrote that the platform has a “net negative influence on politics.” Another said that they felt they had “blood on their hands,” while a third said that their ability to effect changes to Facebook’s systems to improve matters was hampered by internal roadblocks.

The Verge offered more context on the company’s work in Ethiopia, where the lack of language and cultural expertise was a huge problem. Facebook’s community standards weren’t available in all of the country’s official languages, and automated moderation models didn’t exist for them. In addition, there was no reliable access to fact-checking, and no ability to open a “war room” to monitor activity during major events. (Facebook’s documentation says that it takes around a year to build the capacity needed to address hate speech in a specific country.)

The same report adds that, in order to reduce the burden on the company’s limited pool of human moderators, Facebook would make it harder for users to report hate speech, and those reports would be automatically closed in cases where the post in question had received little attention. There is also a statement saying, broadly, that the team had exhausted its budget for the year and that, as a consequence, fewer moderators would be working on these issues toward the end of that period.

And, shortly after these reports were published, Frances Haugen sat down with the UK select committee examining the country’s forthcoming Online Safety Bill. Much of what she said had already been expressed to regulators in the US, but her comments were highly critical of Facebook. At one point, Haugen said that Facebook has been unwilling to sacrifice even “little slivers of profit” in order to make its product safer. She added that Facebook CEO Mark Zuckerberg “has unilateral control over three billion people, [and that] there’s no will at the top [of the company] to make sure these systems are run in an adequately safe way.”

Over the weekend, Axios reported that Facebook’s Sir Nick Clegg warned that the company should expect “more bad headlines” in the coming weeks. It’s likely that whatever happens at the company’s third-quarter earnings announcement later today won’t be sufficient to dispel the storm of bad press it is currently weathering.

 

Updated 10:41am ET to include comments from Frances Haugen made to the select committee.
