Google’s Perspective AI Takes On ‘Toxic’ Comments For Publishers

by Laurie Sullivan @lauriesullivan, February 23, 2017

Google’s focus on ridding the Internet of harassment and name-calling has led the company to release Perspective, an artificial intelligence tool that automatically scans online content and rates how “toxic” it is, based on ratings gathered from thousands of people.

Through an API, the tool gives blogs, publishers and Web sites a new way to moderate online discussions and content. It was developed by engineers at Jigsaw, a subsidiary of Google’s holding company Alphabet.
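
For developers, the integration amounts to sending comment text to the API and reading back a score. The following is a minimal sketch, assuming the Comment Analyzer “comments:analyze” endpoint and the request shape documented at the API’s launch; the API key is a placeholder and the exact field names are not details from this article.

```python
# Minimal sketch of scoring one comment with the Perspective API.
# Assumes the launch-era commentanalyzer "comments:analyze" endpoint and a
# valid API key; field names may differ from current documentation.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return a 0.0-1.0 toxicity estimate for a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You are a wonderful person."))  # expected: low score
print(toxicity_score("Shut up, you idiot."))          # expected: high score
```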

“Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime,” Jared Cohen, Jigsaw president, wrote in a post. “You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.”

About 72% of American Internet users have witnessed online harassment, 47% have personally experienced it, and 27% self-censor what they post for fear of retaliation, according to a report from the Data & Society Research Institute.

Perspective, still an experiment, lets people see the potential impact of what they write and try out the tool on a demonstration Web site.

To learn how to spot toxic language, Perspective examined hundreds of thousands of comments labeled by human reviewers. Each time it encounters new examples of potentially toxic comments, or receives corrections from users, the machine-learning model gets better at scoring future comments.
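
The article does not describe Jigsaw’s training pipeline, but the correction loop it mentions is exposed to integrators as a way to send adjusted scores back to the service. The sketch below assumes a “comments:suggestscore” method whose request mirrors the analyze call; the method name, fields and community identifier are assumptions for illustration only.

```python
# Hedged sketch of sending a corrected score back to the API so the model can
# learn from moderator feedback. The "comments:suggestscore" method name and
# request fields are assumptions based on the launch-era API and may differ.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:suggestscore?key={API_KEY}")

def suggest_score(text: str, corrected_toxicity: float) -> None:
    """Tell the service what a human moderator thinks the score should be."""
    payload = {
        "comment": {"text": text},
        "attributeScores": {
            "TOXICITY": {"summaryScore": {"value": corrected_toxicity}}
        },
        "communityId": "example-news-site",  # hypothetical identifier
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request).close()

# A comment the model under-scored, corrected upward by a moderator:
# suggest_score("Nobody wants you here.", 0.9)
```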

Jigsaw announced in September 2016 that it had researched the technology with The New York Times and the Wikimedia Foundation. The research initiative called “Conversation AI” was created jointly by Jigsaw and Google’s counter-abuse technology team.

The New York Times plans to use Conversation AI to screen all comments posted on the site, automatically flagging abusive ones for a team of human moderators to review.
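
A publisher workflow like the one the Times describes can be sketched as a simple triage step: anything scoring above a chosen threshold is held for human review rather than published automatically. The threshold and helper names below are illustrative, not taken from the article.

```python
# Sketch of a moderation queue: comments scoring above a threshold go to
# human reviewers, the rest are published automatically. The 0.8 cutoff is
# an illustrative choice, not a value from the article.
from typing import Callable, Iterable, List, Tuple

def triage(comments: Iterable[str],
           score_fn: Callable[[str], float],
           threshold: float = 0.8) -> Tuple[List[str], List[str]]:
    """Split comments into (publish, needs_human_review) by toxicity score."""
    publish, review = [], []
    for text in comments:
        (review if score_fn(text) >= threshold else publish).append(text)
    return publish, review

# Usage with the toxicity_score helper sketched earlier:
# publish, review = triage(incoming_comments, toxicity_score)
```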
