Facebook Inc. is seeking to automatically flag offensive material in live video streams, part of a growing effort to use artificial intelligence to monitor content, Reuters reports.
Joaquin Candela, the company’s director of applied machine learning, said Facebook was increasingly using artificial intelligence to find offensive material.
It is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” he said.
The company already had been working on using automation to flag extremist video content, as Reuters reported in June.
Now the automated system is also being tested on Facebook Live, the service that lets users broadcast live video.
The social media company has been embroiled in a number of content moderation controversies this year.
It faced international outcry after removing an iconic Vietnam War photo over nudity, and drew criticism for allowing fake news to spread on its site.
Facebook has historically relied mostly on users to report offensive posts, which are then checked by Facebook employees against company “community standards.”
Decisions on especially thorny content issues that might require policy changes are made by top executives.
Using artificial intelligence to flag live video is still at the research stage, Candela said.
Facebook said it also uses automation to process the tens of millions of reports it gets each week, to recognize duplicate reports and route the flagged content to reviewers with the appropriate subject matter expertise.
Chief Executive Mark Zuckerberg said in November that Facebook would turn to automation as part of a plan to identify fake news.
Ahead of the Nov. 8 US election, Facebook users saw fake news reports erroneously alleging that Pope Francis endorsed Donald Trump and that a federal agent who had been investigating Democratic candidate Hillary Clinton was found dead.