Facebook expands use of artificial intelligence to block terrorist propaganda

Under mounting political pressure to better block terrorist propaganda on the internet, Facebook is relying more heavily on artificial intelligence.

The social-media company said Thursday that it has expanded its use of A.I. in recent months to identify potential terrorist posts and accounts on its platform, and at times to remove or block them without review by a human. In the past, Facebook and other tech giants relied mainly on users and human moderators to identify offensive content. Even when algorithms flagged content for removal, these companies generally left it to a person to make the final call.

The companies have sharply increased the volume of content they delete over the past two years, but those efforts have not yet proven effective enough to tamp down a wave of criticism from governments and advertisers. Critics have accused Facebook, Google parent Alphabet Inc. and others of complacency over the spread of inappropriate content on their social networks, in particular posts or videos regarded as extremist propaganda or communications.

In response, Facebook announced new software that it says it is using to better police its content. One tool, in use for several months now, combs the site for known terrorist footage, such as beheading videos, to prevent it from being reposted, executives said Thursday. Another set of algorithms tries to identify, and sometimes autonomously block, propagandists who try to open new accounts after being kicked off the platform. A further, experimental tool uses A.I. trained to recognize language used by terrorist propagandists.
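The article does not describe how the re-upload matching works internally. One common approach to this kind of task is to compare a fingerprint of each newly uploaded file against a database of fingerprints of footage already confirmed as prohibited. The sketch below illustrates only that general idea; the function names and the use of a plain SHA-256 digest are assumptions made for illustration, not Facebook's actual method (production systems typically rely on perceptual hashes that survive re-encoding and minor edits).

```python
import hashlib

# Fingerprints of footage already confirmed as prohibited.
# In a real system this would be a large, shared database of
# perceptual hashes rather than an in-memory set of SHA-256 digests.
KNOWN_PROHIBITED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(video_bytes: bytes) -> str:
    """Return a fingerprint of the uploaded file (here: SHA-256 of the raw bytes)."""
    return hashlib.sha256(video_bytes).hexdigest()


def should_block_upload(video_bytes: bytes) -> bool:
    """Block the upload if its fingerprint matches known prohibited footage."""
    return fingerprint(video_bytes) in KNOWN_PROHIBITED_HASHES


# Example: an exact re-upload of known footage is caught before publication.
if should_block_upload(b"...raw bytes of an uploaded video..."):
    print("Upload blocked: matches known prohibited footage.")
else:
    print("Upload allowed (or queued for further review).")
```

Exact-hash matching only catches byte-identical copies, which is why real matching pipelines use fuzzier, content-aware fingerprints; the structure of the lookup, however, is essentially the same.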

This story originally appeared in The Wall Street Journal.
