Facebook deploys AI to fight terrorism on its network

SAN FRANCISCO – Facebook has begun implementing artificial intelligence capabilities to help combat terrorists’ use of its service.

Company officials said in a blog post Thursday that Facebook will use AI in combination with human reviewers to find and remove “terrorist content” immediately, before other users see it. This technology is already being used to block child pornography on Facebook and other services such as YouTube, although Facebook was coy about other applications, which may be less clear-cut.

In most cases, Facebook only removes objectionable material after users first report it.

Facebook and other internet companies face increasing pressure from governments to identify and prevent the dissemination of terrorist propaganda and recruiting messages on their services. Earlier this month, British Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online. Some of the proposed measures would hold companies legally responsible for material posted on their sites.

The Facebook post by Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, does not specifically mention that call. But it acknowledges that “in the wake of the recent terrorist attacks, people have questioned the role of tech companies in the fight against terrorism online.”

“We want to answer these questions head on. We agree with those who say that social media should not be a place where terrorists have a voice,” they wrote.

The AI techniques used in this effort include image matching, which compares photos and videos people upload to Facebook against “known” terrorism images or videos. Matches generally mean either that Facebook had previously removed the material, or that it had ended up in a database of such images that Facebook shares with Microsoft, Twitter, and YouTube.
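
The matching step can be pictured as fingerprinting each upload and looking that fingerprint up in a shared database of previously removed material. The Python sketch below is only an illustration of that idea, not Facebook's actual system: the hash function, the database contents, and the surrounding pipeline are assumptions, and a plain cryptographic hash only catches exact copies, whereas production systems use perceptual hashes that survive resizing and re-encoding.

```python
import hashlib

# Hypothetical database of fingerprints of previously removed terrorism
# images, standing in for the shared industry database described above.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of an uploaded file.

    A SHA-256 digest only catches byte-for-byte copies; real systems use
    perceptual hashes that are robust to re-encoding and cropping.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_terror_image(image_bytes: bytes) -> bool:
    """Check an upload against the database of known images."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES

# Example: block an upload whose fingerprint matches a known image.
upload = b"raw image bytes from the upload pipeline"
if is_known_terror_image(upload):
    print("Match found: block the upload before other users see it.")
else:
    print("No match: pass the upload to other checks or human review.")
```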

Facebook is also developing “text-based signals” from previously removed posts that praised or supported terrorist organizations. It will feed those signals into a machine-learning system that, over time, will learn how to detect similar posts.
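
A classifier of the general kind described here could be trained on previously removed posts as positive examples and ordinary posts as negative ones. The following Python sketch uses scikit-learn with invented placeholder data; it is an assumption about the general approach, not Facebook's model, features, or training data.

```python
# Minimal text-classification sketch: posts previously removed for praising
# or supporting terrorist organizations serve as positive examples, and the
# model scores new posts so that high-scoring ones can go to human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training set (invented): 1 = previously removed post,
# 0 = ordinary post.
posts = [
    "join the glorious fight of our organization",
    "we praise the attack and call for more",
    "looking forward to the family barbecue this weekend",
    "great match last night, what a goal",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a high score would be queued for human reviewers,
# since the article notes AI alone cannot judge nuance and context.
new_post = ["we support the organization and its fight"]
score = model.predict_proba(new_post)[0][1]
print(f"probability of terrorist-support content: {score:.2f}")
```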

Bickert and Fishman said that when Facebook receives reports of potential “terrorism posts,” it reviews those reports urgently. In addition, they said that in the rare cases in which it uncovers evidence of an imminent threat of harm, it immediately informs the authorities.

But AI is only part of the process. The technology is not yet at the point where it can understand the nuances of language and context, so people are still in the loop.

Facebook says it has more than 150 people who are “exclusively or primarily focused on the fight against terrorism as their main responsibility.” This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers, according to the blog post.
