A statue is seen in front of the Facebook logo in this picture, March 20, 2018. REUTERS/Dado Ruvic
This week, Facebook announced the deployment of a large-scale machine learning system called Rosetta, which is used to automatically and proactively identify “inappropriate or harmful content” in images on the social network. In other words, Facebook has developed an AI that can tell if a meme is offensive.
The social media giant is not planning to use this technology just for memes, of course. In a blog post, Facebook describes how the algorithm can detect text in a picture of a store window, a street sign, or a restaurant menu.
With recent appearances on Capitol Hill, the news that 26% of Americans have deleted the app from their phones, and the fake-news concerns that have nipped at Facebook’s heels since the election, curbing offensive content will certainly be a priority.
Rosetta works in a two-step process: first detecting regions of an image that may contain text, then recognizing what that text actually says. The model is not limited to English; Facebook says it supports a variety of languages and encodings, including Arabic and Hindi, which means the system can also read right to left.
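To make the two-step structure concrete, here is a minimal sketch of such a detect-then-recognize pipeline. Both stages are stubbed out with toy stand-ins operating on a plain dictionary (the function names, `Region` type, and toy "image" format are all illustrative, not Rosetta's actual API); the point is only the control flow: detection proposes candidate boxes, recognition transcribes each one.

```python
# Sketch of a two-step OCR pipeline in the style Facebook describes:
# step 1 proposes regions likely to contain text, step 2 reads each region.
# Both models are replaced here with deterministic toy stand-ins.

from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

def detect_text_regions(image):
    """Step 1: return candidate bounding boxes that may contain text
    (stand-in for a learned text detector)."""
    return [Region(**box) for box in image.get("text_boxes", [])]

def recognize_text(image, region):
    """Step 2: transcribe the text inside one region
    (stand-in for a learned text recognizer)."""
    return image["transcripts"].get((region.x, region.y), "")

def extract_text(image):
    """Run detection, then recognition, over the whole image."""
    return [recognize_text(image, r) for r in detect_text_regions(image)]

# Toy "image": a meme with two text regions.
meme = {
    "text_boxes": [{"x": 0, "y": 0, "w": 100, "h": 20},
                   {"x": 0, "y": 80, "w": 100, "h": 20}],
    "transcripts": {(0, 0): "TOP TEXT", (0, 80): "BOTTOM TEXT"},
}
print(extract_text(meme))  # → ['TOP TEXT', 'BOTTOM TEXT']
```

The split matters because the two stages can be trained and scaled independently; downstream systems (search, hate-speech classifiers) consume only the final transcribed strings.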
Rosetta is already used by teams at Facebook and Instagram to improve the quality of image search, increase the accuracy of photos shown in News Feed, and identify hate speech.
Facebook has struggled in the past to adequately identify hate speech and misleading information: according to internal documents, Facebook’s own training material erroneously described an image of the 2010 earthquake in Jiegu, China, as a photo of the Myanmar genocide.
Using artificial intelligence to assess the severity of speech has already run into trouble. It was recently discovered that Google’s Perspective AI, which is used to detect toxic comments, can easily be fooled by typos, spaces between words, and unrelated words added to the original sentence.
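The perturbations reported against Perspective are trivial to produce. The sketch below generates the three kinds mentioned above; it is illustrative only (no real classifier is called, and the helper names are made up), but it shows why token-matching models are fragile: each variant stays readable to a human while no longer matching the original tokens.

```python
# Three simple text perturbations of the kind reported to fool
# toxicity classifiers: spacing out letters, introducing typos,
# and padding with unrelated benign words. Illustrative only.

def add_spaces(text: str) -> str:
    """Break every word apart with spaces: 'idiot' -> 'i d i o t'."""
    return " ".join(" ".join(word) for word in text.split())

def introduce_typos(text: str) -> str:
    """Swap the two middle characters of each word longer than 3 letters."""
    def swap(word: str) -> str:
        if len(word) <= 3:
            return word
        m = len(word) // 2
        return word[:m - 1] + word[m] + word[m - 1] + word[m + 1:]
    return " ".join(swap(w) for w in text.split())

def pad_with_benign_words(text: str) -> str:
    """Append unrelated, innocuous words to dilute the toxic signal."""
    return text + " flowers sunshine kittens"

original = "you are an idiot"
print(add_spaces(original))           # → y o u a r e a n i d i o t
print(introduce_typos(original))      # → you are an iidot
print(pad_with_benign_words(original))
```

Defending against such inputs typically requires character-level modeling or adversarial training rather than exact word matching, which is part of why image-based text like memes is an even harder target.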
Suffice it to say, with approximately 350 million photos uploaded to the social network every day, Rosetta and Facebook face an uphill battle.
This article originally appeared on PCMag.com.