LONDON (Reuters) – The success of viral memes, videos and photographs in spreading misinformation online is fueling organized social media manipulation on Instagram and YouTube, researchers at the University of Oxford said on Thursday.
FILE PHOTO: A silhouette of a mobile device user is seen next to a screen projection of the YouTube logo in this picture illustration, March 28, 2018. REUTERS/Dado Ruvic/Illustration/File Photo
In a report on misinformation trends, the Oxford Internet Institute's Computational Propaganda research project said Facebook remained the most popular platform for social media manipulation because of its scale and global reach.
However, a focus on visual content, which is more likely to be shared online, means users of Google's YouTube video platform and Facebook's Instagram photo-sharing site are increasingly being targeted with false or misleading news, said Samantha Bradshaw, one of the report's co-authors.
"On Instagram and YouTube it's about the evolving nature of fake news: there are now fewer text-based websites and articles, and more visual content that is quick to consume," she said. "Memes and videos are easy to consume in an attention-short environment."
The report's findings highlight the challenges facing Facebook, Google and other social media companies in battling the spread of politically and financially motivated misinformation as tactics and technologies develop and change.
A Facebook spokesman said ensuring users had access to accurate information was a top priority for the company.
“We need smarter tools, more transparency and better partnerships in order to better identify emerging threats, to stop the bad actors, and to reduce the spread of false information on Facebook, Instagram, and WhatsApp,” the spokesman said.
YouTube said it had invested in policies, resources and products to tackle misleading information on its site, and routinely removed content that violated its terms of use. A spokesman declined to comment on the University of Oxford's findings.
Bradshaw said the shift toward targeting internet users with visual content would make it harder for social media platforms to identify and stamp out manipulated activity.
Facebook and YouTube both came under close scrutiny over their ability to monitor and police visual content after a mass shooting in New Zealand in March.
In that incident, a gunman live-streamed the killing of 51 people on Facebook, and internet users repeatedly shared and re-uploaded the video across multiple social media platforms.
"It is easier to automatically analyze words than it is an image," Bradshaw said. "And images often speak louder than words, with more potential to go viral."
The Oxford University report also pointed to growing awareness of social media manipulation, noting that such activity had been identified in more than 70 countries around the world, up from 28 in 2017.
"Computational propaganda has become a normal part of the digital public sphere," the report said. "These techniques will also continue to evolve as new technologies ... are poised to fundamentally reshape society and politics."
Reporting by Jack Stubbs; Editing by Alexandra Hudson