Will Facebook, YouTube and Twitter police their content?
Kurt ‘the CyberGuy’ comments on new services and apps that do.
YouTube, which has suffered a rash of recent controversies over content that is violent, offensive to children, or part of a “fake news” scam, can’t even seem to protect a fellow tech company advertising on its platform.
A recent screenshot from the Google-owned platform shows an ad for Instagram appearing next to a video of an Islamic cleric who was expelled from Israel in 1993 over his alleged membership in Hamas.
The video is titled “The Lessons of Sheikh Bassam,” and the Instagram ad appears along the right side of the video. Hamas is considered a terrorist organization by the U.S., Israel and the European Union.
YouTube, which has faced intense scrutiny from Capitol Hill lawmakers, now says its machine learning technology allows it to take down nearly 70 percent of violent extremist content within eight hours of upload.
Still, the fact that it can’t even protect another tech firm from exposure to extremist content is a bad sign, according to experts.
YouTube’s systems displayed Instagram ads alongside the extremist content.
(YouTube/Courtesy Eric Feinberg)
GERMAN COURT RULES FACEBOOK’S REAL NAME POLICY IS ILLEGAL
“YouTube can’t even protect a Facebook company, Instagram, from appearing next to extremist content,” says Eric Feinberg, a founder of deep web analysis company GIPEC. “If Google cannot protect Facebook, how can a brand expect its ads to be protected from appearing alongside extremist content?”
The video-sharing platform is also under pressure abroad. A draft European Commission document published on Tuesday calls for companies to remove posts promoting terrorism within one hour of receiving complaints.
Platforms need to make “quick decisions regarding possible actions with regard to illegal content online without the need to do this on the basis of an order of a court or of an administrative decision,” the draft said.
Apart from the issue of terrorist content, YouTube is also part of a European Commission-led agreement to take down hateful posts within 24 hours of being notified.
UNILEVER THREATENS FACEBOOK, GOOGLE, ONLINE ADVERTISING CUTS
The tech giant has also cracked down on what it calls “borderline videos” — content that espouses hateful or racist views but does not technically violate the site’s community guidelines against direct calls for violence. Such videos are now harder to find: they are not recommended, cannot be monetized, and lose features such as comments, suggested videos and likes.
More than 400 hours of content are uploaded to the platform every minute, and more than 1.5 billion users log in, according to YouTube CEO Susan Wojcicki.
In addition to using computers, YouTube relies on human experts to help flag problematic videos. It has added 15 non-governmental organizations, including the Anti-Defamation League, the No Hate Speech Movement and the Institute for Strategic Dialogue.
YouTube star Logan Paul prompted a backlash and was temporarily suspended from the site after posting a video from Japan showing the body of an apparent suicide victim. More recently, he came under fire for uploading a video in which he used a stun gun on rats. The company also recently banned viral Tide Pod challenge videos after numerous complaints and instances of young people eating the detergent.
Fox News’ Chris Ciaccia contributed to this report.
Christopher Carbone is a reporter for FoxNews.com. Follow him on Twitter @christocarbone.