
Government steps in to regulate social media, but perhaps there is a better way

FILE – In this Thursday, Jan. 4, 2018, file photo, a man logs on to his Facebook page as he works on his computer in a restaurant in Brasilia, Brazil. (AP Photo/Eraldo Peres, File)

(Copyright 2018 The Associated Press. All rights reserved.)

Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work Series, we explore social media moderation, looking at what works and what doesn't, while examining possibilities for improvement.

Social media moderation is often about finding a balance between creating a safe online environment and inhibiting free speech. In many cases, the social media platform itself steps in, whether to protect users, as with Twitter's latest rule revisions, or to keep advertisers, as with YouTube's recent changes after advertisers boycotted the video platform. But in other cases, such as Germany's new hate speech law and potentially similar legislation in the European Union, moderation is mandated by the government.

Earlier in January, Facebook, Twitter, and YouTube testified before a Senate committee about the steps the platforms are taking to keep terrorist propaganda offline. The hearing was not out of the ordinary; the same groups have also testified before Congress about Russian involvement in the 2016 U.S. elections.

So should the government regulate social media platforms, or is there another option? A recent white paper from the New York University Stern Center for Business and Human Rights proposes a different approach based on the group's research: moderation by the social media companies themselves, with limited government interference. The report, Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation, looks specifically at political propaganda and extremism. While the group says social media platforms should not be held legally liable for content, the research suggests the platforms can, and should, do more to regulate it.


The group suggests that, because social media platforms have already made progress in preventing or removing such content, self-moderation is not only possible but preferable to government intervention. Social media platforms previously leaned away from moderating at all, which, unlike a newspaper choosing what news to publish, meant the platforms had no legal obligation for the content they carried. Recent laws aimed at social media are changing that; in Germany, social networks can face up to $60 million in fines if hate speech is not removed within 24 hours.

The report stops short of suggesting that social networks be made liable for the information users share on their platforms, but it proposes a new category, distinct from both traditional news editors and publishers that do not regulate content at all. "This long-standing position rests on the incorrect premise that the platforms either act as fully responsible (and potentially liable) news editors, or take no position at all on objectionable content," the white paper reads. "We are advocating a third way: a new paradigm for how internet platforms govern themselves."

(Chart: Statista/Martin Armstrong)

Politically motivated misinformation is not new, the group points out, as evidenced by the "coffin handbills" issued during Andrew Jackson's 1828 campaign that accused the future president of murder and cannibalism. At one time, misinformation could be countered with what Supreme Court Justice Louis Brandeis once prescribed: more speech. The speed at which information spreads on social media, however, changes that. The top 20 fake news stories on Facebook during the 2016 election drew more engagement than the top 20 stories from major media outlets, according to BuzzFeed News.

"The problem with pushing the government to regulate more aggressively is that it could easily, and would be likely to, lead to an overreaction by companies seeking to avoid whatever penalty was put in place," Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told Digital Trends. "That would interfere with free speech, which is one of the benefits of social media… If the platforms do this work themselves, they can do it more accurately and without government overreach."

The group does not suggest that the government stay out of social media entirely; legislation applying the same rules to social media ads that already apply to political ads on TV and radio, Barrett says, is an example of regulation that would not overreach. But, the paper argues, if social media companies step up their own efforts against politically motivated disinformation and terrorist propaganda, further government involvement would not be necessary.

The white paper calls on social networks to improve their own governance: further refining algorithms, using more "friction" (such as alerts and notifications for suspicious content), expanding human review, adjusting advertising policies, and sharing knowledge with other networks to reach those goals. Finally, the group suggests identifying exactly what the government's role in the process should be.

Barrett acknowledges that these suggestions will not be free for the companies, but he frames the steps as short-term investments for long-term results. Some of these changes are already in motion, such as Facebook CEO Mark Zuckerberg's note that the company's profits would be affected by the safety changes the platform plans to implement, including growing its human review staff to 20,000 this year.

The expansion of Facebook's review staff joins a handful of other changes social media companies have begun since the report. Twitter has started banning hate groups, YouTube is adding more human reviewers and extending its algorithms to more categories, and Zuckerberg has made reducing abuse on Facebook his personal goal for 2018.

"The form of free speech that we are most interested in promoting, and that the First Amendment is aimed at, is speech connected to political affairs, public affairs, and personal expression," said Barrett. "None of those kinds of speech would be affected by an attempt to screen out disguised, false advertising that claims to come from organizations that do not really exist and is actually trying to undermine discussions about elections. There would be some restrictions on fraudulent speech and violent speech, but those are not the types of speech protected by the First Amendment. We can afford to lose that kind of expression to create an environment where free expression is promoted."
