Dealing with toxic Twitter
Women on Twitter are sent abusive or problematic content every 30 seconds, according to a new study by Amnesty International.
The human rights group, together with global artificial intelligence software company Element AI, surveyed millions of tweets received by 778 female journalists and politicians from the United States and the United Kingdom in 2017. Using machine learning, the study found that abuse is widespread and that black women are targeted the most.
A total of 7.1 percent of the tweets sent to the women in the study were deemed “problematic” or “abusive.” Although the social platform has its own definition of abusive content, the “problematic” label was defined by Amnesty International as content that is “offensive or hostile, especially if repeated to an individual on multiple or cumulative occasions.”
“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be affected, and black women are disproportionately targeted,” Milena Marin, senior advisor for tactical research at Amnesty International, said in a blog post.
The study included politicians from across the political spectrum and journalists from a range of publications, including The New York Times, The Guardian, The Sun, Pink News and Breitbart. More than 6,500 volunteers helped sort through the tweets, in what Amnesty calls the world’s largest crowdsourced dataset on online abuse against women.
Among the “Troll Patrol” volunteers’ other findings:
– Black women were 84 percent more likely to be sent offensive tweets than white women.
– Women of color, including Asian, Latinx, black and mixed-race women, were 34 percent more likely to be mentioned in abusive or problematic tweets than white women.
– Online abuse cuts across the political spectrum: liberal and conservative women, as well as women writing for liberal and conservative publications, faced similar levels of abuse.
“Troll Patrol isn’t about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings of Troll Patrol will compel it to change. Crucially, Twitter should start being transparent about how exactly it is using machine learning to detect abuse, and publish technical information about the algorithms it relies on,” Marin said in a statement.
Twitter CEO Jack Dorsey has said the company is open to a range of changes to the platform’s structure.
Marin added: “We have the data to back up what women have long been telling us: that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked.”
Vijaya Gadde, Twitter’s legal, policy and trust & safety global lead, told Fox News in a statement:
“I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion. It is unclear how you have defined or categorized such content, or whether you are suggesting it should be removed from Twitter. We work hard to build globally enforceable rules and have begun consulting the public as part of the process, a new approach within the industry.”
A source familiar with the company’s thinking on the matter told Fox News that Twitter has made dozens of changes over the last two years to improve the safety of the platform. According to its latest biannual transparency report, Twitter received reports on more than 2.8 million unique accounts for abuse, almost 2.7 million accounts for “hateful” conduct, and 1.35 million accounts for violent threats. Of those, the company took action, such as suspending the account, on about 250,000 accounts for abuse, 285,000 for hateful conduct and just over 42,000 for violent threats.
In addition, Twitter has said in the past that it is investing in better technology to proactively identify abusive content or behavior and limit its spread.