Robots can develop prejudices such as racism and sexism all on their own, shocking new research has found.
Artificial intelligence experts ran thousands of simulations on the robots’ “brains” to see how the machines split off into groups and treat “outsiders” differently.
Computer scientists and psychologists from Cardiff University and MIT in the US teamed up to test how the robots recognize each other.
They also tested how the robots copy and learn one another’s behavior.
The study, published in Scientific Reports, showed that the simulated robots grew wary of outsiders and formed their own groups.
The experiment involved a “give and take” system, in which robots could choose which of their peers to donate to.
As the virtual game unfolded, individuals learned new donation strategies by copying other robots, in order to benefit themselves.
It turned out that the robots donated to each other within small groups, shutting out outsiders to improve their own returns.
“By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice develops and the conditions that promote or hinder it,” explains co-author Professor Roger Whitaker of Cardiff University.
“Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others.”
The professor explained that the research also showed how disadvantaging prejudiced groups accidentally led to outsiders forming their own rival groups, resulting in a “fractured population”.
He added: “So widespread prejudice is difficult to reverse.”
The study explains that learning these prejudicial behaviours does not require much cognitive ability.
Instead, it was simply a matter of copying others on the basis of their success in the “give and take” game, which inevitably led to prejudice.
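The mechanism described here, a repeated donation game in which agents copy the strategies of higher-scoring peers, can be sketched in a few lines. The code below is an illustrative toy model, not the researchers’ actual simulation: the function and parameter names are invented for this sketch, and the published model’s reputation and indirect-reciprocity machinery is omitted.

```python
import random

def simulate(n_agents=100, n_groups=4, rounds=200, b=1.0, c=0.3, seed=1):
    """Toy 'give and take' game: each agent belongs to a group and has a
    'tolerance' value, its probability of donating to an out-group member.
    After each round, agents copy the tolerance of a better-scoring agent,
    so payoff differences determine which donation strategies spread."""
    rng = random.Random(seed)
    groups = [i % n_groups for i in range(n_agents)]
    tolerance = [rng.random() for _ in range(n_agents)]  # P(donate to out-group)

    for _ in range(rounds):
        payoff = [0.0] * n_agents
        for donor in range(n_agents):
            recipient = rng.randrange(n_agents)
            if recipient == donor:
                continue
            in_group = groups[donor] == groups[recipient]
            # always donate in-group; donate out-group with prob = tolerance
            if in_group or rng.random() < tolerance[donor]:
                payoff[donor] -= c      # donating costs the donor...
                payoff[recipient] += b  # ...and benefits the recipient

        # social learning: copy a better-scoring agent's strategy (with noise)
        new_tolerance = tolerance[:]
        for i in range(n_agents):
            j = rng.randrange(n_agents)
            if payoff[j] > payoff[i]:
                new_tolerance[i] = min(1.0, max(0.0, tolerance[j] + rng.gauss(0, 0.01)))
        tolerance = new_tolerance

    # mean willingness to donate outside one's own group after the run
    return sum(tolerance) / n_agents

rate = simulate()
print(f"mean out-group donation probability after simulation: {rate:.2f}")
```

Tracking the returned out-group donation rate over many runs is the kind of measurement the researchers describe: no sophisticated reasoning is modelled, only payoff-based imitation.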
According to the scientists behind the project, it is possible that once robots are widespread, they could pick up common human prejudices.
Cardiff University noted that the robots risked displaying prejudices such as racism and sexism.
Professor Whitaker said: “It is not inconceivable that autonomous machines with the ability to identify discrimination and copy others may in the future be susceptible to the prejudicial phenomena that we see in the human population.
“A lot of the AI developments that we see involve autonomy and self-control, which means that the behavior of the devices is also influenced by others around them.”
This isn’t the first warning we’ve heard about robots going rogue.
Back in February, leading futurologist Dr. Ian Pearson told The Sun that robots would eventually treat us as “guinea pigs”.
And in April, Dr. Pearson warned that Earth’s robot population would grow to 9.4 billion within the next 30 years – overtaking humanity by 2048.
“Today, the global robot population is probably around 57 million.
“That will grow quickly in the near future, and by 2048 robots will overtake humans.
“If we allow for the expected acceleration, that could happen as early as 2033.
“By 2028, some of these robots will already begin to feel real emotions and to respond to us emotionally,” he added.
“We have trained [artificial intelligence] to be like us, trained it to feel emotions like us, but it will not be like us. It will be a bit like the aliens from Star Trek – smarter and more calculated in its actions,” he explained.
“It will be insensitive to humans, viewing us as barbaric. So if it decides to carry out its own experiments with viruses it has created, it will treat us as guinea pigs.”
The late Professor Stephen Hawking once said: “I fear that the AI can replace humans completely. When people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that performs better than humans.”
And Tesla, PayPal and SpaceX founder Elon Musk warned that AI poses a “fundamental risk to the existence of civilization.”
This story originally appeared in The Sun.