When people see machines that react like humans, or computers that perform feats of strategy and cognition mimicking human ingenuity, they sometimes joke about a future in which humanity will need to accept its robot overlords.
But buried in the joke is a seed of unease. Science-fiction writing and popular films, from "2001: A Space Odyssey" (1968) to "Avengers: Age of Ultron" (2015), have speculated about artificial intelligence (AI) that exceeds the expectations of its creators and escapes their control, eventually outcompeting and enslaving humans or targeting them for extinction.
Conflict between humans and AI is front and center in AMC's sci-fi series "Humans," which returns for its third season on Tuesday (June 5). In the new episodes, conscious synthetic humans face hostile people who treat them with suspicion, fear and hatred. Violence roils as Synths find themselves fighting not only for basic rights but for their very survival, against those who view them as less than human and as a dangerous threat. [Can Machines Be Creative? Meet 9 AI 'Artists']
In the real world, too, not everyone is ready to welcome AI with open arms. In recent years, as computer scientists have pushed the boundaries of what AI can accomplish, leading figures in technology and science have warned of the looming dangers that artificial intelligence may pose to humanity, even suggesting that AI capabilities could doom the human race.
But why are people so unnerved by the idea of AI?
An “existential threat”
Elon Musk is one of the prominent voices that has raised red flags about AI. In July 2017, Musk told attendees at a National Governors Association meeting, "I have exposure to the very cutting-edge AI, and I think people should be really concerned about it."
"I keep sounding the alarm bell," Musk added. "But until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."
Earlier, in 2014, Musk had labeled AI "our biggest existential threat," and in August 2017, he declared that humanity faces a greater risk from AI than from North Korea.
Physicist Stephen Hawking, who died March 14, also expressed concerns about malevolent AI, telling the BBC in 2014 that "the development of full artificial intelligence could spell the end of the human race."
It is also less than reassuring that some programmers, particularly those at the MIT Media Lab in Cambridge, Massachusetts, seem determined to prove that AI can be terrifying.
A neural network called "Nightmare Machine," introduced by MIT computer scientists in 2016, transformed ordinary photos into demonic, disturbing hellscapes. An AI that the MIT group dubbed "Shelley" composed scary stories, trained on 140,000 horror stories that Reddit users posted in the forum r/nosleep.
"We are interested in how AI induces emotions — fear, in this case," Manuel Cebrian, a research manager at the MIT Media Lab, previously told Live Science in an email about Shelley's scary stories.
Fear and loathing
Negative feelings about AI can generally be divided into two categories: the idea that AI will become conscious and seek to destroy us, and the idea that immoral people will use AI for evil purposes, Kilian Weinberger, an associate professor in the Department of Computer Science at Cornell University, told Live Science. [Artificial Intelligence: Friendly or Frightening?]
"If super-intelligent AI, more intelligent than we are, became conscious, it could treat us as lower beings, the way we treat monkeys," he said. "That would certainly be undesirable."
However, fears of AI developing consciousness and overthrowing the human race are based on misconceptions about what AI is, Weinberger noted. AI operates under very specific limitations defined by the algorithms that dictate its behavior. Some types of problems map well to AI's skill sets, making certain tasks relatively easy for AI to solve. "But most stuff does not map to that, and they're not applicable," he said.
This means that, while AI might be capable of impressive feats within carefully delineated boundaries (playing master-level chess or rapidly identifying objects in images, for example), that is where its skills end.
"AI reaching consciousness: there's been absolutely no progress in research in that area," Weinberger said. "I don't think that's anywhere in our near future."
The other worrisome idea, that an unscrupulous human would harness AI for malicious purposes, is unfortunately much more likely, Weinberger added. Virtually any type of machine or tool can be used for good or bad purposes depending on the user's intent, and the prospect of weapons harnessing artificial intelligence is certainly frightening and would benefit from strict government regulation, Weinberger said.
Perhaps, if people could set aside their fears of hostile AI, they would be more open to recognizing its benefits, Weinberger suggested. Improved image-recognition algorithms, for example, could help dermatologists identify moles that might be cancerous, while self-driving cars could one day reduce the number of deaths from car accidents, many of which are caused by human error, he told Live Science.
But in the "Humans" world of self-aware Synths, fears of conscious AI spark violent clashes between Synths and people, and the struggle between humans and AI will likely continue to unspool and escalate, during the current season, at least.
Editors' note: This is the final feature in a three-part series of articles related to AMC's "Humans." The third season debuted June 5 at 10 p.m. EDT/9 p.m. CDT.
Original article on Live Science.