File photo – A robot from the film on display at the premiere of “Terminator 3: Rise of the Machines.” (Mike Blake/Reuters)
“Killer robots” are poised to take over. Also known as autonomous weapons, these devices, once activated, can destroy targets without human intervention.
The technology has existed for years. In 1959, the US Navy began work on the Phalanx Close-In Weapon System, an autonomous defense system that can recognize and attack anti-ship missiles, helicopters and similar threats. In 2014, Russia announced that robots would guard five ballistic missile installations. That same year, Israel deployed the Harpy, an autonomous weapon that can loiter in the air for nine hours to identify and pick off enemy targets from vast distances. By 2017, China had its own Harpy-type weapon.
But with US plans to launch drones based on the X-47B in 2023, the robot invasion is reaching a new level. This stealthy, jet-powered, autonomous aircraft can refuel in midair and penetrate deep into well-defended territory to gather intelligence and strike enemy targets, making it a more aggressive and deadly tool than anything we have seen before.
Is it ethical to deploy “killer robots”? The International Human Rights Clinic at Harvard Law School says no, arguing that artificially intelligent weapons fail to meet the “principles of humanity” and “dictates of public conscience” enshrined in the Geneva Conventions.
At the height of the resistance against killer robots, the US Department of Defense issued Directive 3000.09, which requires that weapons be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” The word “appropriate” requires a human operator to be “in the loop” (that is, controlling the weapon) or “on the loop” (that is, supervising the weapon), retaining the last word in the taking of human life. As a result, the Navy currently operates the X-47B prototype only in a semiautonomous mode that keeps a human operator involved.
The pace of warfare is accelerating exponentially, driven by the increasing use of computer technology. If the arms race continues, the potential for unintentional conflict grows. During the Cold War, the US and Russia came dangerously close to nuclear war on a number of occasions. Only human judgment averted all-out Armageddon.
So, where does this leave us?
As I describe in my book “Genius Weapons,” there are only three ways to ensure killer robots are kept in check:
Focus autonomous weapons on defense, not offense. In a defensive role, autonomous weapon systems have the potential to lower the chance of conflict. For example, if the United States deployed autonomous weapons that could destroy missiles aimed at the US and its allies, a potential adversary would judge such an attack as senseless and avoid conflict.
Focus on semiautonomous weapons, as current US policy does. A semiautonomous weapon that keeps a human either in the loop or on the loop injects human judgment and provides some assurance that the weapon can distinguish combatants from noncombatants. It also preserves accountability, in line with international humanitarian law.
Limit which weapons we give autonomy. It would be reckless to make weapons of mass destruction autonomous. If the countries of the world automate their nuclear-tipped missiles, a single wrong line of computer code could ignite World War III. It is crucially important that countries such as the US and Russia, which have the ability to destroy the Earth, follow this policy. Fortunately, North Korea’s nuclear capacity is nowhere near that of the US and Russia, or even China, at this time.
Because current AI technology cannot replicate human judgment, all countries need to take these three measures, now and for good. Anything less threatens the survival of humanity.
Louis A. Del Monte is the author of “Genius Weapons: Artificial Intelligence, Autonomous Weapons, and the Future of Warfare” (Prometheus Books), out now.
This story was previously published in the New York Post.