The Air Force's Predators and Reapers are not drones. Indeed, they are not even "unmanned vehicles." Instead they are remotely piloted vehicles, with as much (if not more) human control as an F-16 or F-15. Human beings are still very much in the "kill chain." The technology to allow completely autonomous weapon systems, however, does exist, and there are serious moral and legal issues that need to be considered before we think about adopting such systems. Paul Robinson of the University of Ottawa describes some of these issues:
Underlying the debate about “killer robots” is concern that machines are not, and cannot be, legally accountable for their actions. As professor Oren Gross of the University of Miami School of Law told this year’s inaugural “We Robot” conference on robots and the law in April, domestic and international law are not well suited to dealing with robots that commit war crimes.
As technology advances, we face a very real danger that it will become increasingly difficult to hold those who wage war on our behalf accountable for what they do. Artificial intelligence is not the only technology to pose such accountability problems. Others, such as bio-enhancement, pose them too, albeit of a different sort.
Machines entirely capable of replacing humans are not yet on the market, but robotic systems capable of using lethal force without a human in the loop do already exist. The U.S. Navy’s Aegis Combat System, which can autonomously track enemy aircraft and guide weapons onto them, is an example. But if a robot system goes “rogue” and commits what for a human would be a crime, there would not be much point in arresting the machine. Our gut instinct is that somebody should be held accountable, but it is difficult to see who. When the USS Vincennes shot down an Iranian airliner in 1988, killing 290 civilians, there were real people whose behavior one could investigate. If the Aegis system on the Vincennes had made the decision to shoot all by itself, assigning responsibility would have been much harder. When a robot decides, clear lines of responsibility are absent.
For obvious reasons, human beings do not like this. In an experiment conducted by the Human Interaction With Nature and Technological Systems (HINTS) Lab at the University of Washington, a robot named Robovie lied to students and cheated them out of a $20 reward. Sixty percent of the victims could not help feeling that Robovie was morally responsible for deceiving them. Commenting on the future use of robots in war, the HINTS experimenters noted in their final report that a military robot will probably be perceived by most “as partly, in some way, morally accountable for the harm it causes. This psychology will have to be factored into ongoing philosophical debate about robot ethics, jurisprudence, and the Laws of Armed Conflict.” Quite how this could be done is unclear.