It’s a maxim in journalism that when a headline is a question, the answer is usually no. And so you have to be concerned when you read “Can robots make ethical decisions in battle?”
That said, the scientist quoted knows his stuff.
“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” said Ronald Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the U.S. Army. “That’s the case I make.”
His research also rests on a solid empirical base:
In a report to the army last year, Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built to show no anger or recklessness, Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
Arkin’s report drew on a 2006 survey by the surgeon general of the army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.
Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled the dead were more likely to say they had mistreated civilian noncombatants, the survey said. (The survey can be read by searching for 1117mhatreport at www.globalpolicy.org.)
Of course, the problem is that just because a robot can, in theory, make decisions untouched by fear, anger or self-interest does not necessarily mean those decisions are better. The human element, where the individual brings his or her experience and intuition to bear on a situation, is lost. My concern is that an automated defence system could open fire when there is no need to, simply because it is programmed to take certain actions and cannot ‘read’ the circumstances beyond what its rules describe.
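To make that concern concrete, here is a minimal, purely illustrative sketch of the kind of hard-coded constraint check such a system might use. Everything in it — the rule set, the field names, the thresholds — is my own hypothetical, not Arkin’s actual software, which is far more elaborate and not public in this form.

```python
# Purely illustrative: a toy "constraint check" on a firing decision.
# All rules and field names here are hypothetical, invented for this post.

from dataclasses import dataclass


@dataclass
class Contact:
    classification: str        # e.g. "combatant", "civilian", "unknown"
    near_protected_site: bool  # hospital, school, place of worship, etc.
    is_surrendering: bool


def engagement_permitted(contact: Contact) -> bool:
    """Return True only if every hard-coded constraint is satisfied.

    The point is the limitation raised above: the system can only
    evaluate what its rules encode. Context, intent and intuition that
    these three fields fail to capture are simply invisible to it.
    """
    if contact.classification != "combatant":
        return False  # unknowns and civilians are never engaged
    if contact.near_protected_site:
        return False  # proximity to a protected site blocks fire
    if contact.is_surrendering:
        return False  # a surrendering contact is never engaged
    return True


# An "unknown" contact is refused -- cautious, but also blind to any
# circumstance the rules above do not describe.
print(engagement_permitted(Contact("unknown", False, False)))  # False
```

Whether rules like these are cautious or reckless depends entirely on how well their authors anticipated the battlefield, which is exactly the worry.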
Arkin is fully aware of these challenges, which is reassuring, because it means work is already being done to address them. Ultimately, greater use of technology that could lead to fewer human casualties can only be a good thing.
Ethics aside, wouldn’t you still want one of these things?
http://blog.wired.com/defense/2007/11/video-fix-super.html
Oh hell yes.