Monday, April 27, 2009

Responses to "Robots at War" in Wilson Quarterly


The Spring 2009 issue of the Wilson Quarterly (WQ) contains letters responding to P.W. Singer's article "Robots at War: The New Battlefield" (WQ Winter 2009). These include a joint letter from Allen and Wallach, requested by the editor of WQ, as well as letters from David Axe (author of War Bots), William Waddell (Director of the Command and Control Group, Center for Strategic Leadership at the U.S. Army War College), and Professor Alex Roland (Department of History, Duke University).

Colin Allen and Wendell Wallach's letter follows:

P.W. Singer's article "Robots at War: The New Battlefield" (WQ Winter 2009) contributes significantly to a discussion that is long overdue. Should the U.S. and other countries simply slide into the roboticization of warfare, or is this a bad idea? Singer illustrates clearly how the trend toward autonomous fighting machines is driven inexorably by the logic of war. He correctly notes that these developments carry grave ethical risks and that the idea of "keeping humans in the loop" is already an illusion, because pressures are leading to greater autonomy for robots carrying lethal weapons.

In our recent book Moral Machines: Teaching Robots Right From Wrong (OUP 2009), we focus on the prospect of building moral decision-making faculties into autonomous systems, an area that is already being explored by researchers with military funding. After surveying the limitations of existing technology, readers of the book may reasonably conclude that it will be impossible to meet these challenges.

Progress in artificial intelligence is a primary determinant in Singer's scenario of mechanized war. Machines are, he suggests, easier to program for intelligent warfare than human soldiers are to train. But here, too, he and the military may underestimate the challenges involved. If those challenges can be met, however, the very developments in robotics and artificial intelligence that Singer mentions also open new avenues for making fighting machines sensitive to the ethical situations that arise during warfare.

Singer does not mention the possibility of using A.I. to mitigate ethical problems. Nevertheless, his explicit concern for the ethical issues is a significant step in the right direction. By contrast, a recent, comprehensive report on military robotics, the Unmanned Systems Roadmap 2007-2032, does not mention the word 'ethics' even once, nor does it address the risks raised by robotics, except for a single sentence acknowledging that "privacy issues [have been] raised in some quarters".

Can robots be made to respect the difference between right and wrong? Without this ability, autonomous robots are a bad idea not just in military contexts but also in other settings, such as care of the elderly. Overly optimistic assessments of technological capacities could lead to a dangerous reliance on autonomous systems that are not sufficiently sensitive to ethical considerations. Overly pessimistic assessments could stymie the development of some truly useful technologies or induce a fatalistic attitude toward such systems.

National and international mechanisms for distinguishing real dangers from speculative ones are needed. This is easier said than done. It is not clear whether legislatures and international bodies such as the UN have the will to create effective mechanisms for the oversight of military robots.

Colin Allen
Wendell Wallach

