It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this claim appears as an afterthought, as if authors offer it merely to justify the field, but it is more than that. At bottom lies the question of what we must know about ethics in general to build machines that operate within normative parameters. Fuzzy intuitions will not do where engineering specifics and computational clarity are required. So machine ethicists are forced to engage head-on in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in moral decision making, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more to the human story here.
Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans might go about things. But in a very real and salient sense, research in machine morality provides a test bed for the theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.
"Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making"
"Moral Appearances: Emotions, Robots and Human Morality"
"Robot Rights? Toward a Social-Relational Justification of Moral Consideration"
"RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield"
"The Cubical Warrior: The Marionette of Digitized Warfare"
"Robot Caregivers: Harbingers of Expanded Freedom for All"
Yvette Pearson and Jason Borenstein
"Implications and Consequences of Robots with Biological Brains"
"Designing a Machine for Learning and the Ethics of Robotics: the N-Reasons Platform"
Book Reviews of Wallach and Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford, 2009.
- Anthony F. Beavers
- Vincent Wiegel
- Jeff Buechner