Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Monday, March 2, 2009
Extended Review of Moral Machines by Peter Danielson
Peter Danielson has extended his comments on Moral Machines for a review in the Notre Dame Philosophical Reviews.
Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2008, 275pp., $29.95 (hbk), ISBN 9780195374049.
Reviewed by Peter Danielson, University of British Columbia
This is a book in ethics and cognitive science, broadly conceived, on the philosophical and engineering problems of constructing artificial moral agents (AMAs). It is written for a general audience, with minimal endnotes (but a full bibliography) and helpful introductory discussions of the ethical, philosophical, and engineering issues. In contrast to Joy's (2000) prediction of robotic disasters and Moravec's (1988) dream of machine transcendence, Wallach and Allen argue that AMAs are possible and desirable, and cover the main lines of current research. Their general framework is excellent. First, they focus on what they call "functional morality": "Moral agents monitor and regulate their behavior in light of the harms their actions may cause or the duties they may neglect" (p. 16). This does not set the criterion so high (full conscious moral agency) as to exclude the possibility of artificial moral agents. Second, they divide recent research into top-down approaches, which attempt to program ethical theories directly into robots, bottom-up approaches that use artificial evolution or machine learning, and hybrids of the two. Third, they stress the importance of research on the role of emotions in ethical decision-making and research that implements human cognitive architecture in robots.
Wallach and Allen admit that it is early to start a discussion of AMAs, so I will focus, in this review, on the framework that they set out. Frameworks are important, especially so far upstream in the development of a field, because their salience can exert powerful influence on later work. Isaac Asimov's three laws of robotics, designed to generate stories, not moral decisions, are a good example.
Once people understand that machine ethics has to do with how intelligent machines, rather than human beings, should behave, they often maintain that Isaac Asimov has already given us an ideal set of rules for such machines. They have in mind Asimov's 'three laws of robotics' (Anderson, 2008, p. 477).
Philosophy and Engineering
"Our goal is to frame discussion in a way that constructively guides the engineering task of designing AMAs" (p. 6). To this end, Wallach and Allen pose three questions: "Does the world need AMAs? Do people want computers making moral decisions? . . . [H]ow should engineers and philosophers proceed to design AMAs?" (p. 9). Most of the book is devoted to this third question. If we look into the first of the substantive chapters on it ("Top-down Morality"), we see that their approach owes more to philosophy than engineering. The chapter argues that the ethical theories, utilitarianism and Kantian deontology, cannot be implemented computationally. "[W]e found that top-down ethical theorizing is computationally unworkable for real-time decisions. . . . [T]he prospect of reducing ethics to a logically consistent principle or set of laws is suspect, given the complex intuitions people have about right and wrong" (p. 215).
Contrast an engineering approach to top-down AMA design. Where Wallach and Allen (p. 14) see the trolley cases as showing "the complexity of human responses to ethical questions", Pereira & Saptawijaya (2007, p. 103) use Hauser's (2006) and Mikhail's (2007) trolley thought experiments to ground their moral goal in "judgments . . . widely shared among [demographically] diverse populations." Second, they find these judgments "to be consistent with the so-called . . . principle of double-effect" (Pereira & Saptawijaya, 2007, p. 104). They then implement this principle using logic programming extended to support forward-looking agents. This promising engineering approach undercuts Wallach and Allen's doubts about top-down design for AMAs. Indeed, the more concrete rules that Wallach and Allen consider in their chapter are Asimov's laws, which lack the ethical rationale of Pereira and Saptawijaya's empirically based principle. Moreover, top-down approaches have the ethical advantage that they are easier for us to understand. Pereira and Saptawijaya's logic programming provides rationales for the AMAs' choices that humans can understand.
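To give a concrete flavor of what such a top-down filter involves, here is a minimal sketch of a double-effect test over candidate actions. It is written in Python rather than the prospective logic programming Pereira and Saptawijaya actually use, and the action representation and predicate names are illustrative assumptions of mine, not theirs.

    # Illustrative sketch only: a crude doctrine-of-double-effect filter over
    # candidate actions, each described by its intended and foreseen effects.
    # This is NOT Pereira & Saptawijaya's encoding; the representation is assumed.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        goal_good: bool            # the intended end is itself good (or neutral)
        harm_is_means: bool        # harm is used as the means to the good end
        intended_harms: int = 0    # harms the agent aims at directly
        side_effect_harms: int = 0 # harms merely foreseen, not intended
        lives_saved: int = 0

    def permitted_by_double_effect(a: Action) -> bool:
        """The end must be good, harm must not be intended or used as a means,
        and the good achieved must outweigh the foreseen harm."""
        return (a.goal_good
                and a.intended_harms == 0
                and not a.harm_is_means
                and a.lives_saved > a.side_effect_harms)

    # Classic trolley contrast: diverting the trolley (harm as side effect)
    # versus pushing the large man (harm as means to stopping the trolley).
    divert = Action("divert trolley", goal_good=True, harm_is_means=False,
                    side_effect_harms=1, lives_saved=5)
    push = Action("push man off bridge", goal_good=True, harm_is_means=True,
                  side_effect_harms=1, lives_saved=5)

    print(permitted_by_double_effect(divert))  # True
    print(permitted_by_double_effect(push))    # False

Crude as it is, such a sketch shows why the top-down route can be attractive to engineers: the principle, once fixed, yields explicit rationales that a human can inspect.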
Turning to ethics, I have three criticisms of Wallach and Allen's framework. First, their focus on the inclusive category of (ro)bots distorts the field and the moral urgency of the robot ethics project. Second, their definitional stress on autonomy is tendentious and distracts us from alternative approaches to the moral problems of powerful technology. Third, their focus on the model of human morality seems under-motivated.
Scope
To answer the first question, "Does the world need AMAs?", Wallach and Allen use the example of the U.S. power blackout in 2003, where "software agents and control systems at . . . power plants activated shutdown procedures, leaving almost no time for human intervention" (p. 18). This example may surprise: large scale networked power plants and their distribution system are a long way from service robots. To cover this gap, the term 'robots' of the book's title gets expanded to "(ro)bots -- a term we'll use to encompass both physical robots and software agents" (p. 3).
This broadened scope can be misleading in several ways. First, in terms of scale, it is very difficult to focus responsibility on large distributed systems. Robots (in the ordinary sense) are at the local end of this scale, supporting clearer moral judgments about responsibility. Second, the expanded definition includes "data-mining bots that roam the Web" (p. 19) to collect personal information. Yet much of the moral urgency of introducing AMAs stems from real-time criticality, driven by hardware-based engineering systems like the electrical power grid. By treating these very different kinds of artifact together as (ro)bots, the former inherits the time-critical status of the latter. This seems misleading. Time-criticality gives companies that deploy data-mining bots no excuse to avoid (perhaps costly) human oversight. Indeed, having looked into the ethics of the commercial data-mining industry, which routinely takes a call to a 1-800 help number as consent to use the resulting personal information, I would not accept assurances that their software was now protecting my privacy through built-in moral competence (Danielson, 2009).
Conversely, non-physical bots inhabit a simpler environment than the hardware robot's physical world. This is, after all, the appeal of using simulation to ease the design of real robots and AMAs. But some of the book's arguments assume the physical world as the target environment: "The decision-making processes of an agent whose moral capacities have been evolved in a virtual environment are not necessarily going to work well in the physical world" (p. 104). For example, we should be able to build bots that respect privacy more easily than we can build robots that do, because we can log access to and tag data records more readily than real stuff. Consider how simple the network Robot Exclusion Protocol is (Koster, 1994).
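To see just how simple: the protocol amounts to a plain-text file of User-agent and Disallow lines that a well-behaved bot consults before fetching a page, and Python's standard library ships a parser for it. A minimal sketch, with made-up rules and URLs:

    # Minimal illustration of the Robot Exclusion Protocol (robots.txt).
    # The rules and URLs below are invented for the example.
    from urllib.robotparser import RobotFileParser

    robots_txt = """
    User-agent: *
    Disallow: /private/
    Disallow: /records/
    """

    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())

    print(rp.can_fetch("*", "http://example.com/public/index.html"))  # True
    print(rp.can_fetch("*", "http://example.com/records/patient42"))  # False

A compliant bot simply declines to fetch what the file disallows; nothing remotely as simple is available to a physical robot navigating the stuff of the world.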
Autonomy
While we have seen that AMAs include more than one might have thought, the class is also narrower than one might expect. In particular, AMAs are autonomous; Wallach and Allen consider "independence from direct human oversight" to be crucial (p. 3). From an ethical point of view, this is not so clear. Stressing autonomy may lead to the neglect of alternative non-autonomous strategies for dealing with potentially harmful interactions with robots. Wallach and Allen note that "engineers often think that if a (ro)bot encounters a difficult situation, it should just stop and wait for a human to resolve the problem" (p. 15). But they don't follow this line of engineering thought; instead they ask, "Should a good autonomous agent alert a human overseer if it cannot take action without causing some harm to humans? (if so, is it sufficiently autonomous?)" (p. 16). But we should separate two questions: Sufficiently autonomous to do the needed job well? Or to meet the definition of AMA? The leading role of autonomy in ethical accounts seems to weigh in favor of autonomous solutions.
To address the ethical question of whether robots need to be autonomous to do their job well, we would need to consider the alternatives to autonomy. They include:
· Segregation from most humans (as is the case for factory and mining robots). Indeed, to consider a commonplace example, there are no trolley problems for the "robot" trains we call elevators because their enclosed and vertical tracks physically exclude the problem.
· Human intervention (involving more than stopping and waiting for a human). Typically ownership and licensing link to a responsible person through a technology of secure sensing and control.
While remote-controlled devices and robotic extensions raise fewer new -- and therefore philosophically interesting -- ethical issues, they do pose serious ethical questions. For example, introducing trash-collecting robotic arms in Los Angeles put a powerful extension of the driver on the side of his or her large truck and at the same time eliminated the other member of the trash collection team who might have overseen its use. Finally, if the term "robot" seems to require autonomy, consider the counter-example of robotic surgery: "Modern-day surgical robots are a form of computer-assisted surgery using a 'master-slave relationship' in which the surgeon is able to control the actions of the robot in real time, using the robot to improve upon his/her vision, dexterity and overall surgical precision" (Patel & Notarmarco, 2007, p. 2). Of course, it is close remote control that allows these robots to work in the ethically highly constrained field of human medicine. Robot surgery raises important moral problems of its own; it improves outcomes but at a high cost (Picard, 2009), stress-testing our standards of human-provided care.
Lethal military robots may be an especially significant case for autonomy. Sparrow (2007) makes a good case that the stated plans to have small groups of soldiers invoke large numbers of robots would make human oversight impossible. Thus these planned lethal robots would need to be autonomous moral agents. Sparrow is highly skeptical that AMAs will be achieved in time for this deployment, so he calls for a ban on this technology.
The Model of Human Ethics
Wallach and Allen advocate that AMAs implement morality by explicitly following what we know about human ethics. They distinguish their approach from one that might have greater appeal to engineers in their discussion of Arkin's (2007) "hybrid deliberative/reactive robot architecture." "Like many projects in AI, Arkin's architecture owes little to what is known of human cognitive architecture. There's nothing wrong with this as an engineering approach to getting a job done. However, our focus here will be an alternative approach" (p. 172). They focus on implementing a general model of human cognition and emotion.
While it is true that the ethics we know the most about is human ethics, we also know very little about how to construct it. Consider the dispute about the roles of biological versus social evolution in human morality, a dispute about the basic mechanisms responsible (Mesoudi & Danielson, 2008). More specifically, I suggest that robotics provides an opportunity to research and construct explicitly non-human forms of AMAs, an opportunity that is important for several reasons.
First, the human model of ethics is a general-purpose one, while a morality specific to special-purpose robots may be more tractable. We have already seen this in the case of trolley robots above; Arkin's research provides a more developed example for lethal military robots. Arkin can bypass general ethical theories (and their controversies) by focusing on the agreed international rules of war.
Second, human ethics is fundamentally egalitarian, stressing the equality of all moral agents. But, again as Arkin points out, robots are expendable; they are different from humans. In the case of lethal force, a robot cannot appeal, as a human soldier can, to a right of self-defense that might balance threatening or killing a non-combatant. Less drastically, we are all familiar with moral relations with lesser moral agents, like dogs, that can be trusted to follow some rules under some temptations, but remain the responsibility of their owners. I suggest that our experience with lesser moral agents will be more useful for the foreseeable future than the full human model of ethics.
Third, the model of human ethics appears to provide a misleadingly fixed goal. Wallach and Allen close their helpful discussion of ways an AMA could be held responsible with:
We wish to emphasize once more, however, that while these post hoc questions about moral accountability are important, they do not provide obvious solutions to the primary technological challenge of building AMAs that have the capacity to assess the effect of their actions on sentient beings, and to use those assessments to make appropriate decisions (p. 204).
On the contrary, I suggest that the institutions constructing responsibility can decisively frame the engineering goal. Consider the so-called "black box" data recorder found in airplanes and increasingly in trucks, but in few private autos (Danielson, 2006). Given that automobile accidents are the leading cause of accidental death, this may surprise. The difference is explained by the difference in ownership and responsibility. Trucks are typically owned by a fleet operator, whose interests may differ from the truck drivers'. Cars are typically privately owned -- rental cars are an exception that supports my point -- and the owner/driver has little interest in a source of evidence against him in a legal action. So one can't simply conclude that building a sturdy, accurate data recorder is the engineering goal. One needs to design a device that fits with the allocation of responsibility in the technological field.
Conclusion
Moral Machines is a fine introduction to the emerging field of robot ethics. There is much here that will interest ethicists, philosophers, cognitive scientists, and roboticists.
References
Anderson, S. (2008). Asimov's "three laws of robotics" and machine metaethics. AI & Society, 22(4), 477-493.
Arkin, R. C. (2007). Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Technical Report GIT-GVU-07-11. http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf
Danielson, P. (2006). Monitoring Technology. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of Digital Government. Idea Group.
Danielson, P. (2009). Metaphors and Models for Data Mining Ethics. In E. Eyob (Ed.), Social Implications of Data Mining and Information Privacy: Interdisciplinary Frameworks and Solutions (pp. 33-47). Hershey, PA: IGI Global.
Hauser, M. (2006). Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: HarperCollins.
Joy, B. (2000). Why the future doesn't need us. Wired, 8(4).
Koster, M. (1994). A Standard for Robot Exclusion. Retrieved Jan 14, 2009, from http://www.robotstxt.org/orig.html.
Mesoudi, A., & Danielson, P. (2008). Ethics, Evolution and Culture. Theory in Biosciences, 127(3), 229-240.
Mikhail, J. (2007). Universal moral grammar: theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143-152.
Moravec, H. (1988). Mind Children. Cambridge, Mass.: Harvard University Press.
Patel, V., & Notarmarco, C. (2007). Journal of Robotic Surgery: introducing the new publication. Journal of Robotic Surgery, 1(1), 1-2.
Pereira, L., & Saptawijaya, A. (2007). Modelling Morality with Prospective Logic. Progress in Artificial Intelligence, 99-111.
Picard, A. (2009). For prostate removal, robots rule. Globe and Mail, p. 4.
Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62-77.