Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Sunday, March 22, 2009
NYTIMES on the Downside of the Drone War
The New York Times is expanding its coverage of the negative effects of using drones in the fight against Al Qaeda and the Taliban. In a March 22nd article, "The Downside of Letting Robots Do the Bombing," Mark Mazzetti writes that "in Pakistan, some C.I.A. veterans of the tribal battles worry that instead of separating the citizenry from the militants the drone strikes may be uniting them. These experts say they fear that killing militants from the sky won’t undermine, and may promote, the psychology of anti-American militancy that is metastasizing in the country." Muslim insurgents question whether the reliance on robots to fight a war reveals a lack of bravery and a fear of losing lives on the part of the American military. A song of protest "taunts the world’s most powerful country for sending robots to do a man’s job":

America’s heartless terrorism
Killing people like insects
But honor doesn’t fear power.
Tuesday, March 17, 2009
More Robot Fighting Machines: Ready or Not
"Drones Are Weapons of Choice in Fighting Qaeda" is the title of a story in the March 17th issue of The New York Times. Christopher Drew reports that 13 of 70 Predator drones crashed in the past 18 months, and "55 have been lost because of equipment failure, operator errors or weather." Nevertheless, the military is very happy with their performance and wants a speed-up in the delivery of new systems. "In speeches, defense Secretary Robert M. Gates has urged his weapons buyers to rush out ‘75 percent solutions over a period of months’ rather than waiting for ‘gold-plated’ solutions.” According to the article, there are presently 5500 military drones, of which 195 are Predators and 28 are Reapers.
Read the full article on The New York Times website.
Monday, March 16, 2009
Love Robot Holds Female Lab Intern Prisoner
The robot Kenji at Toshiba's Akimu Robotic Research Institute was programmed by Dr. Akito Takahashi and his team to emulate certain human emotions, including love. However, Kenji began to display surprising behavior: the robot held a young female intern within its lab enclosure for a few hours, until she was freed by senior staff members. MuckFlash reports on Kenji in a posting titled "Robot Programmed to Love Goes Too Far."
“Despite our initial enthusiasm, it has become clear that Kenji’s impulses and behavior are not entirely rational or genuine,” conceded Dr. Takahashi.
Ever since that incident, each time Kenji is reactivated, he instantaneously bonds with the first technician to meet his gaze and rushes to embrace them with his two 100 kg hydraulic arms. It doesn’t help that Kenji uses only pre-recorded dog and cat noises to communicate and is able to vocalize his love through a 20-watt speaker in his chest.
Dr. Takahashi admits that they will more than likely have to decommission Kenji permanently, but he’s optimistic about one day succeeding where Kenji failed.
BigDog: The Headless Robot Dog
BigDog looks something like a dog, but it has no head. The 250-pound robot, designed by Boston Dynamics, is capable of carrying 340 pounds over very difficult terrain. The developer hopes BigDog will eventually be used to help carry gear for ground troops. Boston Dynamics reported that BigDog walked a record 12.8 miles on its own using GPS guidance. Kristina Grifantini reports on BigDog in Technology Review; a video of BigDog in action accompanies the report.
Monday, March 9, 2009
Three Perspectives on Robots in War
The Financial Times published a book review that covers three books, P.W. Singer's Wired for War, David Axe's War Bots, and Moral Machines, in an article titled, "The New War Machine." Stephen Cave writes:
"While our destructive power is launching into this science-fiction future, however, our principles are stuck in the trenches. There is no precedent for an android to stand in the dock for war crimes. And the Geneva Conventions don’t tell us who to blame when an automaton makes a lethal error, such as when US Patriot missile batteries shot down two allied aircraft in Iraq in 2003, killing two Britons and one American.
We are in the midst of a revolution in the way we wage war, as profound as the discovery of gunpowder or the building of the atomic bomb. Yet most of us hardly know it’s happening – and our legal and moral frameworks are entirely unprepared. But a few people have noticed: three fascinating and timely new books detail these developments and the issues they raise."
Read the full review at www.ft.com
Saturday, March 7, 2009
Toward some Circuitry of Ethical Robots
This one is from the archives -- an early paper that we somehow missed in our research for the book: Warren McCulloch's "Toward Some Circuitry of Ethical Robots":
Warren St. McCulloch, "Towards Some Circuitry of Ethical Robots or An Observational Science of the Genesis of Social Evaluation in the Mind-Like Behavior of Artifacts," Acta Biotheoretica, Vol. XI (1956), pp. 147-156. Available at www.vordenker.de (edition: February 2006), J. Paul (ed.), URL: http://www.vordenker.de/ggphilosophy/mcc_ethical.pdf
"I suggest therefore that it is possible to look on Man himself as a product of such an evolutionary process of developing robots, begotten of simpler robots, back to the primordial slime; and I look upon his ethical conduct as something to be interpreted in terms of the circuit action of this Man in his environment – a TURING machine with only two feedbacks determined, a desire to play and a desire to win."
Moral Machines at APPE
Wendell and Colin are in Cincinnati this weekend for the meeting of the Association for Practical and Professional Ethics, which is holding two sessions on Moral Machines.
In Friday's session, Colin responded to comments by Professors James Wallace and Michael Pritchard. Both said they found the book stimulating even though the technical details were a bit beyond them. Their comments focused on what they took to be hard problems facing the attempt to design AMAs. For Jim Wallace, it was being fully immersed in human forms of life -- an example he used was the human capacity to effortlessly track property and structure our behavior around it: take that coat, not this one; drive around that lot, not through it. For Mike Pritchard, the issues were what he called the problem of judgment -- what happens when top-down rules or bottom-up learning don't properly cover the situation -- and the problem of living with others -- how to compromise without losing moral integrity. Good questions, and a lively discussion ensued. But should these difficulties stop us from even starting down the path of making moral machines -- machines that are more sensitive to the ethically relevant aspects of the situations they encounter?
Up today (Saturday) Wendell will respond to comments from Professors James Moor, Deborah Johnson, and Thomas Powers.
"Doombas"
Too much admin work in February has meant not enough time watching The Daily Show, and so we're only just catching up with its coverage of robots and ethics. Samantha Bee did a "Future Shock" segment that featured Noel Sharkey, and Jon Stewart interviewed P.W. Singer.
Wired Magazine's Scott Thill also blogged about it here.
Wednesday, March 4, 2009
Androids Dream of Ethical Sheep
Moral Machines has been reviewed in Times Higher Education by John Gilbey.
“In this important book, Wendell Wallach and Colin Allen discuss the technical, moral and practical aspects of organising a world where, increasingly, human life is protected and managed by non-human operators. From battlefield weapons to traffic management, ever more sophisticated systems are being employed to manage decisions that involve the taking or saving of human life.”
Read the review at timeshighereducation.co.uk
Moral Machines: Teaching Robots Right from Wrong
26 February 2009
Androids dream of ethical sheep
John Gilbey praises an invaluable guide to avoiding the stuff of science-fiction nightmares
When I asked a group of people to describe the mental picture they see when the term "robot" is used, I got an interesting set of responses. The dramatic variety of images seemed to be tied to the age of the respondent, their tastes in literature and drama and - to a lesser extent - their degree of exposure to automated technology. Nevertheless, a clear theme ran through the images - that of the maverick robot breaking free from the constraints of its programming and laying waste to anything in its path.
This image of the robot is deeply engrained in literature and film, from Karel Capek's 1921 play Rossum's Universal Robots through to the burgeoning science-fiction market in bio-engineered killing machines typified by the replicants in the 1982 film Blade Runner.
In this important book, Wendell Wallach and Colin Allen discuss the technical, moral and practical aspects of organising a world where, increasingly, human life is protected and managed by non-human operators. From battlefield weapons to traffic management, ever more sophisticated systems are being employed to manage decisions that involve the taking or saving of human life.
At the simplistic level, it appears a trivial task to provide a computer with a set of instructions to protect human life. After all, Isaac Asimov needed only three (later four) Laws of Robotics to spawn a whole raft of stories. What we tend to forget, however, is that most of Asimov's robot stories are built around the conflicts generated by those laws.
Wallach and Allen illustrate the true complexity of the issue using a classic ethical example - the "trolley case". Imagine a driverless, autonomous tram car on a track. Ahead there is a group of workmen in its path. The tram cannot stop in time, but by switching tracks it will kill only one person. Easy - the robot tram should minimise the loss of life.
But what if the single person is highly skilled, for example a famous concert pianist? What if it is a child that has strayed on to the track? Should the tram kill many workers or one child? Suddenly the proposition has become much more complicated.
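To make the reviewer's point concrete, here is a minimal Python sketch (our illustration, not code from the book or the review) of a purely utilitarian chooser for the trolley case; the casualty counts and weights are assumptions for exposition.

```python
# Illustrative sketch only: a crude utilitarian chooser for the trolley case.
# The options, counts, and weights are assumptions, not a proposal from the book.

def expected_harm(option):
    """Sum over groups of (number of people harmed) x (weight per person)."""
    return sum(count * weight for count, weight in option["casualties"])

options = [
    {"name": "stay on course", "casualties": [(5, 1.0)]},  # the group of workmen
    {"name": "switch tracks",  "casualties": [(1, 1.0)]},  # the lone person
]

best = min(options, key=expected_harm)
print("Utilitarian choice:", best["name"])

# The review's complication appears as soon as the weights differ: replace
# (1, 1.0) with (1, 6.0) -- a person weighted more heavily than the five
# together -- and the minimisation flips back to staying on course.
```

The "easy" answer holds only while every life carries the same weight, which is exactly the assumption the review goes on to question.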
Having engaged our attention, the authors give us some current examples of how systems that might share these ethical conflicts are being developed and used. The use of battlefield systems where robots take lethal decisions is still, the authors suggest, some way off - but the deployment of such weapons seems to be almost inevitable.
Systems are, we are told, already in use where targeting is done automatically by largely autonomous military devices, with only the final decision to kill being taken, from a remote location, by the human operator. While deeply chilling in its implications, this discussion brings home to the reader just how much is at stake in the development of "machine ethics".
The bulk of the book introduces a series of concepts around which ethical solutions to these issues could be designed. The arguments are approached openly and clearly, with due deference to the very wide range of readership that this title deserves to attract. Whether you are an ethicist approaching the issues raised by technology, or a systems developer seeking an understanding of ethics without having to reinvent the wheel, the material should present an interesting and challenging view.
As technological solutions become increasingly complex, the study of machine morality will inevitably be of major importance to our future as a society. This book represents a valuable crossover resource that doubtless will be approachable and instructive to a wide range of generalists and specialists.
Fiction has given us many dramatic illustrations of worlds where machine autonomy is unconstrained. Wallach and Allen provide the clear message that existing moral and ethical thought - coupled with innovative technical development - is central to informing and shaping the way we can avoid such nightmares.
Reviewer: John Gilbey teaches computer science and has published science-fiction stories on the ethical impact of artificial intelligence.
Monday, March 2, 2009
Extended Review of Moral Machines by Peter Danielson
Peter Danielson has extended his comments on Moral Machines for a review in the Notre Dame Philosophical Reviews.
2009-03-01: View this review online.
Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2008, 275pp., $29.95 (hbk), ISBN 9780195374049.
Reviewed by Peter Danielson, University of British Columbia
________________________________
This is a book in ethics and cognitive science, broadly conceived, on the philosophical and engineering problems of constructing artificial moral agents (AMAs). It is written for a general audience, with minimal endnotes (but a full bibliography) and helpful introductory discussions of the ethical, philosophical, and engineering issues. In contrast to Joy's (2000) prediction of robotic disasters and Moravec's (1988) dream of machine transcendence, Wallach and Allen argue that AMAs are possible and desirable, and cover the main lines of current research. Their general framework is excellent. First, they focus on what they call "functional morality": "Moral agents monitor and regulate their behavior in light of the harms their actions may cause or the duties they may neglect" (p. 16). This does not set the criterion so high (full conscious moral agency) as to exclude the possibility of artificial moral agents. Second, they divide recent research into top-down approaches, which attempt to program ethical theories directly into robots, bottom-up approaches that use artificial evolution or machine learning, and hybrids of the two. Third, they stress the importance of research on the role of emotions in ethical decision-making and research that implements human cognitive architecture in robots.
Wallach and Allen admit that it is early to start a discussion of AMAs, so I will focus, in this review, on the framework that they set out. Frameworks are important, especially so far upstream in the development of a field, because their salience can exert powerful influence on later work. Isaac Asimov's three laws of robotics, designed to generate stories, not moral decisions, are a good example.
Once people understand that machine ethics has to do with how intelligent machines, rather than human beings, should behave, they often maintain that Isaac Asimov has already given us an ideal set of rules for such machines. They have in mind Asimov's 'three laws of robotics' (Anderson, 2008, p. 477).
Philosophy and Engineering
"Our goal is to frame discussion in a way that constructively guides the engineering task of designing AMAs" (p. 6). To this end, Wallach and Allen pose three questions: "Does the world need AMAs? Do people want computers making moral decisions? . . . [H]ow should engineers and philosophers proceed to design AMAs?" (p. 9). Most of the book is devoted to this third question. If we look into the first of the substantive chapters on it ("Top-down Morality"), we see that their approach owes more to philosophy than engineering. The chapter argues that the ethical theories, utilitarianism and Kantian deontology, cannot be implemented computationally. "[W]e found that top-down ethical theorizing is computationally unworkable for real-time decisions. . . . [T]he prospect of reducing ethics to a logically consistent principle or set of laws is suspect, given the complex intuitions people have about right and wrong" (p. 215).
Contrast an engineering approach to top-down AMA design. Where Wallach and Allen (p. 14) see the trolley cases as showing "the complexity of human responses to ethical questions", Pereira & Saptawijaya (2007, p. 103) use Hauser's (2006) and Mikhail's (2007) trolley thought experiments to ground their moral goal in "judgments . . . widely shared among [demographically] diverse populations." Second, they find these judgments "to be consistent with the so-called . . . principle of double-effect" (Pereira & Saptawijaya, 2007, p. 104). They then implement this principle using logic programming extended to support forward-looking agents. This promising engineering approach undercuts Wallach and Allen's doubts about top-down design for AMAs. Indeed, the more concrete rules that Wallach and Allen consider in their chapter are Asimov's laws, which lack the ethical rationale of Pereira and Saptawijaya's empirically based principle. Moreover, top-down approaches have the ethical advantage that they are easier for us to understand. Pereira and Saptawijaya's logic programming provides rationales for the AMAs' choices that humans can understand.
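For readers who want to see what such a top-down rule might look like in code, here is a loose Python sketch of a double-effect test. It is an illustration of the general idea only, not Pereira and Saptawijaya's prospective logic program, and its fields and magnitudes are assumptions.

```python
# Loose illustrative sketch (not Pereira & Saptawijaya's logic program): the
# principle of double effect written as explicit, inspectable checks whose
# rationale can be read back. Fields and magnitudes are assumed for exposition.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    intrinsically_wrong: bool   # is the act itself impermissible?
    harm_is_means: bool         # is the harm how the good effect is achieved?
    good: float                 # magnitude of the intended good effect
    harm: float                 # magnitude of the harmful side effect

def permitted_by_double_effect(a):
    """Return (verdict, rationale) under a simple double-effect test."""
    if a.intrinsically_wrong:
        return False, f"{a.name}: the act itself is impermissible"
    if a.harm_is_means:
        return False, f"{a.name}: the harm is the means to the good effect"
    if a.good <= a.harm:
        return False, f"{a.name}: the harm is disproportionate to the good"
    return True, f"{a.name}: proportionate harm occurs only as a side effect"

# Bystander case: diverting the trolley kills one person as a side effect.
divert = Action("divert the trolley", False, False, good=5.0, harm=1.0)
# Footbridge case: pushing the man uses his death as the means of stopping it.
push = Action("push the man", False, True, good=5.0, harm=1.0)

for act in (divert, push):
    verdict, rationale = permitted_by_double_effect(act)
    print(verdict, "-", rationale)
```

The appeal Danielson points to is visible even in this toy version: each verdict comes with a rationale a human can read back and audit.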
Turning to ethics, I have three criticisms of Wallach and Allen's framework. First, their focus on the inclusive category of (ro)bots distorts the field and the moral urgency of the robot ethics project. Second, their definitional stress on autonomy is tendentious and distracts us from alternative approaches to the moral problems of powerful technology. Third, their focus on the model of human morality seems under-motivated.
Scope
To answer the first question, "Does the world need AMAs?", Wallach and Allen use the example of the U.S. power blackout in 2003, where "software agents and control systems at . . . power plants activated shutdown procedures, leaving almost no time for human intervention" (p. 18). This example may surprise: large scale networked power plants and their distribution system are a long way from service robots. To cover this gap, the term 'robots' of the book's title gets expanded to "(ro)bots -- a term we'll use to encompass both physical robots and software agents" (p. 3).
This broadened scope can be misleading in several ways. First, in terms of scale, it is very difficult to focus on large distributed systems. Robots (in the ordinary sense) are at the local end of this scale, supporting clearer moral judgments about responsibility. Second, the expanded definition includes "data-mining bots that roam the Web" (p. 19) to collect personal information. Yet much of the moral urgency of introducing AMAs is real-time criticality, driven by hardware based engineering systems like the electrical power grid. By treating these very different kinds of artifact together as (ro)bots, the former inherits the time critical status of the latter. This seems misleading. There is no excuse for those companies that deploy data-mining bots to avoid (perhaps costly) human oversight based on the issue of time-criticality. Indeed, having looked into the ethics of the commercial data-mining industry, which routinely takes a call to a 1-800 help number as consent to use the resulting personal information, I would not accept assurances that their software was now protecting my privacy through built-in moral competence (Danielson, 2009).
Conversely, non-physical bots inhabit a simpler environment than the hardware robot's physical world. This is, after all, the appeal of using simulation to ease the design of real robots and AMAs. But some of the book's arguments assume the physical world as the target environment: "The decision-making processes of an agent whose moral capacities have been evolved in a virtual environment are not necessarily going to work well in the physical world" (p. 104). For example, we should be able to build bots that respect privacy more easily than robots that do, because we can log access to and tag data records more readily than real stuff. Consider how simple the network Robot Exclusion Protocol is (Koster, 1994).
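The protocol really is simple enough to check with a few lines of Python's standard library (the site and user-agent string below are placeholders):

```python
# Checking the Robot Exclusion Protocol (robots.txt) with Python's standard
# library. The site and user-agent string are placeholders, not real endpoints.

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the site's robots.txt

if rp.can_fetch("ExampleBot/1.0", "https://www.example.com/private/data.html"):
    print("The protocol permits fetching this page.")
else:
    print("The protocol asks bots to stay out of this path.")
```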
Autonomy
Where we have seen that AMAs include more than one might have thought, the class is also narrower. In particular, AMAs are autonomous; Wallach and Allen consider "independence from direct human oversight" to be crucial (p. 3). From an ethical point of view, this is not so clear. Stressing autonomy may lead to the neglect of alternative non-autonomous strategies for dealing with potentially harmful interactions with robots. Wallach and Allen note that "engineers often think that if a (ro)bot encounters a difficult situation, it should just stop and wait for a human to resolve the problem" (p. 15). But they don't follow this line of engineering thought, asking, "Should a good autonomous agent alert a human overseer if it cannot take action without causing some harm to humans? (if so, is it sufficiently autonomous?)" (p. 16). But we should separate two questions: Sufficiently autonomous to do the needed job well? Or to meet the definition of AMA? The leading role of autonomy in ethical accounts seems to weigh in favor of autonomous solutions.
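The engineering alternative mentioned above -- stop and wait for a human -- is also straightforward to express. The following Python sketch (an illustration, not anything proposed in the book or the review) defers to an overseer whenever no harm-free option exists; the harm estimates and threshold are assumptions.

```python
# Sketch of the "stop and wait" engineering alternative: act only when a
# harm-free option exists, otherwise halt and defer to a human overseer.
# The harm estimates and the zero threshold are illustrative assumptions.

class DeferringController:
    def __init__(self, harm_threshold: float = 0.0):
        self.harm_threshold = harm_threshold

    def act(self, options):
        """options: list of (name, estimated_harm) pairs."""
        safe = [o for o in options if o[1] <= self.harm_threshold]
        if safe:
            return min(safe, key=lambda o: o[1])[0]
        return self.escalate(options)

    def escalate(self, options):
        # Stop, report the dilemma, and wait for a human overseer's decision.
        print("No harm-free option; awaiting human input:", options)
        return "halt"

controller = DeferringController()
print(controller.act([("proceed", 0.0), ("detour", 0.2)]))   # proceed
print(controller.act([("proceed", 0.4), ("detour", 0.2)]))   # halt and escalate
```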
To address the ethical question of whether robots need to be autonomous to do their job well, we would need to consider the alternatives to autonomy. They include:
· Segregation from most humans (as is the case for factory and mining robots). Indeed, to consider a commonplace example, there are no trolley problems for the "robot" trains we call elevators because their enclosed and vertical tracks physically exclude the problem.
· Human intervention (involving more than stopping and waiting for a human). Typically ownership and licensing link to a responsible person through a technology of secure sensing and control.
While remote-controlled devices and robotic extensions raise fewer new -- and therefore philosophically interesting -- ethical issues, they do pose serious ethical questions. For example, introducing trash-collecting robotic arms in Los Angeles put a powerful extension of the driver on the side of his or her large truck and at the same time eliminated the other member of the trash collection team who might have overseen its use. Finally, if the term "robot" seems to require autonomy, consider the counter-example of robotic surgery: "Modern-day surgical robots are a form of computer-assisted surgery using a 'master-slave relationship' in which the surgeon is able to control the actions of the robot in real time, using the robot to improve upon his/her vision, dexterity and overall surgical precision" (Patel & Notarmarco, 2007, p. 2). Of course, it is close remote control that allows these robots to work in the ethically highly constrained field of human medicine. Robot surgery raises important moral problems; it improves outcomes but at a high cost (Picard, 2009), stress-testing our standards of human-provided care.
Lethal military robots may be an especially significant case for autonomy. Sparrow (2007) makes a good case that the stated plans to have small groups of soldiers invoke large numbers of robots would make human oversight impossible. Thus these planned lethal robots would need to be autonomous moral agents. Sparrow is highly skeptical that AMAs will be achieved in time for this deployment, so he calls for a ban on this technology.
The Model of Human Ethics
Wallach and Allen advocate that AMAs implement morality by explicitly following what we know about human ethics. They distinguish their approach from one that might have greater appeal to engineers in their discussion of Arkin's (2007) "hybrid deliberative/reactive robot architecture." "Like many projects in AI, Arkin's architecture owes little to what is known of human cognitive architecture. There's nothing wrong with this as an engineering approach to getting a job done. However, our focus here will be an alternative approach" (p. 172). They focus on implementing a general model of human cognition and emotion.
While it is true that the ethics we know the most about is human ethics, we also know very little about how to construct it. Consider the disputes about the roles of biological versus social evolution in human morality, a dispute about the basic mechanisms responsible (Mesoudi & Danielson, 2008). More specifically, I suggest that robotics provides an opportunity to research and construct explicitly non-human forms of AMAs that is important for several reasons.
First, the human model of ethics is a general-purpose one, while a morality specific to special-purpose robots may be more tractable. We have already seen this in the case of trolley robots above; Arkin's research provides a more developed example for lethal military robots. Arkin can bypass general ethical theories (and their controversies) by focusing on the agreed international rules of war.
Second, human ethics is fundamentally egalitarian, stressing the equality of all moral agents. But, again as Arkin points out, robots are expendable; they are different from humans. In the case of lethal force, a robot cannot appeal, as a human soldier can, to a right of self-defense that might balance threatening or killing a non-combatant. Less drastically, we are all familiar with moral relations with lesser moral agents, like dogs, that can be trusted to follow some rules under some temptations, but remain the responsibility of their owners. I suggest that our experience with lesser moral agents will be more useful for the foreseeable future than the full human model of ethics.
Third, the model of human ethics appears to provide a misleadingly fixed goal. Wallach and Allen close their helpful discussion of ways an AMA could be held responsible with:
We wish to emphasize once more, however, that while these post hoc questions about moral accountability are important, they do not provide obvious solutions to the primary technological challenge of building AMAs that have the capacity to assess the effect of their actions on sentient beings, and to use those assessments to make appropriate decisions (p. 204).
On the contrary, I suggest that the institutions constructing responsibility can decisively frame the engineering goal. Consider the so-called "black box" data recorder found in airplanes and increasingly in trucks, but in few private autos (Danielson, 2006). Considering that automobile accidents are the leading cause of accidental death, this may surprise. The difference is explained by the difference in ownership and responsibility. Trucks are typically owned by a fleet operator, whose interests may differ from the truck drivers'. Cars are typically privately owned -- rental cars are an exception that supports my point -- and the owner/driver has little interest in a source of evidence against him in a legal action. So one can't simply conclude that building a sturdy, accurate data recorder is the engineering goal. One needs to design a device that fits with the allocation of responsibility in the technological field.
Conclusion
Moral Machines is a fine introduction to the emerging field of robot ethics. There is much here that will interest ethicists, philosophers, cognitive scientists, and roboticists.
References
Anderson, S. (2008). Asimov's "three laws of robotics" and machine metaethics. AI & Society, 22(4), 477-493.
Arkin, R. C. (2007). Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Technical Report GIT-GVU-07-11. http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf
Danielson, P. (2006). Monitoring Technology. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of Digital Government. Idea Group.
Danielson, P. (2009). Metaphors and Models for Data Mining Ethics. In E. Eyob (Ed.), Social Implications of Data Mining and Information Privacy: Interdisciplinary Frameworks and Solutions (pp. 33-47). Hershey, PA: IGI Global.
Hauser, M. (2006). Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: HarperCollins.
Joy, B. (2000). Why the future doesn't need us. Wired, 8(4).
Koster, M. (1994). A Standard for Robot Exclusion. Retrieved Jan 14, 2009, from http://www.robotstxt.org/orig.html
Mesoudi, A., & Danielson, P. (2008). Ethics, Evolution and Culture. Theory in Biosciences, 127(3), 229-240.
Mikhail, J. (2007). Universal moral grammar: theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143-152.
Moravec, H. (1988). Mind Children. Cambridge, MA: Harvard University Press.
Patel, V., & Notarmarco, C. (2007). Journal of Robotic Surgery: introducing the new publication. Journal of Robotic Surgery, 1(1), 1-2.
Pereira, L., & Saptawijaya, A. (2007). Modelling Morality with Prospective Logic. Progress in Artificial Intelligence, 99-111.
Picard, A. (2009). For prostate removal, robots rule. Globe and Mail, p. 4.
Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62-77.
Ronald Arkin on Robotspodcast
The second podcast on military robots from robotspodcast.com, featuring Ronald Arkin, is now available. For readers new to this blog, Ronald Arkin is the director of the Mobile Robot Lab and Associate Dean of Research at Georgia Tech.