Monday, December 29, 2008

Pilot Burnout Flying Predator Drones

Pilots flying Predator drones directed at targets in Afghanistan and Iraq from the safety of Nellis Air Force Base in Nevada "are at least as fatigued as crews deployed to Iraq." This is among the findings presented in a series of reports by Air Force Lt. Col. Anthony P. Tvaryanas. Teleoperating drones apparently leads to higher crew fatigue than actually manning an AWACS surveillance plane.

Tvaryanas speculates that "sensory isolation" from the immediate feedback of being in a plane contributes to the mental exhaustion of the teleoperators. He also examined 95 mishaps and safety incidents involving the Predator. Tvaryanas reported that 57% of the crew-member-related incidents were "consistent with situation awareness errors associated with perception of the environment." In other words, people are poor at grasping an environment in which they are not actually located. Researchers stressing the importance of embodied cognition will appreciate these findings. The findings are also likely to foster interest in making drones more autonomous, in the hope of decreasing mishaps arising from errors by remote crew members.

The New York Times Magazine reported on this research in its December 14th, 2008 article on the "Year in Ideas."

Tuesday, December 23, 2008

Programming the Perfect Soldier

Brenda Ann Burke has an article, "Programming the Perfect Soldier: The Ethics of 'Autonomous' Lethal Robots," that nicely summarizes several recent books touching on the ethics of military robots.

The page 99 test

We apply the "Page 99 Test" to Moral Machines.

You can read p.99 of Moral Machines here.

"Open the book to page ninety-nine and read, and the quality of the whole will be revealed to you." --Ford Madox Ford

Saturday, December 20, 2008

Brainstorm Responds to Robot Ethics Challenge

We're glad to see issues of machine morality getting attention from software engineers. Roger Gay, VP for Business Development at the Institute of Robotics in Scandinavia (iRobis), proposes using their "Brainstorm" software package for "Higher Level Logic (HLL)":

Next on our agenda: encouraging people to read the book, not just react to the sensationalized media reports!

Here's an example of what we mean:
“A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.”

Excellent. My hope is that he is an engineer. What is needed is a coding of the Geneva Convention that engineers can easily use as design requirements. Better still if there's a version that computer programs can understand. If the product of the work is not specifically geared toward technical development activities, then it's unlikely to be any more useful than the original Convention documents. Getting robots to understand the rules of war is a useful idea, though not a complete capability for an ethical robot.

We've dashed the hope raised by that Telegraph headline already, but we're also not advocating "getting robots to understand the rules of war," either as a complete solution to AMAs or even as a practical approach for battlefield robots.

Roboethics Workshop at IEEE Conference in Japan

A full-day workshop on Roboethics has been announced for May 19th at ICRA2009, the 2009 IEEE International Conference on Robotics and Automation in Kobe, Japan. Those wishing to present at this gathering should submit abstracts by January 14th. The organizers of the workshop include Gianmarco Veruggio, Ron Arkin, Atsuo Takanishi, Jorge Solis, Matthias Scheutz, and Fiorella Operto. The call for papers states that,

This workshop invites submissions on (but not limited to) the following topics:
• Social (Robotics and job market; Cost benefit analysis; etc.);
• Psychological (Robots and kids; Robots and elderly, etc.);
• Legal (Robots and liability; Identification of autonomously acting robots; etc.);
• Medical (Robotics in health care and prosthesis; etc.);
• Warfare application of robotics (Responsibility, International Conventions and Laws; etc.);
• Environment (Cleaning nuclear and toxic waste, Using renewable energies, etc.).

Wednesday, December 17, 2008

Killer Robots or Friendly Fridges

A call for papers has been put out for a two day symposium (April 6-7) at AISB '09. The symposium is titled, "Killer Robots or Friendly Fridges: the Social Understanding of Artificial Intelligence." Abstracts should be submitted by December 19th.


For the non-specialist, the whole notion of Artificial Intelligence challenges fundamental understandings of what it is to be human, with enormous implications for how we conceive ourselves, our artefacts and our societies. AI's foundational goal was the construction of autonomous sentience. Yet, 55 years after Turing's seminal paper, publicly visible achievements, beyond science fiction speculations or media exaggerations, still lie in faltering steps in voice and image recognition, surveillance, computer games and virtual environments, not in truly intelligent everyday machines.

This symposium will offer a major forum for the discussion of the social understanding of Artificial Intelligence, in particular the curious spaces between popular expectations of machines that meet our every whim, fears of humans enslaved or eliminated by crazed super-brains, and the sober reality of toasters that still burn the bread.

At the start of the 21st century, it is timely to reflect not just on the technical achievements and pitfalls of the now mature discipline of Artificial Intelligence, but also on its wider social understanding. While there have always been ill-informed concerns about "robots taking over the world", the reality is both more prosaic and more complex. People have long anthropomorphised complex artefacts which are capable of seemingly autonomous interaction. However, recent advances in the deployment of believable characters and affective systems, both in graphical and robotic form, have rekindled problematic social and ethical questions about our relationships with machines.

This symposium offers a fresh opportunity for interdisciplinary perspectives on the social understanding of Artificial Intelligence, with the strong potential to bring together contemporary research in key technical, social, psychological and philosophical domains.

Monday, December 8, 2008

Wearable supercomputers

Last Tuesday, Colin had the opportunity to interview the CTO of MNB Technologies, a Bloomington, Indiana, company that is developing wearable supercomputers for military and other applications. (Colin is an occasional host of Interchange, on WFHB community radio in Bloomington.) Listen here.

Interchange 12/02/08

Host Colin Allen speaks with Nick Granny, chairman and chief technical officer of MNB Technologies, Inc., a hi-tech startup company based in Bloomington, Indiana. They discuss the advances that MNB is making in designing and developing "wearable supercomputers" and the kinds of applications that these highly portable machines will enable, including military training with virtual reality, homeland security applications, medical triage at disaster sites, and civilian applications such as real-time route planning for delivery drivers and even the download and use of satellite data for efficient farming. Colin and Nick also discuss how MNB Technologies came to be located in Bloomington, which involved a chance meeting with Nick's future wife and business partner Martina Barnas at the Paris airport, and Nick describes how the State of Indiana supports small businesses, making it possible for MNB Technologies to have ambitious expansion plans over the next few years. (57:29)

Thursday, December 4, 2008

ABC Perth Interview

Australian Broadcasting Corporation/Perth interviewed Colin Allen on their Thursday morning show with Geoff Hutchison.

Wednesday, December 3, 2008

Radio 4 Today programme on battlefield robots

BBC Radio 4 podcast on robotic soldiers which originally ran on Wednesday morning, interviewing Prof. Noel Sharkey and Colin Allen.

Noel Sharkey also moderates the AUVSI Forum of the Association for Unmanned Vehicle Systems International.

Tuesday, December 2, 2008

Der Spiegel covers military angle

Germany's Der Spiegel is running a story, "Pentagon plant rücksichtsvolle Kampfroboter" ("Pentagon plans considerate combat robots"), which nicely synthesizes material from a variety of sources, including the NY Times and the Telegraph. They also link to a 2005 story on a Canadian project concerning robot etiquette.

Standards for Rescue Robots

More than three dozen rescue robots gathered at a testing site in Disaster City, Texas. The rescue robot exercise was held by the National Institute of Standards and Technology (NIST) to help develop a standard suite of performance tests for evaluating mechanical rescuers. The Department of Homeland Security's Science and Technology Directorate sponsored the exercise at a testing facility that offers an airstrip, lakes, train wrecks, and rubble piles. Establishing a standard for rescue robots is necessary for evaluating one model against others. Read the full story, "Rescue Robot Exercise Brings Together Robots, Developers, First Responders," at ScienceDaily.

Monday, December 1, 2008

Could Robots Take Over the World?

From the St. Petersburg Times by Ben Montgomery

"It feels a bit like the End Times these days, what with assault rifles flying off the shelves and the markets swinging and the honeybees dying and the national deficit growing and the polar ice caps melting and wars raging and the missionaries from the panhandle cults filling our mailboxes with doomsday lit. ..." continue at St. Pete Times

Story quotes Wendell: "Robots and computer systems are all over the place and making all kinds of decisions," Wallach says on the phone. "And the machines are getting more and more autonomous."

Sunday, November 30, 2008

The Telegraph follows up on military angle

The British newspaper The Daily Telegraph is running a story by reporter Tim Shipman under the headline "Pentagon hires British scientist to help build robot soldiers that 'won't commit war crimes'," which greatly exaggerates my role as an external consultant for a Navy-sponsored report. The authors of the report ("Autonomous Military Robotics: Risk, Ethics, and Design", award no. N00014‐07‐1‐1152) are in fact Patrick Lin (who is also co-editor of a 2007 volume on nanotechnology ethics), George Bekey, and Keith Abney.

Despite misrepresenting my role in the project, and despite a few minor factual quibbles on my part, Shipman's story successfully captures why we need to be thinking about these ethical issues now.

Wednesday, November 26, 2008

Service Robots for the Home Being Developed in the US

There has been a great deal of discussion regarding Japanese and European robotic research directed at caring for the homebound. However, researchers in the U.S. are also tackling this challenge. At the University of Massachusetts, MIT, and Georgia Tech, roboticists are building service robots. E.J. Mundell discusses these initiatives in a November 18th BusinessWeek article titled "Robots May Come to Aging Boomers' Rescue."

The uBOT-5, being developed by a team in the Laboratory for Perceptual Robotics at UMass, monitors the home environment and performs a few simple tasks. With a video screen mounted on Segway-like wheels, the robot can move around the house and allow distant relatives or doctors to have virtual visits with the homebound. At MIT a team led by Nicholas Roy is building an "autonomous wheelchair" that requires only a voice command to travel to another place in a home or hospital. Service dogs are the prototypes for the home-care robots being created by Charlie Kemp at Georgia Tech. Opening drawers and working light switches are among the tasks performed by these service-pets.

Given a predicted shortage of 800,000 nurses and home health-care aides by 2020, there is expected to be high demand for robotic caregivers by aging boomers.

Monday, November 24, 2008

NYT Article Discusses Ethics for Battlefield Robots

Ronald Arkin's contention that intelligent battlefield robots can behave more ethically than human soldiers is discussed in a New York Times article titled "A Soldier, Taking Orders From Its Ethical Judgment Center." In addition to Arkin's research at Georgia Tech, the article quotes Colin Allen, Daniel Dennett, and Noel Sharkey, and mentions the publication of "Moral Machines."

Rude Robots

"You can endow a robot with a personality . . . but it should not be rude," says Maja Matarić, the founding director of the Center for Robotics and Embedded Systems at USC. Matarić is particularly concerned with insolence from service robots that play a role in assisted living, as health coaches, and in care for the elderly. Adam Shah reports on Matarić's interest in developing social robots sensitive to appropriate behavior in a PCWorld news article, "Rude Robots, Stay Away From Homes."

Sunday, November 23, 2008

For the record...

It is exciting to see a book on Machine Ethics, a topic that is certainly important. While we appreciate the fact that the authors have acknowledged our research in this field, we were dismayed that our work was misrepresented. The authors committed the fallacy of equivocation in dismissing Susan's view on p. 97: the word "principles" changes meaning from one sentence to the next in their two-sentence refutation of her position. Of course Beauchamp and Childress's "Principles" of Biomedical Ethics can "lead to conflicting recommendations for action." This is why they are put forth as prima facie duties/principles, rather than absolute duties/principles. What is needed is some decision principle(s) that determine which duty becomes strongest in each such case of conflict. Susan maintains that, at this time, there is general agreement among bioethicists as to the right action in many particular ethical dilemmas (e.g., that the principle/duty of Respect for Autonomy should dominate in a case where a competent adult rejects a treatment that could have benefitted him, whereas the principles/duties of Non-Maleficence and Beneficence become stronger than Respect for Autonomy when dealing with an incompetent patient). If there is not agreement on at least some cases, then there is no ethics to be programmed into, or learned by, a machine. The Machine Ethics project could not even get started.

Susan believes, furthermore, that ethicists are reaching agreement on more and more cases of particular ethical dilemmas as the issues are more fully understood and appreciated. From these cases, some decision principles can be gleaned. In other cases, where a disagreement remains (at least for the time being) the prima facie duty approach can reveal the nature of the disagreement. (One would put more weight on one duty, while another puts more weight on a different duty.) We have consistently maintained that it is unwise to allow a machine to interact with humans in areas where there is no agreement as to which action is ethically correct.
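The prima facie duty approach described above lends itself to a simple computational sketch. The following toy example (our hypothetical illustration, not MedEthEx or any published system; all names, scores, and weights are invented) shows how a decision principle can determine which duty dominates in a conflict, mirroring the competent/incompetent patient example:

```python
# Toy sketch: prima facie duties plus a decision principle.
# Duty names follow Beauchamp and Childress; all numbers are illustrative.

DUTIES = ["autonomy", "nonmaleficence", "beneficence", "justice"]

def decision_principle(patient_competent):
    """Return weights saying which duty dominates in a conflict.

    Respect for Autonomy dominates for a competent adult refusing
    treatment; Non-Maleficence and Beneficence grow stronger when
    the patient is incompetent.
    """
    if patient_competent:
        return {"autonomy": 3.0, "nonmaleficence": 1.0,
                "beneficence": 1.0, "justice": 1.0}
    return {"autonomy": 0.5, "nonmaleficence": 2.0,
            "beneficence": 2.0, "justice": 1.0}

def choose(actions, patient_competent):
    """Pick the action with the highest weighted duty satisfaction."""
    weights = decision_principle(patient_competent)
    def total(scores):
        return sum(weights[d] * scores[d] for d in DUTIES)
    return max(actions, key=lambda a: total(actions[a]))

# Each action scored per duty on a -2 (violates) .. +2 (satisfies) scale.
actions = {
    "accept_refusal": {"autonomy": 2, "nonmaleficence": -1,
                       "beneficence": -1, "justice": 0},
    "treat_anyway":   {"autonomy": -2, "nonmaleficence": 1,
                       "beneficence": 2, "justice": 0},
}

print(choose(actions, patient_competent=True))   # → accept_refusal
print(choose(actions, patient_competent=False))  # → treat_anyway
```

The point of the sketch is only structural: the duties themselves are not absolute, and everything turns on the decision principle that weighs them case by case.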

In the extended discussion of MedEthEx the details, unfortunately, were either incorrect or stated in a confused way. To give just two examples: The authors are incorrect in stating that MedEthEx adopts Ross's prima facie duties. Instead, we used Beauchamp and Childress's Principles of Biomedical Ethics, one of which (Respect for Autonomy) was never one of Ross's duties. On p. 127, it is stated that "MedEthEx uses an inductive logic system based on the Prolog programming language...." Instead, the machine learning technique of Inductive Logic Programming is used which is not tied to any particular programming language.

Finally, the authors seem not to fully understand our current approach, which is where our previous work has been heading all along: an approach that is far from "top-down," in that we assume no particular features of ethical dilemmas and no particular ethical duties.

Friday, November 21, 2008

Survey Results regarding the use of Lethal and Autonomous Systems

Lilia Moshkina and Ronald Arkin, from the Mobile Robot Laboratory at Georgia Tech, have begun to report on the results of an online survey regarding the military use of lethal and autonomous robotic systems. They collected 430 full responses to their online public opinion survey before it was closed on October 27th, 2007. Of the participants, "234 self-identified themselves as having had robotics research experience, 69 as having had policymaking experience, 127 as having had military experience, and 116 as having had neither (therefore categorized as general public)." Their 2008 paper, Lethality and Autonomous Systems: The Roboticist Demographic, reports on the responses of the participants with robotics research experience. Generally, the participants felt that the benefits of using robots in warfare outweigh the risks; however, they also felt that the more control shifts from humans to robots, the less acceptable such an entity is. "67% of the roboticists believe that it would be easier or much easier to start wars if the robots were introduced into warfare, perhaps due to the fact that human soldier life loss would be reduced."

Will Human Level AI Require Compassionate Intelligence?

Cindy Mason, a research associate at Stanford, has written a paper titled "Human Level AI Requires Compassionate Intelligence" for the 2008 AAAI workshop on Meta-Cognition. Cindy has been working on emotions and AI since 1998. In the paper she describes a core meta-architecture for an agent "to resolve the turf war between thoughts and feelings based on agent personality rather than logic." This research takes its inspiration from the 18th-century philosopher David Hume, who proposed that emotions are antecedent to reason, and also from the Buddhist mind-training practices known as "insight meditation" or Vipassana.

Tuesday, November 18, 2008

NewScientist posting of "Six ways to build robots that do humans no harm"

Tom Simonite at New Scientist Tech has incorporated material from an article written by Wendell Wallach and Colin Allen, and posted it on their website under the title "Six Ways to Build Robots that do humans no harm".

The New Scientist article has been misread by some commentators, who believe that we propose moral machines can be built with these simple strategies and that the critiques of the strategies were written by Simonite. The strategies and evaluation of the strategies were written by us. The original article from which this material was drawn can be found below on my October 13th posting.

Saturday, November 15, 2008

Dilbert Discovers the Singularity

Moral Voting Machines

Now that the election is over, we can put to rest worries that the election might be stolen by Diebold. However, there is still plenty of evidence that electronic voting machines could be improved. For instance, voting mistakes still happen, and some citizens find the technology confusing. Could a voting machine be programmed to detect abnormal voting patterns that signal possible confusion? This might be done by detecting anomalies in the physical interaction between voter and machine, for instance, excessive changing of selections. Or perhaps it could be accomplished by analyzing the actual selections in an attempt to detect their overall conceptual coherence. Of course, we still have to trust that black-box voting is safe, but perhaps a dialogue with a machine in the voting booth could give voters confidence that their votes are being processed correctly.
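The first idea, detecting anomalies in the physical interaction, could be as simple as flagging statistical outliers in one interaction feature. A minimal sketch, assuming we track how many times each voter changes a selection (the feature, the z-score cutoff, and all names are our illustrative assumptions, not any real voting system):

```python
import statistics

def flag_confused_voters(change_counts, threshold=1.5):
    """Flag voters whose selection-change counts are statistical outliers.

    change_counts: {voter_id: number of times selections were changed}.
    A z-score above `threshold` would trigger a confirmation dialogue
    rather than any change to the recorded votes.
    """
    counts = list(change_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:  # everyone behaved identically; nothing to flag
        return []
    return [voter for voter, c in change_counts.items()
            if (c - mean) / stdev > threshold]

sessions = {"v1": 1, "v2": 0, "v3": 2, "v4": 1, "v5": 14}
print(flag_confused_voters(sessions))  # → ['v5']
```

In practice one would combine several such features, and the machine's response would be a gentle "did you mean to do this?" dialogue, never an automatic correction.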

Would you do whatever a robot told you to do?

Students in the robotics laboratory of Brian Scassellati at Yale performed an experiment studying how a robot's physical presence, as opposed to its virtual presence, affects humans' unconscious perceptions of the robot as a social partner. Subjects were responsive to instructions from the robot Nico in a simple book-moving task both when the robot was physically present and when Nico was displayed on a screen, but subjects were more likely to throw books into a garbage can at the robot's instruction when Nico was physically present. That subjects would follow this instruction from a robot at all is a rather disturbing finding, and one that suggests the need for more research. A link to the research paper will be added to this post as soon as it becomes available. The paper is titled "The effect of presence on human-robot interactions." The authors are Wilma A. Bainbridge, Justin Hart, Elizabeth S. Kim, and Brian Scassellati.

Friday, November 7, 2008

Machine Ethics Panel at the AAAI 2008 Fall Symposium on AI in Eldercare

Researchers developing intelligent systems for use in caring for the increasingly aging population met this weekend in Arlington, VA to discuss new trends in passive sensing with vision and machine learning, environments for eldercare technology research, robotics for assistive and therapeutic use, and human-robot interaction. A panel on machine ethics discussed various ethical ramifications of these and other such technologies and called for the incorporation of an ethical dimension in these technologies. A video was shown that displayed the need for such a dimension in even the most seemingly innocuous systems. The system in question is a simple mobile robot with a very limited repertoire of behaviors, which amount to setting and giving reminders. A number of questionable ethical practices were uncovered.

One involved the system asking its charge whether she had taken her medication and then asking her to show her empty pill box. This is followed by a lecture from the system concerning how important it is for her to take her medication. There is little back story in the video but, assuming a competent adult, such paternalistic behavior from the system seems uncalled for and shows little respect for the patient's autonomy.

During this exchange, the patient's responsible relative is seen watching it over the internet. Again, it is not clear whether this surveillance has been agreed to by the person being watched, and in fact there is no hint in the video that she knows she is being watched, but there is the distinct impression that her privacy is being violated.

As another example, the system promises that it will remind its charge when her favorite show and "the game" are on. Promise making and keeping clearly have ethical ramifications, and it is not clear that the system under consideration has the sophistication to make ethically correct decisions when the duty to keep promises comes into conflict with other, possibly more important, duties.

Finally, when the system does indeed remind its charge that her favorite television show is starting, it turns out that she has company, and she tells the robot to go away. The system responds with "You don't love me anymore," to the delight of the guests, and slinks away. This is problematic behavior in that it sets up an expectation in the user that the system is incapable of fulfilling: that it is capable of a loving relationship with its charge. This is a highly charged ethical ramification, particularly given the vulnerable population for which this technology is being developed.

The bottom line is that, contrary to those who argue that concern about the ethical behavior of autonomous systems is premature, these example transgressions by the simplest of such systems show that such concern is in fact overdue.

Sunday, October 19, 2008

More on the Rise of the Machines

The letters section of Sunday's NY Times follows up on Richard Dooling's Op-Ed piece on The Rise of the Machines. The first letter states that "we can require that they all be designed with benign motives." But does this mean that the motives of the designers should be benign, or that the machines themselves should be designed to have benign motives? (I think the writer means the latter, but I'm not sure.)

The last letter opines that the machines "are not superintelligent. Even on Wall Street, they merely count very fast at the behest of their human masters." But this is a false dichotomy. Machines may not be superintelligent, but to describe what they do as mere counting at the behest of their masters is to downplay the range of autonomous decisions that computers are already engaged in making.

Thursday, October 16, 2008

"Every library should have this book"

Moral Machines is reviewed in the Oct 15 issue of Library Journal (scroll down to the Philosophy section).

The review follows...

Wallach, Wendell & Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford Univ. Nov. 2008. c.288p. bibliog. index. ISBN 978-0-19-537404-9. $29.95. PHIL

Machines that look like people, fall in love, and wreck worlds may be on their way, Wallach (Ctr. for Bioethics, Yale Univ.) and Allen (history & philosophy of science, Indiana Univ.) suggest. Realistically, however, the problem now is with computer programs that act autonomously by playing roles in electric blackouts and blocking credit cards and machines that drive subway trains and guide military vehicles. The authors carefully examine how morality is conceptualized; on the face of it, robots can't be moral agents because intelligent machines work on a combination of fixed programs and randomizing devices that create new data from which their programs can generate novelties. Wallach and Allen don't pretend that any robots we know can have full moral agency, but they see the problem instead as being one of balancing goals and risks and keeping both within the limits that people, after rational reflection, can accept. Robots can do this balancing, they argue, and it is time to get on with it. Every library should have this book.—Leslie Armour, Dominican Univ. Coll., Ottawa, Ont.

Monday, October 13, 2008

6 Ways to Build Robots that Will Not Harm Humans

It is just a matter of time until a computer or robot makes a decision that causes a human disaster. In all likelihood, this will happen under circumstances that the designers and engineers who built the system could not predict. Computers and robots enhance modern life in many ways. They are incredible tools that will continue to improve productivity, expand the forms of entertainment available to each of us, and, in addition to vacuuming our homes, take on many of the burdens of daily life. However, these rewards do not come for free. It is necessary to go beyond product safety and begin thinking about ways to ensure that the choices and actions taken by robots will not cause harm to people, pets, or personal property.

Engineers are far from being able to build robots that rival human intelligence, or as some science fiction writers suggest, threaten human existence. Furthermore, it is impossible to know whether such systems can ever be built. However, there are six strategies to consider for minimizing any harms the increasingly autonomous systems built today, or in the near future, will cause:

1. Keep them in stupid situations -- Make sure that all computers and robots never have to make a decision where the consequences of the machine's actions cannot be predicted in advance.

Likelihood that this strategy will succeed (LSS): Extremely Low -- Engineers are already building computers and robotic systems whose actions they cannot always predict. Consumers, industry, and government want technologies that perform a wide array of tasks, and businesses will expand the products they offer in order to capitalize on this demand. In order to implement this strategy, it would be necessary to arrest further development of computers and robots immediately.

2. Do not place dangerous weapons in the hands of computers and robots.

LSS: Too late. Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A semi-autonomous robotic cannon deployed by the South African army went haywire in October 2007, killing 9 soldiers and wounding 14 others. A few machine-gun-carrying robots were sent to Iraq and photographed on a battlefield, but apparently were not deployed. Military planners are very interested in the development of robotic soldiers, and see them as a means to reduce the deaths of human soldiers during warfare. While it is too late to stop the building of robot weapons, it may not be too late to restrict which weapons they carry or the situations in which the weapons can be used.

3. Program them with rules such as the Ten Commandments or Asimov's Laws for Robots.
LSS: Moderate. Isaac Asimov's famous rules that robots should not harm humans or through inaction allow harm to humans, should obey humans, and should preserve themselves are arranged hierarchically, so that not harming humans trumps self-preservation. However, Asimov was writing fiction; he was not actually building robots. In story after story he illustrates problems that would arise with even these simple rules, such as what the robot should do when orders from two people conflict. Furthermore, how would a robot know that a surgeon wielding a knife over a patient was not about to harm the patient? Asimov's robot stories demonstrate quite clearly the limits of any rule-based morality. Nevertheless, rules can successfully restrict the behavior of robots that function within very limited contexts.
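A hierarchical rule set like Asimov's is easy to sketch in code, which also makes its weaknesses concrete: the hard part is not the priority ordering but the perception behind each predicate (e.g. recognizing that a surgeon's knife is not a threat). A toy illustration of ours, not a proposal, with all predicates stubbed out:

```python
# Toy arbitration of hierarchically ordered rules, in the spirit of
# Asimov's Three Laws. Rules are checked in strict priority order;
# the first violated rule blocks the action.

def permitted(action, world):
    """Return (allowed, violated_rule) for an action in a world model."""
    rules = [
        # Law 1: never harm a human (trumps everything below).
        ("no_harm", lambda a, w: not w.get("harms_human", {}).get(a, False)),
        # Law 2: obey human orders (only orders in the world model count).
        ("obey", lambda a, w: a in w.get("ordered", [a])),
        # Law 3: avoid self-destruction, unless a higher law required it.
        ("self_preserve",
         lambda a, w: not w.get("self_destructive", {}).get(a, False)),
    ]
    for name, ok in rules:
        if not ok(action, world):
            return (False, name)
    return (True, None)

world = {
    "harms_human": {"swing_arm": True},
    "ordered": ["fetch_tool", "swing_arm"],  # a human ordered both
}
print(permitted("swing_arm", world))   # → (False, 'no_harm')
print(permitted("fetch_tool", world))  # → (True, None)
```

Note that everything interesting is hidden in the `world` dictionary: deciding whether `harms_human` is true for a given action is precisely the open problem the rules themselves cannot solve.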

4. Program robots with a principle such as the "greatest good for the greatest number" or the Golden Rule.

LSS: Moderate. Recognizing the limits of rules, ethicists look for one overriding principle that can be used to evaluate the acceptability of all courses of action. However, the history of ethics is a long debate over the value and limits of every single principle that has been proposed. For example, you might be willing to sacrifice the life of one person to save the lives of five people, but if you were a doctor you would not sacrifice the life of a healthy person in your waiting room to save the lives of five people needing organ transplants.

But there are other more difficult problems than this, and determining which course of action among many options leads to the greatest good (or other cherished principle) would require a tremendous amount of knowledge, and an understanding of the effects of actions in the world. Making the calculations would also require time and a great deal of computing power.
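The calculation a "greatest good" machine would have to perform can be sketched in a few lines, which also shows where the cost lies: not in the arithmetic, but in filling in the table of outcomes, probabilities, and utilities for everyone affected. A minimal, hypothetical illustration (all numbers invented):

```python
def expected_utility(outcomes):
    """Sum probability-weighted utilities over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

def greatest_good(actions):
    """Pick the action with the highest expected aggregate utility.

    The arithmetic is trivial; the tremendous knowledge requirement
    discussed above lives entirely in constructing the input table.
    """
    return max(actions, key=lambda a: expected_utility(actions[a]))

# (probability, aggregate utility over everyone affected) per outcome
actions = {
    "brake":  [(0.9, 0.0), (0.1, -10.0)],   # expected utility: -1.0
    "swerve": [(0.5, 0.0), (0.5, -50.0)],   # expected utility: -25.0
}
print(greatest_good(actions))  # → brake
```

For any realistic situation the table of outcomes and affected people explodes combinatorially, which is exactly the knowledge and computation problem noted above.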

5. Educate a robot in the same way as a child, so that the robot will learn and develop sensitivity to the actions that people consider to be right and wrong.

LSS: Promising, although this strategy requires a few technological breakthroughs. While researchers are developing methods to facilitate a computer's ability to learn, the tools presently available are very limited.

6. Build human-like faculties, such as empathy, emotions, and the capacity to read non-verbal social cues, into the robots.

LSS: These faculties would help improve strategies 3-5. Most of the information people use to make choices and cooperate with others derives from our emotions, our ability to put ourselves in the place of others, and our ability to read gestures and intentions. Knowing the habits and customs of those one is interacting with can also help one understand what actions are appropriate for a given situation. Such information may be essential for appreciating which rules or principles apply in which situations, but this alone is not enough to ensure the safety of the actions chosen by a robot.

For the next 5-10 years computer scientists will focus on building computers and robots that function within relatively limited contexts, such as finance systems, computers used for medical applications, and service robots in the home. Within each of these contexts, there are different rules, principles, and possible dangers. System developers can experiment with a range of approaches to ensure that the robots they build will behave properly given the specific application. They could then combine the most successful strategies to facilitate the design of more sophisticated robots.

Humanity has started down the path of robots and computers making decisions without direct human oversight. Governments and corporations in South Korea, Japan, Europe, and the USA are investing millions of dollars in research and development. Some people will argue that it is a mistake to be going down this track. But the commercial and military imperatives make this train hard to derail. The technological challenge of ensuring that these machines respect ethical principles is upon us.

Wendell Wallach and Colin Allen are co-authors of "Moral Machines: Teaching Robots Right From Wrong". For more information on this subject visit their blog at

Press Release: Moral Machines

***FOR IMMEDIATE RELEASE Contact: Cassie Ammerman
(212)726-6057 |

Teaching Robots Right from Wrong

by Wendell Wallach and Colin Allen

(Oxford | November 13, 2008 | $29.95 | 6⅛ x 9¼ | 261 pages | ISBN13: 978-0-19-537404-9)

Who can forget HAL, the evil (or perhaps just misunderstood) artificially intelligent computer from 2001: A Space Odyssey? Or the robots from Isaac Asimov’s stories, which are programmed to follow Asimov’s three laws of robotics, the first of which is “A robot may not injure a human being, or, through inaction, allow a human being to come to harm”? It seems that for as long as human beings have been thinking about artificial intelligence, we have been thinking about the morality of such computers. Yet the systems we have today are “ethically blind”—at best they have a kind of operational morality where all the ethical decisions are left to the programmers and users. Is this enough?

Authors Wendell Wallach and Colin Allen think not. In MORAL MACHINES: Teaching Robots Right from Wrong, Wallach and Allen argue that even if full moral agency for machines is a long way in the future, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. The world is becoming increasingly populated with computers making decisions that have life and death consequences, and there is the potential for major disaster unless morality can be programmed into the machines. But whose morality? And how can we make morality computable?

Wallach and Allen take us on a fast-paced tour through philosophical ethics and artificial intelligence, showing that we need to start thinking about the morals of machines long before we have created true artificial intelligence. The quest to build machines that are capable of telling right from wrong has begun.

COLIN ALLEN is a Professor of History & Philosophy of Science and of Cognitive Science at Indiana University.
WENDELL WALLACH is a consultant and writer and is affiliated with Yale University's Interdisciplinary Center for Bioethics.

MORAL MACHINES: Teaching Robots Right from Wrong, by Wendell Wallach and Colin Allen,
will be published, in hardcover, by Oxford University Press on November 13, 2008
($29.95 | 6⅛ x 9¼ | 261 pages | ISBN13: 978-0-19-537404-9).

MORAL MACHINES: Teaching Robots Right From Wrong


Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast-paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don't seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun. Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.

Published by Oxford University Press
Publication Date: November 13, 2008
ISBN-10: 0195374045

The Rise of the Machines

Richard Dooling in the NY Times writes about how machines may become so essential to complex decision making that we no longer have the capacity to turn them off.  He discusses how the Unabomber foresaw our creeping dependence on machines, and how their role in the financial market may provide the path to servitude. Nowhere does he discuss whether machines themselves might be programmed to include ethical evaluations of their own decisions in their computations. 

Sunday, October 5, 2008

Moral Machines: Introduction


In the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT), scientists are designing computers that can read human emotions. Financial institutions have implemented worldwide computer networks that evaluate and approve or reject millions of transactions every minute. Roboticists in Japan, Europe, and the United States are developing service robots to care for the elderly and disabled. Japanese scientists are also working to make androids appear indistinguishable from humans. The government of South Korea has announced its goal to put a robot in every home by the year 2020. It is also developing weapons-carrying robots in conjunction with Samsung to help guard its border with North Korea. Meanwhile, human activity is being facilitated, monitored, and analyzed by computer chips in every conceivable device, from automobiles to garbage cans, and by software “bots” in every conceivable virtual environment, from web surfing to online shopping. The data collected by these (ro)bots—a term we’ll use to encompass both physical robots and software agents—is being used for commercial, governmental, and medical purposes.

All of these developments are converging on the creation of (ro)bots whose independence from direct human oversight, and whose potential impact on human well-being, are the stuff of science fiction. Isaac Asimov, over fifty years ago, foresaw the need for ethical rules to guide the behavior of robots. His Three Laws of Robotics are what people think of first when they think of machine morality.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov, however, was writing stories. He was not confronting the challenge that faces today’s engineers: to ensure that the systems they build are beneficial to humanity and don’t cause harm to people. Whether Asimov’s Three Laws are truly helpful for ensuring that (ro)bots will act morally is one of the questions we’ll consider in this book.
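To see why the Three Laws are less straightforward to implement than they appear, consider the following sketch. This is purely our illustration (not Asimov's formulation or the authors' proposal): each law is treated as a veto in strict priority order, and an agent choosing among actions minimizes violations lexicographically, so that a First Law violation outweighs any number of lower-law violations. The action names and flags are invented for the example.

```python
# Toy encoding of the Three Laws as a strict priority ordering
# (illustrative only). Each candidate action carries boolean flags
# for which laws it would violate.

def violation_key(action):
    """Lexicographic violation tuple: First Law first, Third Law last.
    Python compares tuples element by element, and False < True, so
    sorting by this key ranks a First Law violation as worse than any
    combination of lower-law violations."""
    return (
        action["injures_human"] or action["allows_harm_by_inaction"],
        action["disobeys_order"],
        action["endangers_self"],
    )

def choose(actions):
    """Pick the action with the least severe violations."""
    return min(actions, key=lambda name: violation_key(actions[name]))

# A human orders the robot to do something harmful:
options = {
    "obey_order":   {"injures_human": True,  "allows_harm_by_inaction": False,
                     "disobeys_order": False, "endangers_self": False},
    "refuse_order": {"injures_human": False, "allows_harm_by_inaction": False,
                     "disobeys_order": True,  "endangers_self": False},
}

print(choose(options))  # prints "refuse_order": Law 1 outranks Law 2
```

Even this tidy hierarchy hides the hard problems: deciding whether an action "injures a human being" or counts as harm-enabling "inaction" requires exactly the world knowledge and foresight that the rest of this book shows machines lack, and Asimov's own stories turn on cases where the flags themselves are ambiguous.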

Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight. Already, in October 2007, a semiautonomous robotic cannon deployed by the South African army malfunctioned, killing 9 soldiers and wounding 14 others—although early reports conflicted about whether it was a software or hardware malfunction. The potential for an even bigger disaster will increase as such machines become more fully autonomous. Even if the coming calamity does not kill as many people as the terrorist acts of 9/11, it will provoke a comparably broad range of political responses. These responses will range from calls for more to be spent on improving the technology, to calls for an outright ban on the technology (if not an outright “war against robots”).

A concern for safety and societal benefits has always been at the forefront of engineering. But today’s systems are approaching a level of complexity that, we argue, requires the systems themselves to make moral decisions—to be programmed with “ethical subroutines,” to borrow a phrase from Star Trek. This will expand the circle of moral agents beyond humans to artificially intelligent systems, which we will call artificial moral agents (AMAs).

We don’t know exactly how a catastrophic incident will unfold, but the following tale may give some idea.
Monday, July 23, 2012, starts like any ordinary day. A little on the warm side in much of the United States perhaps, with peak electricity demand expected to be high, but not at a record level. Energy costs are rising in the United States, and speculators have been driving up the price of futures, as well as the spot price of oil, which stands close to $300 a barrel. Some slightly unusual automated trading activity in the energy derivatives markets over past weeks has caught the eye of the federal Securities and Exchange Commission (SEC), but the banks have assured the regulators that their programs are operating within normal parameters.

At 10:15 a.m. on the East Coast, the price of oil drops slightly in response to news of the discovery of large new reserves in the Bahamas. Software at the investment division of Orange and Nassau Bank computes that it can turn a profit by emailing a quarter of its customers with a buy recommendation for oil futures, temporarily shoring up the spot market prices, as dealers stockpile supplies to meet the future demand, and then selling futures short to the rest of its customers. This plan essentially plays one sector of the customer base off against the rest, which is completely unethical, of course. But the bank’s software has not been programmed to consider such niceties. In fact, the money-making scenario autonomously planned by the computer is an unintended consequence of many individually sound principles. The computer’s ability to concoct this scheme could not easily have been anticipated by the programmers.

Unfortunately, the “buy” email that the computer sends directly to the customers works too well. Investors, who are used to seeing the price of oil climb and climb, jump enthusiastically on the bandwagon, and the spot price of oil suddenly climbs well beyond $300 and shows no sign of slowing down. It’s now 11:30 a.m. on the East Coast, and temperatures are climbing more rapidly than predicted. Software controlling New Jersey’s power grid computes that it can meet the unexpected demand while keeping the cost of energy down by using its coal-fired plants in preference to its oil-fired generators. However, one of the coal-burning generators suffers an explosion while running at peak capacity, and before anyone can act, cascading blackouts take out the power supply for half the East Coast. Wall Street is affected, but not before SEC regulators notice that the rise in oil future prices was a computer-driven shell game between automatically traded accounts of Orange and Nassau Bank. As the news spreads, and investors plan to shore up their positions, it is clear that the prices will fall dramatically as soon as the markets reopen and millions of dollars will be lost. In the meantime, the blackouts have spread far enough that many people are unable to get essential medical treatment, and many more are stranded far from home.

Detecting the spreading blackouts as a possible terrorist action, security screening software at Reagan National Airport automatically sets itself to the highest security level and applies biometric matching criteria that make it more likely than usual for people to be flagged as suspicious. The software, which has no mechanism for weighing the benefits of preventing a terrorist attack against the inconvenience its actions will cause for tens of thousands of people in the airport, identifies a cluster of five passengers, all waiting for Flight 231 to London, as potential terrorists. This large concentration of “suspects” on a single flight causes the program to trigger a lockdown of the airport and the dispatch of a Homeland Security response team to the terminal. Because passengers are already upset and nervous, the situation at the gate for Flight 231 spins out of control, and shots are fired.

An alert sent from the Department of Homeland Security to the airlines that a terrorist attack may be under way leads many carriers to implement measures to land their fleets. In the confusion caused by large numbers of planes trying to land at Chicago’s O’Hare Airport, an executive jet collides with a Boeing 777, killing 157 passengers and crew. Seven more people die when debris lands on the Chicago suburb of Arlington Heights and starts a fire in a block of homes.

Meanwhile, robotic machine guns installed on the U.S.-Mexican border receive a signal that places them on red alert. They are programmed to act autonomously in code red conditions, enabling the detection and elimination of potentially hostile targets without direct human oversight. One of these robots fires on a Hummer returning from an off-road trip near Nogales, Arizona, destroying the vehicle and killing three U.S. citizens.

By the time power is restored to the East Coast and the markets reopen days later, hundreds of deaths and the loss of billions of dollars can be attributed to the separately programmed decisions of these multiple interacting systems. The effects continue to be felt for months.

Time may prove us poor prophets of disaster. Our intent in predicting such a catastrophe is not to be sensational or to instill fear. This is not a book about the horrors of technology. Our goal is to frame discussion in a way that constructively guides the engineering task of designing AMAs. The purpose of our prediction is to draw attention to the need for work on moral machines to begin now, not twenty to a hundred years from now when technology has caught up with science fiction.

The field of machine morality extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines do by themselves. (In this book we will use the terms ethics and morality interchangeably.) We are discussing the technological issues involved in making computers themselves into explicit moral reasoners. As artificial intelligence (AI) expands the scope of autonomous agents, the challenge of how to design these agents so that they honor the broader set of values and laws humans demand of human moral agents becomes increasingly urgent.

Does humanity really want computers making morally important decisions? Many philosophers of technology have warned about humans abdicating responsibility to machines. Movies and magazines are filled with futuristic fantasies about the dangers of advanced forms of artificial intelligence. Emerging technologies are always easier to modify before they become entrenched. However, it is not often possible to predict accurately the impact of a new technology on society until well after it has been widely adopted. Some critics think, therefore, that humans should err on the side of caution and relinquish the development of potentially dangerous technologies. We believe, however, that market and political forces will prevail and will demand the benefits that these technologies can provide. Thus, it is incumbent on anyone with a stake in this technology to address head-on the task of implementing moral decision making in computers, robots, and virtual “bots” within computer networks.

As noted, this book is not about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on human lives and welfare, not all of them good. But no, we do not believe that increasing reliance on autonomous systems will undermine people's basic humanity. Neither, in our view, will advanced robots enslave or exterminate humanity, as in the best traditions of science fiction. Humans have always adapted to their technological products, and the benefits to people of having autonomous machines around them will most likely outweigh the costs.
However, this optimism does not come for free. It is not possible to just sit back and hope that things will turn out for the best. If humanity is to avoid the consequences of bad autonomous artificial agents, people must be prepared to think hard about what it will take to make such agents good.

In proposing to build moral decision-making machines, are we still immersed in the realm of science fiction—or, perhaps worse, in that brand of science fantasy often associated with artificial intelligence? The charge might be justified if we were making bold predictions about the dawn of AMAs or claiming that “it’s just a matter of time” before walking, talking machines will replace the human beings to whom people now turn for moral guidance. We are not futurists, however, and we do not know whether the apparent technological barriers to artificial intelligence are real or illusory. Nor are we interested in speculating about what life will be like when your counselor is a robot, or even in predicting whether this will ever come to pass. Rather, we are interested in the incremental steps arising from present technologies that suggest a need for ethical decision-making capabilities. Perhaps small steps will eventually lead to full-blown artificial intelligence—hopefully a less murderous counterpart to HAL in 2001: A Space Odyssey—but even if fully intelligent systems will remain beyond reach, we think there is a real issue facing engineers that cannot be addressed by engineers alone.

Is it too early to be broaching this topic? We don’t think so. Industrial robots engaged in repetitive mechanical tasks have caused injury and even death. The demand for home and service robots is projected to create a worldwide market double that of industrial robots by 2010, and four times bigger by 2025. With the advent of home and service robots, robots are no longer confined to controlled industrial environments where only trained workers come into contact with them. Small robot pets, for example Sony’s AIBO, are the harbingers of larger robot appliances. Millions of robot vacuum cleaners, for example iRobot’s “Roomba,” have been purchased. Rudimentary robot couriers in hospitals and robot guides in museums have already appeared. Considerable attention is being directed at the development of service robots that will perform basic household tasks and assist the elderly and the homebound. Computer programs initiate millions of financial transactions with an efficiency that humans can’t duplicate. Software decisions to buy and then resell stocks, commodities, and currencies are made within seconds, exploiting potentials for profit that no human is capable of detecting in real time, and representing a significant percentage of the activity on world markets.

Automated financial systems, robotic pets, and robotic vacuum cleaners are still a long way short of the science fiction scenarios of fully autonomous machines making decisions that radically affect human welfare. Although 2001 has passed, Arthur C. Clarke’s HAL remains a fiction, and it is a safe bet that the doomsday scenario of The Terminator will not be realized before its sell-by date of 2029. It is perhaps not quite as safe to bet against the Matrix being realized by 2199. However, humans are already at a point where engineered systems make decisions that can affect humans' lives and that have ethical ramifications. In the worst cases, they have profound negative effects.

Is it possible to build AMAs? Fully conscious artificial systems with complete human moral capacities may perhaps remain forever in the realm of science fiction. Nevertheless, we believe that more limited systems will soon be built. Such systems will have some capacity to evaluate the ethical ramifications of their actions—for example, whether they have no option but to violate a property right to protect a privacy right.

The task of designing AMAs requires a serious look at ethical theory, which originates from a human-centered perspective. The values and concerns expressed in the world’s religious and philosophical traditions are not easily applied to machines. Rule-based ethical systems, for example the Ten Commandments or Asimov’s Three Laws of Robotics, might appear somewhat easier to embed in a computer, but as Asimov’s many robot stories show, even three simple rules (later four) can give rise to many ethical dilemmas. Aristotle’s ethics emphasized character over rules: good actions flowed from good character, and the aim of a flourishing human being was to develop a virtuous character. It is, of course, hard enough for humans to develop their own virtues, let alone to develop appropriate virtues for computers or robots. Facing the engineering challenge entailed in going from Aristotle to Asimov and beyond will require looking at the origins of human morality as viewed in the fields of evolution, learning and development, neuropsychology, and philosophy.

Machine morality is just as much about human decision making as about the philosophical and practical issues of implementing AMAs. Reflection about and experimentation in building AMAs forces one to think deeply about how humans function, which human abilities can be implemented in the machines humans design, and what characteristics truly distinguish humans from animals or from new forms of intelligence that humans create. Just as AI has stimulated new lines of enquiry in the philosophy of mind, machine morality has the potential to stimulate new lines of enquiry in ethics. Robotics and AI laboratories could become experimental centers for testing theories of moral decision making in artificial systems.

Three questions emerge naturally from the discussion so far. Does the world need AMAs? Do people want computers making moral decisions? And if people believe that computers making moral decisions are necessary or inevitable, how should engineers and philosophers proceed to design AMAs?

Chapters 1 and 2 are concerned with the first question, why humans need AMAs. In chapter 1, we discuss the inevitability of AMAs and give examples of current and innovative technologies that are converging on sophisticated systems that will require some capacity for moral decision making. We discuss how such capacities will initially be quite rudimentary but nonetheless present real challenges. Not the least of these challenges is to specify what the goals should be for the designers of such systems—that is, what do we mean by a “good” AMA?

In chapter 2, we will offer a framework for understanding the trajectories of increasingly sophisticated AMAs by emphasizing two dimensions, those of autonomy and of sensitivity to morally relevant facts. Systems at the low end of these dimensions have only what we call “operational morality”—that is, their moral significance is entirely in the hands of designers and users. As machines become more sophisticated, a kind of “functional morality” is technologically possible such that the machines themselves have the capacity for assessing and responding to moral challenges. However, the creators of functional morality in machines face many constraints due to the limits of present technology.

The nature of ethics places a different set of constraints on the acceptability of computers making ethical decisions. Thus we are led naturally to the question addressed in chapter 3: whether people want computers making moral decisions. Worries about AMAs are a specific case of more general concerns about the effects of technology on human culture. Therefore, we begin by reviewing the relevant portions of philosophy of technology to provide a context for the more specific concerns raised by AMAs. Some concerns, for example whether AMAs will lead humans to abdicate responsibility to machines, seem particularly pressing. Other concerns, for example the prospect of humans becoming literally enslaved to machines, seem to us highly speculative. The unsolved problem of technology risk assessment is how seriously to weigh catastrophic possibilities against the obvious advantages provided by new technologies.

How close could artificial agents come to being considered moral agents if they lack human qualities, for example consciousness and emotions? In chapter 4, we begin by discussing the issue of whether a “mere” machine can be a moral agent. We take the instrumental approach that while full-blown moral agency may be beyond the current or future technology, there is nevertheless much space between operational morality and “genuine” moral agency. This is the niche we identified as functional morality in chapter 2. The goal of chapter 4 is to address the suitability of current work in AI for specifying the features required to produce AMAs for various applications.

Having dealt with these general AI issues, we turn our attention to the specific implementation of moral decision making. Chapter 5 outlines what philosophers and engineers have to offer each other, and describes a basic framework for top-down and bottom-up or developmental approaches to the design of AMAs. Chapters 6 and 7, respectively, describe the top-down and bottom-up approaches in detail. In chapter 6, we discuss the computability and practicability of rule- and duty-based conceptions of ethics, as well as the possibility of computing the net effect of an action as required by consequentialist approaches to ethics. In chapter 7, we consider bottom-up approaches, which apply methods of learning, development, or evolution with the goal of having moral capacities emerge from general aspects of intelligence. There are limitations regarding the computability of both the top-down and bottom-up approaches, which we describe in these chapters. The new field of machine morality must consider these limitations, explore the strengths and weaknesses of the various approaches to programming AMAs, and then lay the groundwork for engineering AMAs in a philosophically and cognitively sophisticated way.
What emerges from our discussion in chapters 6 and 7 is that the original distinction between top-down and bottom-up approaches is too simplistic to cover all the challenges that the designers of AMAs will face. This is true at the level of both engineering design and, we think, ethical theory. Engineers will need to combine top-down and bottom-up methods to build workable systems. The difficulties of applying general moral theories in a top-down fashion also motivate a discussion of a very different conception of morality that can be traced to Aristotle, namely, virtue ethics. Virtues are a hybrid between top-down and bottom-up approaches, in that the virtues themselves can be explicitly described, but their acquisition as character traits seems essentially to be a bottom-up process. We discuss virtue ethics for AMAs in chapter 8.

Our goal in writing this book is not just to raise a lot of questions but to provide a resource for further development of these themes. In chapter 9, we survey the software tools that are being exploited for the development of computer moral decision making.

The top-down and bottom-up approaches emphasize the importance in ethics of the ability to reason. However, much of the recent empirical literature on moral psychology emphasizes faculties besides rationality. Emotions, sociability, semantic understanding, and consciousness are all important to human moral decision making, but it remains an open question whether these will be essential to AMAs, and if so, whether they can be implemented in machines. In chapter 10, we discuss recent, cutting-edge, scientific investigations aimed at providing computers and robots with such suprarational capacities, and in chapter 11 we present a specific framework in which the rational and the suprarational might be combined in a single machine.

In chapter 12, we come back to our second guiding question concerning the desirability of computers making moral decisions, but this time with a view to making recommendations about how to monitor and manage the dangers through public policy or mechanisms of social and business liability management.

Finally, in the epilogue, we briefly discuss how the project of designing AMAs feeds back into humans' understanding of themselves as moral agents, and of the nature of ethical theory itself. The limitations we see in current ethical theory concerning such theories' usefulness for guiding AMAs highlight deep questions about their purpose and value.

Some basic moral decisions may be quite easy to implement in computers, while skill at tackling more difficult moral dilemmas is well beyond present technology. Regardless of how quickly or how far humans progress in developing AMAs, in the process of addressing this challenge, humans will make significant strides in understanding what truly remarkable creatures they are. The exercise of thinking through the way moral decisions are made with the granularity necessary to begin implementing similar faculties into (ro)bots is thus an exercise in self-understanding. We cannot hope to do full justice to these issues, or indeed to all of the issues raised throughout the book. However, it is our sincere hope that by raising them in this form we will inspire others to pick up where we have left off, and take the next steps toward moving this project from theory to practice, from philosophy to engineering, and on to a deeper understanding of the field of ethics itself.