Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Sunday, November 30, 2008
The Telegraph follows up on military angle
Despite misrepresenting my role on the project, and despite a few minor factual quibbles on my part, Shipman's story successfully captures why we need to be thinking about these ethical issues now.
Wednesday, November 26, 2008
Service Robots for the Home Being Developed in the US
There has been a great deal of discussion regarding Japanese and European robotic research directed at caring for the homebound. However, researchers in the U.S. are also tackling this challenge. At the University of Massachusetts, MIT, and Georgia Tech, roboticists are building service robots. E.J. Mundell discusses these initiatives in a November 18th BusinessWeek article titled "Robots may come to Aging Boomers' Rescue."
The uBOT-5, being developed by a team in the Laboratory for Perceptual Robotics at UMass, monitors the home environment and performs a few simple tasks. With a video screen mounted on Segway-like wheels, the robot can move around the house and allow distant relatives or doctors to pay virtual visits to the homebound. At MIT, a team led by Nicholas Roy is building an "autonomous wheelchair" that requires only a voice command to travel to another place in a home or hospital. Service dogs are the prototypes for the home-care robots being created by Charlie Kemp at Georgia Tech. Opening drawers and working light switches are among the tasks performed by these service pets.
Given a predicted shortage of 800,000 nurses and home health-care aides by 2020, demand from aging boomers for robotic caregivers is expected to be high.
Monday, November 24, 2008
NYT Article Discusses Ethics for Battlefield Robots
Rude Robots
Sunday, November 23, 2008
For the record...
Susan believes, furthermore, that ethicists are reaching agreement on more and more cases of particular ethical dilemmas as the issues are more fully understood and appreciated. From these cases, some decision principles can be gleaned. In other cases, where disagreement remains (at least for the time being), the prima facie duty approach can reveal the nature of the disagreement: one party puts more weight on one duty, while another puts more weight on a different duty. We have consistently maintained that it is unwise to allow a machine to interact with humans in areas where there is no agreement as to which action is ethically correct.
In the extended discussion of MedEthEx, the details, unfortunately, were either incorrect or stated in a confused way. To give just two examples: the authors are incorrect in stating that MedEthEx adopts Ross's prima facie duties. Instead, we used the principles from Beauchamp and Childress's Principles of Biomedical Ethics, one of which (Respect for Autonomy) was never among Ross's duties. On p. 127, it is stated that "MedEthEx uses an inductive logic system based on the Prolog programming language...." In fact, the machine learning technique used is Inductive Logic Programming, which is not tied to any particular programming language.
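For readers unfamiliar with what it means to glean a decision principle from cases, here is a deliberately simplified sketch. It is not our MedEthEx code, which uses Inductive Logic Programming over a richer case representation; in this stand-in, a brute-force search for duty weights consistent with a handful of invented cases plays that role. The duty names echo Beauchamp and Childress, but the cases, the satisfaction scale, and the weights are all hypothetical.

```python
# Toy illustration only: induce a simple principle for choosing between two
# actions from cases labeled with the ethically preferred action. This is a
# weighted-sum search, NOT Inductive Logic Programming, and not MedEthEx.

from itertools import product

DUTIES = ["autonomy", "nonmaleficence", "beneficence"]

# Each training case records how much better (or worse) action A satisfies
# each duty than action B, on a small integer scale, plus the correct choice.
# These cases are invented for illustration.
CASES = [
    ({"autonomy": +2, "nonmaleficence": -1, "beneficence": -1}, "A"),
    ({"autonomy": -1, "nonmaleficence": +2, "beneficence": 0}, "B"),
    ({"autonomy": -2, "nonmaleficence": +1, "beneficence": +1}, "B"),
    ({"autonomy": +1, "nonmaleficence": 0, "beneficence": -1}, "A"),
]

def decide(weights, deltas):
    """Prefer action A when the weighted sum of duty differences is positive."""
    score = sum(weights[d] * deltas[d] for d in DUTIES)
    return "A" if score > 0 else "B"

def fit(cases, max_weight=3):
    """Exhaustively search small integer weights consistent with every case."""
    for combo in product(range(1, max_weight + 1), repeat=len(DUTIES)):
        weights = dict(zip(DUTIES, combo))
        if all(decide(weights, deltas) == label for deltas, label in cases):
            return weights
    return None

if __name__ == "__main__":
    print("Learned duty weights:", fit(CASES))
```

Run as a script, the search returns one assignment of weights consistent with every training case. A principle learned this way should, of course, only be trusted within the range of cases on which ethicists actually agree, which is precisely the point made above.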
Finally, the authors seem not to fully understand our current approach, which is where our previous work has been heading all along: an approach that is far from "top-down" in that we assume no particular features of ethical dilemmas and no particular ethical duties.
Friday, November 21, 2008
Survey Results regarding the use of Lethal and Autonomous Systems
Will Human Level AI Require Compassionate Intelligence?
Tuesday, November 18, 2008
New Scientist posting of "Six ways to build robots that do humans no harm"
The New Scientist article has been misread by some commentators, who believe that we propose that moral machines can be built with these simple strategies and that the critiques of the strategies were written by Simonite. In fact, both the strategies and the evaluations of them were written by us. The original article from which this material was drawn can be found below, in my October 13th posting.
Saturday, November 15, 2008
Moral Voting Machines
Would you do whatever a robot told you to do?
Friday, November 7, 2008
Machine Ethics Panel at the AAAI 2008 Fall Symposium on AI in Eldercare
Researchers developing intelligent systems for use in caring for an aging population met this weekend in Arlington, VA to discuss new trends in passive sensing with vision and machine learning, environments for eldercare technology research, robotics for assistive and therapeutic use, and human-robot interaction. A panel on machine ethics discussed various ethical ramifications of these and other such technologies and stressed the need to incorporate an ethical dimension into them. A video was shown that demonstrated the need for such a dimension in even the most seemingly innocuous systems. The system in question is a simple mobile robot with a very limited repertoire of behaviors, which amount to setting and giving reminders. Even so, a number of ethically questionable practices were on display.
In one instance, after asking whether she has taken her medication, the system asks its charge to show it her empty pill box. This is followed by a lecture from the system on how important it is for her to take her medication. There is little backstory in the video but, assuming a competent adult, such paternalistic behavior from the system seems uncalled for and shows little respect for the patient's autonomy.
During this exchange, the relative responsible for the patient is seen watching it over the internet. Again, it is not clear whether the person being watched has agreed to this surveillance; indeed, there is no hint in the video that she even knows she is being watched, leaving the distinct impression that her privacy is being violated.
As another example, the system promises to remind its charge when her favorite show and "the game" are on. Promise making and keeping clearly have ethical ramifications, and it is not clear that the system under consideration has the sophistication to make ethically correct decisions when the duty to keep promises comes into conflict with other, possibly more important, duties.
Finally, when the system does indeed remind its charge that her favorite television show is starting, it turns out that she has company, and she tells the robot to go away. The system responds with "You don't love me anymore," to the delight of the guests, and slinks away. This is problematic behavior in that it sets up an expectation in the user that the system is incapable of fulfilling: that it is capable of a loving relationship with its charge. This is a highly charged ethical ramification, particularly given the vulnerable population for which this technology is being developed.
The bottom line is that, contrary to those who argue that concern about the ethical behavior of autonomous systems is premature, the example transgressions of even the simplest such systems show that such concern is in fact overdue.