Sunday, November 30, 2008

The Telegraph follows up on military angle

The British newspaper The Daily Telegraph is running a story by reporter Tim Shipman under the headline "Pentagon hires British scientist to help build robot soldiers that 'won't commit war crimes'" which greatly exaggerates my role as an external consultant for a Navy-sponsored report. The authors of the report ("Autonomous Military Robotics: Risk, Ethics, and Design", award no. N00014-07-1-1152) are in fact Patrick Lin (who is also co-editor of a 2007 volume on nanotechnology ethics), George Bekey, and Keith Abney.

Despite misrepresenting my role on the project, and a few minor factual quibbles on my part, Shipman's story successfully captures why we need to be thinking about these ethical issues now.

Wednesday, November 26, 2008

Service Robots for the Home Being Developed in the US


There has been a great deal of discussion of Japanese and European robotic research directed at caring for the homebound. However, researchers in the U.S. are also tackling this challenge. At the University of Massachusetts, MIT, and Georgia Tech, roboticists are building service robots. E.J. Mundell discusses these initiatives in a November 18th BusinessWeek article titled "Robots May Come to Aging Boomers' Rescue."

The uBOT-5, being developed by a team in the Laboratory for Perceptual Robotics at UMass, monitors the home environment and performs a few simple tasks. With a video screen mounted on Segway-like wheels, the robot can move around the house and allows distant relatives or doctors to make virtual visits with the homebound. At MIT, a team led by Nicholas Roy is building an "autonomous wheelchair" that requires only a voice command to travel to another place in a home or hospital. Service dogs are the prototypes for the home-care robots being created by Charlie Kemp at Georgia Tech. Opening drawers and working light switches are among the tasks performed by these service pets.

Given a predicted shortage of 800,000 nurses and home health-care aides by 2020, demand among aging boomers for robotic caregivers is expected to be high.

Monday, November 24, 2008

NYT Article Discusses Ethics for Battlefield Robots

Ronald Arkin's contention that intelligent battlefield robots can behave more ethically than human soldiers is discussed in a New York Times article titled "A Soldier, Taking Orders From Its Ethical Judgment Center." In addition to describing Arkin's research at Georgia Tech, the article quotes Colin Allen, Daniel Dennett, and Noel Sharkey, and mentions the publication of "Moral Machines."

Rude Robots

"You can endow a robot with a personality . . . but it should not be rude," says Maja Matarić , the founding director of the center for Robotics and Embedded Systems at USC. Matarić is particularly concerned with insolence from service robots that play a role in assisted living, as health coaches, and in care for the elderly. Adam Shah reports on Matarić's interest in developing social robots sensitive to appropriate behavior in a PCWorld news article, "Rude Robots, Stay Away From Homes."

Sunday, November 23, 2008

For the record...

It is exciting to see a book on Machine Ethics, a topic that is certainly important. While we appreciate the fact that the authors have acknowledged our research in this field, we were dismayed that our work was misrepresented. In dismissing Susan's view on p. 97, the authors commit the fallacy of equivocation: the word "principles" changes meaning from one sentence to the next in a refutation of her position that takes just two sentences. Of course Beauchamp and Childress's "Principles" of Biomedical Ethics can "lead to conflicting recommendations for action." That is why they are put forth as prima facie duties/principles, rather than absolute duties/principles. What is needed is some decision principle (or principles) that determines which duty becomes strongest in such cases of conflict. Susan maintains that, at this time, there is general agreement among bioethicists as to the right action in many particular ethical dilemmas (e.g., that the principle/duty of Respect for Autonomy should dominate in a case where a competent adult rejects a treatment that could have benefited him, whereas the principles/duties of Non-Maleficence and Beneficence become stronger than Respect for Autonomy when dealing with an incompetent patient). If there is not agreement on at least some cases, then there is no ethics to be programmed into, or learned by, a machine. The Machine Ethics project could not even get started.

Susan believes, furthermore, that ethicists are reaching agreement on more and more particular ethical dilemmas as the issues become more fully understood and appreciated. From these cases, some decision principles can be gleaned. In other cases, where disagreement remains (at least for the time being), the prima facie duty approach can reveal the nature of the disagreement. (One party would put more weight on one duty, while another puts more weight on a different duty.) We have consistently maintained that it is unwise to allow a machine to interact with humans in areas where there is no agreement as to which action is ethically correct.

In the extended discussion of MedEthEx, the details, unfortunately, were either incorrect or stated in a confused way. To give just two examples: the authors are incorrect in stating that MedEthEx adopts Ross's prima facie duties. Instead, we used Beauchamp and Childress's Principles of Biomedical Ethics, one of which (Respect for Autonomy) was never one of Ross's duties. On p. 127, it is stated that "MedEthEx uses an inductive logic system based on the Prolog programming language...." Instead, the machine learning technique of Inductive Logic Programming is used, which is not tied to any particular programming language.
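To make the general approach concrete, here is a minimal, hypothetical sketch in Python. It is emphatically not the actual MedEthEx code: in MedEthEx the decision principle is learned via Inductive Logic Programming, whereas here a hand-written rule stands in for it, and the duty names, the numeric scale, and the example case are illustrative assumptions only.

```python
# Hypothetical sketch only; this is not the actual MedEthEx implementation.
# Actions are scored on prima facie duties (three of Beauchamp and
# Childress's principles), and a separate decision rule adjudicates
# conflicts. In MedEthEx that rule is learned via Inductive Logic
# Programming; here it is a hand-written stand-in.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Estimated effect on each duty, from -2 (serious violation)
    # to +2 (strong satisfaction). The numeric scale is illustrative.
    scores: dict

def prefer(a: Action, b: Action) -> Action:
    """Toy stand-in for a learned decision principle: prefer the action
    that avoids a markedly greater harm; otherwise defer to the
    patient's autonomy."""
    harm_gap = a.scores["nonmaleficence"] - b.scores["nonmaleficence"]
    if harm_gap <= -2:
        return b
    if harm_gap >= 2:
        return a
    return a if a.scores["autonomy"] >= b.scores["autonomy"] else b

# A competent adult refuses a treatment that would have benefited her:
# accepting the refusal respects autonomy, while pressing her again scores
# higher on beneficence but violates autonomy.
accept_refusal = Action("accept refusal",
                        {"nonmaleficence": 0, "beneficence": -1, "autonomy": 2})
press_again = Action("try again to persuade",
                     {"nonmaleficence": 0, "beneficence": 1, "autonomy": -1})

print(prefer(accept_refusal, press_again).name)  # accept refusal
```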

Finally, the authors seem not to fully understand our current approach, which is where our previous work has been heading all along. That approach is far from "top-down," in that we assume no particular features of ethical dilemmas and no particular ethical duties.

Friday, November 21, 2008

Survey Results regarding the use of Lethal and Autonomous Systems

Lilia Moshkina and Ronald Arkin, from the Mobile Robot Laboratory at Georgia Tech, have begun to report the results of an online survey regarding the military use of lethal and autonomous robotic systems. They collected 430 full responses to their online public opinion survey before it was closed on October 27th of 2007. Of the participants, "234 self-identified themselves as having had robotics research experience, 69 as having had policymaking experience, 127 as having had military experience, and 116 as having had neither (therefore categorized as general public)." Their 2008 paper, "Lethality and Autonomous Systems: The Roboticist Demographic," reports on the responses of the participants with robotics research experience. Generally, these participants felt that the benefits of using robots in warfare outweigh the risks; however, they also felt that the more control shifts from humans to the robots, the less acceptable such a system becomes. "67% of the roboticists believe that it would be easier or much easier to start wars if the robots were introduced into warfare, perhaps due to the fact that human soldier life loss would be reduced."

Will Human Level AI Require Compassionate Intelligence?

Cindy Mason, a research associate at Stanford, has written a paper titled "Human Level AI Requires Compassionate Intelligence" for the 2008 AAAI workshop on Meta-Cognition. Cindy has been working on emotions and AI since 1998. In the paper she describes a core meta-architecture for an agent "to resolve the turf war between thoughts and feelings based on agent personality rather than logic." This research takes its inspiration from the 18th-century philosopher David Hume, who proposed that emotions are antecedent to reason, and also from the Buddhist mind-training practice known as "insight meditation" or Vipassana.
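As a purely illustrative sketch (not Mason's actual architecture, whose details are in the paper), one can imagine a meta-level that arbitrates between an affective appraisal and a deliberative appraisal using a personality parameter rather than logic alone. Everything below, including the warmth trait and the numeric strengths, is an assumption made for the sake of the example.

```python
# Illustrative sketch only; this is not Cindy Mason's meta-architecture.
# A meta-level arbitrates between an affective appraisal ("feeling") and a
# deliberative appraisal ("thought") using a personality trait, here called
# warmth, rather than logic alone. All names and numbers are assumptions.

from dataclasses import dataclass

@dataclass
class Appraisal:
    action: str
    strength: float  # 0.0 (weak) to 1.0 (strong)

def arbitrate(feeling: Appraisal, thought: Appraisal, warmth: float) -> str:
    """Personality-weighted arbitration: a high-warmth agent lets its
    affective appraisal win even against a stronger deliberative one."""
    if warmth * feeling.strength >= (1.0 - warmth) * thought.strength:
        return feeling.action
    return thought.action

# A compassionate (high-warmth) agent comforts a distressed user even though
# the purely deliberative plan says to continue the current task.
print(arbitrate(Appraisal("comfort the user", 0.7),
                Appraisal("continue the task", 0.9),
                warmth=0.8))  # comfort the user
```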

Tuesday, November 18, 2008

NewScientist posting of "Six ways to build robots that do humans no harm"

Tom Simonite at New Scientist Tech has incorporated material from an article written by Wendell Wallach and Colin Allen and posted it on the magazine's website under the title "Six ways to build robots that do humans no harm".

The New Scientist article has been misread by some commentators, who believe both that we propose that moral machines can be built with these simple strategies and that the critiques of the strategies were written by Simonite. In fact, the strategies and the evaluations of the strategies were written by us. The original article from which this material was drawn can be found below in my October 13th posting.

Saturday, November 15, 2008

Dilbert Discovers the Singularity

[Dilbert comic strips embedded from Dilbert.com]

Moral Voting Machines

Now that the election is over, we can put to rest worries that the election might be stolen by Diebold. However, there's still plenty of evidence that electronic voting machines could be improved. For instance, voting mistakes still happen, and some citizens find the technology confusing. Could a voting machine be programmed to detect abnormal voting patterns that would signal possible confusion? This might be done by detecting anomalies in the physical interaction between voter and machine, for instance, excessive changing of selections. Or perhaps it could be accomplished by analyzing the actual selections in an attempt to detect their overall conceptual coherence. Of course, we still have to trust that black-box voting is safe, but perhaps a dialogue with a machine in the voting booth could give voters confidence that their votes are being processed correctly.
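As a purely illustrative sketch (not a proposal for any real voting system), here is one way the first idea might look in code: count how often a voter changes an already-made selection during a session and, past an assumed threshold, trigger a gentle review dialogue. The event format, the threshold, and the prompt are all assumptions.

```python
# Purely illustrative sketch, not a feature of any real voting machine.
# Flag a ballot session for a confirmation dialogue when the interaction
# pattern (an unusually high number of changed selections) suggests confusion.

from collections import Counter

def count_changes(events):
    """Count how many times the voter changed an already-made selection.
    `events` is a list of (contest, candidate) pairs in the order selected."""
    seen = Counter()
    changes = 0
    for contest, _candidate in events:
        if seen[contest] > 0:
            changes += 1
        seen[contest] += 1
    return changes

def needs_confirmation(events, max_changes=3):
    """Heuristic threshold (an assumption here): too many changed selections
    triggers a gentle review prompt rather than blocking the vote."""
    return count_changes(events) > max_changes

# Example session: the voter changes the governor selection three times
# and the senate selection once.
session = [("governor", "A"), ("governor", "B"), ("senate", "C"),
           ("governor", "A"), ("senate", "D"), ("governor", "B")]
if needs_confirmation(session):
    print("Would you like to review your selections before casting your ballot?")
```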

Would you do whatever a robot told you to do?

Students in the robotics laboratory of Brian Scassellati at Yale performed an experiment that studied how a robot's physical presence, as opposed to its virtual presence on a screen, affects humans' unconscious perceptions of the robot as a social partner. While subjects were responsive to instructions from the robot Nico in a simple book-moving task both when the robot was physically present and when Nico was displayed on a screen, subjects were more likely to throw books into a garbage can at the robot's instruction when Nico was physically present. That subjects would follow this instruction from a robot at all is a rather disturbing finding, and one that suggests the need for more research. A link to the research paper will be added to this post as soon as it becomes available. The paper is titled "The effect of presence on human-robot interactions." The authors are Wilma A. Bainbridge, Justin Hart, Elizabeth S. Kim, and Brian Scassellati.

Friday, November 7, 2008

Machine Ethics Panel at the AAAI 2008 Fall Symposium on AI in Eldercare

Researchers developing intelligent systems for use in caring for the increasingly aging population met this weekend in Arlington, VA to discuss new trends in passive sensing with vision and machine learning, environments for eldercare technology research, robotics for assistive and therapeutic use, and human-robot interaction. A panel on machine ethics discussed the ethical ramifications of these and other such technologies and called for the incorporation of an ethical dimension into them. A video was shown that demonstrated the need for such a dimension in even the most seemingly innocuous systems. The system in question is a simple mobile robot with a very limited repertoire of behaviors, which amount to setting and giving reminders. A number of ethically questionable practices were uncovered.

One involved the system asking its charge whether she had taken her medication and then asking her to show her empty pill box. This is followed by a lecture from the system about how important it is for her to take her medication. There is little back story in the video, but, assuming a competent adult, such paternalistic behavior from the system seems uncalled for and shows little respect for the patient's autonomy.

During this exchange, the patient's responsible relative is seen watching it over the internet. Again, it is not clear whether this surveillance has been agreed to by the person being watched, and in fact there is no hint in the video that she knows she is being watched, but there is the distinct impression that her privacy is being violated.

As another example, the system promises to remind its charge when her favorite show and "the game" are on. Promise making and keeping clearly have ethical ramifications, and it is not clear that the system under consideration has the sophistication to make ethically correct decisions when the duty to keep promises comes into conflict with other, possibly more important, duties.

Finally, when the system does indeed remind its charge that her favorite television show is starting, it turns out that she has company, and she tells the robot to go away. The system responds with "You don't love me anymore," to the delight of the guests, and slinks away. This is problematic behavior in that it sets up an expectation in the user that the system is incapable of fulfilling: that it is capable of a loving relationship with its charge. This is a highly charged ethical ramification, particularly given the vulnerable population for which this technology is being developed.

The bottom line is that, contrary to those who argue that concern about the ethical behavior of autonomous systems is premature, the example transgressions of even the simplest such system show that such concern is, in fact, overdue.