It is exciting to see a book on Machine Ethics, a topic that is certainly important. While we appreciate the fact that the authors have acknowledged our research in this field, we were dismayed that our work was misrepresented. The authors committed the fallacy of equivocation in dismissing Susan's view on p. 97: in the two sentences that refute her position, the word "principles" changes meaning from one sentence to the next. Of course Beauchamp and Childress's "Principles" of Biomedical Ethics can "lead to conflicting recommendations for action." This is why they are put forth as prima facie duties/principles, rather than absolute duties/principles. What is needed is a decision principle (or principles) that determines which duty becomes strongest in each such case of conflict. Susan maintains that, at this time, there is general agreement among bioethicists as to the right action in many particular ethical dilemmas (e.g. that the principle/duty of Respect for Autonomy should dominate in a case where a competent adult rejects a treatment that could have benefited him, whereas the principles/duties of Non-Maleficence and Beneficence become stronger than Respect for Autonomy when dealing with an incompetent patient). If there is not agreement on at least some cases, then there is no ethics to be programmed into, or learned by, a machine. The Machine Ethics project could not even get started.
Susan believes, furthermore, that ethicists are reaching agreement on more and more cases of particular ethical dilemmas as the issues become more fully understood and appreciated. From these cases, some decision principles can be gleaned. In other cases, where disagreement remains (at least for the time being), the prima facie duty approach can reveal the nature of the disagreement: one party puts more weight on one duty, while the other puts more weight on a different one. We have consistently maintained that it is unwise to allow a machine to interact with humans in areas where there is no agreement as to which action is ethically correct.
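As a rough illustration (ours, and not a description of any implemented system), prima facie duties and a decision principle for resolving conflicts between them might be sketched in Python as follows; the numeric scale and the particular rule are hypothetical.

from dataclasses import dataclass

# Each duty is scored on a small integer scale:
# negative = the action violates the duty, positive = it satisfies it.
@dataclass
class DutyProfile:
    autonomy: int        # Respect for Autonomy
    nonmaleficence: int  # Non-Maleficence
    beneficence: int     # Beneficence

def decision_principle(profile: DutyProfile, patient_competent: bool) -> bool:
    """Hypothetical decision principle: never accept an action that violates
    Respect for Autonomy when the patient is competent; otherwise let
    Non-Maleficence and Beneficence outweigh it."""
    if patient_competent:
        return profile.autonomy >= 0
    return (profile.nonmaleficence + profile.beneficence) > 0

# The example above: overriding a competent adult's refusal of a beneficial
# treatment violates autonomy, so the principle rejects it despite the benefit.
override_refusal = DutyProfile(autonomy=-2, nonmaleficence=0, beneficence=1)
print(decision_principle(override_refusal, patient_competent=True))  # False

The point of the sketch is only that prima facie duties plus a decision principle yield definite verdicts in the cases where ethicists agree, which is what a machine needs.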
In the extended discussion of MedEthEx, the details, unfortunately, were either incorrect or stated in a confused way. To give just two examples: the authors are incorrect in stating that MedEthEx adopts Ross's prima facie duties. Instead, we used Beauchamp and Childress's Principles of Biomedical Ethics, one of which (Respect for Autonomy) was never among Ross's duties. On p. 127, it is stated that "MedEthEx uses an inductive logic system based on the Prolog programming language...." In fact, the machine learning technique of Inductive Logic Programming is used, which is not tied to any particular programming language.
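To convey the flavor of inducing a decision principle from agreed-upon cases, here is a toy rule-induction sketch in Python (a much-simplified stand-in for Inductive Logic Programming, and not MedEthEx itself); the cases, features, and thresholds are invented for illustration.

from itertools import product

# Each training case records how much challenging the patient's decision would
# violate Respect for Autonomy and satisfy Non-Maleficence and Beneficence,
# together with the ethicists' agreed verdict on whether to challenge.
cases = [
    # (autonomy, nonmaleficence, beneficence, should_challenge)
    (-1,  2,  2, True),    # incompetent patient, serious harm avoided
    (-1,  0,  1, False),   # competent refusal, only modest benefit forgone
    (-2,  1,  0, False),
    (-1,  2,  1, True),
]

def make_rule(t_nonmal, t_benef):
    # Candidate rule: challenge iff the gains in Non-Maleficence and
    # Beneficence reach the given thresholds.
    return lambda a, n, b: n >= t_nonmal and b >= t_benef

# Enumerate a small hypothesis space and keep every rule consistent with all cases.
consistent = [
    (t_n, t_b)
    for t_n, t_b in product(range(3), repeat=2)
    if all(make_rule(t_n, t_b)(a, n, b) == verdict for a, n, b, verdict in cases)
]

print(consistent)  # [(1, 1), (2, 0), (2, 1)]: challenge only when harm-avoidance is strong

An actual ILP system searches a space of first-order clauses over background predicates rather than numeric thresholds, but the consistency check against agreed cases is the same in spirit.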
Finally, the authors seem not to fully understand our current approach, which is where our previous work has been heading all along, an approach that is far from "top-down" in that we assume no particular features of ethical dilemmas or particular ethical duties.
1 comment:
In chapter 9 of Moral Machines, we take on the task of succinctly introducing and commenting upon a number of research projects that should be considered first steps in the larger project of building AMAs. This task differs from what researchers might desire, namely that their work be presented more fully. However, we don't believe that anything mentioned in the comment above rises to the level of a "misrepresentation" of the Andersons' work. There is, nevertheless, always the question as to when a less-than-complete representation of someone's work implicitly misrepresents their ideas. Moral Machines maps a broad interdisciplinary field, and therefore it was not possible for us to do justice to all the research introduced or the philosophical controversies touched upon.
Susan Anderson appears to be more optimistic than Colin Allen and me regarding how far we will get in building artificial moral agents through approaches directed solely at implementing a theory of ethics within a computer. However, none of us knows how successful this venture will be. There is much at stake in these experiments, including questions regarding the role of theory in revealing the oughts of ethics. Furthermore, if there are limits on the ability of artificial agents to make appropriate judgments, the public and policy makers will want to know those limits as they decide where and when it is practical to deploy autonomous agents.
On not fully understanding the Andersons' current approach, I have to plead guilty, in that I have never seen anything written about this approach; it was only discussed in a most cursory manner over lunch one day. However, it sounds very exciting, and that was why we noted this new trajectory in their research in a chapter that was largely about experiments that have already been completed.