It is exciting to see a book on Machine Ethics, a topic that is certainly important. While we appreciate the fact that the authors have acknowledged our research in this field, we were dismayed that our work was misrepresented. The authors commit the fallacy of equivocation in dismissing Susan's view on p. 97: the word "principles" changes meaning from one sentence to the next in their two-sentence refutation of her position. Of course Beauchamp and Childress's "Principles" of Biomedical Ethics can "lead to conflicting recommendations for action." This is why they are put forth as prima facie duties/principles, rather than absolute duties/principles. What is needed is some decision principle(s) to determine which duty becomes strongest in each such case of conflict. Susan maintains that, at this time, there is general agreement among bioethicists as to the right action in many particular ethical dilemmas (e.g. that the principle/duty of Respect for Autonomy should dominate in a case where a competent adult rejects a treatment that could have benefitted him, whereas the principles/duties of Non-Maleficence and Beneficence become stronger than Respect for Autonomy when dealing with an incompetent patient). If there is not agreement on at least some cases, then there is no ethics to be programmed into, or learned by, a machine. The Machine Ethics project could not even get started.
Susan believes, furthermore, that ethicists are reaching agreement on more and more cases of particular ethical dilemmas as the issues are more fully understood and appreciated. From these cases, some decision principles can be gleaned. In other cases, where a disagreement remains (at least for the time being), the prima facie duty approach can reveal the nature of the disagreement. (One party would put more weight on one duty, while another would put more weight on a different duty.) We have consistently maintained that it is unwise to allow a machine to interact with humans in areas where there is no agreement as to which action is ethically correct.
In the extended discussion of MedEthEx, the details, unfortunately, were either incorrect or stated in a confused way. To give just two examples: The authors are incorrect in stating that MedEthEx adopts Ross's prima facie duties. Instead, we used Beauchamp and Childress's Principles of Biomedical Ethics, one of which (Respect for Autonomy) was never among Ross's duties. On p. 127, it is stated that "MedEthEx uses an inductive logic system based on the Prolog programming language...." Instead, the machine learning technique of Inductive Logic Programming is used, which is not tied to any particular programming language.
Finally, the authors do not seem to fully understand our current approach, which is where our previous work has been heading all along; it is far from "top-down," in that we assume no particular features of ethical dilemmas, nor any particular ethical duties.