From Philosophy Now, Jan/Feb 2009:
http://www.philosophynow.org/issue71/71beavers.htm
Can a machine be a genuine cause of harm? The obvious answer is ‘yes’. The toaster that flames up and burns down a house is said to be the cause of the fire, and in some weak sense we might even say that the toaster was responsible for it. But the toaster is broken or defective, not immoral or irresponsible – although possibly the engineer who designed it is. But what about machines that decide things before they act, that determine their own course of action? Currently somewhere between digital thermostats and the murderous HAL 9000 computer in 2001: A Space Odyssey, autonomous machines are quickly gaining in complexity, and most certainly a day is coming when we will want to blame them for deliberately causing harm, even if philosophical issues concerning their moral status have not been fully settled. When will that day be?
Without lapsing into futurology or science fiction, Wallach and Allen predict that within the next few years, “there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight” (p.4). In this light, philosophers and engineers should not wait for the threat of robot domination before determining how to make machines moral.
The practical concerns that motivate such an inquiry (and indeed this book) are already here. Moral Machines is an introduction to this newly emerging area of machine ethics. It is written primarily to stimulate further inquiry by both ethicists and engineers. As such, it does not get bogged down in dense philosophical or technical prose. In other words, it is comprehensible to the general reader, who will walk away informed about why machine morality is already necessary, where we are with various attempts to implement it, and the authors’ recommendations about where we need to be.
Chapter One notes the inevitable arrival of autonomous machines, and the possible harm that can come from them. Automated agents already integrated into modern life regulate the power grid in the United States, monitor financial transactions, make medical diagnoses, and fight on the battlefield. A failure of any of these systems to behave within moral parameters could have devastating consequences. As they become increasingly autonomous, Wallach and Allen argue, it becomes increasingly necessary that they employ ethical subroutines to evaluate their possible actions before carrying them out.
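To make this pre-action vetting concrete, here is a minimal sketch in Python. It is my illustration, not anything drawn from the book: the `Action` type, the harm estimate, and the threshold are hypothetical stand-ins for what Wallach and Allen call an ethical subroutine.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float  # estimated harm on a 0-to-1 scale (hypothetical metric)

HARM_THRESHOLD = 0.1  # assumed policy: veto any action above this estimate

def ethical_gate(action: Action) -> bool:
    """Evaluate a proposed action *before* it is carried out."""
    return action.expected_harm <= HARM_THRESHOLD

def act(action: Action) -> None:
    if ethical_gate(action):
        print(f"executing: {action.name}")
    else:
        print(f"blocked:   {action.name} (estimated harm {action.expected_harm})")

act(Action("reroute_power", expected_harm=0.02))      # passes the gate
act(Action("shed_hospital_load", expected_harm=0.9))  # vetoed
```

The point of the sketch is only the architecture: the moral evaluation happens before, not after, the action is executed.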
Chapter Two notes that machine morality should unfold in the interplay between ethical sensitivity and increasingly complex autonomy. Several models of automated moral agency are presented. Borrowing from computer ethicist Jim Moor, the authors indicate that machines can be ‘implicitly ethical’ in that their behavior conforms to moral standards. This is quite different from actually making decisions by employing explicit moral procedures – which is different again from what full moral agents, such as human beings, do. After a brief digression in Chapter Three to address whether we really want machines making moral decisions, the issue of agency reappears in Chapter Four. Here the ingredients of full moral agency – consciousness, understanding and free will – are addressed. Though machines do not currently have these faculties, and are not likely to have them soon, Wallach and Allen note that the “functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs [Automated Moral Agents]” (p.68), and that “human understanding and human consciousness emerged through biological evolution as solutions to specific challenges. They are not necessarily the only methods for meeting those challenges” (p.69). In other words, there may be more than one way to be a moral agent. Chapter Four ends with the provocative suggestion of a ‘Moral Turing Test’ to evaluate the success of an AMA, and the interesting idea that machines might actually exceed the moral capabilities of humans.
Chapter Five addresses the important matter of making ethical theory and engineering practices congruent, setting the stage for a conversation on top-down and bottom-up approaches which occupies Chapters Six through Eight. A top-down approach is one that “takes a specified ethical theory and analyzes its computational requirements to guide the design of algorithms and subsystems capable of implementing that theory” (pp.79-80). Rule-based systems fit this description – including Isaac Asimov’s ‘Three Laws of Robotics’ and principle-driven ethical theories such as utilitarianism and Kantian ethics. This is contrasted in Chapter Seven with bottom-up approaches, which are developmental. One might think here in terms of the moral development of a child, or from the computer science perspective, of ‘genetic’ algorithms which mimic natural selection to find a workable solution to a problem. Bottom-up approaches could take the form of assembling program modules which allow a robot to learn from experience.
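To give a flavor of the bottom-up idea, here is a mutation-only evolutionary sketch in Python. It is my illustration rather than anything from the book: the target profile and fitness function are hypothetical stand-ins for whatever behavioral feedback a learning machine would actually receive.

```python
import random

# Toy bottom-up search: evolve a vector of behavioral "weights" toward a
# target profile, mimicking natural selection.
TARGET = [0.9, 0.1, 0.5, 0.7]

def fitness(weights):
    # Higher is better: negative squared distance from the target behavior.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rate=0.1):
    # Random variation, clipped to the [0, 1] range.
    return [min(1.0, max(0.0, w + random.uniform(-rate, rate))) for w in weights]

def evolve(pop_size=50, generations=200):
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]                           # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]  # variation
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges toward TARGET over the generations
```

Nothing here is told what the ‘right’ weights are; workable behavior emerges from selection pressure alone, which is the developmental contrast with a top-down rule set.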
Various limitations make both of these approaches unlikely to succeed, and so the authors recommend a hybrid approach that uses both to meet in the middle. A good paradigm for this compromise is virtue ethics.
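A toy Python sketch of what such a hybrid might look like – again my illustration, not the authors’ design: a top-down rule vetoes forbidden actions outright, while a bottom-up learned score ranks whatever the rules still permit.

```python
FORBIDDEN = {"harm_human"}  # hypothetical hard rule (top-down)

def learned_score(action: str) -> float:
    # Stand-in for preferences acquired from experience (bottom-up).
    return {"assist": 0.9, "wait": 0.5, "harm_human": 0.99}.get(action, 0.0)

def choose(actions):
    allowed = [a for a in actions if a not in FORBIDDEN]  # rule filter first
    return max(allowed, key=learned_score) if allowed else None

print(choose(["wait", "assist", "harm_human"]))  # -> assist
```

Note that the learned score would actually prefer the forbidden action; the rule layer overrides it, which is the sense in which the two approaches meet in the middle.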
Chapters Nine through Eleven survey existing attempts to put morality into machines, especially in light of the role that emotion and cognition play in ethical deliberation, the topic of Chapter Ten. The LIDA [Learning Intelligent Distribution Agent] model, based on the work of Bernard Baars, is singled out for discussion in Chapter Eleven because it “implements ethical sensibility and reasoning capacities from more general perceptual, affective, and decision-making components” (p.172).
Finally, the closing chapter speculates on what machine morality might mean for questions of rights, responsibility, liability, and so on. When a robot errs, who is at fault, the programmer or the robot? If it is the robot, how can we hold it accountable? If robots can be held accountable, shouldn’t they also then be the recipients of rights?
Not all that long ago these questions, and the whole substance of this book, were the stuff of science fiction. But time and time again science fiction has become science fact. Without overstatement or alarm, Wallach and Allen make it patently clear that now is the time to consider seriously the methods for teaching robots right from wrong. Attempting to understand the details necessary to make robots moral, and to determine what precisely we should want from them, also sheds considerable light on our understanding of human morality. So in a single thought-provoking volume, the authors introduce not only machine ethics, but also an inquiry that penetrates to the deepest foundations of ethics. The conscientious reader will no doubt find many challenging ideas here that will require a reassessment of her own beliefs, making this text a must-read among recent books in philosophy, and specifically in ethics.
© Dr Anthony F. Beavers, 2009
Tony Beavers is Professor of Philosophy and Director of Cognitive Science at the University of Evansville, Indiana.
• Moral Machines by Wendell Wallach and Colin Allen, OUP USA, 2008, 288 pages. $29.95, ISBN: 978-0195374049