Tuesday, February 24, 2009

Podcasts on Military Robots


Robotspodcast.com has begun a two-part look at ethical issues arising from robotics. The first podcast is with Noel Sharkey, Professor of Artificial Intelligence and Robotics and Professor of Public Engagement at the University of Sheffield. A second interview, with Ronald Arkin, will be available in two weeks. Both researchers discuss military robots and the use of robots in society.

Wednesday, February 11, 2009

Bucknell University Lecture, 2/12

Colin Allen is in Lewisburg, PA, where on Thursday, February 12, he will speak about Moral Machines, delivering the Arnold L. Putterman Memorial Lectureship at Bucknell University at 7 p.m. in the Forum, Elaine Langone Center.

Governing Lethal Behavior in Military Robots

Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology, has provided us with links to his article titled, "Governing Lethal Behavior: Embedded Ethics in a Hybrid Deliberative/Reactive Robot Architecture". Both a short overview (8 pages) of the motivation and philosophy behind his research and a longer description (117 pages) of the research are available.

P.W. Singer and Weapons Carrying Robots Everywhere

Singer's book Wired for War, about the roboticization of warfare, climbed to the 29th position this week on The New York Times list of non-fiction bestsellers. In addition to The Wilson Quarterly article cited in our February 1st posting, there is also an article by Singer in The New Atlantis titled, "Military Robots and the Laws of War."

Robots that Evolve and Improve (Relentlessly?)

Roboticists at Aberdeen University, under team leader Christopher MacLeod, have been experimenting with robots programmed to fulfill a single objective. The robots are designed around the Incremental Evolutionary Algorithm (IEA), which uses a neural network. The robots learned to perform a task such as walking, and then re-learned and improved this task when the research team added knees to a system that already had hips and legs. The system is able to "decide" to add more neurons in order to accommodate additional features. Eventually the robot may even learn to instruct its human engineers to add limbs or components in order to fulfill its task more effectively.
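The reporting describes the mechanism only in outline. As a minimal, purely illustrative sketch (not the Aberdeen team's code), the loop below evolves a small neural controller by mutation and, when improvement stalls, grows the hidden layer by one neuron; the network layout, mutation rate, and stall-triggered growth rule are all assumptions made for illustration.

```python
import random
import math

def make_network(n_inputs, n_hidden, n_outputs):
    """Random fully connected net stored as two weight matrices."""
    w_in = [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
    w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_outputs)]
    return {"w_in": w_in, "w_out": w_out}

def forward(net, inputs):
    """Feed inputs through the two layers with tanh activations."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in net["w_in"]]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in net["w_out"]]

def mutate(net, rate=0.1):
    """Return a copy of the network with small random weight perturbations."""
    child = {key: [row[:] for row in matrix] for key, matrix in net.items()}
    for matrix in child.values():
        for row in matrix:
            for i in range(len(row)):
                if random.random() < rate:
                    row[i] += random.gauss(0, 0.3)
    return child

def add_neuron(net, n_inputs):
    """Grow the hidden layer by one neuron; its output weights start at zero,
    so existing behavior is preserved until evolution finds a use for it."""
    net["w_in"].append([random.uniform(-1, 1) for _ in range(n_inputs)])
    for row in net["w_out"]:
        row.append(0.0)

def evolve(fitness, n_inputs=4, n_outputs=2, generations=200, patience=20):
    """Hill-climbing evolution that adds capacity whenever progress stalls."""
    net = make_network(n_inputs, 2, n_outputs)
    best, stall = fitness(net), 0
    for _ in range(generations):
        child = mutate(net)
        score = fitness(child)
        if score > best:
            net, best, stall = child, score, 0
        else:
            stall += 1
        if stall >= patience:          # progress has stalled:
            add_neuron(net, n_inputs)  # "decide" to add more neurons
            stall = 0
    return net, best

if __name__ == "__main__":
    # Toy fitness standing in for "how far the robot walked in simulation".
    def toy_fitness(net):
        return -sum(abs(o) for o in forward(net, [0.5, -0.2, 0.1, 0.9]))

    controller, score = evolve(toy_fitness)
    print("hidden neurons:", len(controller["w_in"]), "best fitness:", score)
```

Because the new neuron's output weights start at zero, growing the network never disturbs behavior the controller has already learned; evolution is then free to recruit the extra capacity for a new feature such as knees.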

In reporting on this technology, an article in the online publication The Daily Galaxy alludes to melodramatic prospects that such systems might be relentless in their evolution and might request machine guns or other weaponry to fulfill their goal.

Moral Machines reviewed in Nature

In an article titled, "Can robots have a conscience?", Peter Danielson reviews Moral Machines in the January 29th issue of Nature. Danielson, who pioneered research exploring whether artificial agents might interact in an ethical manner, writes, "Moral Machines looks well in advance at robot ethics, but the jury is out on whether this book will set the agenda or if it is too premature to be influential."

Sunday, February 8, 2009

Review of Wallach and Allen

From Philosophy Now, Jan/Feb 2009:

http://www.philosophynow.org/issue71/71beavers.htm

Can a machine be a genuine cause of harm? The obvious answer is ‘affirmative’. The toaster that flames up and burns down a house is said to be the cause of the fire, and in some weak sense we might even say that the toaster was responsible for it. But the toaster is broken or defective, not immoral and irresponsible – although possibly the engineer who designed it is. But what about machines that decide things before they act, that determine their own course of action? Currently somewhere between digital thermostats and the murderous HAL 9000 computer in 2001: A Space Odyssey, autonomous machines are quickly gaining in complexity, and most certainly a day is coming when we will want to blame them for deliberately causing harm, even if philosophical issues concerning their moral status have not been fully settled. When will that day be?

Without lapsing into futurology or science fiction, Wallach and Allen predict that within the next few years, “there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight” (p.4). In this light, philosophers and engineers should not wait for the threat of robot domination before determining how to make machines moral.

The practical concerns to motivate such an inquiry (and indeed this book) are already here. Moral Machines is an introduction to this newly emerging area of machine ethics. It is written primarily to stimulate further inquiry by both ethicists and engineers. As such, it does not get bogged down in dense philosophical or technical prose. In other words, it is comprehensible to the general reader, who will walk away informed about why machine morality is already necessary, where we are with various attempts to implement it, and the authors’ recommendations of where we need to be.

Chapter One notes the inevitable arrival of autonomous machines, and the possible harm that can come from them. Automated agents integrating into modern life do things like regulate the power grid in the United States, monitor financial transactions, make medical diagnoses, and fight on the battlefield. A failure of any of these systems to behave within moral parameters could have devastating consequences. As they become increasingly autonomous, Wallach and Allen argue, it becomes increasingly necessary that they employ ethical subroutines to evaluate their possible actions before carrying them out.

Chapter Two notes that machine morality should unfold in the interplay between ethical sensitivity and increasingly complex autonomy. Several models of automated moral agency are presented. Borrowing from computer ethicist Jim Moor, the authors indicate that machines can be ‘implicitly ethical’ in that their behavior conforms to moral standards. This is quite different from actually making decisions by employing explicit moral procedures – which is different again from what full moral agents, such as human beings, do. After a brief digression in Chapter Three to address whether we really want machines making moral decisions, the issue of agency reappears in Chapter Four. Here the ingredients of full moral agency – consciousness, understanding and free will – are addressed. Though machines do not currently have these faculties, and are not likely to have them soon, Wallach and Allen note that the “functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs [Automated Moral Agents]” (p.68), and that “human understanding and human consciousness emerged through biological evolution as solutions to specific challenges. They are not necessarily the only methods for meeting those challenges” (p.69). In other words, there may be more than one way to be a moral agent. Chapter Four ends with the provocative suggestion of a ‘Moral Turing Test’ to evaluate the success of an AMA, and the interesting idea that machines might actually exceed the moral capabilities of humans.

Chapter Five addresses the important matter of making ethical theory and engineering practices congruent, setting the stage for a conversation on top-down and bottom-up approaches which occupies Chapters Six through Eight. A top-down approach is one that “takes a specified ethical theory and analyzes its computational requirements to guide the design of algorithms and subsystems capable of implementing that theory” (pp.79-80). Rule-based systems fit this description – including Isaac Asimov’s ‘Three Laws of Robotics’ and ethical theories which apply principles, such as utilitarianism and Kantian ethics. This is contrasted in Chapter Seven with bottom-up approaches, which are developmental. One might think here in terms of the moral development of a child, or from the computer science perspective, of ‘genetic’ algorithms which mimic natural selection to find a workable solution to a problem. Bottom-up approaches could take the form of assembling program modules which allow a robot to learn from experience.

Various limitations make both of these approaches unlikely to succeed, and so the authors recommend a hybrid approach that uses both to meet in the middle. A good paradigm for this compromise is virtue ethics.

Chapters Nine through Eleven survey existing attempts to put morality into machines, especially in light of the role that emotion and cognition play in ethical deliberation, the topic of Chapter Ten. The LIDA [Learning Intelligent Distribution Agent] model, based on the work of Bernard Baars, is singled out for discussion in Chapter Eleven because it “implements ethical sensibility and reasoning capacities from more general perceptual, affective, and decision-making components” (p.172).

Finally, the closing chapter speculates on what machine morality might mean for questions of rights, responsibility, liability, and so on. When a robot errs, who is at fault, the programmer or the robot? If it is the robot, how can we hold it accountable? If robots can be held accountable, shouldn’t they also then be the recipients of rights?

Not all that long ago these questions, and the whole substance of this book, were the stuff of science fiction. But time and time again science fiction has become science fact. Without overstatement or alarm, Wallach and Allen make it patently clear that now is the time to consider seriously the methods for teaching robots right from wrong. Attempting to understand the details necessary to make robots moral and to determine what precisely we should want from them also sheds considerable light on our understanding of human morality. So in a single thought-provoking volume, the authors not only introduce machine ethics, but also an inquiry which penetrates to the deepest foundations of ethics. The conscientious reader will no doubt find many challenging ideas here that will require a reassessment of her own beliefs, making this text a must-read among recent books in philosophy, and specifically, in ethics.

© Dr Anthony F. Beavers, 2009

Tony Beavers is Professor of Philosophy and Director of Cognitive Science at the University of Evansville, Indiana.

Moral Machines by Wendell Wallach and Colin Allen, OUP USA, 2008, 288 pages. $29.95, ISBN: 978-0195374049


Monday, February 2, 2009

Remote-Controlled Beetles


The Defense Advanced Research Projects Agency (DARPA) funded the development of a wirelessly controlled beetle to be used one day for surveillance purposes or for search-and-rescue missions. No doubt devices like this will also soon be adopted by the porn industry and your local P.I.

Emily Singer wrote a story on this research for Technology Review.

Michel Maharbiz and colleagues at the University of California, Berkeley, implanted electrodes and a radio receiver on the back of a giant flower beetle. Commands to take off, turn right or left, and hover are sent from a computer to a radio receiver and microprocessor on a custom-printed circuit board mounted on the beetle's back. Six electrodes were implanted in the insect's optic lobes and flight muscles.
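For readers curious how such a command chain might look in software, here is a purely hypothetical sketch of the idea: a routine on the beetle-mounted microcontroller decodes a received command and maps it to an electrode stimulation pattern. The command names, electrode numbers, and pulse durations below are invented for illustration and are not taken from the Berkeley system.

```python
from enum import Enum

class Command(Enum):
    TAKE_OFF = "take_off"
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"
    HOVER = "hover"

# Hypothetical mapping from command to a list of (electrode id, pulse duration in ms).
STIMULATION_TABLE = {
    Command.TAKE_OFF: [(0, 50), (1, 50)],   # e.g. optic-lobe electrodes
    Command.TURN_LEFT: [(2, 20)],           # e.g. a flight-muscle electrode
    Command.TURN_RIGHT: [(3, 20)],
    Command.HOVER: [(0, 10)],
}

def handle_radio_packet(payload: str):
    """Decode a received command string and return the pulses to emit."""
    command = Command(payload)              # raises ValueError on unknown commands
    return STIMULATION_TABLE[command]

if __name__ == "__main__":
    print(handle_radio_packet("turn_left"))  # -> [(2, 20)]
```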

Beware of low flying objects once the plans for building the components of this system find their way into Popular Mechanics.

Sunday, February 1, 2009

Autonomous Military Robotics

As you might gather from many of our recent postings, the subject of military robots is heating up. The Ethics + Emerging Technologies Group has just released a report titled, Autonomous Military Robotics: Risk, Ethics, and Design, which was funded by a grant from the US Department of the Navy, Office of Naval Research. Colin Allen and Wendell Wallach served as consultants on this project.

The Ethics + Emerging Technologies Group was formerly known as the Nanoethics Group. It is directed by Patrick Lin and was established at California Polytechnic State University (Cal Poly).

Robots at War: The New Battlefield

An article excerpted from P.W. Singer's new book, Wired for War, appears in the latest issue of The Wilson Quarterly. Throughout the article Singer reinforces our concern that talk of "keeping humans in the loop" does not reflect the increasing autonomy of robots carrying lethal weapons. He also underscores our concern that robotic armies will make wars more likely.

Here are a few quotes from the article:

The reality is that the human location “in the loop” is already becoming, as retired Army colonel Thomas Adams notes, that of “a supervisor who serves in a fail-safe capacity in the event of a system malfunction.” Even then, he thinks that the speed, confusion, and information overload of modern-day war will soon move the whole process outside “human space.” He describes how the coming weapons “will be too fast, too small, too numerous, and will create an environment too complex for humans to direct.”

. . . Perhaps most telling is a report that the Joint Forces Command drew up in 2005, which suggested that autonomous robots on the battlefield would be the norm within 20 years. Its title is somewhat amusing, given the official line one usually hears: Unmanned Effects: Taking the Human Out of the Loop.

So, despite what one article called “all the lip service paid to keeping a human in the loop,” autonomous armed robots are coming to war. They simply make too much sense to the people who matter.

. . . Lawrence J. Korb is one of the deans of Washington's defense policy establishment. . . . In 2007, I asked him what he thought was the most important overlooked issue in Washington defense circles. He answered, "Robotics and all this unmanned stuff. What are the effects? Will it make war more likely?"

Korb is a great supporter of unmanned systems for a simple reason: "They save lives." But he worries about their effect on the perceptions and psychologies of war, not merely among foreign publics and media, but also at home. . . . Robotics "will further disconnect the military from society. People are more likely to support the use of force as long as they view it as costless." Even more worrisome, a new kind of voyeurism enabled by the emerging technologies will make the public more susceptible to attempts to sell the ease of a potential war. "There will be more marketing of wars. More 'shock and awe' talk to defray discussion of the costs."

. . . Thus, robots may entail a dark irony. By appearing to lower the human costs of war, they may seduce us into more wars.