It is just a matter of time until a computer or robot makes a decision that causes a human disaster. In all likelihood, this will happen under circumstances that the designers and engineers who built the system could not predict. Computers and robots enhance modern life in many ways. They are incredible tools that will continue to improve productivity, expand the forms of entertainment available to each of us, and, in addition to vacuuming our homes, take on many of the burdens of daily life. However, these rewards do not come for free. It is necessary to go beyond product safety and begin thinking about ways to ensure that the choices and actions taken by robots will not cause harm to people, pets, or personal property.
Engineers are far from being able to build robots that rival human intelligence or, as some science fiction writers suggest, threaten human existence. Furthermore, it is impossible to know whether such systems can ever be built. However, there are six strategies to consider for minimizing any harm that the increasingly autonomous systems being built today, or in the near future, might cause:
1. Keep them in stupid situations -- Make sure that computers and robots never have to make a decision where the consequences of the machine's actions cannot be predicted in advance.
Likelihood that this strategy will succeed (LSS): Extremely Low -- Engineers are already building computers and robotic systems whose actions they cannot always predict. Consumers, industry, and government want technologies that perform a wide array of tasks, and businesses will expand the products they offer in order to capitalize on this demand. In order to implement this strategy, it would be necessary to arrest further development of computers and robots immediately.
2. Do not place dangerous weapons in the hands of computers and robots.
LSS: Too late. Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A semi-autonomous robotic cannon deployed by the South African army went haywire in October 2007, killing nine soldiers and wounding 14 others. A few machine-gun-carrying robots were sent to Iraq and photographed on a battlefield, but apparently were not deployed. Military planners are very interested in the development of robotic soldiers, and see them as a means to reduce the deaths of human soldiers during warfare. While it is too late to stop the building of robot weapons, it may not be too late to restrict which weapons they carry or the situations in which those weapons can be used.
3. Program them with rules such as the Ten Commandments or Asimov's Laws for Robots.
LSS: Moderate. Isaac Asimov's famous rules -- that robots should not harm humans or through inaction allow humans to come to harm, should obey humans, and should preserve themselves -- are arranged hierarchically, so that not harming humans trumps self-preservation. However, Asimov was writing fiction; he was not actually building robots. In story after story he illustrates problems that would arise with even these simple rules, such as what the robot should do when orders from two people conflict. Furthermore, how would a robot know that a surgeon wielding a knife over a patient was not about to harm the patient? Asimov's robot stories demonstrate quite clearly the limits of any rule-based morality. Nevertheless, rules can successfully restrict the behavior of robots that function within very limited contexts, as the toy sketch below suggests.
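To make the idea of hierarchically ordered rules concrete, here is a minimal sketch in Python of how Asimov-style priorities might be encoded, with not harming humans outranking obedience, and obedience outranking self-preservation. The predicates and the "situation" data structure are hypothetical placeholders invented for illustration, not drawn from any real robotics system; filling them in realistically is precisely the hard problem the stories expose.

```python
# Toy sketch of Asimov-style, hierarchically ordered rules (illustrative only).
# Each "law" is a predicate that flags a violation; earlier laws take priority.
# The predicates are hypothetical stand-ins: in a real robot, deciding whether
# an action "harms a human" is exactly the unsolved problem.

def harms_human(action, situation):
    return situation.get("predicted_harm", {}).get(action, 0) > 0

def disobeys_order(action, situation):
    return action not in situation.get("ordered_actions", [])

def endangers_self(action, situation):
    return situation.get("self_damage", {}).get(action, 0) > 0

# Priority order: protecting humans trumps obedience, which trumps self-preservation.
LAWS = [harms_human, disobeys_order, endangers_self]

def choose_action(candidates, situation):
    # Violation tuples are compared lexicographically, so breaking a
    # higher-priority law is always worse than breaking lower-priority ones.
    return min(candidates, key=lambda a: tuple(law(a, situation) for law in LAWS))
```

Even in this toy form, all of the difficulty hides inside the predicates: the surgeon-with-a-knife example shows how easily a naive harms_human test would go wrong.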
4. Program robots with a principle such as the "greatest good for the greatest number" or the Golden Rule.
LSS: Moderate. Recognizing the limits of rules, ethicists look for one overriding principle that can be used to evaluate the acceptability of all courses of action. However, the history of ethics is a long debate over the value and limits of any single principle that has been proposed. For example, you might be willing to sacrifice the life of one person to save the lives of five people, but if you were a doctor you would not sacrifice the life of a healthy person in your waiting room to save the lives of five people needing organ transplants.
But there are other, more difficult problems than this, and determining which course of action among many options leads to the greatest good (or best satisfies some other cherished principle) would require a tremendous amount of knowledge and an understanding of the effects of actions in the world. Making the calculations would also require time and a great deal of computing power, as the toy calculation below illustrates.
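As a rough illustration of what a "greatest good" calculation involves, here is a small Python sketch of naive expected-utility maximization. The actions, outcome probabilities, and utility numbers are invented for the example; the point of the article is that producing realistic versions of them, and computing over them quickly enough, is where the real burden lies.

```python
# Naive "greatest good for the greatest number" chooser (illustrative only).
# Each candidate action maps to a list of (probability, utility) outcomes.
# The numbers are made up; a real system would need a vast world model to
# produce them, which is the knowledge problem described above.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def best_action(action_outcomes):
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

if __name__ == "__main__":
    action_outcomes = {
        "swerve": [(0.9, -1.0), (0.1, -50.0)],   # hypothetical outcomes
        "brake":  [(0.7,  0.0), (0.3, -20.0)],
        "coast":  [(1.0, -30.0)],
    }
    print(best_action(action_outcomes))  # prints "swerve" for these numbers
```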
5. Educate a robot in the same way as a child, so that the robot will learn and develop sensitivity to the actions that people consider to be right and wrong.
LSS: Promising, although this strategy requires a few technological breakthroughs. While researchers are developing methods to facilitate a computer's ability to learn, the tools presently available are very limited. A toy sketch of learning from human feedback follows below.
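As a sketch of what "educating" a machine through feedback might look like in its simplest form, here is a toy Python learner that adjusts its preference for an action whenever a human approves or disapproves of it. The class, its update rule, and the example actions are assumptions made for illustration; real moral learning would need far richer representations of situations, not just a score per action.

```python
# Toy feedback learner (illustrative only): a trusted human labels observed
# actions as approved or disapproved, and the robot nudges a per-action score
# toward +1 or -1. Real moral education would require generalizing across
# situations rather than memorizing scores for individual actions.

from collections import defaultdict

class FeedbackLearner:
    def __init__(self, learning_rate=0.1):
        self.score = defaultdict(float)  # action -> learned preference
        self.lr = learning_rate

    def give_feedback(self, action, approved):
        target = 1.0 if approved else -1.0
        self.score[action] += self.lr * (target - self.score[action])

    def prefers(self, action_a, action_b):
        return self.score[action_a] >= self.score[action_b]

if __name__ == "__main__":
    learner = FeedbackLearner()
    for _ in range(20):
        learner.give_feedback("hand_scissors_point_first", approved=False)
        learner.give_feedback("hand_scissors_handle_first", approved=True)
    print(learner.prefers("hand_scissors_handle_first", "hand_scissors_point_first"))
```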
6. Build into robots human-like faculties such as empathy, emotions, and the capacity to read non-verbal social cues.
LSS: These faculties would help improve strategies 3-5. Most of the information people use to make choices and cooperate with others derives from our emotions, our ability to put ourselves in the place of others, and our ability to read their gestures and intentions. Knowing the habits and customs of those one is interacting with also helps in understanding which actions are appropriate in a given situation. Such information may be essential for appreciating which rules or principles apply in which situations, but this alone is not enough to ensure the safety of the actions chosen by a robot.
For the next 5-10 years computer scientists will focus on building computers and robots that function within relatively limited contexts, such as financial systems, computers used for medical applications, and service robots in the home. Within each of these contexts there are different rules, principles, and possible dangers. System developers can experiment with a range of approaches to ensure that the robots they build will behave properly in their specific applications. They could then combine the most successful strategies to facilitate the design of more sophisticated robots.
Humanity has started down the path of robots and computers making decisions without direct human oversight. Governments and corporations in South Korea, Japan, Europe, and the USA are investing millions of dollars in research and development. Some people will argue that it is a mistake to be going down this track. But the commercial and military imperatives make this train hard to derail. The technological challenge of ensuring that these machines respect ethical principles is upon us.
Wendell Wallach and Colin Allen are co-authors of "Moral Machines: Teaching Robots Right From Wrong". For more information on this subject visit their blog at http://moralmachines.blogspot.com/