Saturday, December 20, 2008

Brainstorm Responds to Robot Ethics Challenge

We're glad to see issues of machine morality getting attention from software engineers. Roger Gay, VP for Business Development at the Institute of Robotics in Scandinavia (iRobis), proposes using their "Brainstorm" software package for "Higher Level Logic (HLL)": http://mensnewsdaily.com/2008/12/10/brainstorm-responds-to-robot-ethics-challenge/

Next on our agenda: Encouraging people to read the book rather than just reacting to the sensationalized media reports!


Here's an example of what we mean:
“A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.”

Excellent. My hope is that he is an engineer. What is needed is a coding of the Geneva Convention that engineers can easily use as design requirements. Better still if there's a version that computer programs can understand. If the product of the work is not specifically geared toward technical development activities, then it's unlikely to be any more useful than the original Convention documents. Getting robots to understand the rules of war is a useful idea, though not a complete capability for an ethical robot.
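To make the point concrete, here is a minimal sketch of what a machine-readable fragment of such rules might look like, assuming a simple rule-checking design. The field names and rules below are illustrative only; they are not taken from the Geneva Conventions text or from any actual Navy or iRobis specification.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool
    has_surrendered: bool
    is_protected_site: bool  # e.g. a hospital or cultural site

def engagement_permitted(target: Target) -> bool:
    """Return True only if no encoded rule forbids engaging this target."""
    if not target.is_combatant:
        return False  # rule: never target non-combatants
    if target.has_surrendered:
        return False  # rule: persons hors de combat may not be attacked
    if target.is_protected_site:
        return False  # rule: protected sites may not be attacked
    return True

# e.g. engagement_permitted(Target(True, True, False)) returns False:
# a combatant who has surrendered may not be engaged.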



We've dashed the hope raised by that Telegraph headline already, but we're also not advocating for "getting robots to understand the rules of war," either as a complete solution to AMAs, or even as a practical approach for battlefield robots.

3 comments:

rogerfgay said...

Also from the Brainstorm article:

We don’t think of intelligent behavior as something that once put in motion is out of control.

iRobis is not currently involved in the development of autonomous robot soldiers, nor of any armed platform. A more general question arises out of the robot ethics discussions, however. Is the technology on offer today sufficient for the creation of robots that can carry out complex missions successfully? Can we expect highly evolved autonomous robots to behave well?

A machine-readable copy of the Geneva Convention would be a valuable piece of technology, but there is plenty of wisdom suggesting that the idea of simply loading it into intelligent weapons systems to govern their behavior is not an acceptable design concept. Should the decision to go to war be left to a robot simply because the Geneva Convention allows a military response to an act of war? There have been decades of discussion solidly against putting machines in charge of ultimate decisions. Many of the ideas made popular in literature and film are taken seriously by roboticists as well. Autonomous capability is good, but we do not in fact want machines to take over the world.

A proper decision model already exists in the use of military technology under human command. Submarine commanders do not fire their missiles into foreign countries simply because they can. Fighter pilots do not drop bombs simply because they can. Each has an assigned role and level of authority defined within the context of an organization. They respect not only rules of engagement but also decision-making hierarchy - the chain of command.

As the level of autonomous capability grows in robots in field service, there may be an increased role for the type of control proposed in HLL. Application of the concept would endow robots with a natural connection to organizational structure and thinking. An executive would be assigned a specific role and level of authority. From that information, it would know the limits of its autonomous decision-making authority and when permission is needed to carry out actions that it is capable of performing. Designing robots to operate within a familiar command structure also greatly simplifies autonomous machine-human interaction.
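As a rough illustration of that idea, the following sketch shows an executive component checking a requested action against its assigned authority level and deferring up the chain of command when the action exceeds it. The class names, authority levels, and actions here are invented for illustration; they are not the HLL or Brainstorm API.

from enum import IntEnum

class Authority(IntEnum):
    OBSERVE = 1   # may only sense and report
    MANEUVER = 2  # may also move and position itself
    ENGAGE = 3    # may also carry out engagement actions

# authority level required for each action (hypothetical action names)
ACTION_REQUIRED_AUTHORITY = {
    "report_contact": Authority.OBSERVE,
    "reposition": Authority.MANEUVER,
    "engage_target": Authority.ENGAGE,
}

class Commander:
    """Stand-in for a human (or higher-level) decision maker in the chain of command."""
    def approve(self, role: str, action: str) -> bool:
        # In a real system this would route the request to a human operator;
        # here it simply denies by default.
        return False

class Executive:
    """Holds an assigned role and authority level within a command structure."""
    def __init__(self, role: str, authority: Authority, commander: Commander):
        self.role = role
        self.authority = authority
        self.commander = commander  # next link up the chain of command

    def may_act(self, action: str) -> bool:
        """Act autonomously if within delegated authority, otherwise ask permission."""
        needed = ACTION_REQUIRED_AUTHORITY[action]
        if self.authority >= needed:
            return True  # within delegated authority: act autonomously
        return self.commander.approve(self.role, action)  # must ask up the chain

# Example: a scout may report and reposition on its own authority,
# but must request approval before any engagement action.
scout = Executive("scout", Authority.MANEUVER, Commander())
assert scout.may_act("reposition") is True
assert scout.may_act("engage_target") is False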

Brainstorm is not only aimed at the military market, but at the consumer market as well. It would be absurd to create a general public fear that manufacturers don’t care about the quality of robot behavior. It may be that home service and autonomous industrial robots should come before autonomous robot soldiers. Once a robot understands how not to be destructive, it can be systematically endowed with specific decision-making instructions on what destruction is acceptable and under what circumstances, otherwise remaining non-destructive. At least in the short term, the creation of highly trained specialists rather than 007 robots with a “license to kill” greatly simplifies the problem of robot ethics. But military robotics also has something general to offer. An owner of a home service robot should equally expect to have command authority over robot servants.
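For what it's worth, a "non-destructive by default, with enumerated exceptions" policy of the sort described above might be sketched like this; the actions and circumstances listed are hypothetical examples, not features of any actual product.

# Destructive actions are refused unless an explicit, circumstance-specific
# exception has been granted; everything else defaults to non-destructive behavior.
PERMITTED_EXCEPTIONS = {
    ("cut_material", "licensed_demolition_site"),
    ("break_window", "rescue_of_trapped_person"),
}

def may_perform(action: str, circumstance: str, is_destructive: bool) -> bool:
    """Allow non-destructive actions; destructive ones need an explicit exception."""
    if not is_destructive:
        return True
    return (action, circumstance) in PERMITTED_EXCEPTIONS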

Michael T. Merren said...

“A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.”

I think it is marvelous that we are discussing robots and non-violation of the Geneva Convention, but what good does it do us to have protocols in place for computers as they pertain to the implementation of the Geneva Convention if our own President, Vice President, Chief of Staff, CIA, and Pentagon circumvent the Convention every step of the way? Perhaps our defense department and elected officials could use some "reprogramming".

rogerfgay said...

BTW: Roger Gay, VP for Business Development at the Institute of Robotics in Scandinavia (iRobis), suggests reading the book. (5 stars)