Thursday, July 30, 2009

Three Human-Centric Laws for Robots

David Woods of Ohio State and Robin Murphy of Texas A&M have proposed a new version of the three laws for robots. The laws first appeared in an article titled, Beyond Asimov: The Three Laws of Responsible Robotics, which appears in the July/August issue of IEEE Intelligent Systems. The new laws suggested by Murphy and Woods are:
A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.

A robot must respond to humans as appropriate for their roles.

A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

David Woods is quoted discussing the new laws in an article titled,
Want Responsible Robotics? Start With Responsible Humans

“Robots exist in an open world where you can’t predict everything that’s going to happen. The robot has to have some autonomy in order to act and react in a real situation. It needs to make decisions to protect itself, but it also needs to transfer control to humans when appropriate. You don’t want a robot to drive off a ledge, for instance -- unless a human needs the robot to drive off the ledge. When those situations happen, you need to have smooth transfer of control from the robot to the appropriate human,” Woods said.

“The bottom line is, robots need to be responsive and resilient. They have to be able to protect themselves and also smoothly transfer control to humans when necessary.”

Woods admits that one thing is missing from the new laws: the romance of Asimov’s fiction -- the idea of a perfect, moral robot that sets engineers’ hearts fluttering.

Wednesday, July 29, 2009

Australian Government Enters Field of Autonomous Military Robotics

13 July, 2009: Greg Combet, Australian Minister for Defence Personnel, Materiel and Science, announced a US$1.6 million international competition challenging research organisations to build fully autonomous ground robots able to operate effectively in military operations.

The competition, known as the Multi-Autonomous Ground-robotic International Challenge (MAGIC), is being organised by Australia’s Defence Science & Technology Organisation (DSTO) in partnership with the US Department of Defense.

“This competition aims to attract the most innovative solutions from around the world to address a technology gap currently faced by coalition forces operating in urban combat zones,” Mr Combet said.

“While remote-controlled robots are being deployed in operational areas, we need smart, intelligent and fully autonomous systems that can take over from humans in conducting intelligence, surveillance and reconnaissance missions,” Mr Combet said.

“The ultimate aim is to make these operations much safer for our military personnel, leaving the robots to carry out the dirty and dangerous work.”

“The challenge for the competitors is to develop a proposal demonstrating teams of robotic vehicles that can autonomously coordinate their activities and execute a mission in a changing urban environment. The robots must detect, monitor and neutralise a number of potential threats to meet the challenge goals and an international panel of experts will judge the entries.”

“The first five short-listed competitors will each receive research grants of $US100,000 to develop their proposals into prototypes,” Mr Combet said.

“After they have successfully demonstrated their prototypes at a location in South Australia next year, the top three finalists will receive research awards of $US750,000, $US250,000 and $US100,000 respectively.”

“These finalists also have the unique opportunity to qualify for further funding under the US Joint Concept Technology Demonstrator (JCTD) Program, so that their prototypes can be transitioned into operational capability. If an Australian competitor is among the top three finalists, that organisation would also be considered for funding under the Capability & Technology Demonstrator Program managed by DSTO.”

“Australia will also have access to these capability developments,” Mr Combet added.

The five shortlisted competitors will be invited to present their projects at the Land Warfare Conference in November 2010.

The competition is valued at US$1.6 million.

This press release can be found on the website of The Hon. Greg Combet AM MP, Minister for Defence Personnel, Materiel and Science and Minister Assisting the Minister for Climate Change.

Media contacts:
Rod Hilton (Greg Combet): 02 6277 4771 or 0458 276 619
Steve Butler (DSTO): 08 8259 6923 or 0418 800 323
Defence Media Liaison: 02 6265 3343 or 0408 498 664

The Defence Science and Technology Organisation (DSTO) is part of Australia's Department of Defence. DSTO's role is to ensure the expert, impartial and innovative application of science and technology to the defence of Australia and its national interests.

Tuesday, July 28, 2009

More on the AAAI Presidential Panel on Long-Term AI Futures

A list of the attendees of the February Asilomar workshop and a description of their goals are available here.
The co-chairs were Eric Horvitz and Bart Selman. The panel included: Margaret Boden, Craig Boutilier, Greg Cooper, Tom Dean, Tom Dietterich, Oren Etzioni, Barbara Grosz, Eric Horvitz, Toru Ishida, Sarit Kraus, Alan Mackworth, David McAllester, Sheila McIlraith, Tom Mitchell, Andrew Ng, David Parkes, Edwina Rissland, Bart Selman, Diana Spears, Peter Stone, Milind Tambe, Sebastian Thrun, Manuela Veloso, David Waltz, Michael Wellman.

There were three focus groups:
Pace, Concerns, Control, Guidelines -- Chair: David McAllester
Potentially Disruptive Advances: Nature and timing -- Chair: Milind Tambe
Ethical and Legal Challenges -- Chair: David Waltz

While some of the scientists in attendance have previously commented on the societal challenges posed by advances in AI, it is interesting that no members of the machine ethics community were at this event.

Monday, July 27, 2009

Sunday NYTimes: Why is this robot story on the front page?

It is curious that a robot feature titled, Ay Robot! Scientists Worry Machines May Outsmart Man, found its way onto the front page of the July 26th New York Times. The story is about a group of scientists who met on February 25th at the Asilomar Conference Grounds and will release a report later this year (when?) about their concerns over societal challenges arising from research on artificial intelligence. Given the location, the site of a famous 1975 meeting and report on recombinant DNA research, the conference may be seen as symbolically important. The AAAI organized the event, so presumably the group includes many luminaries, but they are not listed in the article. Eric Horvitz, the organizer, a Microsoft researcher who is presently president of the AAAI, and Tom Mitchell of Carnegie Mellon University are the only attendees mentioned or quoted in the article. Mitchell's reflections are of particular interest to readers of this blog.

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

I am not surprised that John Markoff wrote this feature, as he has been covering such concerns for a long time. But for those of us with a long-standing interest in the challenges arising from advanced research on AI, it is puzzling why the editors of the newspaper decided to feature this story on the front page at this time, given that there is no immediate news here.

"Flight plan" for Expanding the Air Force of Drones

The Air Force has released a report detailing its anticipated expansion of remotely piloted planes over the coming decades, according to a story in The New York Times. Air Force Report Envisions a Broader Use of Drones opens with:

Small remotely piloted planes are now used mainly to gather intelligence and fire missiles at insurgents. But over the next several decades, the Air Force envisions building larger ones that could do the work of bombers and cargo planes and even tiny ones that could spy inside a room.

The word ethical does appear later in the story, in the context of swarming drones that might initiate attacks autonomously:
Perhaps the most controversial is the idea of drones swarming on attack. Advances in computing power could enable them to mount preprogrammed attacks on their own, though that would be a difficult legal and ethical barrier for the military to cross.

Thursday, July 23, 2009

Wired: Robo-Ethicists Want to Revamp Asimov’s 3 Laws

A Wired magazine feature on how to stop building psychopathic robots.


How robot drones revolutionized the face of warfare

This CNN story discusses the effects of unmanned Predator and Reaper drones on warfare. The article manages to discuss safety/malfunction issues without talking about ethics or morality.


Mind over matter

"All in the Mind" ABC Podcast from Adelaide Festival of Ideas discussion of "Mind over Matter" by Colin Allen and Mandyam Srinivasan, with Natasha Mitchell chairing.

See also Natasha's blog post about Moral Machines.

The full unedited session is also available as an MP3 from the Radio Adelaide site.