Thursday, August 13, 2009

Photo Spread – Robots that Brew Tea and Rescue Victims of a Terrorist Attack


Can you identify the robot in the picture to the right? The Boston Globe has a wonderful collection of photos showing contemporary robots performing a wide variety of tasks. This photo spread, titled "Robots," does a nice job of representing the array of robots available today.

Wednesday, August 12, 2009

All in the mind: Robots go to war

Part 2 of Natasha Mitchell's series on robot ethics aired over the weekend in Australia and is available for listening or download at http://www.abc.net.au/rn/allinthemind/stories/2009/2641416.htm. In this show, Ron Arkin is brought into the discussion that started with Noel Sharkey and me the previous week.

Also check out the All in the Mind blog for the show.

Sunday, August 9, 2009

Path to Autonomy

So, I've been looking at the United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 that Wendell already posted a link to, and I think section 4.6 is particularly interesting:

Advances in computing speeds and capacity will change how technology affects the OODA loop. Today the role of technology is changing from supporting to fully participating with humans in each step of the process. In 2047 technology will be able to reduce the time to complete the OODA loop to micro- or nanoseconds. Much like a chess master can outperform proficient chess players, UAS will be able to react at these speeds and therefore this loop moves toward becoming a “perceive and act” vector. Increasingly humans will no longer be “in the loop” but rather “on the loop” – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.


Noel Sharkey has already pointed out that the role of humans in these decisions is becoming vanishingly small, and the shift in terminology from "man in the loop" to "man on the loop" only reinforces that trend.
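To make the terminological distinction concrete, here is a minimal, purely illustrative sketch (not from the Air Force report; all names and the toy `Action` type are hypothetical) contrasting a human *in* the loop, who must authorize every action before it executes, with a human *on* the loop, who merely monitors autonomous execution and may veto after the fact:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Toy stand-in for a system-proposed action."""
    name: str

def in_the_loop(observations, human_approves):
    """Human *in* the loop: no action executes without explicit approval."""
    executed = []
    for obs in observations:
        action = Action(f"respond-to-{obs}")  # system decides
        if human_approves(action):            # system acts only with consent
            executed.append(action.name)
    return executed

def on_the_loop(observations, human_vetoes):
    """Human *on* the loop: the system acts by default; the human
    monitors execution and can only veto after the fact."""
    executed = []
    for obs in observations:
        action = Action(f"respond-to-{obs}")
        executed.append(action.name)          # system acts immediately
        if human_vetoes(action):              # oversight is after the fact
            executed.pop()                    # best-effort rollback
    return executed
```

In the first model an inattentive human halts the system; in the second, an inattentive human changes nothing, which is exactly the worry about shrinking human involvement.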

The Air Force report goes on to suggest that the barriers to deployment of autonomous killing machines are legal and ethical rather than technological:

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.


The rest of section 4.6 is reproduced below.


These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems. The guidance for certain missions such as nuclear strike may be technically feasible before UAS safeguards are developed. On that issue in particular, Headquarters Air staff A10 will be integral to develop and vet through the Joint Staff and COCOMS the roles of UAS in the nuclear enterprise. Ethical discussions and policy decisions must take place in the near term in order to guide the development of future UAS capabilities, rather than allowing the development to take its own path apart from this critical guidance.

Assuming the decision is reached to allow some degree of autonomy, commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today. The trust required for increased autonomy of systems will be developed incrementally. The systems’ programming will be based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system or change the level of autonomy instantaneously during the mission.

To achieve a “perceive and act” decision vector capability, UAS must achieve a level of trust approaching that of humans charged with executing missions. The synchronization of DOTMLPF-P actions creates a potential path to this full autonomy. Each step along the path requires technology enablers to achieve their full potential. This path begins with immediate steps to maximize UAS support to CCDR. Next, development and fielding will be streamlined, actions will be made to bring UAS to the front as a cornerstone of USAF capability, and finally the portfolio steps to achieve the potential of a fully autonomous system would be executed.

Friday, August 7, 2009

US Air Force Flight Plan for Unmanned Systems 2009-2047

The unclassified sections of the United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 are now available online.

The plan assumes that:
The range, reach, and lethality of 2047 combat operations will necessitate an unmanned system-of-systems to mitigate risk to mission and force, and provide perceive-act line execution. (p. 14)

Increasing autonomy is embraced in the vision outlined.
That harnesses increasingly automated, modular and sustainable systems that retain our ability to employ UASs through their full envelope of performance resulting in a leaner, more adaptable, tailorable, and scalable force that maximizes combat capabilities to the Joint Force. (p. 15)

Monday, August 3, 2009

MM review in Computing Now

Computing Now, July 13, 2009:
Moral Machines reviewed by Paul Scerri

As agents and robots move more and more from the lab to the real world, the possibility that they can cause physical, psychological, or monetary harm increases. In recent years, deaths and serious physical injury have been caused by malfunctioning robots. Some amount of blame for the recent global economic crises has even been placed on intelligent trading agents that didn't fully comprehend the impacts of their actions. As the prevalence, availability, capabilities, and autonomy of agents and robots increases, it's critical to examine how to minimize any harm caused by malfunctions or unintended consequences. As engineers, we need to develop practices and techniques that minimize any harmful impacts of our technology.

Read the rest at http://www2.computer.org/portal/web/cnbooks/blog/-/blogs/1397600

Sunday, August 2, 2009

Registration and Travel Grants to March Ethical Guidance Workshop

Registration for the March workshop on Ethical Guidance for Pervasive and Autonomous Technologies is now available at http://ethicalpait.blogspot.com/, as well as applications for travel grants for members of underrepresented groups in science and engineering.

Travel Subsidy Eligibility. If you are a member of one or more underrepresented groups in science and engineering (see the Travel Subsidy Application Form) who can demonstrate active scholarship in at least one of the relevant areas (e.g., pervasive information technology, autonomous information technology, practical ethics, research ethics), you are invited to apply for a travel subsidy. You must complete and submit the registration form for the workshop and the application form and other required information for the travel subsidy together by the application due date. Subsidies will be judged by the PAIT Planning Committee based on the applicants’ qualifications and potential to contribute to the workshop. Note that your expenses will be reimbursed after you have submitted your original receipts, which must be received by April 2, 2010.

Do you read me HAL? Robot wars, moral machines and silicon that cares - Part 1


The first episode of Natasha Mitchell's two-part report on robot ethics for her All in the Mind show on Australia's ABC is now available by podcast. This week's show, which aired on Aug 1, skillfully weaves together interviews that Natasha conducted with me and Noel Sharkey and covers mostly non-military applications of robotics, especially elder care and health care. Other topics include Asimov's laws and the Uncanny Valley hypothesis.

Check out the discussion on Natasha's blog too.

Next week's show will focus on military applications, and Ron Arkin (who makes a brief appearance in the first episode) will also be featured.

Transcript of show.