Sunday, August 9, 2009

Path to Autonomy

So, I've been looking at the United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 that Wendell already posted a link to, and I think section 4.6 is particularly interesting:

Advances in computing speeds and capacity will change how technology affects the OODA loop. Today the role of technology is changing from supporting to fully participating with humans in each step of the process. In 2047 technology will be able to reduce the time to complete the OODA loop to micro- or nanoseconds. Much like a chess master can outperform proficient chess players, UAS will be able to react at these speeds and therefore this loop moves toward becoming a “perceive and act” vector. Increasingly humans will no longer be “in the loop” but rather “on the loop” – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.


Noel Sharkey has already pointed out that the role of humans in these decisions is becoming vanishingly small, and this shift in terminology from "man in the loop" to "man on the loop" seems only to confirm it.
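
To make that distinction concrete, here is a minimal sketch of the two supervision models in Python. Everything in it is my own invention for illustration (the function names, the approval callback, the abort signal); the Flight Plan specifies no such interface.

import threading

def human_in_the_loop(target, request_approval):
    # Human IN the loop: the system blocks on a per-decision human approval.
    if request_approval(target):
        return "engage " + target
    return "hold fire"

def human_on_the_loop(targets, abort_signal):
    # Human ON the loop: the machine decides and acts on its own; the person
    # only monitors execution and may issue an abort, possibly after the fact.
    actions = []
    for target in targets:
        if abort_signal.is_set():            # an override may arrive too late
            actions.append("aborted by supervisor")
            break
        actions.append("engage " + target)   # no per-decision human consent
    return actions

abort = threading.Event()
print(human_in_the_loop("T1", request_approval=lambda t: False))   # hold fire
print(human_on_the_loop(["T1", "T2", "T3"], abort))                # all three engaged

Even in toy form the problem is visible: at machine speed, "on the loop" monitoring amounts to reviewing decisions that have already been executed.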

The Air Force report goes on to suggest that the barriers to deployment of autonomous killing machines are legal and ethical rather than technological:

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.


The rest of section 4.6 is reproduced below.


These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems. The guidance for certain missions such as nuclear strike may be technically feasible before UAS safeguards are developed. On that issue in particular, Headquarters Air Staff A10 will be integral to developing and vetting, through the Joint Staff and COCOMs, the roles of UAS in the nuclear enterprise. Ethical discussions and policy decisions must take place in the near term in order to guide the development of future UAS capabilities, rather than allowing the development to take its own path apart from this critical guidance.

Assuming the decision is reached to allow some degree of autonomy, commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today. The trust required for increased autonomy of systems will be developed incrementally. The systems’ programming will be based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system or change the level of autonomy instantaneously during the mission.

To achieve a “perceive and act” decision vector capability, UAS must achieve a level of trust approaching that of humans charged with executing missions. The synchronization of DOTMLPF-P actions creates a potential path to this full autonomy. Each step along the path requires technology enablers to achieve their full potential. This path begins with immediate steps to maximize UAS support to the CCDR. Next, development and fielding will be streamlined, actions will be taken to establish UAS as a cornerstone of USAF capability, and finally the portfolio steps to achieve the potential of a fully autonomous system will be executed.
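
The passage above about commanders refining the level of autonomy "by mission type, and in some cases by mission phase" describes, in effect, a configurable policy layer with an instantaneous override. Here is a minimal sketch of what such a layer might look like; the autonomy levels, phase names, and methods are hypothetical, invented here for illustration and not drawn from the report.

from enum import IntEnum

class AutonomyLevel(IntEnum):
    MONITOR_ONLY = 0   # system observes and reports only
    RECOMMEND = 1      # system proposes, a human decides
    ACT_WITH_VETO = 2  # system acts unless a human vetoes
    FULL = 3           # "perceive and act": no human in the decision

class MissionController:
    def __init__(self):
        # Autonomy is granted per mission phase, analogous to rules of engagement.
        self.levels = {
            "transit": AutonomyLevel.FULL,
            "search": AutonomyLevel.ACT_WITH_VETO,
            "engagement": AutonomyLevel.RECOMMEND,
        }

    def set_level(self, phase, level):
        # The commander can change the granted autonomy instantaneously, mid-mission.
        self.levels[phase] = AutonomyLevel(level)

    def may_act_autonomously(self, phase):
        return self.levels[phase] >= AutonomyLevel.ACT_WITH_VETO

ctl = MissionController()
print(ctl.may_act_autonomously("engagement"))     # False: a human must decide
ctl.set_level("engagement", AutonomyLevel.FULL)   # the report's instantaneous override
print(ctl.may_act_autonomously("engagement"))     # True

Note that the switch itself is trivial to build. Everything hard in the report's framing (trust, accountability, legality) lives outside code like this, which is rather the point about the barriers being legal and ethical rather than technological.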

1 comment:

Patrick Crogan said...

Hi folks
Clearly an important issue and a disturbing trend already well in train. Can you comment on what the reference to a 'perceive and act vector', in quote marks in the Air Force document, is pointing at? It sounds like some kind of AI-speak, but I'm not clear whether it is deliberately citing someone or some well-known idea.