Friday, August 28, 2009

Report on Robots in Healthcare

Robotics for Healthcare, a report sponsored by the European Commission (EC), is available online.

The main aim of this study is to provide key research policy recommendations for the application of robotics in healthcare. Another objective is to raise awareness of these important new developments among a wider audience. To this end, a roadmap of promising applications of robotics in healthcare and associated R&D was developed, taking into account the state of the art as well as short- and long-term future possibilities, with a time horizon ending in 2025.

Oct. 2nd: Hughes, Kurzweil, Rothblatt and Wallach panel

The 2009 WOODSTOCK FILM FESTIVAL will screen 2B (Transbeman) and host a Redesigning Humanity panel with J. Hughes, Ray Kurzweil, Martine Rothblatt, and Wendell Wallach on October 2nd.

2B, directed by Richard Kroehling, is a World Premiere future narrative film portraying a decaying world on the cusp of great transformation. Based upon real science and evolving technologies, 2B's script brings to life the 'technohuman' conundrum. Designed to confront the most controversial topic of the 21st century, 2B explores moral and religious questions raised by the biotech revolution, forcing its audience to deeply question their definitions of life itself.

Complementing the issues raised by 2B and Caprica, the Woodstock Film Festival engages the future head-on with the presentation of a ground-breaking panel, Redesigning Humanity - The New Frontier: If artificial intelligence, nanotechnology, genetic engineering and other technologies will, within the next 50 years, allow human beings to transcend the limitations of the body, how will our world fundamentally change?

Moderated by Dr. James J. Hughes, Executive Director of the Institute for Ethics and Emerging Technologies and bioethicist at Trinity College, this revolutionary panel features futurist Raymond Kurzweil, the author of four best-selling books and an inventor responsible for many technological breakthroughs; Dr. Martine Rothblatt, lawyer, author, and entrepreneur, who founded several satellite technology companies along with Terasem Media and Films, which produces independent narrative and documentary films dealing with biotechnologies; and author Wendell Wallach, regarded as one of the foremost thinkers in the field of machine ethics, who, after co-authoring Moral Machines: Teaching Robots Right from Wrong, is working on a new book examining what humans might become through emerging technologies.

Modeling Morality with Prospective Logic

We encountered an interesting posting on the Earth.Stream website titled, Moral Machines, that had nothing to do with our book, but everything to do with its topic. Two researchers, Luís Moniz Pereira of the Universidade Nova de Lisboa in Portugal and Ari Saptawijaya of the Universitas Indonesia in Depok, have published an article, Modeling morality with prospective logic, in the International Journal of Reasoning-based Intelligent Systems. Apparently they developed a program that considers each possible outcome of various trolley car problems. We will post more information once we actually see the article.
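
Pending a look at the article itself, the general approach it describes — enumerating each possible outcome of a trolley-style dilemma and filtering it through a moral rule — can be sketched in a few lines. This is a hypothetical illustration of the idea, not the authors' prospective logic system; the scenario names, the modeled consequences, and the double-effect-style rule are our own assumptions.

```python
# Toy trolley-problem evaluator: enumerate the modeled consequences of each
# possible action, filter them with a rule inspired by the doctrine of
# double effect, and prefer the permissible action with the fewest deaths.

# Each action maps to its modeled consequences (illustrative values).
scenarios = {
    "do_nothing":     {"deaths": 5, "harm_used_as_means": False},
    "divert_trolley": {"deaths": 1, "harm_used_as_means": False},
    "push_bystander": {"deaths": 1, "harm_used_as_means": True},
}

def permissible(outcome):
    # Double-effect-style rule: harm may be a side effect, never a means.
    return not outcome["harm_used_as_means"]

def choose(scenarios):
    options = {a: o for a, o in scenarios.items() if permissible(o)}
    # Among permissible options, prefer the one with the fewest deaths.
    return min(options, key=lambda a: options[a]["deaths"])

print(choose(scenarios))  # → divert_trolley
```

Even this caricature captures why pushing the bystander is ruled out while diverting the trolley is not, which is the kind of distinction the authors' program reportedly draws.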

Wednesday, August 26, 2009

Bony Robot: A Step Towards Machine Consciousness?


Owen Holland and his team have been receiving some publicity lately for the latest iteration of Chronos, a robot with a human-like skeletal structure. Holland, a champion of the prospects for building machines with consciousness, draws on the theories of the German philosopher Thomas Metzinger in his attempts to build embodied robots that move through, and potentially experience, the world in a manner similar to that of humans. His hope is that, by slowly adding features to such a platform, not only functional but also phenomenal consciousness might emerge in the system. A recent report on Holland's research appeared in a New Scientist article titled, Robot with bones moves like you. The article includes a video of Chronos.

Click here for further information on the Chronos Project.

While philosophers such as Steve Torrance believe that phenomenal consciousness will be necessary for developing robots with moral intelligence, it is very difficult to explain why this will be the case. We welcome, and will post on this blog, thoughts on the role of consciousness in the decision-making processes of moral machines.

Tuesday, August 25, 2009

Robots learn to lie

LiveScience has this story about robots learning (actually evolving) to lie.



source: Robots Learn to Lie, By Bill Christensen, Technovelgy.com, posted: 24 August 2009 10:48 am ET

LiveScience link: http://www.livescience.com/technology/090824-robots-lie.html
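
The dynamic reported in the story — robots evolving to suppress truthful signals because honesty costs them food — can be caricatured in a short simulation. The sketch below is purely illustrative (our own toy model, not the researchers' experiment): each robot carries a probability of signaling near food, signaling attracts rivals and so lowers fitness, and selection over generations drives the population toward staying dark.

```python
import random

random.seed(0)
POP, GENS = 100, 60

def fitness(signal_prob):
    # Finding food is worth 1; signaling leaks its location to rivals,
    # so honest robots lose part of their payoff (0.8 is illustrative).
    return 1.0 - 0.8 * signal_prob

# Each robot's genome is a single number: its probability of signaling.
pop = [random.random() for _ in range(POP)]

for _ in range(GENS):
    # Fitness-proportional selection with a small Gaussian mutation,
    # clamped to the valid probability range [0, 1].
    weights = [fitness(g) for g in pop]
    pop = [min(1.0, max(0.0, random.choices(pop, weights)[0]
                        + random.gauss(0, 0.05)))
           for _ in range(POP)]

avg = sum(pop) / POP
print(f"average signaling probability after {GENS} generations: {avg:.2f}")
```

Run it and the average signaling probability collapses well below its initial value of about 0.5: the population has, in effect, evolved to "lie" by omission, which is the behavior the LiveScience story describes.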

Thursday, August 20, 2009

Report on the Social, Legal and Ethical Issues

The Royal Academy of Engineering has released a report, Autonomous Systems: Social, Legal and Ethical Issues. The full report is available online.

Contents
1. Introduction
1.1 What is an autonomous system?
1.2 The ethical, legal and social implications of autonomous systems
2. Autonomous road vehicles
2.1 Technologies – from GPS and car-to-car communication to centrally controlled autonomous highways
2.2 Timescales and transformation
2.3 Barriers: ethical, legal and social
2.4 Recommended actions
3. Artificial companions and smart homes
3.1 Technologies – from blood pressure monitoring to Second Life
3.2 Timescales and transformation
3.3 Barriers: ethical, legal and social
3.4 Recommended actions
4. Conclusions
4.1 Communication and engagement
4.2 Regulation and governance
4.3 Ethical considerations
4.4 Looking for applications
4.5 The wider landscape
5. Appendices
5.1 Working group and acknowledgement
5.2 Statement of Ethical Principles

Those who attended the roundtable on which this report is based, along with two additional contributors, are listed below:

Professor Igor Aleksander FREng, Emeritus Professor, Imperial College London
Rob Alexander, Department of Computer Science, the University of York
Professor William Bardo FREng, Systems Engineering for Autonomous Systems Defence Technology Centre
Ginny Clarke, the Highways Agency
Lambert Dopping-Hepenstal FREng, BAE Systems
Professor Martin Earwicker FREng, Vice-Chancellor, London South Bank University
Dr Patrick Finley, Medimatron
Professor Peter Gore, Institute of Ageing and Health, University of Newcastle
Professor Roger Kemp FREng, Engineering Department, University of Lancaster
Dr Lesley Paterson, The Royal Academy of Engineering
Dr Mike Steeden, Defence Science Technology Laboratory
Professor Will Stewart FREng
Dr Alan Walker, The Royal Academy of Engineering
Dr Blay Whitby, The University of Sussex
Michael Wong, IBM

Contributors:
Dr Chris Elliott FREng, Pitchill Consulting
Professor Noel Sharkey, University of Sheffield

Evolving Artificial Personalities


According to a report online at The Daily Galaxy, "a research collaboration between Samsung and the Korea Advanced Institute for Science and Technology (KAIST) has created a virtual puppy, Rity, a computerized creature whose every action is guided by a simulated personality system." Luke McKinney reports on this first attempt to evolve an artificial personality through the use of genetic algorithms in a story titled, "Creating Artificial Personalities (An Evolutionary Step Toward Replacing the Human Species?)"

Rity's personality is based on silicon-simulated genes. Its personality program is run from an artificial genome consisting of 1,764 genes, divided into 14 chromosomes. These chromosomes control various components of three separate internal state units, which react to external information and send votes to a probabilistic behavior module equipped with instant instinct reactions.
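
The reported architecture — a genome of weights grouped into chromosomes, feeding internal-state units that react to stimuli and vote probabilistically for behaviors — might be sketched as follows. Only the 1,764-gene, 14-chromosome genome size comes from the report; the unit names, stimulus format, behavior list, and voting scheme are our illustrative assumptions.

```python
import random

GENES_PER_CHROMOSOME = 126  # 14 chromosomes x 126 genes = 1,764 genes
NUM_CHROMOSOMES = 14

def random_genome():
    # Each gene is a weight; chromosomes group the weights that feed
    # different components of the internal-state units.
    return [[random.random() for _ in range(GENES_PER_CHROMOSOME)]
            for _ in range(NUM_CHROMOSOMES)]

def internal_states(genome, stimulus):
    # Hypothetical internal-state units react to external information,
    # each weighted by a slice of the genome.
    units = {}
    for i, name in enumerate(["motivation", "homeostasis", "emotion"]):
        weights = genome[i][:len(stimulus)]
        units[name] = sum(w * s for w, s in zip(weights, stimulus))
    return units

def choose_behavior(units, behaviors):
    # Probabilistic behavior module: the units' combined activation scales
    # a random vote for each behavior; selection is roulette-wheel style.
    votes = [sum(units.values()) * random.random() + 0.01 for _ in behaviors]
    r = random.uniform(0, sum(votes))
    acc = 0.0
    for behavior, vote in zip(behaviors, votes):
        acc += vote
        if r <= acc:
            return behavior
    return behaviors[-1]

genome = random_genome()
units = internal_states(genome, stimulus=[1.0, 0.2, 0.5])
print(choose_behavior(units, ["approach", "bark", "wag_tail", "sleep"]))
```

In a genetic-algorithm setting, candidate genomes like these would then be mutated and recombined, with selection favoring whichever weight configurations produce the most lifelike or desirable personalities.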


Wednesday, August 19, 2009

Beyond Asimov

The full article by Robin Murphy and David Woods, Beyond Asimov: The Three Laws of Responsible Robotics, is now available online.
Our critique reveals that robots need two key capabilities: responsiveness and smooth transfer of control. Our proposed alternative laws remind robotics researchers and developers of their legal and professional responsibilities. They suggest how people can conduct human–robot interaction research safely, and they identify critical research questions.

Ironically, Asimov’s laws really are robot-centric because most of the initiative for safety and efficacy lies in the robot as an autonomous agent. The alternative laws are human-centered because they take a systems approach. They emphasize that
• responsibility for the consequences of robots’ successes and failures lies in the human groups that have a stake in the robots’ activities, and
• capable robotic agents still exist in a web of dynamic social and cognitive relationships.
Ironically, meeting the requirements of the alternative laws leads to the need for robots to be more capable agents—that is, more responsive to others and better at interaction with others.

Singularity Summit 2009 in NYC October 3-4

The Singularity Summit will be held on the East Coast for the first time this year. Details regarding the Summit and registration are available online. Speakers include: Itamar Arel, Gregory Benford, Ed Boyden, David Chalmers, William Dickens, Gary Drescher, Ben Goertzel, Aubrey de Grey, Stuart Hameroff, Robin Hanson, Marcus Hutter, Randal Koene, Ray Kurzweil, Gary Marcus, Bela Nagy, Michael Nielsen, Anders Sandberg, Jurgen Schmidhuber, Ned Seeman, Brad Templeton, Peter Thiel, Gary Wolf, and Eliezer Yudkowsky.

Friday, August 14, 2009

Ethics of Armed Unmanned Systems Discussed in Washington

A featured panel discussion, "Ethics in Armed Unmanned Systems in Combat," was part of the AUVSI 2009 North American Symposium, held this week in Washington, D.C.

Ron Arkin and P.W. Singer were on the panel, but of particular note was the participation of Major General Charles "Charlie" Dunlap, who has taught law and ethics at the USAF Academy, "Air University" at Maxwell, and at NDU/Ft. McNair. Another panelist, whose name we do not have, is the commander of the Predator squadron at Creech AFB, NV. This panel may have been the first time that a flag officer of the U.S. Armed Forces has openly discussed the ethics of armed unmanned systems in public.

Transferring risks from soldiers to machines


The New York Times (August 11th) has a very nice story titled, A Soldier's Eye in the Sky, outlining the short-term plans for the uses of hovering drones and unmanned ground vehicles for assisting brigades tracking down insurgents.

The new equipment, being developed by Boeing and other contractors, is expected to cost about $2 billion for the first seven brigades. Each has at least 3,000 soldiers, and the equipment is about two years away from use in the field. By 2025, the Army plans to create similar gear and other improvements for all 73 of its active and reserve brigades.

Thursday, August 13, 2009

Don't Talk to Robots

Photo Spread – Robots that Brew Tea and Rescue Victims of a Terrorist Attack


Can you identify the robot in the picture to the right? The Boston Globe has a wonderful collection of photos showing contemporary robots performing a wide variety of tasks. This photo spread, titled Robots, does a nice job of representing the array of robots available today.

Wednesday, August 12, 2009

All in the mind: Robots go to war

Part 2 of Natasha Mitchell's series on robot ethics aired over the weekend in Australia and is available for listening or download at http://www.abc.net.au/rn/allinthemind/stories/2009/2641416.htm. In this show, Ron Arkin is brought into the discussion that started with Noel Sharkey and me the previous week.

Also check out the All in the Mind blog for the show.

Sunday, August 9, 2009

Path to Autonomy

So, I've been looking at the United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 that Wendell already posted a link to, and I think section 4.6 is particularly interesting:

Advances in computing speeds and capacity will change how technology affects the OODA loop. Today the role of technology is changing from supporting to fully participating with humans in each step of the process. In 2047 technology will be able to reduce the time to complete the OODA loop to micro- or nanoseconds. Much like a chess master can outperform proficient chess players, UAS will be able to react at these speeds and therefore this loop moves toward becoming a “perceive and act” vector. Increasingly humans will no longer be “in the loop” but rather “on the loop” – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.


Noel Sharkey has already pointed out that the role of humans in these decisions is becoming vanishingly small, and this shift in terminology from "man in the loop" to "man on the loop" seems only to reinforce that shift.

The Air Force report goes on to suggest that the barriers to deployment of autonomous killing machines are legal and ethical rather than technological:

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.


The rest of section 4.6 is reproduced below.


These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems. The guidance for certain missions such as nuclear strike may be technically feasible before UAS safeguards are developed. On that issue in particular, Headquarters Air Staff A10 will be integral to develop and vet through the Joint Staff and COCOMS the roles of UAS in the nuclear enterprise. Ethical discussions and policy decisions must take place in the near term in order to guide the development of future UAS capabilities, rather than allowing the development to take its own path apart from this critical guidance.

Assuming the decision is reached to allow some degree of autonomy, commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today. The trust required for increased autonomy of systems will be developed incrementally. The systems’ programming will be based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system or change the level of autonomy instantaneously during the mission.

To achieve a “perceive and act” decision vector capability, UAS must achieve a level of trust approaching that of humans charged with executing missions. The synchronization of DOTMLPF-P actions creates a potential path to this full autonomy. Each step along the path requires technology enablers to achieve their full potential. This path begins with immediate steps to maximize UAS support to CCDR. Next, development and fielding will be streamlined, actions will be made to bring UAS to the front as a cornerstone of USAF capability, and finally the portfolio steps to achieve the potential of a fully autonomous system would be executed.

Friday, August 7, 2009

US Air Force Flight Plan for Unmanned Systems 2009-2047

The unclassified sections of the United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 are now available online.

The plan assumes that:
The range, reach, and lethality of 2047 combat operations will necessitate an unmanned system-of-systems to mitigate risk to mission and force, and provide perceive-act line execution. (p. 14)

Increasing autonomy is embraced in the vision outlined.
That harnesses increasingly automated, modular and sustainable systems that retain our ability to employ UASs through their full envelope of performance resulting in a leaner, more adaptable, tailorable, and scalable force that maximizes combat capabilities to the Joint Force. (p. 15)

Monday, August 3, 2009

MM review in Computing Now

From Computing Now, July 13, 2009:
Moral Machines reviewed by Paul Scerri

As agents and robots move more and more from the lab to the real world, the possibility that they can cause physical, psychological, or monetary harm increases. In recent years, deaths and serious physical injury have been caused by malfunctioning robots. Some amount of blame for the recent global economic crises has even been placed on intelligent trading agents that didn't fully comprehend the impacts of their actions. As the prevalence, availability, capabilities, and autonomy of agents and robots increases, it's critical to examine how to minimize any harm caused by malfunctions or unintended consequences. As engineers, we need to develop practices and techniques that minimize any harmful impacts of our technology.

Read the rest at http://www2.computer.org/portal/web/cnbooks/blog/-/blogs/1397600

Sunday, August 2, 2009

Registration and Travel Grants to March Ethical Guidance Workshop

Registration for the March workshop on Ethical Guidance for Pervasive and Autonomous Technologies is now available at http://ethicalpait.blogspot.com/, as are applications for travel grants for members of groups underrepresented in science and engineering.

Travel Subsidy Eligibility. If you are a member of one or more underrepresented groups in science and engineering (see the Travel Subsidy Application Form) who can demonstrate active scholarship in at least one of the relevant areas (e.g., pervasive information technology, autonomous information technology, practical ethics, research ethics), you are invited to apply for a travel subsidy. You must complete and submit the registration form for the workshop and the application form and other required information for the travel subsidy together by the application due date. Subsidies will be judged by the PAIT Planning Committee based on the applicants’ qualifications and potential to contribute to the workshop. Note that your expenses will be reimbursed after you have submitted your original receipts, which must be received by April 2, 2010.

Do you read me HAL? Robot wars, moral machines and silicon that cares - Part 1


The first episode of Natasha Mitchell's two-part report on robot ethics for her All in the Mind show on Australia's ABC is now available by podcast. This week's show, which aired on Aug 1, skillfully weaves together interviews that Natasha conducted with me and Noel Sharkey and covers mostly non-military applications of robotics, especially elder care and health care. Other topics include Asimov's laws and the Uncanny Valley hypothesis.

Check out the discussion on Natasha's blog too.

Next week's show will focus on military applications, and Ron Arkin (who makes a brief appearance in the first episode) will also be featured.

Transcript of show.