Tuesday, June 30, 2009

Colin Allen @ Adelaide Festival of Ideas

Colin will be appearing at the Adelaide Festival of Ideas July 10, 11: http://www.adelaidefestivalofideas.com.au/

Friday July 10, 2.30 - 4.00pm
Bonython Hall
The Mind: Mind over Matter?
Colin Allen
Natasha Mitchell
(Participating Chair)
Mandyam Srinivasan

Saturday 11 July, 10.00 - 10.45am
Elder Hall
Robot Morality
Colin Allen

Mind Over Matter

Neuroscience, Cognitive Science, Brain Science: whatever you call it, over the last ten years amazing advances in brain imaging and neural recording techniques have led to a revolution in how we think about thinking. Every edition of popular science magazines such as New Scientist or Scientific American features new discoveries in brain function: What is déjà vu? How might you see sound? Why would someone disown part of their own body?

Increasingly cognitive neuroscientists are venturing into domains once thought to be outside the limits of any kind of experimental enquiry. Perhaps now data exist to answer questions previously only considered by philosophers. Are moral beliefs absolute? Is the concept of God a natural consequence of our neural circuitry? Can the mind exist distinct from any physical reality? How could we ever decide?

Save the Date: Workshop on Ethical Guidance for Research and Application of Pervasive and Autonomous Information Technology, March 3-4, 2010

Announcing a 2-day workshop on “Ethical Guidance for Research and Application of Pervasive and Autonomous Information Technology (PAIT)” March 3-4, 2010. The workshop will be a culminating event of a year-long process of planning, case development and analysis, and networking among information technology engineers and researchers, ethicists, and other interested persons. The workshop is funded by the National Science Foundation (grant number SES-0848097) and sponsored by Indiana University’s Poynter Center for the Study of Ethics and American Institutions and the Association for Practical and Professional Ethics.

Confirmed speakers include Helen Nissenbaum, Associate Professor in the Department of Culture and Communication and Senior Fellow of the Information Law Institute, New York University; and Fred H. Cate, Distinguished Professor and C. Ben Dutton Professor of Law, IU School of Law, and Director of the Center for Applied Cybersecurity Research, Indiana University Bloomington.

Technologies being developed today, using very small, relatively inexpensive, wireless-enabled computers and autonomous robots, will most likely result in the near-omnipresence of information-gathering and processing devices embedded in clothing, appliances, carpets, food packaging, doors and windows, paperback books, and other everyday items, gathering data about when and how (and possibly by whom) an item is used. The data can be analyzed, stored, and shared via the Internet. Some of these pervasive technologies will also be autonomous, making decisions on their own about what data to gather and share, which actions to take (sound an alarm, lock a door), and the like.

The potential benefits of pervasive and autonomous information technology (PAIT) are many and varied, sometimes obvious, sometimes obscure – as are the ethical implications of their development and deployment. The history of information technology suggests that long-standing issues including usability, privacy, and security, among others, as well as relatively new phenomena such as ethically blind autonomous systems, are best addressed early enough to become part of the culture of researchers and engineers responsible for identifying needs and designing solutions.

This project will create a firm ethical foundation for this nascent field by convening an international meeting of experts in PAIT, ethicists well versed in practical ethics, and other stakeholders. The meeting will feature discussions of previously-prepared case studies describing actual and anticipated uses of PAIT, invited presentations on key issues, working groups to identify and categorize ethical concerns, and other activities aimed at community-building and formulating ethical guidance to help researchers and designers of such systems recognize and address ethical issues at every stage, from design to deployment to obsolescence. The participants will form the core of a new interdisciplinary subfield of value-centered PAIT which will develop guidelines and conceptual tools to support communication and collaboration among and between researchers, engineers, and ethicists.

The Planning Committee (see http://poynter.indiana.edu/pait) is actively seeking experts interested in joining one or more informal working groups to help prepare for the workshop; if you are interested in being involved, please get in touch with the project director (see contact information below).

The PAIT workshop will precede the annual meeting of the Association for Practical and Professional Ethics, which will begin on Thursday, March 4, 2010 at the historic Hilton Cincinnati Netherland Plaza in Cincinnati, Ohio.

Registration will be required for attendance at the PAIT workshop, but there will be no registration fee. PAIT participants are also encouraged to register to attend and participate in the Association’s annual meeting (see http://www.indiana.edu/~appe/).

For more information:
Kenneth D. Pimple, Ph.D., PAIT Project Director
Poynter Center, Indiana University
Bloomington IN 47405-3602
FAX 812-855-3315

Jamaica Robotics Program for Youth in Kingston

Tuesday, June 23, 2009

Is a computer implicated in the D.C. train crash that killed nine?

Sarah Karush and Brian Westley, writers for the Associated Press, report that a computer failure may have caused the D.C. train crash.

Investigators looking into the deadly crash of two Metro transit trains focused Tuesday on why a computerized system failed to halt an oncoming train, and why the train failed to stop even though the emergency brake was pressed.

This isn't the first time that Metro's automated system has been called into question.
In June 2005, Metro experienced a close call because of signal troubles in a tunnel under the Potomac River. A train operator noticed he was getting too close to the train ahead of him even though the system indicated the track was clear. He hit the emergency brake in time, as did the operator of another train behind him.

Did a drone kill more than 60 people at a Pakistan funeral?

The New York Times reported on June 23rd that a suspected U.S. strike killed at least 60 in Pakistan. The strike on Tuesday hit a funeral in South Waziristan.

Details of the attack, which occurred in Makeen, remained unclear, but the reported death toll was exceptionally high. If the reports are indeed accurate and if the attack was carried out by a drone, the strike could be the deadliest since the United States began using the aircraft to fire remotely guided missiles at members of the Taliban and Al Qaeda in the tribal areas of Pakistan.

Air Force plans for smaller, faster, and deadlier UAVs

Two recent articles have reported on the Air Force's plans for the next generation of UAVs. Michael Hoffman reports on the plan for smaller, faster, deadlier UAVs on the Air Force Times website.

The Esquire magazine website has an article by Erik Sofge titled, Inside the Pentagon's New Plan for Drones That Don't Piss Off Pakistan.

[T]he Air Force is planning to build a more selective breed of military drones, with swarms of bird-size bots shadowing targets and new unmanned aerial vehicles (UAVs) capable of launching mini-missiles at multiple targets at once. The mechanized assassin, it seems, is about to become a lot more professional.

Like most UAVs, these robots would most likely be used for surveillance and reconnaissance. But in an animated clip released by the Air Force late last year, a MAV lands on an enemy sniper, and, without so much as a prayer to its machine god, detonates itself. The new Air Force briefing doesn't elaborate on this miniature suicide-bomber concept, but it does include plans to have flocks of sparrow-size MAVs airborne by 2015, and even smaller, dragonfly-size robots by 2030. And with the recent news that Israel is developing an explosives-laden snakebot, the writing is on the wall: You can run from tomorrow's robotic hitmen, and you can hide, and they'll flap or squirm or glide into position and kill you anyway.

Autonomous Robot Assists in Surgery

Bioengineers at Duke University have developed a laboratory robot that can successfully locate tiny pieces of metal within flesh and guide a needle to its exact location -- all without the need for human assistance.

This robot may be the harbinger of systems capable of placing and removing radioactive "seeds" for the treatment of prostate cancer.

Read the full article titled Autonomous robot detects shrapnel on the PHYSORG.COM website.

Forbes Publishes The AI Report

Forbes magazine has published The AI Report: The past, present and future of artificial intelligence. The report has three sections – Intelligence, Robotics, and Living with AI. All the articles can be accessed online by clicking here. Here is a list of the contents of this special report.


Intelligence

Passing The Turing Test – Kevin Warwick
Machine Minds – Michael Vassar
When Will Computers Be Smarter Than Us – Nick Bostrom
What Happened To Theoretical AI – David Gelernter
Giving Computers Free Will – Judea Pearl
Computers Make Great Students – Peter Norvig
Can A.I. Fight Terrorism – Juval Aviv
My Computer, My Collaborator – Matthew Klenk
Dumb Like Google – Lee Gomes
We Want More From Computers – But Not Too Much – Eyal Amir


Robotics

The Humanoids Are Here – Courtney Boyd Myers
The Dawn of the Humanoid Robot – James Kuffner
The Gamelatron Robot Orchestra – Aaron Taylor Kuffner
Robots Vs. Rothko – Margaret A. Boden
Who Needs Humanoids – Helen Greiner
Robots on Jeopardy – Herbert Gelernter
Encounters With Electronic Pets – Lawrence Osborne

Living With AI

Will A Machine Replace You? – Courtney Boyd Myers
AI in The C-Suite – Dale Addison
AI And What To Do About It – Ben Goertzel
The Coming Artilect War – Hugo de Garis
The Ethical War Machine – Patrick Lin
Intelligence Evolution – Barry Ptolemy

Wednesday, June 17, 2009

Robotic Ferret

Ever since 9/11, securing cargo containers has appeared to be a nightmarish task. Now robotic ferrets have been enlisted to inspect cargo containers. The ferrets will help detect radioactive materials, drugs, and explosives, as well as illegal immigrants smuggled within the containers.

Dubbed the "cargo-screening ferret" and designed for use at seaports and airports, the device is being worked on at the University of Sheffield in the United Kingdom with funding from the Engineering and Physical Sciences Research Council (EPSRC). ... The ferret will be the world's first cargo-screening device able to pinpoint all kinds of illicit substances and the first designed to operate inside standard freight containers. It will be equipped with a suite of sensors that are more comprehensive and sensitive than any currently employed in conventional cargo scanners.

The full story titled, Robotic ferret to secure cargo containers, is available on the Homeland Security Newswire.

Sharkey on Killer Robots in the Daily Telegraph

Noel Sharkey published a piece in the Daily Telegraph on the need to consider the moral consequences of developing mechanical soldiers. He writes, in an article titled "March of the killer robots," that:
Despite planned cutbacks in spending on conventional weapons, the Obama administration is increasing its budget for robotics: in 2010, the US Air Force will be given $2.13 billion for unmanned technology, including $489.24 million to procure 24 heavily armed Reapers. The US Army plans to spend $2.13 billion on unmanned vehicle technology, including 36 more Predators, while the Navy and Marine Corps will spend $1.05 billion, part of which will go on armed MQ-8B helicopters.

[I]n Waziristan, where there have been repeated Predator strikes since 2006, many of them controlled from Creech Air Force Base, thousands of miles away. According to reports coming out of Pakistan, these have killed 14 al-Qaeda leaders and more than 600 civilians.

Such widespread collateral damage suggests that the human remote-controllers are not doing a very good job of restraining their robotic servants. In fact, the role of the "man in the loop" is becoming vanishingly small, and will disappear. "Our decision power [as controllers] is really only to give a veto," argues Peter Singer, a senior fellow at the Brookings Institution in Washington DC. "And, if we are honest with ourselves, it is a veto power we are often unable or unwilling to exercise because we only have a half-second to react."

Full post

Tuesday, June 16, 2009

CFP: Roboethics Session at AP-CAP, Tokyo


Asia-Pacific Computing and Philosophy 2009 will be held on October 1st-2nd, 2009 in Tokyo, Japan. The conference will be hosted at the University of Tokyo's Sanjo Conference Hall. Keynote speeches will be given by Professor Hiroshi Ishiguro (Osaka University) and Professor Shinsuke Shimojo (Caltech). This year AP-CAP 2009 will be held in conjunction with the Devices that Alter Perception workshop, which will form a special track. The conference will also feature a special track on roboethics.


• July 15th, 2009: Deadline for abstract submission
• August 15th, 2009: Abstract acceptance notification
• September 1st, 2009: Early registration deadline
• September 15th, 2009: Camera-ready papers due
• September 21st, 2009: Papers available online
• October 1st-2nd, 2009: AP-CAP 2009 Conference


The call for papers, information for attendees, Word and LaTeX templates, the online paper submission form, and registration are all hosted at:


Following acceptance, papers will be made available online for commentary and public voting in order to award the AP-CAP 2009 best paper prize.


Authors are invited to submit an extended abstract limited to 1,000 words. The deadline for abstract submission is July 15th, 2009 at 23:59 GMT. At submission time, authors should indicate a track for abstract consideration. Camera-ready papers are due on September 15th and should be A4 paper size, less than 10 pages, and under 2 megabytes in size.

Track Chair: Jorge Solis

Nowadays, with recent technological breakthroughs in developing human-like robots, medical robots, and the like, it is possible to conceive of intelligent machines that can autonomously perform specific tasks. More recently, the introduction of personal robots designed to coexist with humans is coming closer to reality. New challenges therefore arise in introducing robots to application fields outside of industry. The goals of the track are to: (1) understand the ethical, social, and legal aspects of the design, development, and employment of robots; (2) engage in a critical analysis of the social implications of robots; and (3) increase the convergence of roboticists, computer scientists, philosophers, and others.


AP-CAP 2009 is sponsored by the International Association for Computing and Philosophy. The conference is organized by the University of Tokyo Meta-Perception Research Group, the Oxford University Information Ethics Research Group, and the University of Hertfordshire Group in Philosophy of Information.

Conference Chair: Masatoshi Ishikawa

Program Chairs: Alvaro Cassinelli & Carson Reynolds

Program Committee: Ezendu Ariwa, Jonathan Bird, Charles Ess, Soraj Hongladarom, Kayoko Ishii, Shin'Ichi Konomi, Ken Mogi, Tomoe Moriyama, Yvonne Rogers, Jorge Solis, Sundar Sarukkai, and Ryo Uehara.


Attendees who are members of IACAP will enjoy a discounted conference fee. We encourage interested parties to join IACAP prior to the September 1st early registration deadline. More information about membership is available at the IACAP website:



On-line registration will be available at the AP-CAP 2009 website:


The conference registration fees provide a discount for early registration (before September 1st) as well as a discount for IACAP members. Registration fees are payable in US dollars. For on-site registration, we will accept credit card payment or cash.

Early Registration (September 1st deadline) - 375 USD
IACAP Member Registration - 325 USD
Late / On-Site Registration - 425 USD

Monday, June 8, 2009

Drone Used for Drug Surveillance

One advantage of unmanned drones is their ability to stay aloft for long periods of time. This has turned out to be quite useful in the extended surveillance of drug traffickers. A Heron UAV has been deployed for drug interdiction off the coast of El Salvador, reports a TIME article by Tim Padgett titled "Using Drones in the Drug War." The Heron is capable of staying airborne for more than 20 hours at 15,000 feet while streaming back high-resolution real-time video.

Cost savings from the use of drones, as well as placing fewer drug agents' lives in jeopardy, may make funding an expansion of the drone fleet in the drug war irresistible to Congress. From a civil liberties perspective, the use of drones raises concerns as to whether they might also be deployed in ways that violate privacy laws or transgress other civil rights.

[T]he Heron isn't without problems. The Turkish military complained last month about mishaps with the drones it had bought from IAI for counterterrorism surveillance, such as the drones too often failing to respond to commands from their human operators on the ground. (IAI rejected the claims but has promised to "rectify" any problems.) U.S. Customs & Border Protection has used Predator drones to detect illegal immigration, but a series of crashes in recent years has clouded the program.

Possible Computer Glitch in Air France 447 Crash?

It is too early to know what caused the crash of Air France Flight 447, but there is already speculation about a computer glitch. This was presumably a system failure rather than an action initiated by the computer. An article titled "Could a Computer Glitch Have Brought Down Air France 447," written by TIME correspondent Jeffrey T. Iverson, is available online at YAHOO! News.

Friday, June 5, 2009

MM reviewed in Metapsychology

Ryan Tonkens reviews Moral Machines online in Metapsychology.

...Moral Machines represents a valuable addition to, and extension of, the current literature on machine morality. As the development of autonomous artificial moral agents becomes closer to being realized, I suspect that this book will only gain in importance.

URL for review: http://metapsychology.mentalhelp.net/poc/view_doc.php?type=book&id=4873

Review - Moral Machines

Teaching Robots Right from Wrong

by Wendell Wallach and Colin Allen

Oxford University Press, 2008

Review by Ryan Tonkens

Apr 28th 2009 (Volume 13, Issue 18)

Moral Machines: Teaching Robots Right from Wrong is the first book-length discussion of issues arising in the nascent field of Machine Ethics, offered by two of its more veteran thinkers. The authors do an admirable job at using language accessible to an interdisciplinary audience, which also makes the book open to a more general public readership. It will be of interest to anyone concerned with the ethical, social, and engineering issues that accompany the quest to develop machines that can act autonomously out in the world.

As a response to the expanding (and seemingly limitless) scope of artificial intelligence and robotics research, a surge of recent work has focused on issues related to the development of artificial moral agents (AMAs) (or moral machines or ethical robots). These robots will (or in certain cases already do) have the capacity to perform ethically relevant actions out in the world, in varying ways and with varying degrees of autonomy. As the capacities of such robots increase, so too should our demand that such machines act ethically. The cutting-edge discipline of Machine Ethics--made up of engineers, artificial intelligence researchers, and philosophers--is important because it investigates whether or not the development of AMAs is possible (and desirable), and helps us to prepare just in case it is.

The main themes of Moral Machines are twofold: An examination of the motivations we have for creating AMAs and how we should go about developing machines that behave ethically. Each chapter of the book focuses on certain specific issues that need to be attended to if the project of Machine Ethics is to be successful. Some of the more noteworthy questions posed by Wallach and Allen include: 'Is machine morality necessary?', 'Can robots be moral?', 'Does humanity want machines making moral decisions?', 'What are the roles of engineers and philosophers in the design of AMAs?', 'What methods and moral frameworks are best suited for the design of AMAs?', and 'How can machine morality inform human morality?'. Through their attempt to answer these questions, the authors offer a detailed and thorough survey of the relevant research being done on machine morality, and offer preliminary (and often quite insightful) answers to these and other questions (although they humbly admit that much more work needs to be done in the future).

The authors also make some more substantial claims about how ethics could be implemented into machines. For example, after discussing the benefits and shortcomings of both top-down (rule-based) and bottom-up (evolution- or learning-based) approaches to the design of moral robots, the authors spend some time arguing for a hybrid approach (Ch. 8 and Ch. 11). One example suggested by the authors is an approach that appeals to a virtue ethical framework, since virtue ethics focuses on virtuous character traits which are acquired through training and habit formation (and hence may accommodate both top-down and bottom-up computational approaches). The authors argue that a hybrid approach holds much promise for overcoming the problems associated with pure top-down and bottom-up approaches to implementing ethics into machines. This proposal has some initial appeal and plausibility, and warrants the attention of further research.

Despite the value of the book as a whole, a few critical notes are worth mentioning. For one thing, although the authors touch upon issues surrounding the nature of moral agency, they do so only somewhat superficially, leaving many of the more complex and important issues unattended (and unresolved). For example, there is a rich debate over whether or not consciousness is a necessary condition for being a moral agent, and, if so, whether robots could be sufficiently conscious so as to possess moral agency (akin to humans, perhaps). Although the authors do mention the issue of machine consciousness (and moral agency in general), they do so only in passing (Ch. 4).

Furthermore, although the authors discuss the relationship between ethics and engineering, and the different (and often conflicting) roles of ethicists and engineers, the authors seem to champion the task of the engineer. In other words, although the book is devoted to the topic of machine morality, the authors focus primarily on the design, implementation, and engineering aspects of creating AMAs, with the consequence of leaving other (ethical) issues by the wayside. For example, in their discussion of which sort of ethic we should implement into machines, the authors focus on which frameworks work best in terms of their computability or implementability. There is no doubt that this issue is important. Yet certain ethical questions may demand attention, prior to the implementation stage. For example, whether the moral codes we are trying to implement into our machines allow for the development of those types of machines is never asked. Moreover, from an engineering perspective, the moral frameworks appealed to for designing AMAs are assessed based solely on whether they are conducive to implementability. Yet, ethicists may be reluctant to accept that all (or most) moral frameworks start on an even playing field, the problem simply being a matter of which frameworks are most conducive to implementation. Some discussion of the longstanding debates in Ethics between competing moral frameworks may be necessary here. Although the authors argue for a hybrid approach to designing AMAs, perhaps one that adopts a virtue-based moral framework, they do not ask whether we would want our machines to be virtuous, in the sense that virtue ethics is the best moral framework on offer (as compared to duty-based or consequentialist ethics, for example).

Despite these unattended issues, Moral Machines represents a valuable addition to, and extension of, the current literature on machine morality. As the development of autonomous artificial moral agents becomes closer to being realized, I suspect that this book will only gain in importance.

© 2009 Ryan Tonkens

Ryan Tonkens, Department of Philosophy, York University, Toronto, Canada

Monday, June 1, 2009

SETI Radio on Robots Call the Shots

P.W. Singer, Wendell Wallach, Pablo Garcia, and Robert Anderson were all interviewed for a public radio show developed by the SETI Institute. The show, titled Robots Call the Shots, is available online now. The interviewers for SETI are Seth Shostak and Molly Bentley.

Click here to go directly to a podcast of the show.