Friday, May 29, 2009

Europeans Build Drone that Helps Save Lives

European researchers have developed a small robotic drone capable of helping save lives in emergency situations or preventing terrorist attacks in urban areas.
Drones, known as unmanned aerial vehicles (UAVs), have proven to be of great value in military operations, but so far, their advantages have not been fully exploited for civilian uses.

In civil life, drones are mainly used in the agriculture sector – for assessing how well crops are growing in a particular part of a field – or for meteorological measurements.

The main barriers to the wider use of drones are their large size and their lack of manoeuvrability around obstacles. Most military drones are fixed-wing UAVs designed to operate at high altitudes, where they do not need much manoeuvrability. In built-up, highly populated areas, such drones would pose a danger to people if they hit a tree or a building, or crashed after losing their navigational signal.

To read the full article, titled "A drone for security and safety," click here.

Interview at Institute for Religion and Peace, Vienna

An interview with Colin has just been published on the website of the Institute for Religion and Peace of the Austrian Military Chaplaincy.

They have also planned an interview with Ron Arkin next month.

Tuesday, May 26, 2009

Teaching Machines Morality Interview

An interview with Wendell Wallach on Connecticut Public Radio is now available online. John Dankosky, the host of Where We Live, interviewed Wendell on May 26th. John is a wonderful interviewer and had actually read the book before talking with Wendell. Click here to go directly to the interview.

Robots Should be Slaves

Joanna Bryson has written a piece titled Robots Should Be Slaves that according to her website is forthcoming as a chapter of a book edited by Yorick Wilks. Joanna is a computer scientist at the University of Bath who worked on COG as a graduate student in Rodney Brooks' robotics lab at MIT.

More evidence that this area is heating up!

Sunday, May 24, 2009

Ron Arkin's Book Begins Shipping

Ronald Arkin's long-awaited Governing Lethal Behavior in Autonomous Robots is scheduled to begin shipping this weekend. Click here for the Amazon link to the book.

Preventing Skynet

Our friend Michael Anissimov has, together with others, launched a new blog, "Terminator Salvation: Preventing Skynet: Just say 'no' to genocidal artificial intelligence!" We applaud this effort and encourage members of the machine morality, machine ethics, and roboethics community to contribute to the blog. The field has split into two communities, with only a little cross-over: those focused on the future ethical challenges posed by a possible Singularity, and those whose attention is directed at more immediate challenges and the implementation of moral decision making in present or near-future technology. I'd like to propose that we make efforts to bridge this gap, and will have more to say about that in a future posting.

The Singularity Finds its Way Into the Mainstream

The most prominent story in the Sunday New York Times' Week in Review section is titled "The Coming Superbrain: Computers keep getting smarter, while we just stay the same." John Markoff, who covers Silicon Valley for The Times, writes that "A.I.’s new respectability is turning the spotlight back on the question of where the technology might be heading and, more ominously, perhaps, whether computer intelligence will surpass our own, and how quickly."

Soldiers Love Their Robots and Why 'Terminator' Is So Creepy

Jeremy Hsu has written a number of articles of interest to readers of this blog. On May 20th, LiveScience posted an article by Hsu, "Why 'Terminator' Is So Creepy," which focuses largely on the psychological underpinnings of the 'uncanny valley' effect. The research projects of Karl MacDorman and Thalia Wheatley are highlighted.

The story of Scooby-Doo, a PackBot with whom members of an Explosive Ordnance Disposal team bonded, is becoming well known and was discussed in Moral Machines. However, as Hsu relates in an article on Yahoo News titled "Real Soldiers Love Their Robot Brethren," Scooby-Doo was not an isolated instance of soldiers forming emotional bonds with robotic drones and bomb sweepers.

"One of the psychologically interesting things is that these systems aren't designed to promote intimacy, and yet we're seeing these bonds being built with them," said Peter Singer, a leading defense analyst at the Brookings Institution and author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century."

Tuesday, May 19, 2009

Nicholas Carr on Artificial Morality

We missed this February posting by Nicholas Carr on The Artificial Morality of the Robot Warrior on the Encyclopedia Britannica Blog. Carr is the author of The Big Switch: Rewiring the World, From Edison to Google, and it is good to see him taking an interest in machine morality.

Monday, May 18, 2009

Ethical Guide for Robot Warriors in the Works

Eric Bland, Discovery News
May 18, 2009 -- Smart missiles, rolling robots, and flying drones, currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop and robots are left to make decisions on their own, such as whom to kill or what to bomb?

Read the rest at

Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an "ethical governor," a package of software and hardware that tells robots when and what to fire. His book on the subject, "Governing Lethal Behavior in Autonomous Robots," comes out this month.

He argues not only can robots be programmed to behave more ethically on the battlefield, they may actually be able to respond better than human soldiers.

"Ultimately these systems could have more information to make wiser decisions than a human could make," said Arkin. "Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives."

Lethal military robots are currently deployed in Iraq, Afghanistan and Pakistan. Ground-based robots like iRobot's SWORDS or QinetiQ's MAARS robots are armed with weapons to shoot insurgents, appendages to disarm bombs, and surveillance equipment to search buildings. Flying drones can fire at insurgents on the ground. Patriot missile batteries can detect incoming missiles and send up other missiles to intercept and destroy them.

No matter where the robots are deployed, however, there is always a human involved in the decision-making, directing where a robot should fly and what munitions it should use if it encounters resistance.

Humans aren't expected to be removed any time soon. Arkin's ethical governor is designed for a more traditional war where civilians have evacuated the war zone and anyone pointing a weapon at U.S. troops can be considered a target.

Arkin's challenge is to translate more than 150 years of codified, written military law into terms that robots can understand and interpret themselves. In many ways, creating an independent war robot is easier than many other artificial intelligence problems, because the laws of war have long existed and are clearly stated in numerous treaties.

"We tell soldiers what is right and wrong," said Arkin. "We don't allow soldiers to develop ethics on their own."

One possible scenario for Arkin's ethical governor is an enemy sniper posted in a building next to an important cultural site, like a mosque or cemetery. A wheeled military robot emerges from cover and the sniper fires on it. The robot finds the sniper and has a choice: does it use a grenade launcher or its own sniper rifle to bring down the fighter?

Using geographical data on the surrounding buildings, the robot would decide to use the sniper rifle to minimize any potential damage to the surrounding structures.

For a human safely removed from combat, the choice of a rifle seems obvious. But a soldier under fire might take extreme action, possibly blowing up the building and damaging the nearby buildings.

"Robots don't have an inherent right to self-defense and don't get scared," said Arkin. "The robots can take greater risk and respond more appropriately."

Fear might influence human decision-making, but math rules for robots. Simplified, various actions can be classified as ethical or unethical and assigned a value. Starting with a lethal action and subtracting the various ethical responses to the situation leaves an unethical response. Other, similar equations govern the various possible actions.
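The "subtracting" described above can be read as a filter: remove the responses that violate a constraint, then choose among what remains. Here is a minimal sketch of that idea in Python. This is not Arkin's actual governor; every name and number is invented for illustration, loosely following the sniper-near-a-mosque scenario.

```python
def choose_response(candidates):
    """Filter out responses that violate the encoded laws of war,
    then pick the permissible one with the lowest collateral estimate."""
    permissible = [c for c in candidates if not c["violates_law_of_war"]]
    if not permissible:
        # No ethical lethal option remains, so the robot withholds fire.
        return "withhold_fire"
    return min(permissible, key=lambda c: c["collateral_estimate"])["name"]

# Illustrative candidates for the sniper scenario described above.
candidates = [
    {"name": "grenade_launcher", "collateral_estimate": 0.9,
     "violates_law_of_war": True},   # would endanger the cultural site
    {"name": "sniper_rifle", "collateral_estimate": 0.1,
     "violates_law_of_war": False},  # precise, minimal surrounding damage
]

print(choose_response(candidates))  # -> sniper_rifle
```

The hard part, as the article notes next, is not this selection step but deciding what goes into the candidate list and the constraint labels in the first place.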

The difficult thing is to determine what types of actions go into those equations, and for that humans will be necessary, and ultimately responsible.

Robots, freed of human masters and capable of lethality "are going to happen," said Arkin. "It's just a question of how much autonomy will be put on them and how fast that happens."

Giving robots specific rules and equations will work in an ideal, civilian-free war, but critics point out such a thing is virtually impossible to find on today's battlefield.

"I challenge you to find a war with no civilians," said Colin Allen, a professor at Indiana University who also coauthored a book on the ethics of military robots.

An approach like Arkin's is easier to program and will appear sooner, but a bottom-up approach, in which the robot learns the rules of war itself and makes its own judgments, is a far better scenario, according to Allen.

The problem with a bottom-up approach is that the technology doesn't yet exist, and likely won't for another 50 years, says Allen.

Whenever autonomous robots are deployed, humans will still be in the loop, at least legally. If a robot does do something ethically wrong, despite its programming, the software engineer or the builder of the robot will likely be held accountable, says Michael Anderson at Franklin and Marshall University.

Saturday, May 16, 2009