Thursday, January 28, 2010

Can We Trust Robots?

The cover of the current issue of Popular Mechanics asks, "Can We Trust Robots?" The cover story, written by Erik Sofge, is titled "The Uncertain Future For Social Robots."

Nearly every researcher I spoke with agreed on a single point: We need ethical guidelines for robots, and we need them now. Not because robots lack a moral compass, but because their creators are operating in an ethical and legal vacuum. “When a bridge falls down, we have a rough-and-ready set of guidelines for apportioning out accountability,” says P.W. Singer, a senior fellow at the Brookings Institution and author of Wired for War. “Now we have the equivalent of a bridge that can get up and move and operate in the world, and we don’t have a way of figuring out who’s responsible for it when it falls down.”

In a debate steeped in speculation and short on empirical data, a set of smart ethical guidelines could act as an insurance policy. “My concern is not about the immediate yuck factor: What if this robot goes wrong?” says Chris Elliott, a systems engineer and trial lawyer who contributed to a recent Royal Academy report on autonomous systems. “It’s that people will go wrong.” Even if the large-scale psychological impact of social robots turns out to be zero, Elliott worries that a single mishap, and the corresponding backlash, could reverse years of progress. Imagine the media coverage of the first patient killed by a robotic surgeon, an autonomous car that T-bones a school bus or a video clip of a robotic orderly wrestling with a dementia patient. “The law is way behind. We could reach a point where we’re afraid to deploy new beneficial robots because of the legal uncertainty,” Elliott says.

The exact nature of those guidelines is still anyone’s guess. One option would be to restrict the use of each robotic class or model to a specific mission—nurse bots that can visit with patients within a certain age range, or elder-care bots that watch for dangerous falls but aren’t built for small talk and snuggling.

Popular Mechanics on The Uncanny Valley

Knowing that the uncanny valley began as a groundless thought experiment, and has inspired a range of more self-contained experiments, doesn't completely invalidate it. It simply means the valley has grown up, and that casual references to it are only slightly off-base. After all, there's still the matter of the uncanny's power to horrify us and validate our fears of robots. If machines can trigger cognitive dissonance in the human brain, roboticists must continue to carefully tweak their creations, to avoid individual revulsion and even a society-wide blowback. That would be a major concern for the designers and manufacturers of the coming generation of social robots.

Read Erik Sofge's article, "The Truth About Robots and the Uncanny Valley: Analysis," at Popular Mechanics online.

When a Robot Needs a Lawyer


Sunday, January 24, 2010

Honda Dreams: Living with Robots

Honda has been putting together a series of short, award-winning films. They just released a film called Living with Robots, featuring Wendell Wallach along with various scientists and the robot Asimo.

Saturday, January 23, 2010

Sharkey on Robots in the Military, Childcare, and Eldercare

There is also an article titled "Artificial intelligence: The robots are coming but are we ready for them?" which discusses the ideas of Noel Sharkey, Alan Winfield, Kevin Warwick, David Levy, and others.

War and Sex with Robots

On January 22nd, Noel Sharkey interviews David Levy on ethical issues arising from robots designed as sex toys, including the newly released Roxxy.

Next week Noel will be interviewing Peter Asaro about Roxxy and other sex robots of the future.

These interviews can be accessed at The Sound of Science website.

Houston police also conducted "no media" test of spy drones

Noel Sharkey points out, in response to my previous post, that it's not just the UK police looking to use drones for surveillance; the same has been reported in Houston, Texas.

US to Sell Drones to Pakistan

The New York Times reports that Secretary of Defense Robert Gates is offering Pakistan its own Shadow drones as an incentive to encourage cooperation in the fight against the Taliban. Of course, the Pakistani government might elect to use these drones to target political enemies other than those with whom the U.S. is at war. The story, titled "U.S. Offers Pakistan Drones to Urge Cooperation," was published on January 21st.

Saturday, January 16, 2010

Robot Border Guards

A MIGRANT makes a furtive dash across an unwalled rural section of a national border, only to be confronted by a tracked robot that looks like a tiny combat tank - with a gimballed camera for an eye. As he passes the bug-eyed droid, it follows him and a border guard's voice booms from its loudspeaker. He has illegally entered the country, he is warned, and if he does not turn back he will be filmed and followed by the robot, or by an airborne drone, until guards apprehend him.

Welcome to the European border of the not-too-distant future. Amid the ever-present angst over illegal immigration, cross-border terrorism and contraband smuggling, some nations are turning to novel border-surveillance technologies, potentially backed up by robots, a conference on state security at Leeds Metropolitan University, UK, heard in November. The idea is to scatter arrays of sensors in a border area in ways that give guards or robots plenty of time to respond before their targets make good an escape.

An article at New Scientist titled "Robot border guards to patrol future frontiers" reports that:
the US Department of Homeland Security, along with Boeing Intelligence and Security Systems, is fielding sensors on the border with Mexico, in an $8 billion project called the Secure Border Initiative network.

Managing Data From Drones

"Military Is Awash in Data From Drones," writes Christopher Drew in The New York Times.

Lt. Col. Brendan M. Harris, who is in charge of an intelligence squadron here . . . said the Air Force had just installed telestrators on its latest hand-held video receiver, and harried officers in the field would soon be able to simply circle the images of trucks or individuals they wanted the drones to follow.

But Colonel Harris also said that the drones often shot gray-toned video with infrared cameras that was harder to decipher than color shots. And when force is potentially involved, he said, there will be limits on what automated systems are allowed to do.

“You need somebody who’s trained and is accountable in recognizing that that is a woman, that is a child and that is someone who’s carrying a weapon,” he said. “And the best tools for that are still the eyeball and the human brain.”

Israel and 40 Other Countries Building Unmanned Fighting Machines

In 10 to 15 years, one-third of Israel's military machines will be unmanned, predicts Giora Katz, vice president of Rafael Advanced Defense Systems Ltd., one of Israel's leading weapons manufacturers.

In an article published on WSJ.COM titled "Israeli Robots Remake Battlefield: Nation Forges Ahead in Deploying Unmanned Military Vehicles by Air, Sea and Land," Charles Levinson writes that "Over 40 countries have military-robotics programs today." If anyone knows where this figure comes from, let us know. In the video included with the article, Levinson makes it clear that the unmanned fighting machines Israel is developing are being designed so they can make decisions without a human in the loop.

Sharkey interviews Singer

Noel Sharkey interviews Peter Singer (Wired for War) for his Sound of Science webcast.

Wednesday, January 13, 2010

Elevators and Ethics

Yesterday on NPR there was an "All Tech Considered" piece about the latest generation of smart elevator controllers, which can compute in real time the most efficient allocation of stops to floors to minimize passenger waiting time. The story dwelled quite a bit on the loss of human operators, but it was casually mentioned that a company is developing a smartphone application that will communicate with the elevator controller so that it is "aware" that you will be arriving at the elevator shaft within a few minutes, and can schedule accordingly.

This raises a number of interesting issues, quite aside from the surveillance opportunities it affords. For instance, how will the system know whether you are just leaving work to run an errand, or whether you are in a particular situation (e.g. a medical emergency at home) that might require a "less efficient" decision in order to transport you before other people who have been waiting longer? Could such machines be better designed to detect and respond flexibly to such contingencies? I don't see why not, but what are the dangers of going down the route towards autonomous machines making decisions that are sensitive to the ethically relevant features of not entirely predictable situations?
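To make the tension concrete, here is a toy sketch of the kind of real-time allocation such a controller performs: each hall call is assigned to the car with the lowest estimated arrival time, with a crude "priority" flag standing in for an ethically motivated override. Everything here is illustrative (the data model, the timing constant, the function names); no real elevator system exposes an API like this.

```python
# Toy dispatcher: assign a hall call to the car that can reach it soonest.
# A priority call (e.g. a medical emergency) jumps that car's stop queue,
# trading aggregate efficiency for one rider's urgent need.

SECONDS_PER_FLOOR = 2  # assumed travel time between adjacent floors

def estimated_arrival(car, call_floor):
    """Seconds until `car` reaches `call_floor`, serving queued stops first."""
    t, position = 0, car["floor"]
    for stop in car["stops"]:
        t += abs(stop - position) * SECONDS_PER_FLOOR
        position = stop
    return t + abs(call_floor - position) * SECONDS_PER_FLOOR

def dispatch(cars, call_floor, priority=False):
    """Pick the car with the lowest estimated arrival; queue the stop."""
    best = min(cars, key=lambda c: estimated_arrival(c, call_floor))
    if priority:
        best["stops"].insert(0, call_floor)  # override: serve this call first
    else:
        best["stops"].append(call_floor)
    return best

cars = [
    {"name": "A", "floor": 1, "stops": [5]},
    {"name": "B", "floor": 9, "stops": []},
]
chosen = dispatch(cars, 8)
print(chosen["name"])  # B: counting A's queued stop, B arrives sooner
```

The interesting design question from the post lives in that `priority` flag: the efficiency criterion is a one-line `min()`, but deciding *when* to override it is exactly the part that requires sensitivity to ethically relevant, unpredictable context.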


Wednesday, January 6, 2010

George Bekey interview

George Bekey is interviewed by Gerhard Dabringer for his ongoing series of discussions with roboticists and philosophers who have something to say about the ethics of military robotics.