The cover of the current issue of Popular Mechanics asks, "Can We Trust Robots?" The story, written by Erik Sofge, is titled "The Uncertain Future For Social Robots."
Sofge writes: Nearly every researcher I spoke with agreed on a single point: We need ethical guidelines for robots, and we need them now. Not because robots lack a moral compass, but because their creators are operating in an ethical and legal vacuum. “When a bridge falls down, we have a rough-and-ready set of guidelines for apportioning out accountability,” says P.W. Singer, a senior fellow at the Brookings Institution and author of Wired for War. “Now we have the equivalent of a bridge that can get up and move and operate in the world, and we don’t have a way of figuring out who’s responsible for it when it falls down.”
In a debate steeped in speculation and short on empirical data, a set of smart ethical guidelines could act as an insurance policy. “My concern is not about the immediate yuck factor: What if this robot goes wrong?” says Chris Elliott, a systems engineer and trial lawyer who contributed to a recent Royal Academy report on autonomous systems. “It’s that people will go wrong.” Even if the large-scale psychological impact of social robots turns out to be zero, Elliott worries that a single mishap, and the corresponding backlash, could reverse years of progress. Imagine the media coverage of the first patient killed by a robotic surgeon, an autonomous car that T-bones a school bus or a video clip of a robotic orderly wrestling with a dementia patient. “The law is way behind. We could reach a point where we’re afraid to deploy new beneficial robots because of the legal uncertainty,” Elliott says.
The exact nature of those guidelines is still anyone’s guess. One option would be to restrict the use of each robotic class or model to a specific mission—nurse bots that can visit with patients within a certain age range, or elder-care bots that watch for dangerous falls but aren’t built for small talk and snuggling.