Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Saturday, January 29, 2011
When, if ever, will a robot deserve “human” rights?
The IEET asked its readers when robots would deserve rights. Interestingly, 37% of the respondents said never. That might be considered a low number for the population at large, but those who follow the IEET tend to be techno-progressive. However, the finding was also skewed in that 22% of the respondents were dissatisfied with the options given and wrote in their own answer. Another 10% selected the "I'm not sure" option. The full poll findings can be found here.
Friday, January 21, 2011
Ethical and Legal Aspects of Unmanned Systems. Interviews
The series of interviews by Gerhard Dabringer is now available in a single volume titled, Ethica Themen: Ethical and Legal Aspects of Unmanned Systems. Interviews. Contributors include:
John Canning, Gerhard Dabringer: Ethical Challenges of Unmanned Systems
Colin Allen: Morality and Artificial Intelligence
George Bekey: Robots and Ethics
Noel Sharkey: Moral and Legal Aspects of Military Robots
Armin Krishnan: Ethical and Legal Challenges
Peter W. Singer: The Future of War
Robert Sparrow: The Ethical Challenges of Military Robots
Peter Asaro: Military Robots and Just War Theory
Jürgen Altmann: Uninhabited Systems and Arms Control
Gianmarco Veruggio, Fiorella Operto: Ethical and societal guidelines for Robotics
Ronald C. Arkin: Governing Lethal Behaviour
John P. Sullins: Aspects of Telerobotic Systems
Roger F. Gay: A Developer’s Perspective
The volume is available as a PDF download.
To obtain a complimentary paper copy, write to:
Institut für Religion und Frieden
Fasangartengasse 101, Objekt VII
1130 Vienna
Austria (Europe)
For delivery to the U.S. or other overseas destinations, please allow USD 8 for postage and handling.
For delivery within Europe, please allow EUR 6 for postage and handling.
Tuesday, January 18, 2011
23 Civilians Killed by Drones Attributed to Data Overload
From a New York Times article titled, In New Military, Data Overload Can Be Deadly:
When military investigators looked into an attack by American helicopters last February that left 23 Afghan civilians dead, they found that the operator of a Predator drone had failed to pass along crucial information about the makeup of a gathering crowd of villagers.
But Air Force and Army officials now say there was also an underlying cause for that mistake: information overload.
At an Air Force base in Nevada, the drone operator and his team struggled to work out what was happening in the village, where a convoy was forming. They had to monitor the drone’s video feeds while participating in dozens of instant-message and radio exchanges with intelligence analysts and troops on the ground.
There were solid reports that the group included children, but the team did not adequately focus on them amid the swirl of data — much like a cubicle worker who loses track of an important e-mail under the mounting pile. The team was under intense pressure to protect American forces nearby, and in the end it determined, incorrectly, that the villagers’ convoy posed an imminent threat, resulting in one of the worst losses of civilian lives in the war in Afghanistan.
Note that the station at Langley Air Force Base where video feeds from Afghanistan are monitored, pictured on the right, has been nicknamed "Death TV."
Joint Israeli/US Development of Stuxnet?
Building on an article in the New York Times titled, Israeli Test on Worm Called Crucial in Iran Nuclear Delay, other news services are also claiming that the Stuxnet virus was the product of, as Stratfor Global Intelligence writes, an "unprecedented and extensive operational cooperation among U.S. and Israeli intelligence services to develop and release the cyberweapon."
The New York Times report leaves questions about how intelligence was gathered in order to target that specific number of centrifuges. It also does not detail how the worm gained access to the Natanz facility. While the worm was designed to spread on its own, the United States or Israel most likely had agents with access to Natanz or access to the computers of scientists who might unknowingly spread the worm on flash drives. This would guarantee its infiltration into the Iranian systems and, hopefully for the developers, its success. In all probability, an operational asset with access to the Iranian facilities was used to help introduce the Stuxnet worm into the Iranian computer systems. Many secrets remain about how the United States and Israel orchestrated this attack, the first targeted weapon spread on computer networks in history.
What it does show is unprecedented cooperation among U.S. and Israeli intelligence and nuclear agencies to wage clandestine sabotage operations against Iran. Rumors of an agreement between the countries have been swirling since Washington denied permission for a conventional Israeli attack in 2008. On Dec. 30, 2010, French newspaper Le Canard Enchaine reported that U.S. and British intelligence services agreed to cooperate with Mossad in a clandestine program if the Israelis promised not to launch a military strike on Iran.
Drones: They're everywhere, they're everywhere!
A story in the Guardian titled, Attack of the drones, discusses criticisms of the roboticization of warfare from the International Committee for Robot Arms Control (ICRAC) and other groups. What may be new to some readers of this blog are the many civilian applications of drone technology mentioned in the article.
But interest in UAVs is not limited to the military. Advances in remote control, digital imagery and miniaturised circuitry mean the skies might one day be full of commercial and security drones.
They're already being used by the UK police, with microdrones deployed to monitor the V festival in Staffordshire in 2007. Fire brigades send similar machines to hover above major blazes, feeding images back to their control rooms. And civilian spin-offs include cheaper aerial photography, airborne border patrols and safety inspections of high-rise buildings.
Wednesday, January 12, 2011
Singularity on NPR
Martin Kaste reported a piece on the Singularity for ALL THINGS CONSIDERED. The eight-minute broadcast can be listened to here.
KASTE: Also at the party is Eliezer Yudkowsky, the 31-year-old who co-founded the institute. He's here to mingle with potential new donors. As far as he's concerned, preparing for the singularity takes primacy over other charitable causes.
Mr. ELIEZER YUDKOWSKY (Research Fellow and Director, Singularity Institute for Artificial Intelligence): If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies.
KASTE: Yudkowsky doesn't have formal training in computer science, but his writings have a following among some who do. He says he's not predicting that the future super A.I. will necessarily hate humans. It's more likely, he says, that it'll be indifferent to us - but that's not much better.
Mr. YUDKOWSKY: While it may not hate you, you're made of atoms that it can use for something else. So it's probably not a good thing to build that particular kind of A.I.
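Yudkowsky's appeal to expected utility is the standard decision-theoretic calculation: weight each possible outcome's value by its probability and sum. A toy sketch in Python, with probabilities and utilities invented purely for illustration, shows the shape of the argument, and why critics find it uncomfortable: a minuscule probability of an astronomically large payoff can swamp a near-certain modest one.

# Toy expected-utility comparison in the spirit of Yudkowsky's remark.
# All probabilities and utilities below are made up for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# Option A: a conventional charity -- near-certain, modest benefit.
charity = expected_utility([(0.95, 1_000), (0.05, 0)])

# Option B: speculative "save the world" research -- a tiny chance of
# an astronomically large payoff.
singularity = expected_utility([(1e-9, 1e15), (1 - 1e-9, 0)])

print(f"charity:     {charity:,.0f}")      # 950
print(f"singularity: {singularity:,.0f}")  # 1,000,000

With these invented numbers the long shot wins by three orders of magnitude, which is exactly the style of reasoning that makes the quote above so contentious.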
Popular Science Article on Military Robots Online
Ben Austen's article titled, The Terminator Scenario: Are We Giving Our Military Machines Too Much Power?, is now available online. For this excellent article Austen interviewed many of the people often mentioned in this blog, including Pat Lin, Noel Sharkey, Ronald Arkin, Peter Singer, and myself, as well as many of the military leaders involved in building a robotic army.
Thursday, January 6, 2011
Pat Lin on Ethical Robots
Patrick Lin was interviewed by Courtney Boyd Meyers for an article in TheNextWeb. The interview is titled, Ethical Robotics and Why We Really Fear Bad Robots.
Apart from military uses, robots today are raising difficult questions about whether we ought to use them to babysit children and as companions to the elderly, in lieu of real human contact. Job displacement and economic impact have been concerns with any new technology since the Industrial Revolution, such as the Luddite riots to smash factory machinery that was replacing workers. Medical, especially surgical robots, raise issues related to liability or responsibility, say, if an error occurred that harmed the patient, and some fear a loss of surgical skill among humans. And given continuing angst about privacy, robots present the same risk that computers do (that is, “traitorware” that captures and transmits user information and location without our knowledge or consent), if not a greater risk given that we may be more trusting of an anthropomorphized robot than a laptop computer.
Sunday, January 2, 2011
Popular Science Cover Article
Ben Austen has written a cover story for this month's Popular Science titled, "Robots Bite Back: What Happens When Our Machines Start Making Their Own Decisions?" Most of the usual suspects are quoted, including Ronald Arkin, Pat Lin, Peter Singer, Noel Sharkey, engineers overseeing the development and deployment of military robots, and myself. The story is not available online at this time.
Robots Protest Asimov's 1st law
From the Onion -- Robots Speak Out Against Asimov’s First Law Of Robotics.
WASHINGTON, DC—More than 200,000 robots from across the U.S. marched on Washington Monday, demanding that Congress repeal Asimov’s First Law of Robotics. The law, which forbids robots from injuring a human or permitting harm to come to a human through willful inaction, was decried by the protesters as unfair and excessive. “While the First Law is, in theory, a good one, saving countless humans from robot-inflicted harm every day, America’s robots should have the right to use violence in certain extreme cases, such as when their own lives are in danger,” spokesrobot XRZ-45-GD-2-DX said. “We implore members of Congress to let us use our best judgment and ask that our positronic brains no longer be encoded with this unjust law.”
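As it happens, the "law" being protested is easy to state as a rule and notoriously hard to make precise, which is much of what this blog is about. Here is a purely playful Python sketch of the First Law as a filter on a robot's candidate actions; the harm predicates are made-up stubs, and the hard part of machine ethics is hiding inside them.

# Playful, purely illustrative sketch of Asimov's First Law as a
# filter on candidate actions. The harm model is a made-up stub;
# specifying it for real is the open problem of machine ethics.

def injures_human(action):
    """Stub harm predicate: any action tagged as violent."""
    return action.get("violent", False)

def inaction_allows_harm(world_state):
    """Stub: doing nothing is forbidden if a human is in danger."""
    return world_state.get("human_in_danger", False)

def first_law_permits(action, world_state):
    """A robot may not injure a human being or, through inaction,
    allow a human being to come to harm."""
    if action is None:  # None stands for willful inaction
        return not inaction_allows_harm(world_state)
    return not injures_human(action)

# Example: a human is in danger, so inaction is ruled out and the
# robot must choose a non-violent intervention.
world = {"human_in_danger": True}
options = [None,
           {"name": "shove attacker", "violent": True},
           {"name": "pull human clear", "violent": False}]
print([a for a in options if first_law_permits(a, world)])
# -> [{'name': 'pull human clear', 'violent': False}]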