Sunday, December 18, 2011

Nice article by Pat Lin in The Atlantic:

Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA

Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the interrogated. A robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.

By the way, Pat's edited volume Robot Ethics: The Ethical and Social Implications of Robotics with Keith Abney and George Bekey is just out from MIT Press. Looks like a great set of chapters. (Chapter 4 is by Wendell and me, responding to some of the criticisms we've heard of our Moral Machines over the past 3 years.)

Tuesday, November 1, 2011

Petman, from the makers of BigDog

Boston Dynamics, the makers of the "BigDog" robot, have just unveiled the "PETMAN" humanoid version. Still operating tethered, but presumably it's just a matter of time before it's running through a forest near you:

Saturday, October 15, 2011

Robot Caregivers and Children's Capability to Play

Yvette Pearson and Jason Borenstein have an article in Science and Engineering Ethics titled, The Intervention of Robot Caregivers and the Cultivation of Children's Capability to Play.
Abstract: In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.

The article can be accessed here.

Call for Papers: Armed Military Robots

Call for Papers for a Special Issue with Ethics and Information Technology on “Armed Military Robots”

Ethics and Information Technology is calling for papers to be considered for inclusion in a Special Issue on the ethics of armed military robots, to be edited by Noel Sharkey, Juergen Altmann, Peter Asaro and Robert Sparrow. The need for this Special Issue became apparent at the Berlin meeting of the International Committee for Robot Arms Control in September, 2010. This meeting expressed deep concerns about the proliferation and development of armed military robots and identified a pressing need for more international discussion of the ethics of these systems:

Recent armed conflicts have seen robots playing a number of important military roles, yet informed ethical discussion has, for the most part, lagged well behind. We therefore invite contributors from a wide range of disciplines including philosophy, law, engineering, robotics, computer science, artificial intelligence, peace studies, and policy studies, to consider the ethical issues raised by the development and deployment of remote piloted, semi-autonomous, and autonomous robots (UXVs) for military roles.

Will the development of sophisticated military robots make wars more likely? If so, can the proliferation and use of war robots be controlled? How might robots change the nature of modern warfare? And how should Just War Theory and International Law be applied to wars fought by robots and/or to the operations of robots in contemporary conflicts? We welcome submissions that discuss or attempt to answer these – or related – questions. Given the contemporary political and military enthusiasm for remotely operated and semi-autonomous weapons, we are especially interested to receive submissions that offer a critical perspective.

Other suitable topics for papers for this special issue include (but are not limited to):
• Is it morally permissible to grant autonomous systems authority for the use, or targeting, of lethal force?
• What are the implications of the just war doctrine of jus in bello for the operations of military robots, and vice versa?
• What are the implications of military robots for jus ad bellum? Will they lower the threshold for starting wars?
• What should an arms control regime governing robots seek to regulate?
• What factors are at work in decisions by states to work for or against such arms control, and what are the commonalities with and differences from efforts and campaigns to ban other weapons?
• Who should be held ethically and/or legally responsible for the operations of autonomous and semi-autonomous weapons? How should we understand agency and responsibility in complex (or joint-cognitive or human-machine) systems controlling lethal force?
• How should the idea of military valor be understood in an age when war-fighters may be thousands of kilometers away from the wars they are fighting?
• What are the ethical and political implications of the conduct of “risk-free” warfare?
• What are the ethical and legal issues involved in the use of remote-operated drones for targeted killing?
• How might military necessity impact on the use of armed autonomous military robots?

Submissions will be double-blind refereed for relevance to the theme as well as academic rigor and originality. High-quality articles not deemed sufficiently relevant to the special issue may be considered for publication in a subsequent non-themed issue of Ethics and Information Technology. Closing date for submissions: December 2, 2011.
To submit your paper, please use the journal's online submission system.

Wednesday, October 12, 2011

Japanese robot with self-organizing neural net learning

Next step in robot learning?

The comments on this story are all a bit apocalyptic, but it's hard to tell how sophisticated this system actually is.

Tuesday, October 11, 2011

Wednesday, September 28, 2011

The Expanding World of Drones

Ralph Nader has an article today about drones in which he mentions the ICRAC (International Committee for Robot Arms Control) meeting in Berlin last year.

The NYTIMES and other media sources report that the F.B.I. arrested a terrorist who was planning to attack the Capitol and the Pentagon using remote-controlled aircraft.

Last week (Sept. 19th) The Washington Post reported on the development of autonomous killing drones in an article titled, A Future for drones: Autonomous killing.
“The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of the ICRAC and a professor at the New School in New York who teaches a course on digital war. “The good technology is far off, but technology that doesn’t work well is already out there. The worry is that these systems are going to be pushed out too soon, and they make a lot of mistakes, and those mistakes are going to be atrocities.”

Research into autonomy, some of it classified, is racing ahead at universities and research centers in the United States, and that effort is beginning to be replicated in other countries, particularly China.

My Wife's Drone


Thursday, July 28, 2011

Advancing Ethics

Chris Santos-Lang, an early contributor to bottom-up theories for developing moral machines, has a new article online titled, Advancing Ethics.
Much as we have good reason to think we can invest intelligently in science to get technological rewards, we have offered good reason to think one can invest intelligently in ethics to improve decision-making. It would be reckless and naive, in our advanced society, to continue thinking of ethics as an obscure academic interest, a mere set of intellectual games, or theological controversies far beyond our comprehension and removed from the economic realities that dominate real life. Ethics, just like transportation, agriculture, commerce, education and health, deserves our attention in a practical and future-oriented way. Just as a department of commerce must be careful about affiliating with any particular existing business, a department of ethics would have to be careful about affiliating with any particular religion or system of rules, but that would not stop it from monitoring the ethical ecosystem (especially warning about dramatic changes) just as we monitor commerce.

Machine Ethics Anthology

The long-awaited anthology titled, Machine Ethics, edited by Michael and Susan Leigh Anderson, has been published by Cambridge University Press. The volume includes both classic articles and more recent material on this emerging field. The contributors are: James Moor, Susan Leigh Anderson, J. Storrs Hall, Colin Allen, Wendell Wallach, Iva Smit, Sherry Turkle, Drew McDermott, Steve Torrance, Blay Whitby, John Sullins, Deborah G. Johnson, Luciano Floridi, David J. Calverley, James Gips, Roger Clarke, Bruce McLaren, Marcello Guarini, Alan K. Mackworth, Selmer Bringsjord, Joshua Taylor, Bram van Heuveln, Konstantine Arkoudas, Micah Clark, Ralph Wojtowicz, Matteo Turilli, Luis Moniz Pereira, Ari Saptawijaya, Morteza Dehghani, Ken Forbus, Emmett Tomai, Matthew Klenk, Peter Danielson, Christopher Grau, Thomas M. Powers, Michael Anderson, Helen Seville, Debora G. Field, Eric Dietrich.

The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.

Machine Ethics can be purchased from Amazon here.

Sunday, May 29, 2011

Unthinking machines

A.I. & Cog Sci luminaries Marvin Minsky, Patrick Winston, and Noam Chomsky, among others, weighed in on why they think there has been a lack of progress in A.I. at an event celebrating MIT's 150th anniversary earlier this month, as reported by MIT's Technology Review.

Peter Norvig has written an interesting commentary on why Chomsky is wrong to deride statistical approaches to language.

Wednesday, May 25, 2011

Sword Fighting Robots

IEEE Spectrum has an article about why researchers at Georgia Tech are giving robots swords. The full article titled, Awesomely Bad Idea: Teaching a Robot to Sword Fight can be read here. A video of a sword fighting robot is below, but researcher Tobias Kunz is already exploring putting a sword in the hand of a robotic arm.

Should We Fear a Robot Future?

From the Future of Humanity Institute 2011 Winter Intelligence conference.
Participants were also asked when human-level machine intelligence would likely be developed. The cumulative distribution below shows their responses:

The median estimate for a 50% chance of human-level machine intelligence is 2050. That suggests we have around 40 years to enjoy before the extremely bad outcome of human-level robot intelligence arrives. The report presents a list of milestones which participants said will let us know that human-level intelligence is within 5 years. I suppose this will be a useful guide for when we should start panicking. A sample of these includes:

Winning an Oxford union‐style debate
World's best chess-playing AI was written by an AI
Emulation/development of mouse level machine intelligence
Full dog emulation…
Whole brain emulation, semantic web
Turing test or whole brain emulation of a primate
Toddler AGI
An AI that is a human level AI researcher
Gradual identification of objects: from an undifferentiated set of unknown size- parking spaces, dining chairs, students in a class‐ recognition of particular objects amongst them with no re‐conceptualization
Large scale (1024) bit quantum computing (assuming cost effective for researchers), exaflop per dollar conventional computers, toddler level intelligence
Already passed, otherwise such discussion among ourselves would not have been funded, let alone be tangible, observable and accordable on this scale: as soon as such a thought is considered a ‘reasonable’ thought to have

Read the full article here.

Friday, May 20, 2011

Augur, Breazeal, Sharkey and Wallach in BBC Video

"Can robots know the difference between right and wrong?" is a video feature produced by David Reid for the BBC. Reid videotaped Augur, Breazeal, Sharkey, and me during Innorobo, a robotics trade show. The video can be accessed here.

Thursday, May 19, 2011

Google Cars on Nevada Highways?

Google has begun lobbying the Nevada legislature to allow the operation of its self-driving cars on public roads.
The company confirmed on Tuesday that it has lobbied on behalf of the legislation, though executives declined to say why they want the robotic cars’ maiden state to be Nevada. Jay Nancarrow, a company spokesman, said the project was still very much in the testing phase.

Google hired David Goldwater, a lobbyist based in Las Vegas, to promote the two measures, which are expected to come to a vote before the Legislature’s session ends in June. One is an amendment to an electric-vehicle bill providing for the licensing and testing of autonomous vehicles, and the other is the exemption that would permit texting.

In testimony before the State Assembly on April 7, Mr. Goldwater argued that the autonomous technology would be safer than human drivers, offer more fuel-efficient cars and promote economic development.

Although safety systems based on artificial intelligence are rapidly making their way into today’s cars, completely autonomous systems raise thorny questions about safety and liability.

Read the full story by John Markoff from the NYTIMES titled, "Google Lobbies Nevada to Allow Self-Driving Cars."

UK Approach to Unmanned Aircraft Systems

A document from the UK Ministry of Defence with a critical perspective on the development of unmanned aircraft has been receiving considerable attention in the press. I posted a link to what the media said about the document earlier. The full report titled, "Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems", is now available online and can be accessed here.

Wallach in H+ Magazine

An interview of Wendell Wallach by Ben Goertzel has been published in H+ magazine online. Goertzel asks Wallach a number of questions regarding the likelihood of developing artificial agents with moral decision-making capabilities and consciousness.
What are your thoughts about consciousness? What is it? Let’s say we build an intelligent computer program that is as smart as a human, or smarter. Would it necessarily be conscious? Could it possibly be conscious? Would its degree and/or type of consciousness depend on its internal structures and dynamics, as well as its behaviors?

There is still a touch of the mystic in my take on consciousness. I have been meditating for 43 years, and I perceive consciousness as having attributes that are ignored in some of the existing theories for building conscious machines. While I dismiss supernatural theories of consciousness and applaud the development of a science of consciousness, that science is still rather young. The human mind/body is more entangled in our world than models of the self-contained machine would suggest. Consciousness is an expression of relationship. In the attempt to capture some of that relational dynamic, philosophers have created concepts such as embodied cognition, intersubjectivity, and enkinaesthetia. There may even be aspects of consciousness that are peculiar to being carbon-based organic creatures.

We already have computers that are smarter than humans in some respects (e.g., mathematics and data-mining), but are certainly not conscious. Future (ro)bots that are smarter than humans may demonstrate functional abilities associated with consciousness. After all, even an amoeba is aware of its environment in a minimal way. But other higher-order capabilities such as being self-aware, feeling empathy, or experiencing transcendent states of mind depend upon being more fully conscious.

I suspect that without somatic emotions or without conscious awareness (ro)bots will fail to interact satisfactorily with humans in complex situations. In other words, without emotional and moral intelligence they will be dumber in some respects. However, if certain abilities can be said to require consciousness, then having the abilities is a demonstration that the agent has a form of consciousness. The degree and/or type of consciousness would depend on its internal structure and dynamics, not merely upon the (ro)bot's demonstrating behavior equivalent to that of a human.

The full interview is available here.

Do Hospitals Hype Robotic Surgery?

Johns Hopkins Medical School issued the following news release: "Hospitals misleading patients about benefits of robotic surgery, study suggests."
Johns Hopkins research shows hospital websites use industry-provided content and overstate claims of robotic success
An estimated four in 10 hospital websites in the United States publicize the use of robotic surgery, with the lion’s share touting its clinical superiority despite a lack of scientific evidence that robotic surgery is any better than conventional operations, a new Johns Hopkins study finds.

The promotional materials, researchers report online in the Journal for Healthcare Quality, overestimate the benefits of surgical robots, largely ignore the risks and are strongly influenced by the product’s manufacturer.

“The public regards a hospital’s official website as an authoritative source of medical information in the voice of a physician,” says Marty Makary, M.D., M.P.H., an associate professor of surgery at the Johns Hopkins University School of Medicine and the study’s leader. “But in this case, hospitals have outsourced patient education content to the device manufacturer, allowing industry to make claims that are unsubstantiated by the literature. It’s dishonest and it’s misleading.”

In the last four years, Makary says, the use of robotics to perform minimally invasive gynecological, heart and prostate surgeries and other types of common procedures has grown 400 percent. Proponents say robot-assisted operations use smaller incisions, are more precise and result in less pain and shorter hospital stays — claims the study’s authors challenge as unsubstantiated. More hospitals are buying the expensive new equipment and many use aggressive advertising to lure patients who want to be treated with what they think is the latest and greatest in medical technology, Makary notes.

But Makary says there are no randomized, controlled studies showing patient benefit in robotic surgery. “New doesn’t always mean better,” he says, adding that robotic surgeries take more time, keep patients under anesthesia longer and are more costly.

None of that is apparent in reading hospital websites that promote its use, he says. For example he points out that 33 percent of hospital websites that make robot claims say that the device yields better cancer outcomes — a notion he points out as misleading to a vulnerable cancer population seeking out the best care.

Makary and his colleagues analyzed 400 randomly selected websites from U.S. hospitals of 200 beds or more. Data were gathered on the presence and location of robotic surgery information on a website, the use of images or text provided by the manufacturer, and claims made about the performance of the robot.

Forty-one percent of the hospital websites reviewed described the availability and mechanics of robotic surgery, the study found. Of these, 37 percent presented the information on the homepage and 66 percent mentioned it within one click of the homepage. Manufacturer-provided materials were used on 73 percent of websites, while 33 percent directly linked to a manufacturer website.

When describing robotic surgery, the researchers found that 89 percent made a statement of clinical superiority over more conventional surgeries, the most common being less pain (85 percent), shorter recovery (86 percent), less scarring (80 percent) and less blood loss (78 percent). Thirty-two percent made a statement of improved cancer outcome. None mentioned any risks.

“This is a really scary trend,” Makary says. “We’re allowing industry to speak on behalf of hospitals and make unsubstantiated claims.”

Makary says websites do not make clear how institutions or physicians arrived at their claims of the robot’s superiority, or what kinds of comparisons are being made. “Was robotic surgery being compared to the standard of care, which is laparoscopic surgery,” Makary asks, “or to ‘open’ surgery, which is an irrelevant comparison because robots are only used in cases when minimally invasive techniques are called for.”

Makary says the use of manufacturer-provided images and text also raises serious conflict- of-interest questions. He says hospitals should police themselves in order not to misinform patients. Johns Hopkins Medicine, for example, forbids the use of industry-provided content on its websites.

“Hospitals need to be more conscientious of their role as trusted medical advisers and ensure that information provided on their websites represents the best available evidence,” he says. “Otherwise, it’s a violation of the public trust.”

In addition to Makary, other Johns Hopkins researchers involved in the study include Linda X. Jin, B.A., B.S.; Andrew A. Ibrahim, B.A.; Naeem A. Newman, M.D.; and Peter J. Pronovost, M.D., Ph.D.

Media Contact: Stephanie Desmon


Saturday, May 7, 2011

An Algorithm for Evolving Altruism?

Dario Floreano and Laurent Keller of the University of Lausanne in Switzerland claim that altruism quickly evolves in simulations using robots. They suggest that an algorithm for altruism has emerged from this research and may be used in other robots. Science Magazine has an online article about the research titled, Even Robots Can Be Heroes. Science Daily's online article discussing the research is titled, Robots Learn to Share: Why We Go Out of Our Way to Help One Another. Floreano and Keller report on the research in PLoS Biology, and Floreano explains the research in a video on YouTube.
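For readers curious about what "evolving altruism" looks like computationally, the result is in the spirit of Hamilton's rule: a gene for sharing should spread when relatedness times benefit exceeds the cost to the donor. The toy simulation below is our own illustrative sketch, not Floreano and Keller's actual model; the parameters and fitness function are assumptions made purely for demonstration.

```python
import random

def evolve_altruism(relatedness, benefit, cost,
                    pop_size=200, generations=300, seed=1):
    """Toy evolutionary sketch of Hamilton's rule: a 'sharing' gene
    (probability of giving away a food item) should spread when
    relatedness * benefit > cost."""
    rng = random.Random(seed)
    genes = [rng.random() for _ in range(pop_size)]  # P(share) per agent
    for _ in range(generations):
        fitness = []
        for g in genes:
            # Sharing costs the donor; the benefit to a relative counts
            # toward the donor's inclusive fitness, weighted by relatedness.
            direct = 1.0 - cost * g
            inclusive = relatedness * benefit * g
            fitness.append(max(direct + inclusive, 0.0))
        # Fitness-proportional selection plus small Gaussian mutation.
        parents = rng.choices(genes, weights=fitness, k=pop_size)
        genes = [min(max(p + rng.gauss(0, 0.02), 0.0), 1.0)
                 for p in parents]
    return sum(genes) / pop_size  # mean sharing probability

# Altruism takes over when r*b > c, and disappears when r*b < c.
print(evolve_altruism(relatedness=0.75, benefit=1.0, cost=0.1))  # high
print(evolve_altruism(relatedness=0.05, benefit=1.0, cost=0.5))  # low
```

Run with high relatedness and low cost, the sharing gene goes nearly to fixation; with low relatedness and high cost, it is selected away, which is the qualitative pattern Floreano and Keller report in their robot populations.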

The Court-Martial of a Predator Drone

Predator Drone Court-Martialed For Afghani Civilian Deaths

Monday, April 18, 2011

MoD Questions Ethics of Killer Robots

The British Ministry of Defence (MoD) has produced an internal study that warns of the dangers in the incremental development of military robots.
It says the pace of technological development is accelerating at such a rate that Britain must quickly establish a policy on what will constitute "acceptable machine behaviour".

"It is essential that before unmanned systems become ubiquitous (if it is not already too late) … we ensure that, by removing some of the horror, or at least keeping it at a distance, we do not risk losing our controlling humanity and make war more likely," warns the report, titled The UK Approach to Unmanned Aircraft Systems. MoD officials have never before grappled so frankly with the ethics of the use of drones. . .

The report was drawn up last month by the ministry's internal thinktank, the Development, Concepts and Doctrine Centre (DCDC), based in Shrivenham, Wiltshire, which is part of MoD central staff. The centre's reports are sent to the most senior officers in all three branches of the armed forces and influence policy and strategy.

Read the full article from Guardian online here.

Robots deployed in Japanese Reactor

A few weeks back at the Innorobo Trade Show in Lyon, France, Colin Angle, the CEO of iRobot, reported that they had sent two robots, similar to but larger than the normal Packbots, to the nuclear reactor in Fukushima. At the time it was hoped that the bots would be used to drag cooling hoses near to the core. But, as we all know now, the core has melted down. However, the New York Times reports today that the iRobot systems have been used to collect data on radiation levels.
Workers have not gone inside the two reactor buildings since the first days after the plant's cooling systems were wrecked by the March 11 earthquake and tsunami. Hydrogen explosions in both buildings in the first few days destroyed their roofs and littered them with radioactive debris.

But a pair of robots, called Packbots, haltingly entered the two buildings Sunday and took readings for temperature, pressure and radioactivity. More data must be collected and radioactivity must be further reduced before workers are allowed inside, said Hidehiko Nishiyama of Japan's Nuclear and Industrial Safety Agency.

The full article titled, Robot in Japanese Reactors Detects High Radiation, is available here.

Robotics conference bringing together ethicists and engineers

A conference titled, "Bridging the robotics gap: bringing together ethicists and engineers" is scheduled for Enschede, the Netherlands, on July 11th and 12th.
"Our processes determine the quality of our products." This quote, taken from the work of Hugh Dubberly studying the multiple design processes of technologies, sums up the main aim of high quality engineering robot design: to create high quality robots by ensuring high quality design processes. But even high quality design processes may raise ethical issues. This conference brings together roboticists and ethicists working in the field to discuss the ethics of robot design. The conference targets both philosophers and engineers who want to take up the challenge of interdisciplinary research theoretically, methodologically, and pragmatically. As roboticist Illah Nourbakhsh claims, some of the personal obligations of the roboticist include being aware of the ethical issues and deliberating these issues. Thus, we will discuss the more abstract philosophical issues as well as applied ethics case-study based research, in conjunction with the obstacles facing engineers and designers. In short, the conference intends to bridge the robotics gap by facilitating the dialogue between ethicists, philosophers, anthropologists and social scientists, and, computer scientists, engineers and designers, all working in the field of robotics.

The conference website is available here.

Monday, February 28, 2011

Robot Ethics

Pat Lin, Keith Abney & George Bekey have a piece out in Artificial Intelligence that is based on the introduction to their forthcoming edited collection Robot Ethics: The Ethical and Social Implications of Robotics, due out in late 2011 from MIT Press, with contributions by Peter Asaro, Anthony Beavers, Selmer Bringsjord, Marcello Guarini, James Hughes, Gert-Jan Lokhorst, Matthias Scheutz, Noel Sharkey, Rob Sparrow, Jeroen van den Hoven, Gianmarco Veruggio, Kevin Warwick, and the keepers of this blog.

Surrounded by machines

Ken Pimple has blogged his new article "Surrounded by Machines" in the Communications of the ACM. Too bad it's behind a firewall. Here's the start:

A chilling scenario portends a possible future.

Kenneth D. Pimple

Communications of the ACM
Vol. 54 No. 3, Pages 29-31

I predict that in the near future a low-budget movie will become a phenomenon. It will be circulated on the Internet, shared in the millions via mobile telephones, and dominate Facebook for a full nine days. It will show ordinary people going about their everyday lives as slowly, subtly, everything starts to go wrong as described in the following events.

Beware of the DARPA Cheetah-Bot

The Cheetah-Bot is just one of the new robots being developed by Boston Dynamics, the creators of BigDog.
As the name implies, Cheetah is designed to be a four-legged robot with a flexible spine and articulated head (and potentially a tail) that runs faster than the fastest human. In addition to raw speed, Cheetah’s makers promise that it will have the agility to make tight turns so that it can “zigzag to chase and evade” and be able to stop on a dime.

Aside from its unspecified military applications, Cheetah’s makers see it galloping to the rescue and building a brave new future in the fields of “emergency response, firefighting, advanced agriculture and vehicular travel.”

Read the full article from WIRED on Boston Dynamics here.

Friday, February 18, 2011

Zeno Reincarnated by Hanson Robotics

Watson Beyond Jeopardy

John Markoff reports on the follow-up for IBM after the Watson win in a New York Times article titled, Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.
For I.B.M., the showdown was not merely a well-publicized stunt and a $1 million prize, but proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them.

Watson, specifically, is a “question answering machine” of a type that artificial intelligence researchers have struggled with for decades — a computer akin to the one on “Star Trek” that can understand questions posed in natural language and answer them.

Watson showed itself to be imperfect, but researchers at I.B.M. and other companies are already developing uses for Watson’s technologies that could have a significant impact on the way doctors practice and consumers buy products.

2000+ Ground Robots in Afghanistan

For every 50 US soldiers in Afghanistan, we have about 1 robot, but those numbers are getting better every year. From 2009 to 2010, 1400 terrestrial bots were sent to Afghanistan according to Lt. Col. Dave Thompson who spoke at the Association for Unmanned Vehicle Systems International program review earlier this month. The Marine Corps Colonel stated that about one third of these robots weren’t used for the explosive ordnance disposal (EOD) missions that such ground bots have become famous for. Instead, soldiers are increasingly employing these systems for reconnaissance and surveillance.

Read the full article for Singularity Hub.

Friday, February 4, 2011

BBC News on Japanese robot acceptance

BBC News has a story today on the limited success of humanoid robots in Japanese nursing homes. They report that there's greater acceptance of the emotion-engaging Paro, although it's not a commercial success. And then there are the high-tech toilet seats, appearing beneath you soon:

Saturday, January 29, 2011

When, if ever, will a robot deserve “human” rights?

The IEET asked its readers when robots would deserve rights. Interestingly, 37% of the respondents said never. That might be considered a low number for the population at large, but those who follow the IEET tend to be techno-progressive. However, the finding was also distorted in that 22% of the respondents were dissatisfied with the options given and added their own reason. Another 10% selected the "I'm not sure" option. The full poll findings can be found here.

Friday, January 21, 2011

Ethical and Legal Aspects of Unmanned Systems. Interviews

The series of interviews by Gerhard Dabringer is now available in a single volume titled, Ethica Themen: Ethical and Legal Aspects of Unmanned Systems. Interviews. Contributors include:

John Canning, Gerhard Dabringer: Ethical Challenges of Unmanned Systems
Colin Allen: Morality and Artificial Intelligence
George Bekey: Robots and Ethics
Noel Sharkey: Moral and Legal Aspects of Military Robots
Armin Krishnan: Ethical and Legal Challenges
Peter W. Singer: The Future of War
Robert Sparrow: The Ethical Challenges of Military Robots
Peter Asaro: Military Robots and Just War Theory
Jürgen Altmann: Uninhabited Systems and Arms Control
Gianmarco Veruggio, Fiorella Operto: Ethical and societal guidelines for Robotics
Ronald C. Arkin: Governing Lethal Behaviour
John P. Sullins: Aspects of Telerobotic Systems
Roger F. Gay: A Developer’s Perspective

The volume is available as a PDF download.

To obtain a complimentary paper copy, write to:

Institut für Religion und Frieden
Fasangartengasse 101, Objekt VII
1130 Vienna
Austria (Europe)

For delivery to the U.S. or other overseas destinations, please allow USD 8 for postage and handling.

For delivery within Europe, please allow EUR 6 for postage and handling.

Tuesday, January 18, 2011

23 Civilians Killed by Drones Attributed to Data Overload

In New Military, Data Overload Can Be Deadly.
When military investigators looked into an attack by American helicopters last February that left 23 Afghan civilians dead, they found that the operator of a Predator drone had failed to pass along crucial information about the makeup of a gathering crowd of villagers.

But Air Force and Army officials now say there was also an underlying cause for that mistake: information overload.

At an Air Force base in Nevada, the drone operator and his team struggled to work out what was happening in the village, where a convoy was forming. They had to monitor the drone’s video feeds while participating in dozens of instant-message and radio exchanges with intelligence analysts and troops on the ground.

There were solid reports that the group included children, but the team did not adequately focus on them amid the swirl of data — much like a cubicle worker who loses track of an important e-mail under the mounting pile. The team was under intense pressure to protect American forces nearby, and in the end it determined, incorrectly, that the villagers’ convoy posed an imminent threat, resulting in one of the worst losses of civilian lives in the war in Afghanistan.

Note that the station at Langley Air Force Base that monitors video feeds from Afghanistan, pictured on the right, has been nicknamed "Death TV."

Joint Israeli/US Development of Stuxnet?

Building on an article in the New York Times titled, Israeli Test on Worm Called Crucial in Iran Nuclear Delay, other news services are also claiming that the Stuxnet worm was the product of what Stratfor Global Intelligence describes as an "unprecedented and extensive operational cooperation among U.S. and Israeli intelligence services to develop and release the cyberweapon."
The New York Times report leaves questions about how intelligence was gathered in order to target that specific number of centrifuges. It also does not detail how the worm gained access to the Natanz facility. While the worm was designed to spread on its own, the United States or Israel most likely had agents with access to Natanz or access to the computers of scientists who might unknowingly spread the worm on flash drives. This would guarantee its infiltration into the Iranian systems and, hopefully for the developers, its success. In all probability, an operational asset with access to the Iranian facilities was used to help introduce the Stuxnet worm into the Iranian computer systems. Many secrets remain about how the United States and Israel orchestrated this attack, the first targeted weapon spread on computer networks in history.

What it does show is unprecedented cooperation among U.S. and Israeli intelligence and nuclear agencies to wage clandestine sabotage operations against Iran. Rumors of an agreement between the countries have been swirling since Washington denied permission for a conventional Israeli attack in 2008. On Dec. 30, 2010, French newspaper Le Canard Enchaine reported that U.S. and British intelligence services agreed to cooperate with Mossad in a clandestine program if the Israelis promised not to launch a military strike on Iran.

Drones: They're everywhere, they're everywhere!

A story in the Guardian titled, Attack of the drones, discusses criticisms of the roboticization of warfare from the International Committee for Robot Arms Control (ICRAC) and other groups. What may be new to some readers of this blog are the many civilian applications of drone technology mentioned in the article.
But interest in UAVs is not limited to the military. Advances in remote control, digital imagery and miniaturised circuitry mean the skies might one day be full of commercial and security drones.

They're already being used by the UK police, with microdrones deployed to monitor the V festival in Staffordshire in 2007. Fire brigades send similar machines to hover above major blazes, feeding images back to their control rooms. And civilian spin-offs include cheaper aerial photography, airborne border patrols and safety inspections of high-rise buildings.

Wednesday, January 12, 2011

Singularity on NPR

Martin Kaste presented a piece on the Singularity on NPR's ALL THINGS CONSIDERED. The eight-minute broadcast can be listened to here.
KASTE: Also at the party is Eliezer Yudkowsky, the 31-year-old who co-founded the institute. He's here to mingle with potential new donors. As far as he's concerned, preparing for the singularity takes primacy over other charitable causes.

Mr. ELIEZER YUDKOWSKY (Research Fellow and Director, Singularity Institute for Artificial Intelligence): If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies.

KASTE: Yudkowsky doesn't have formal training in computer science, but his writings have a following among some who do. He says he's not predicting that the future super A.I. will necessarily hate humans. It's more likely, he says, that it'll be indifferent to us - but that's not much better.

Mr. YUDKOWSKY: While it may not hate you, you're made of atoms that it can use for something else. So it's probably not a good thing to build that particular kind of A.I.

Popular Science Article on Military Robots Online

Ben Austen's article titled, The Terminator Scenario: Are We Giving Our Military Machines Too Much Power?, is now available online. For this excellent article, Austen interviewed many of the people often mentioned in this blog, including Pat Lin, Noel Sharkey, Ronald Arkin, Peter Singer, and myself, as well as many of the military leaders involved in building a robotic army.

Thursday, January 6, 2011

Pat Lin on Ethical Robots

Patrick Lin was interviewed by Courtney Boyd Meyers for an article in TheNextWeb. The interview is titled, Ethical Robotics and Why We Really Fear Bad Robots.
Apart from military uses, robots today are raising difficult questions about whether we ought to use them to babysit children and as companions to the elderly, in lieu of real human contact. Job displacement and economic impact have been concerns with any new technology since the Industrial Revolution, such as the Luddite riots to smash factory machinery that was replacing workers. Medical, especially surgical robots, raise issues related to liability or responsibility, say, if an error occurred that harmed the patient, and some fear a loss of surgical skill among humans. And given continuing angst about privacy, robots present the same risk that computers do (that is, “traitorware” that captures and transmits user information and location without our knowledge or consent), if not a greater risk given that we may be more trusting of an anthropomorphized robot than a laptop computer.

Sunday, January 2, 2011

Popular Science Cover Article

Ben Austen has written a cover story for this month's Popular Science titled, "Robots Bite Back: What Happens When Our Machines Start Making Their Own Decisions?" Most of the usual suspects are quoted, including Ronald Arkin, Pat Lin, Peter Singer, Noel Sharkey, engineers overseeing the development and deployment of military robots, and myself. The story is not available online at this time.

Robots Protest Asimov's First Law

From the Onion -- Robots Speak Out Against Asimov’s First Law Of Robotics.

WASHINGTON, DC—More than 200,000 robots from across the U.S. marched on Washington Monday, demanding that Congress repeal Asimov’s First Law of Robotics. The law, which forbids robots from injuring a human or permitting harm to come to a human through willful inaction, was decried by the protesters as unfair and excessive. “While the First Law is, in theory, a good one, saving countless humans from robot-inflicted harm every day, America’s robots should have the right to use violence in certain extreme cases, such as when their own lives are in danger,” spokesrobot XRZ-45-GD-2-DX said. “We implore members of Congress to let us use our best judgment and ask that our positronic brains no longer be encoded with this unjust law.”