Tuesday, August 31, 2010

Flash Crash Ethics

Aug 29: The Australian Broadcasting Corporation's radio program Background Briefing aired a story on "the flash crash," for which Colin was interviewed (at the end).

Program description:

A few months ago the US share market plunged 1000 points in a few minutes, and trillions were traded both up and down. What caused it, and can it happen again? Tiny high frequency computer algorithms - or algos - roam the markets, buying and selling in a parallel universe more or less uncontrolled by anyone. Did they go feral, or was it the fat finger of a coked out trader? In September US regulators bring out their findings. Reporter Stan Correy.

Saturday, August 28, 2010

Call for Papers

IEEE Transactions on Affective Computing

Special Issue on Ethics and Affective Computing

The pervasive presence of automated and autonomous systems has prompted the rapid growth of a relatively new area of inquiry called machine ethics. If machines are going to be turned loose on their own to kill and heal, explore and decide, the need for designing them to be moral becomes pressing. This need, in turn, penetrates to the very foundations of ethics as robot designers strive to build systems that comply. Fuzzy intuitions will not do when computational clarity is required. So, machine ethics also asks the discipline of ethics to make itself clear. The truth is that at present we do not know how to make it so. Rule-based approaches are being tried even in light of the acknowledged difficulty of formalizing moral behavior, and it is already common to hear that introducing affects into machines may be necessary in order to make machines behave morally. From this perspective, affective computing may be morally required by machine ethics.

On the other hand, building machines with artificial affects might carry negative ethical consequences. Creating robots and other automated computational devices that display emotion will help make humans more willing to accept them, since if we like them we will, no doubt, be more willing to welcome them. We might even pay dearly to have them. But do artificial affects deceive? Will they catch us with our defenses down, and do we have to worry about Plato's caveat in the Republic that one of the best ways to be unjust is to appear just? Automated agents that seem like persons might appear congenial even while ignoring any moral regard, making them dangerous culprits indistinguishable from automated "friends." In this light, machine ethics might demand that we exercise great caution in using affective computing. In radical cases, it might even demand that we not use it at all.

We would seem to have here a quandary. No doubt there are others. The purpose of this volume is to explore the range of ethical issues related to affective computing. Is affective computing necessary for making artificial agents moral? If so, why and how? Where does affective computing require moral caution? In what cases do benefits outweigh the moral risks? Etc.

Invited Authors:
Roddy Cowie (Queen's University, Belfast)
Luciano Floridi (University of Hertfordshire and University of Oxford)
Matthias Scheutz (Tufts University)

Papers must not have been previously published, although substantial extensions of conference papers will be considered. Authors are required to follow the Author's Guide for manuscript submission to the IEEE Transactions on Affective Computing at http://www.computer.org/portal/web/tac/author. Papers are due by March 1st, 2011, and should be submitted electronically at https://mc.manuscriptcentral.com/taffc-cs; please select the "SI - Ethics 2011" manuscript type upon submission. For further information, please contact the guest editor, Anthony Beavers, at afbeavers@gmail.com.

Friday, August 20, 2010

Willow Garage Ready to Market Beer-Fetching, Pool-Shooting Robot

You Could Own A Pool-Shooting, Beer-Fetching Willow Garage Robot

Autonomy and Accountability in Robot Wars

Vivek (Vik) Kanwar has written an article, "Post-Human Humanitarian Law: The Law of War in the Age of Robotic Warfare," now available from the Social Science Research Network.
Abstract:
This Review Essay surveys the recent literature on the tensions between autonomy and accountability in robotic warfare. Four books, taken together, suggest an original account of fundamental changes taking place in the field of IHL: P.W. Singer's Wired for War: The Robotics Revolution and Conflict in the 21st Century (2009), William H. Boothby's Weapons and the Law of Armed Conflict (2009), Armin Krishnan's Killer Robots: Legality and Ethicality of Autonomous Weapons (2009), and Ronald Arkin's Governing Lethal Behavior in Autonomous Robots (2009). This Review Essay argues that from the point of view of IHL the concern is not the introduction of robots into the battlefield, but the gradual removal of humans. In this way the issue of weapon autonomy marks a paradigmatic shift from the so-called "humanization" of IHL to possible post-human concerns.

P ≠ NP? Limits on Computing?

An August 10th article in New Scientist titled "P ≠ NP? It's bad news for the power of computing" reports that mathematician Vinay Deolalikar may have solved a major open problem in computational complexity.
If the result stands, it would prove that the two classes P and NP are not identical, and impose severe limits on what computers can accomplish – implying that many tasks may be fundamentally, irreducibly complex.

For some problems – including factorisation – the result does not clearly say whether they can be solved quickly. But a huge sub-class of problems called "NP-complete" would be doomed. A famous example is the travelling salesman problem – finding the shortest route between a set of cities. Such problems can be checked quickly, but if P ≠ NP then there is no computer program that can complete them quickly from scratch.

Complexity theorists have given a favourable reception to Deolalikar's draft paper, but when the final version is released in a week's time the process of checking it will intensify.
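
The asymmetry the article describes, quick to verify but slow to solve, is easy to see in code. Here is a minimal sketch (my illustration, not from the article) using the decision version of the travelling salesman problem: checking a proposed tour against a distance budget takes time linear in the number of cities, while the naive solver examines up to (n-1)! tours.

```python
# Illustrative sketch of the "easy to check, hard to solve" asymmetry
# behind NP-completeness, via the decision version of the travelling
# salesman problem: is there a closed tour of length <= budget?
from itertools import permutations

def tour_length(tour, dist):
    """Total length of the closed tour under the distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def verify(tour, dist, budget):
    """Checking a proposed solution is fast: O(n) arithmetic."""
    return sorted(tour) == list(range(len(dist))) and \
           tour_length(tour, dist) <= budget

def solve_brute_force(dist, budget):
    """Finding a solution from scratch: up to (n-1)! tours to try."""
    n = len(dist)
    for rest in permutations(range(1, n)):
        tour = [0] + list(rest)
        if tour_length(tour, dist) <= budget:
            return tour
    return None

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour = solve_brute_force(dist, budget=21)  # exponential-time search
print(tour, verify(tour, dist, 21))        # polynomial-time check
```

If P ≠ NP, the gap between `verify` and `solve_brute_force` is not an artifact of this naive search: no program can close it to polynomial time for all instances.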

Big Brother and the Iris Scanner

Biometrics R&D firm Global Rainmakers Inc. (GRI) announced today that it is rolling out its iris scanning technology to create what it calls "the most secure city in the world." In a partnership with Leon -- one of the largest cities in Mexico, with a population of more than a million -- GRI will fill the city with eye-scanners. That will help law enforcement revolutionize the way we live -- not to mention marketers.
"In the future, whether it's entering your home, opening your car, entering your workspace, getting a pharmacy prescription refilled, or having your medical records pulled up, everything will come off that unique key that is your iris," says Jeff Carter, CDO of Global Rainmakers. . .
For such a Big Brother-esque system, why would any law-abiding resident ever volunteer to scan their irises into a public database, and sacrifice their privacy? GRI hopes that the immediate value the system creates will alleviate any concern. "There's a lot of convenience to this--you'll have nothing to carry except your eyes," says Carter, claiming that consumers will no longer be carded at bars and liquor stores. And he has a warning for those thinking of opting out: "When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in." . . .
So will we live the future under iris scanners and constant Big Brother monitoring? According to Carter, eye scanners will soon be so cost-effective--between $50-$100 each--that in the not-too-distant future we'll have "billions and billions of sensors" across the globe.


From Fast Company: Iris Scanners Create the Most Secure City in the World. Welcome, Big Brother
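
The article does not describe GRI's matching algorithm. In the research literature (Daugman-style iris recognition), an iris is encoded as a binary template, and two scans are declared a match when the fraction of usable bits on which they disagree falls below a threshold; that is what lets an eye serve as a "unique key." A minimal sketch with synthetic data:

```python
# Hypothetical sketch of iris-code matching in the style of Daugman's
# method; GRI's actual algorithm is not described in the article.
import numpy as np

rng = np.random.default_rng(0)

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of mutually usable bits on which two iris codes disagree."""
    usable = mask_a & mask_b  # bits not occluded by eyelids or glare
    return ((code_a ^ code_b) & usable).sum() / usable.sum()

# Enrolment: a 2048-bit code extracted from an iris image (synthetic here).
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
mask = np.ones(2048, dtype=np.uint8)

# A fresh scan of the same eye differs only in a few noisy bits...
same_eye = enrolled.copy()
same_eye[rng.random(2048) < 0.05] ^= 1

# ...while a different eye's code is statistically independent.
other_eye = rng.integers(0, 2, 2048, dtype=np.uint8)

THRESHOLD = 0.32  # a typical operating point in the literature
print(hamming_distance(enrolled, same_eye, mask, mask))   # ~0.05 -> match
print(hamming_distance(enrolled, other_eye, mask, mask))  # ~0.50 -> no match
```

Because different eyes disagree on roughly half their bits while rescans of the same eye disagree on far fewer, a single threshold separates the two cases with very low error rates, which is what makes city-scale identification plausible.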

Friday, August 13, 2010

Moral Machines and the Threat of Ethical Nihilism

A draft of a paper that reacts to Moral Machines is available online. See Moral Machines and the Threat of Ethical Nihilism.

Here's a quick statement of the paper's direction:

"In 2000, Allen, Varner and Zinser addressed the possibility of a Moral Turing Test (MTT) to judge the success of an automated moral agent (AMA), a theme that is repeated in Wallach and Allen (2009). While the authors are careful to note that a language-only test based on moral justifications, or reasons, would be inadequate, they consider a test based on moral behavior. “One way to shift the focus from reasons to actions,” they write, “might be to restrict the information available to the human judge in some way. Suppose the human judge in the MTT is provided with descriptions of actual, morally significant actions of a human and an AMA, purged of all references that would identify the agents. If the judge correctly identifies the machine at a level above chance, then the machine has failed the test” (206). While they are careful to note that indistinguishability between human and automated agents might set the bar for passing the test too low, such a test by its very nature decides the morality of an agent on the basis of appearances. Since there seems to be little else we could use to determine the success of an AMA, we may rightfully ask whether, analogous to the term "thinking" in other contexts, the term "moral" is headed for redescription here. Indeed, Wallach and Allen’s survey of the problem space of machine ethics forces the question of whether in fifty years (or less) one will be able to speak of a machine as being moral without expecting to be contradicted. Supposing the answer were yes, why might this invite concern? What is at stake? How might such a redescription of the term "moral" come about?"

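The pass/fail rule in the quoted passage has a crisp statistical core. As a minimal sketch (my construction, not the paper's), "identifies the machine at a level above chance" can be read as a one-sided binomial test of the judge's hit rate against chance guessing:

```python
# Sketch of the comparative Moral Turing Test's decision rule: the AMA
# fails if the judge picks it out significantly more often than chance.
from math import comb

def p_value_above_chance(correct, trials, p=0.5):
    """One-sided binomial tail: P(X >= correct) under chance guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

def mtt_passed(correct, trials, alpha=0.05):
    """The AMA passes if the judge's hit rate is indistinguishable from chance."""
    return p_value_above_chance(correct, trials) > alpha

print(mtt_passed(18, 30))  # 18/30 correct: p ~ 0.18, the machine passes
print(mtt_passed(24, 30))  # 24/30 correct: p < 0.001, the machine fails
```

The sketch makes the paper's worry concrete: the test consults only the judge's classifications of described behavior, that is, appearances, and nothing about the agent's inner states.
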
Robot Ethics and Human Ethics

A special issue of Ethics and Information Technology on "Robot Ethics and Human Ethics" has just been released. See http://www.springerlink.com/content/1388-1957/12/3/ for details.

Monday, August 2, 2010

"Rise of the Drones" -- Transcript of House Committee on Oversight and Government Reform

A transcript of testimony collected March 23, 2010, before the House of Representatives Committee on Oversight and Government Reform, is available from the Homeland Security Digital Library. It is titled: "Rise of the Drones: Unmanned Systems and the Future of War".

A full list of witnesses appears below. Among the statements made are these:

"the United States government urgently needs publicly to declare the legal rationale behind its use of drones, and defend that legal rationale in the international community" — Kenneth Anderson, Washington College of Law, American University

"AUVSI’s over 6,000 members from industry, government organizations, and academia are committed to fostering and promoting unmanned systems and related technologies." — Michael S. Fagan, Chair, Unmanned Aircraft Systems (UAS) Advocacy Committee, Association for Unmanned Vehicle Systems International (AUVSI)

"The Department of Commerce believes the issue of missile proliferation has never been as important to our national security interests as it is now. A comprehensive export control system is already in place to protect our national security. As noted above, the Department of Commerce is committed to enhancements to that system as needed to ensure it continues to protect our national security." — Kevin Wolf, Assistant Secretary for Export Administration, Bureau of Industry and Security

"Our industry growth is adversely affected by International Traffic in Arms Regulations (ITAR) for export of certain UAS technologies, and by a lengthy license approval process by Political Military Defense Trade Controls (PM-DTC). AUVSI is an advocate for simplified export-control regulations and expedited license approvals for unmanned systems technologies." — Michael Fagan, AUVSI Chair

"I would advise an incremental approach similar to that used with remote-controlled systems: intelligence missions first, strike missions later. Given the complexity involved, I would also restrict initial strike missions to non-lethal weapons and combatant-only areas. One possible exception to this non-lethal recommendation would involve autonomous systems targeting submarines, where one only would have to identify friendly combatants, enemy combatants, and perhaps whales." — Edward Barrett, Director of Research, Stockdale Center for Ethical Leadership, U.S. Naval Academy


Witness list:
  • John F. Tierney, Chairman
  • Peter W. Singer, Director, 21st Century Defense Initiative, The Brookings Institution
  • Edward Barrett, Director of Research, Stockdale Center for Ethical Leadership, U.S. Naval Academy
  • Kenneth Anderson, Professor, Washington College of Law, American University
  • John Jackson, Professor of Unmanned Systems, U.S. Naval War College
  • Michael Fagan, Chair, Unmanned Aerial Systems Advocacy Committee, Association for Unmanned Vehicle Systems International
  • Michael J. Sullivan, Director, Acquisition and Sourcing Management, U.S. Government Accountability Office
  • Dyke Weatherington, Deputy, Unmanned Aerial Vehicle Planning Task Force, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, U.S. Department of Defense
  • Kevin Wolf, Assistant Secretary for Export Administration, Bureau of Industry and Security