Thursday, July 28, 2011

Advancing Ethics


Chris Santos-Lang, an early contributor to bottom-up theories for developing moral machines, has a new article online titled, Advancing Ethics.
Much as we have good reason to think we can invest intelligently in science to get technological rewards, we have offered good reason to think one can invest intelligently in ethics to improve decision-making. It would be reckless and naive, in our advanced society, to continue thinking of ethics as an obscure academic interest, a mere set of intellectual games, or theological controversies far beyond our comprehension and removed from the economic realities that dominate real life. Ethics, just like transportation, agriculture, commerce, education and health, deserves our attention in a practical and future-oriented way. Just as a department of commerce must be careful about affiliating with any particular existing business, a department of ethics would have to be careful about affiliating with any particular religion or system of rules, but that would not stop it from monitoring the ethical ecosystem (especially warning about dramatic changes) just as we monitor commerce.

Machine Ethics Anthology


The long-awaited anthology Machine Ethics, edited by Michael Anderson and Susan Leigh Anderson, has been published by Cambridge University Press. The volume includes both classic articles and more recent material on this emerging field. The contributors are: James Moor, Susan Leigh Anderson, J. Storrs Hall, Colin Allen, Wendell Wallach, Iva Smit, Sherry Turkle, Drew McDermott, Steve Torrance, Blay Whitby, John Sullins, Deborah G. Johnson, Luciano Floridi, David J. Calverley, James Gips, Roger Clarke, Bruce McLaren, Marcello Guarini, Alan K. Mackworth, Selmer Bringsjord, Joshua Taylor, Bram van Heuveln, Konstantine Arkoudas, Micah Clark, Ralph Wojtowicz, Matteo Turilli, Luis Moniz Pereira, Ari Saptawijaya, Morteza Dehghani, Ken Forbus, Emmett Tomai, Matthew Klenk, Peter Danielson, Christopher Grau, Thomas M. Powers, Michael Anderson, Helen Seville, Debora G. Field, Eric Dietrich.

The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.


Machine Ethics can be purchased from Amazon here.

Sunday, May 29, 2011

Unthinking machines

At an event earlier this month celebrating MIT's 150th anniversary, A.I. and cognitive science luminaries Marvin Minsky, Patrick Winston, and Noam Chomsky, among others, weighed in on why they think progress in A.I. has stalled, as reported by MIT's Technology Review.

Peter Norvig has written an interesting commentary on why Chomsky is wrong to deride statistical approaches to language.

Wednesday, May 25, 2011

Sword Fighting Robots

IEEE Spectrum has an article about why researchers at Georgia Tech are giving robots swords. The full article, titled "Awesomely Bad Idea: Teaching a Robot to Sword Fight," can be read here. A video of a sword-fighting robot is below, and researcher Tobias Kunz is already exploring putting a sword in the hand of a robotic arm.

Should We Fear a Robot Future?

From the Future of Humanity Institute 2011 Winter Intelligence conference.
Participants were also asked when human-level machine intelligence would likely be developed. The cumulative distribution below shows their responses:

The median estimate for a 50% chance is 2050. That suggests we have around 40 years to enjoy before the extremely bad outcome of human-level robot intelligence arrives. The report presents a list of milestones which participants said will let us know that human-level intelligence is within 5 years. I suppose this will be a useful guide for when we should start panicking. A sample of these includes:

Winning an Oxford Union-style debate
World's best chess playing AI was written by an AI
Emulation/development of mouse level machine intelligence
Full dog emulation…
Whole brain emulation, semantic web
Turing test or whole brain emulation of a primate
Toddler AGI
An AI that is a human level AI researcher
Gradual identification of objects: from an undifferentiated set of unknown size (parking spaces, dining chairs, students in a class), recognition of particular objects amongst them with no re-conceptualization
Large scale (1024) bit quantum computing (assuming cost effective for researchers), exaflop per dollar conventional computers, toddler level intelligence
Already passed, otherwise such discussion among ourselves would not have been funded, let alone be tangible, observable and accordable on this scale: as soon as such a thought is considered a 'reasonable' thought to have


Read the full article here.

Friday, May 20, 2011

Augur, Breazeal, Sharkey and Wallach in BBC Video

"Can robots know the difference between right and wrong?" is a video feature produced by David Reid for the BBC. Reid videotaped Augur, Breazeal, Sharkey, and me during Innorobo, a robotics trade show. The video can be accessed here.

Thursday, May 19, 2011

Google Cars on Nevada Highways?

Google has begun a campaign lobbying the Nevada legislature to allow the operation of its self-driving cars on public roads.
The company confirmed on Tuesday that it has lobbied on behalf of the legislation, though executives declined to say why they want the robotic cars’ maiden state to be Nevada. Jay Nancarrow, a company spokesman, said the project was still very much in the testing phase.

Google hired David Goldwater, a lobbyist based in Las Vegas, to promote the two measures, which are expected to come to a vote before the Legislature’s session ends in June. One is an amendment to an electric-vehicle bill providing for the licensing and testing of autonomous vehicles, and the other is an exemption that would permit texting while behind the wheel of an autonomous car.

In testimony before the State Assembly on April 7, Mr. Goldwater argued that the autonomous technology would be safer than human drivers, offer more fuel-efficient cars and promote economic development.

Although safety systems based on artificial intelligence are rapidly making their way into today’s cars, completely autonomous systems raise thorny questions about safety and liability.

Read the full story by John Markoff in The New York Times, titled "Google Lobbies Nevada to Allow Self-Driving Cars."