As readers of this blog are aware, Noel Sharkey has written quite a bit on the ethical and societal challenges arising from the adoption of robots for an array of activities. Here is a list of his writings on the subject.
Sharkey, N.E. (2008) The Ethical Frontiers of Robotics, Science, 322, 1800-1801.
Sharkey, N.E. (2009) Death Strikes from the Sky: The calculus of proportionality, IEEE Technology and Society Magazine, 28(1), 16-19.
Sharkey, N.E. (2009) The Robot Arm of the Law grows longer, IEEE Computer, August 2009.
Sharkey, N.E. (2007) Automated killers and the computing profession, IEEE Computer, November 2007, 106-108.
Sharkey, N.E. (2008) Grounds for Discrimination: Autonomous Robot Weapons, RUSI Defence Systems, 11(2), 86-89.
Sharkey, N.E. (2009) Weapons of indiscriminate lethality, FIfF Kommunikation, 1/09, 26-28.
Sharkey, N.E. (2008) Cassandra or False Prophet of Doom: AI Robots and War, IEEE Intelligent Systems, 23(4), 14-17, July/August issue.
Sharkey, N.E. and Sharkey, A.J.C. (in press for 2010) The crying shame of robot nannies: an ethical appraisal, Interaction Studies (25 pages).
Chapters
Sharkey, N.E. Killing made easy: from joysticks to politics, in Sibylle Scheipers and Hew Strachan (Eds), The Changing Character of War, Oxford: OUP.
Sharkey, N.E. and Sharkey, A.J.C. (in press) Living with Robots: Ethical tradeoffs in eldercare, in Yorick Wilks (Ed), Artificial Companions in Society: scientific, economic and philosophical perspectives, Amsterdam: John Benjamins.
Sharkey, N.E. (in press) The ethical problems for 21st century robots: from the battlefield to law enforcement, RSA book of Ethical Futures (aimed at policy makers).
Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Friday, September 25, 2009
Drones 1 Manned Aircraft 0
The September 28, 2009 issue of Newsweek carries a most interesting story on how Congress's killing of the F-22 represents a victory for elements in the Air Force championing unmanned aircraft. The story, written by Fred Kaplan, is titled "Attack of the Drones."
Iraq and Afghanistan are very different wars from the war the F-22 Raptor was designed to fight. (Not one of the advanced aircraft has flown a single mission over either theater.) The enemy isn't a foreign government, but an insurgency; there are few "strategic" targets to bomb and no opposing air force to go after. So the main Air Force role is to support American and allied troops on the ground. This means two things: first, airlifting supplies (General Schwartz's specialty); second, helping the troops find and kill bad guys.
For this second mission, the Air Force has been relying more and more on unmanned aerial vehicles (UAVs) with names like Predator, Reaper, Global Hawk, and Warrior Alpha. Joystick pilots located halfway around the world operate these ghost planes. They pinpoint their targets by watching streams of real-time video, taken by cameras strapped to the bellies of the UAVs. Many of the aircraft also carry super-accurate smart bombs, which the joystick pilots can fire with the push of a button once they've spotted the targets on their video screens.
Studying the impact of virtual interactions on ethical behavior
The Physorg article on MacDorman's research mentioned in the post below also describes research he and Matthias Scheutz have initiated.
MacDorman's upcoming research with his colleague Matthias Scheutz turns to how the virtual versus physical representation of a robot influences ethical decisions made via online and face-to-face interactions. Matthias Scheutz is an associate professor of Cognitive Science, Computer Science, and Informatics and the director of the Human-Robot Interaction Laboratory at Indiana University Bloomington.
"The purpose of one of these upcoming proposals is to determine whether the physical embodiment or virtual representation of a robot can influence human decision-making of ethical consequence," MacDorman explained.
Revisiting the Uncanny Valley
The latest research by Karl MacDorman and his team at Indiana University questions Mori's theory that robots whose features differ slightly from those of humans will by necessity appear eerie. "Even abstract faces can look eerie if they contain facets that seem unintended or arbitrary," MacDorman said. His latest research on the uncanny valley was published in an article entitled "Too real for comfort? Uncanny responses to computer generated faces," which appears in the journal Computers in Human Behavior.
A September 22nd article at Physorg.com, titled "Too scary to be real, research looks to quantify eeriness in virtual characters," also reports that:
a research paper now in review, "Gender Differences in the Impact of Presentational Factors in Human Character Animation on Ethical Decisions about Medical Dilemmas," indicates that under certain similar conditions men and women make different decisions.
MacDorman, Chin-Chang Ho, Joseph Coram and Himalaya Patel found that using a computer-generated character instead of a human character, or using jerky movements instead of fluid movements, to present participants with an ethical dilemma produced no significant effect on female participants. Male participants, however, were much more likely to rule against the computer-generated character with jerky movements.
Monday, September 21, 2009
Robot Nannies
Noel and Amanda Sharkey have written a wonderfully comprehensive paper on the ethical challenges arising from the development of robotic caregivers for infants and children. While they are clearly critical of this prospect, the article is nuanced in its recognition of what we do and do not know about how sustained interaction with robotic caregivers might affect the psychological development of young children. The article, entitled "The crying shame of robot nannies: an ethical appraisal," has been selected as a target article by the journal Interaction Studies, and the editors are seeking commentaries. Readers of this blog can access the article online now. Those wishing to provide commentaries should contact Kerstin Dautenhahn at the University of Hertfordshire. The abstract and the call for commentaries follow.
Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is on the cards. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.
CALL FOR COMMENTARIES
Dear colleagues,
For those of you interested in the use of robots in everyday environments, and specifically in the use of robots as toys, interaction partners or possible caretakers of children, or ethical issues involving human-robot interaction, the following target article and call for commentaries might be of interest:
The article The crying shame of robot nannies: an ethical appraisal by Noel Sharkey and Amanda Sharkey has been accepted as a target article to appear in 2010 in the journal Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, published by John Benjamins Publishing Company (2008 Impact Factor: 1.359).
http://www.benjamins.com/cgi-bin/t_seriesview.cgi?series=IS
We are inviting commentaries (up to 2000 words) on this article, the abstract is included below and the final accepted version is available at: http://homepages.feis.herts.ac.uk/~comqkd/SharkeyandSharkey-TargetArticle-IS.pdf
Interaction Studies is an interdisciplinary journal and we invite commentaries from various viewpoints and disciplines.
Deadlines:
If you intend to submit a commentary, please contact k.dautenhahn@herts.ac.uk as soon as possible, stating your proposed title and (briefly) the key message that you would like to address in your commentary.
Submission of commentaries (to k.dautenhahn@herts.ac.uk): 31 October 2009 (PDF, up to 2000 words)
Notification: 20 November 2009
Final version of accepted commentaries to be submitted: 15 December 2009
Publication of target article and commentaries: Interaction Studies, volume 11, 2010
Regards,
Kerstin Dautenhahn
Monday, September 14, 2009
Swedish documentary
A Swedish documentary on military robotics. Most of it is in Swedish, but it features interviews (in English) with Noel Sharkey and plenty of footage of military hardware.
Tuesday, September 8, 2009
Replicating the Human Brain Within 10 Years
Science News reports that the human brain could be replicated within 10 years, researcher Henry Markram predicts. Markram is a neuroscientist at the Brain Mind Institute in Switzerland. "I absolutely believe it is technically and biologically possible. The only uncertainty is financial. It is an extremely expensive project and not all is yet secured."
100 years of neuroscience discovery has led to millions of fragments of data and knowledge that have never been brought together and exploited fully. "Actually no one even knows what we already understand about the brain," says Professor Markram. "A model would serve to bring this all together and then allow anyone to test whatever theory you want about the brain. The biggest challenge is to understand how electrical-magnetic-chemical patterns in the brain convert into our perception of reality. We think we see with our eyes, but in fact most of what we 'see' is generated as a projection by your brain. So what are we actually looking at when we look at something 'outside' of us?"
Peter Asaro Interviewed on Robots in the Military
In his continuing series of interviews regarding military robots, Gerhard Dabringer of the Institut für Religion und Frieden talked with Peter Asaro. While the title of the article is in German, the text is in English and can be read here.
PostDoc with Nick Bostrom at Oxford
POSTDOCTORAL RESEARCH FELLOWSHIP IN INTERDISCIPLINARY SCIENCE OR PHILOSOPHY
University of Oxford
Faculty of Philosophy
Future of Humanity Institute, James Martin 21st Century School
Grade 7: £28,839 - £38,757 per annum (as at 1 October 2008)
Applications are invited for a fixed-term Research Fellowship at the Future of Humanity Institute. The Fellowship is available for two years from the date of appointment.
The Future of Humanity Institute is a unique multidisciplinary research institute at the University of Oxford. It is part of the James Martin 21st Century School, and is hosted by the Oxford Faculty of Philosophy. FHI’s mission is to bring excellent scholarship to bear on big-picture questions for humanity. Current work areas include global catastrophic risks, probabilistic methodology and applied epistemology & rationality, impacts of future technologies, and ethical issues related to human enhancement.
The successful candidate will be expected to: develop and implement an independent program of research related to one or more of the Institute’s focus areas; provide research assistance to the Institute’s director, Professor Nick Bostrom; and participate in examining and administrative duties for the Faculty of Philosophy as required.
The successful applicant will show evidence of exceptional academic research potential and outstanding intellectual capacity. Academic discipline and specialization are open but should be directly relevant to at least one of FHI’s work areas.
Further particulars, including details about how to apply, are available from the website of The Future of Humanity Institute (http://www.fhi.ox.ac.uk/) or from Mrs Nancy Patel (tel: +44 (0)1865 286279; email: fhi@philosophy.ox.ac.uk).
The successful candidate will be able to start as soon as possible; however, there is some flexibility with the start date.
The deadline for receipt of applications is 14 October 2009.
Monday, September 7, 2009
Slime Mold Robots
No, this is not a bad horror picture or another story from The Onion News. A team at the University of the West of England (UWE) has received £228,000 in funding to engineer robots from single-celled organisms. The target species of Andrew Adamatzky and his team is a slime mould (Physarum polycephalum), reports an article titled "Plasmobot: the slime mould robot," online at NewScientist.
Affectionately dubbed Plasmobot, it will be "programmed" using light and electromagnetic stimuli to trigger chemical reactions similar to the Belousov-Zhabotinsky reaction, a complex piece of chemistry which Adamatzky previously used to build liquid logic gates for a synthetic brain. By understanding and manipulating these reactions, says Adamatzky, it should be possible to program Plasmobot to move in certain ways, to "pick up" objects by engulfing them, and even to assemble them. Initially, Plasmobot will work with and manipulate tiny pieces of foam, because they "easily float on the slime," says Adamatzky. The long-term aim is to use such robots to help assemble the components of micromachines.
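For readers curious about the chemistry, a common way to get a feel for Belousov-Zhabotinsky dynamics is the two-variable Oregonator model, a standard simplification of BZ kinetics. The minimal Python sketch below uses illustrative textbook parameter values and is, of course, not Adamatzky's actual wetware or code; it integrates the model and counts the activator's spikes, whose thresholded on/off character is the rough basis for chemical logic gates.

import numpy as np

def oregonator(u, v, eps=0.04, q=0.002, f=1.0):
    # Two-variable Oregonator: u is the activator (HBrO2), v the oxidized
    # catalyst; the timescale separation eps makes u spike rapidly.
    du = (u * (1.0 - u) - f * v * (u - q) / (u + q)) / eps
    dv = u - v
    return du, dv

def simulate(u=0.1, v=0.1, dt=1e-4, steps=200_000):
    # Crude forward-Euler integration; returns the activator time series.
    trace = np.empty(steps)
    for i in range(steps):
        du, dv = oregonator(u, v)
        u, v = u + dt * du, v + dt * dv
        trace[i] = u
    return trace

trace = simulate()
# Count upward threshold crossings: each spike of the excitable medium can
# be read as a binary 1, the rough idea behind liquid logic gates.
spikes = int(np.sum((trace[1:] > 0.5) & (trace[:-1] <= 0.5)))
print("activator spikes:", spikes)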
Too Much Power To Robots?
The Onion News hosts a panel -- In the Know: Are We Giving The Robots That Run Our Society Too Much Power?
Sunday, September 6, 2009
Robots evolve to exploit inadvertent cues
IEEE Spectrum published a different take on the Swiss research we posted on August 28th, in an article titled "Robots evolve to exploit inadvertent cues."
The researchers set up a group of S-bots equipped with omnidirectional cameras and light-emitting rings around their bodies in a bio-inspired foraging task. Like many animals, the robots used visual cues to forage for two food sources in the arena. Rather than pre-programming the robots' control rules, the researchers used artificial evolution to develop the robots' control systems. As expected, robots capable of efficiently navigating the arena and locating food sources evolved in a matter of a few hundred generations.
This is when things became interesting: Due to the limited amount of food, robots now began to compete for resources. Robots began to evolve strategies to use light inadvertently emitted by their peers to rapidly pinpoint food locations, in some cases even physically pushing them away to make room for themselves. As evolution progressed, the exploited robots were soon all but extinct. A new generation of robots ensued that could conceal their presence by emitting confusing patterns of light or by ceasing to emit light altogether.
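The evolutionary machinery behind such experiments is conceptually simple. Here is a minimal Python sketch of the loop (an illustration, not the Swiss group's actual setup): controllers are fixed-length parameter vectors, each is scored by a fitness function, and the best are copied with small Gaussian mutations into the next generation. The fitness function below is a stand-in, since real scoring would require running each controller in the foraging simulation itself.

import numpy as np

rng = np.random.default_rng(0)
POP, GENES, GENERATIONS, ELITE, SIGMA = 50, 20, 300, 10, 0.1

def fitness(genome):
    # Stand-in for running a controller in the foraging arena and counting
    # the food it collects; here, an arbitrary landscape for illustration.
    return -float(np.sum((genome - 0.5) ** 2))

population = rng.normal(0.0, 1.0, size=(POP, GENES))
for generation in range(GENERATIONS):
    scores = np.array([fitness(g) for g in population])
    elite = population[np.argsort(scores)[-ELITE:]]    # keep the best few
    parents = elite[rng.integers(0, ELITE, size=POP)]  # resample with replacement
    population = parents + rng.normal(0.0, SIGMA, population.shape)  # mutate

print("best fitness:", max(fitness(g) for g in population))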
Tuesday, September 1, 2009
More on Modelling Morality with Prospective Logic
Luis Moniz Pereira has sent us links to the papers he wrote together with Ari Saptawijaya on the use of prospective logic for modelling morality. As you may remember from an earlier post, they used trolley car examples to develop a computational system that can consider the consequences of various courses of action.
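Pereira and Saptawijaya implement this in Prolog-based prospective logic programming. Purely as a toy illustration of the underlying idea (enumerate candidate actions, derive their consequences, reject options that violate the doctrine of double effect, then prefer the option with the fewest deaths), here is a minimal Python sketch with hypothetical names throughout; it is not the authors' system.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    deaths: int
    harm_is_means: bool  # is the harm itself the instrument of the rescue?

def permissible(a):
    # Doctrine of double effect: harm may be a foreseen side effect of a
    # good end, but may never be the means of achieving it.
    return not a.harm_is_means

def choose(actions):
    # Keep only permissible options, then prefer the one with fewest deaths.
    options = [a for a in actions if permissible(a)]
    return min(options, key=lambda a: a.deaths) if options else None

# Bystander case: diverting the trolley kills one person as a side effect.
bystander = [Action("do nothing", 5, False), Action("divert trolley", 1, False)]
# Footbridge case: pushing the man uses his death as the means of rescue.
footbridge = [Action("do nothing", 5, False), Action("push man", 1, True)]
print(choose(bystander).name)   # divert trolley
print(choose(footbridge).name)  # do nothing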
Click on the title if you are interested in reading these papers.
L. M. Pereira, A. Saptawijaya, Modelling Morality with Prospective Logic, to appear in: M. Anderson, S. Anderson (eds.), "Machine Ethics", Cambridge University Press, 2010.
A summary:
L. M. Pereira, Ari Saptawijaya, Computational Modelling of Morality, The Association for Logic Programming Newsletter, Vol. 22, No. 1, February/March 2009.
A shorter, conference version:
L. M. Pereira, A. Saptawijaya, Modelling Morality with Prospective Logic, in: J. M. Neves, M. F. Santos, J. M. Machado (eds.), Progress in Artificial Intelligence, Procs. 13th Portuguese Intl.Conf. on Artificial Intelligence (EPIA'07), pp. 99-111, Springer LNAI 4874, Guimarães, Portugal, December 2007.
Pereira interview, for those readers who speak Portuguese:
L. M. Pereira, Inteligência Artificial: O computador também é capaz de ter moral ("Artificial Intelligence: The computer is also capable of having morals"), Interview by Sandra Pereira, in: "Jornal i", pp. 26-27, 2 September 2009.
Interview of Noel Sharkey
In the spirit of sensationalism, NewScientist published an interview with Noel Sharkey under the title "Why AI is a dangerous dream."
Are we close to building a machine that can meaningfully be described as sentient?
I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.