tag:blogger.com,1999:blog-34365453556206922312024-03-15T18:09:53.011-07:00Moral MachinesWendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.comBlogger384125tag:blogger.com,1999:blog-3436545355620692231.post-19746209419119146242014-02-05T18:40:00.001-08:002014-02-05T18:40:24.561-08:00It's (not) a robot, it's (not) a drone, it's a...<span style="background-color: white; color: #333333; font-family: Arial, Helmet, Freesans, sans-serif; font-size: 14px; line-height: 18px;"><a href="http://www.bbc.co.uk/news/uk-26046696" target="_blank">Remotely Piloted Air System</a> </span><br />
<span style="background-color: white; color: #333333; font-family: Arial, Helmet, Freesans, sans-serif; font-size: 14px; line-height: 18px;"><br /></span>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://news.bbcimg.co.uk/media/images/72783000/jpg/_72783937_taranis_flight_bae.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://news.bbcimg.co.uk/media/images/72783000/jpg/_72783937_taranis_flight_bae.jpg" height="179" width="320" /></a></div>
<br />Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com33tag:blogger.com,1999:blog-3436545355620692231.post-16940735969031205292013-11-25T08:14:00.000-08:002013-11-25T08:26:04.870-08:00Ron Arkin and Rob Sparrow debate Lethal Autonomous Robots<span style="font-family: Trebuchet MS, sans-serif;"><b>TechDebate on Lethal Autonomous Robots published on </b><b><a href="http://youtu.be/nO1oFKc_-4A" target="_blank">YouTube</a></b><b>:</b></span><br />
<a href="http://youtu.be/nO1oFKc_-4A" target="_blank"><span style="font-family: Trebuchet MS, sans-serif;">http://youtu.be/nO1oFKc_-4A</span></a><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
<b>Debaters:</b></span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><a href="http://www.cc.gatech.edu/aimosaic/faculty/arkin/" target="_blank">Ron Arkin</a>, Robotics Professor at Georgia Tech's College of Computing</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><a href="http://profiles.arts.monash.edu.au/rob-sparrow/" target="_blank">Rob Sparrow</a>, Philosophy Professor at Monash University in Australia and one of the founding members of the International Committee for Robot Arms Control (<a href="http://icrac.net/">icrac.net</a>)</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
The TechDebate took place November 18, 2013</span><br />
<span style="font-family: Trebuchet MS, sans-serif;"><br />
</span><br />
<span style="font-family: Trebuchet MS, sans-serif;">Lethal Autonomous Robots, or “LARs” for short, are machines that can decide to take human life. Such a technology has the potential to revolutionize modern warfare and more. Opponents call LARs “killer robots” because they are deadly or “lethal.” They are “autonomous” because they “can select and engage targets without further intervention by a human operator,” based on the data they process on the battlefield and the algorithms that guide their behavior. Understanding LARs is essential to deciding whether their development and possible deployment should be regulated or banned. This TechDebate centers on the question: Are LARs ethical?</span><br />
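<span style="font-family: Trebuchet MS, sans-serif;">The quoted definition, machines that "can select and engage targets without further intervention by a human operator," is at bottom a claim about where a decision sits in a control loop. As a toy illustration (every function name, data field, and threshold below is invented; no real system is described), the two architectures might be sketched in Python:</span><br />

```python
# Hypothetical sketch only: contrasts human-in-the-loop control with the
# machine-threshold autonomy that the LAR debate is about.

def select_target(detections):
    """Toy selection step: pick the detection the sensors rate most confident."""
    return max(detections, key=lambda d: d["confidence"])

def engage_human_in_the_loop(detections, operator_approves):
    """A human operator must confirm before the system may engage."""
    target = select_target(detections)
    if operator_approves(target):
        return ("engaged", target["id"])
    return ("held", target["id"])

def engage_autonomous(detections, threshold=0.9):
    """No further human intervention: the machine's own threshold decides."""
    target = select_target(detections)
    if target["confidence"] >= threshold:
        return ("engaged", target["id"])
    return ("held", target["id"])
```

<span style="font-family: Trebuchet MS, sans-serif;">The two functions differ only in the final conditional, which is precisely the point of contention: in the first, a refusal by the operator always holds fire; in the second, the last check before lethal action is a number chosen at design time.</span><br />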
<span style="font-family: Trebuchet MS, sans-serif;"><br />
<b><br /></b>TechDebates on Emerging Technologies, presented by the <a href="http://www.ethics.gatech.edu/" target="_blank">Center for Ethics and Technology (CET)</a> have a forum for follow up to the debate at </span><a href="http://agora.gatech.edu/release/English.html" style="font-family: 'Trebuchet MS', sans-serif;" target="_blank">AGORA-net</a>.Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com25tag:blogger.com,1999:blog-3436545355620692231.post-35387095387695714852013-11-23T09:29:00.001-08:002013-11-23T09:29:16.768-08:00The Last Firewall, William HertlingWow, time flies when you're <strike>having fun</strike> busy, and this site has been neglected way too much. But I am back to give a plug for William Hertling's fast-paced, machine-ethics-after-the-singularity tale <i><a href="http://www.williamhertling.com/p/the-last-firewall.html" target="_blank">The Last Firewall</a></i>. It took a dose of 'flu for me finally to find the time to read the book, but once I started it kept me totally engaged. Hertling's story puts super-intelligent AIs, humans with neural implants, and a variety of actors who have competing political agendas into a contest requiring wits and the embodied skills of a master karateka. Hertling's characters battle each other in a hybrid arena of physical space and netspace where ethical questions about human-machine relationships are ever present. <i>Recommended!</i>Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com4tag:blogger.com,1999:blog-3436545355620692231.post-12552278855351790212012-08-09T06:26:00.001-07:002012-08-09T06:27:47.527-07:00Errant Code? 
It’s Not Just a Bug<a href="http://www.nytimes.com/2012/08/09/opinion/after-knight-capital-new-code-for-trades.html?nl=todaysheadlines&emc=edit_th_20120809">http://www.nytimes.com/2012/08/09/opinion/after-knight-capital-new-code-for-trades.html?nl=todaysheadlines&emc=edit_th_20120809</a>Anonymoushttp://www.blogger.com/profile/02295901554103006574noreply@blogger.com3tag:blogger.com,1999:blog-3436545355620692231.post-72274039112280365722012-08-02T06:18:00.001-07:002012-08-02T06:18:49.680-07:00Flood of Errant Trades Is a Black Eye for Wall Street"<span class="Apple-style-span" style="font-family: georgia, 'times new roman', times, serif; font-size: 15px; line-height: 22px;">An automated stock trading program suddenly flooded the market with millions of trades Wednesday morning, spreading turmoil across Wall Street and drawing renewed attention to the fragility and instability of the nation’s stock markets."</span><br />
<span class="Apple-style-span" style="font-family: georgia, 'times new roman', times, serif; font-size: 15px; line-height: 22px;"><br /></span><br />
<span class="Apple-style-span" style="font-family: georgia, 'times new roman', times, serif; font-size: 15px; line-height: 22px;">See </span><a href="http://www.nytimes.com/2012/08/02/business/unusual-volume-roils-early-trading-in-some-stocks.html">http://www.nytimes.com/2012/08/02/business/unusual-volume-roils-early-trading-in-some-stocks.html</a>.Anonymoushttp://www.blogger.com/profile/02295901554103006574noreply@blogger.com4tag:blogger.com,1999:blog-3436545355620692231.post-82397700146540448932012-05-07T22:37:00.002-07:002012-05-07T22:37:47.910-07:00Air autonomyTesting begins on the Anglo-French ASTRAEA project which, according to a <a href="http://www.guardian.co.uk/business/2012/may/07/pilotless-planes-test-flights-astraea">story in the Guardian</a>, aims to replace remote-operated drones with aircraft that "will follow a set of programmed instructions, with the aim that they could fly difficult missions autonomously for days at a time." The concept of a 'man-in-the-loop' at all times is offered as a bulwark against the planes themselves releasing the laser-guided bombs they will carry, according to the story.Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com11tag:blogger.com,1999:blog-3436545355620692231.post-56077872376723614782012-03-21T08:55:00.001-07:002012-03-21T08:56:33.671-07:00Escargots anyone?<a href="http://www.nytimes.com/2012/03/21/science/the-snails-of-war-and-other-robotics-experiments.html">Roboticized snails</a> in the NY Times. First task is to get them to cultivate their own garlic and then carry out the cooking algorithms in my previous post? 
(My thanks to Ken Pimple's <a href="http://ethicalpait.blogspot.com/2012/03/snails-of-war.html">Ethical PAIT blog</a> for the tip.)Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com4tag:blogger.com,1999:blog-3436545355620692231.post-43387788902945260892012-02-28T09:16:00.003-08:002012-02-28T09:17:22.702-08:00Pilotless bombers in the works?<a href="http://the-diplomat.com/flashpoints-blog/2012/02/27/u-s-getting-a-new-bomber/">http://the-diplomat.com/flashpoints-blog/2012/02/27/u-s-getting-a-new-bomber/<br />
</a><br />
Would the bomber be a giant drone?Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-38360005193386163272012-01-13T11:07:00.000-08:002012-01-13T11:09:09.138-08:00Calling all algorithmic cooksToo late for this holiday season, but check out Stephen Miller's nicely humorous cookbook that only geeks, nerds, and, yes, robots could love: <a href="http://www.cfoodcookbook.com">the C Food system</a><br />
<br />
My only complaint: surely those recipes (e.g. the <a href="http://www.cfoodcookbook.com/CornBread.pdf">"amaizing" cornbread</a>) could be parallelized. I mean, what's an algorithmic chef supposed to do while Oven.PreHeat(450)?Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com1tag:blogger.com,1999:blog-3436545355620692231.post-8924199299976155522012-01-06T09:04:00.000-08:002012-01-06T09:19:49.808-08:00Wallach Article on Law, Ethics and RoboticsAn article titled, <a href="http://www.ingentaconnect.com/content/hart/lit/2011/00000003/00000002/art00002?token=00521896d0d27fd16720297d763470703a7b427a7a42437c2b6d3f6a4b4b6e6e42576b6427387a5d59">From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies</a>, by Wendell Wallach was published in the journal <span style="font-style:italic;">Law, Innovation, and Technology.</span><blockquote>We are collectively in a dialogue directed at forging a new understanding of what it means to be human. Pressures are building to embrace, reject or regulate robots and technologies that alter the mind/body. How will we individually and collectively navigate the opportunities and perils offered by new technologies? With so many different value systems competing in the marketplace of ideas, what values should inform public policy? Which tasks is it appropriate to turn over to robots and when do humans bring qualities to tasks that no robot in the foreseeable future can emulate? When is tinkering with the human mind or body inappropriate, destructive or immoral? Is there a bottom line? Is there something essential about being human that is sacred, that we must preserve? These are not easy questions.<br /><br />Among the principles that we should be careful not to compromise is that of the responsibility of the individual human agent. 
In the development of robots and complex technologies, those who design, market and deploy systems should not be excused from responsibility for the actions of those systems. Technologies that rob individuals of their freedom of will must be rejected. This goes for both robots and neurotechnologies.<br /><br />Just as economies can stagnate or overheat, so also can technological development. The central role for ethics, law and public policy in the development of robots and neurotechnologies will be in modulating their rate of development and deployment. Compromising safety, appropriate use and responsibility is a ready formulation for inviting crises in which technology is complicit. The harms caused by disasters and the reaction to those harms can stultify technological progress in irrational ways.<br />It is unclear whether existing policy mechanisms provide adequate tools for managing the cumulative impact of converging technologies. Presuming that scientific discovery continues at its present relatively robust pace, there may be plenty of opportunities yet to consider new mechanisms for directing specific research trajectories. However, if the pace of technological development is truly accelerating, the need for foresight and planning becomes much more pressing.</blockquote>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com2tag:blogger.com,1999:blog-3436545355620692231.post-72364337450751197032012-01-06T08:57:00.000-08:002012-01-06T09:08:14.899-08:00Colin Allen on Moral Machines in the NYTimesYou can tell that we are falling behind in maintaining this blog when we fail to post that Colin Allen wrote an Opinionator column for <span style="font-weight:bold;">The New York Times</span> that was published on Christmas day. 
<a href="http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/?scp=1&sq=The+Future+of+Moral+Machines&st=cse">The full column, titled The Future of Moral Machines, is available here,</a> and is followed by 129 quite interesting comments. In this article Colin does what I consider to be an excellent job of summarizing where we are in the development of Machine Ethics and in what ways it does and does not make sense to talk about moral machines.<blockquote>Does this talk of artificial moral agents overreach, contributing to our own dehumanization, to the reduction of human autonomy, and to lowered barriers to warfare? If so, does it grease the slope to a horrendous, dystopian future? I am sensitive to the worries, but optimistic enough to think that this kind of techno-pessimism has, over the centuries, been oversold. Luddites have always come to seem quaint, except when they were dangerous. The challenge for philosophers and engineers alike is to figure out what should and can reasonably be done in the middle space that contains somewhat autonomous, partly ethically-sensitive machines. Some may think the exploration of this space is too dangerous to allow. Prohibitionists may succeed in some areas — robot arms control, anyone? — but they will not, I believe, be able to contain the spread of increasingly autonomous robots into homes, eldercare, and public spaces, not to mention the virtual spaces in which much software already operates without a human in the loop. We want machines that do chores and errands without our having to monitor them continuously. 
Retailers and banks depend on software controlling all manner of operations, from credit card purchases to inventory control, freeing humans to do other things that we don’t yet know how to construct machines to do.</blockquote>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-59548062626425253642012-01-06T08:45:00.000-08:002012-01-06T08:54:25.691-08:00Google: 'At scale, everything breaks'Jack Clark has an interesting interview of Urs Hölzle, Google's first vice president of engineering, on <span style="font-weight:bold;">ZDNET</span> in which Hölzle acknowledges the difficulties in maintaining massively scaled systems. <a href="http://www.zdnet.co.uk/news/cloud/2011/06/22/google-at-scale-everything-breaks-40093061/">The full interview is available here. </a><blockquote>Automation is key, but it's also dangerous. You can shut down all machines automatically if you have a bug. It's one of the things that is very challenging to do because you want uniformity and automation, but at the same time you can't really automate everything without lots of safeguards or you get into cascading failures.<br /><br /> Keeping things simple and yet scalable is actually the biggest challenge. <br /><br />Complexity is evil in the grand scheme of things because it makes it possible for these bugs to lurk that you see only once every two or three years, but when you see them it's a big story because it had a large, cascading effect.<br /><br />Keeping things simple and yet scalable is actually the biggest challenge. It's really, really hard. 
Most things don't work that well at scale, so you need to introduce some complexity, but you have to keep it down.<br /></blockquote>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-365672103280491282012-01-06T08:34:00.000-08:002012-01-06T08:44:29.944-08:00Robot Ethics: The Ethical and Social Implications of Robotics<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhK-YXdwoctPtInKhtbmr6KwYbrXqiFRTzr4i5ZFeS5PB2WYchsebW4YaohxyUv8e7iVOTrL0FoZgM8JhBjN4im6ZZEnj3oS8LxvYHULeqiOxIsNXsEgoJCxYBE2VFdWvjtRxfxEMgOBCDi/s1600/9780262016667-medium.jpg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 150px; height: 193px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhK-YXdwoctPtInKhtbmr6KwYbrXqiFRTzr4i5ZFeS5PB2WYchsebW4YaohxyUv8e7iVOTrL0FoZgM8JhBjN4im6ZZEnj3oS8LxvYHULeqiOxIsNXsEgoJCxYBE2VFdWvjtRxfxEMgOBCDi/s320/9780262016667-medium.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5694560804564353858" /></a><br /><br />TABLE OF CONTENTS<br />Preface<br />Acknowledgements<br />Biosketches<br />PART 1: INTRODUCTION<br /> 1 Introduction to Robot Ethics<br /> Patrick Lin<br /> 2 Current Trends in Robotics: Technology and Ethics<br /> George Bekey<br /> 3 Robotics, Ethical Theory, and Metaethics:<br /> A Guide for the Perplexed<br /> Keith Abney<br />PART 2: DESIGN & PROGRAMMING<br /> 4 Moral Machines: Contradiction in Terms, or<br /> Abdication of Human Responsibility?<br /> Colin Allen and Wendell Wallach<br /> 5 Compassionate AI and Selfless Robots: A Buddhist Approach<br /> James Hughes<br /> 6 The Divine-Command Approach to Robot Ethics<br /> Selmer Bringsjord and Joshua Taylor<br />PART 3: MILITARY<br /> 7 Killing Made Easy: From Joysticks to Politics<br /> Noel Sharkey<br /> 8 Robotic Warfare: Some Challenges in Moving from<br /> Non-Civilian to Civilian 
Theaters<br /> Marcello Guarini and Paul Bello<br /> 9 Responsibility for Military Robots<br /> Gert-Jan Lokhorst and Jeroen van den Hoven<br />PART 4: LAW<br /> 10 Contemporary Governance Architecture Regarding<br /> Robotics Technologies: An Assessment<br /> Richard O'Meara<br /> 11 A Body to Kick, But Still No Soul to Damn:<br /> Legal Perspectives on Robotics<br /> Peter Asaro<br /> 12 Robots and Privacy<br /> M. Ryan Calo<br />PART 5: PSYCHOLOGY & SEX<br /> 13 The Inherent Dangers of Unidirectional Emotional<br /> Bonds between Humans and Social Robots<br /> Matthias Scheutz<br /> 14 The Ethics of Robot Prostitutes<br /> David Levy<br /> 15 Do You Want a Robot Lover?: The Ethics of Caring Technologies<br /> Blay Whitby<br />PART 6: MEDICAL & CARE<br /> 16 Robot Caregivers: Ethical Issues Across the Human Lifespan<br /> Jason Borenstein and Yvette Pearson<br /> 17 The Rights and Wrongs of Robot Care<br /> Noel Sharkey and Amanda Sharkey<br /> 18 Designing People to Serve<br /> Steve Petersen<br />PART 7: RIGHTS & ETHICS<br /> 19 Can Machines Be People? 
Reflections on the Turing Triage Test<br /> Rob Sparrow<br /> 20 Robots with Biological Brains<br /> Kevin Warwick<br /> 21 Moral Machines and the Threat of Ethical Nihilism<br /> Anthony Beavers<br />PART 8: EPILOGUE<br /> 22 Roboethics: the Applied Ethics for a New Science<br /> Gianmarco Veruggio and Keith AbneyWendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com3tag:blogger.com,1999:blog-3436545355620692231.post-16657566090632096102012-01-06T08:21:00.000-08:002012-01-06T08:33:43.156-08:00Allenby reviews Robot Ethics in NatureBraden Allenby gave the new anthology on <span style="font-weight:bold;">Robot Ethics: The Ethical and Social Implications of Robotics </span>(MIT 2011), edited by Patrick Lin, Keith Abney, and George Bekey, a very good review in the January 5th issue of <span style="font-style:italic;">Nature.</span><blockquote>Robot Ethics succeeds as a stand-alone text, with its varied contributors striving for objectivity and avoiding hyperbole. The broad spread of applications discussed is key because the ethics differ depending on the use. Military robots, for instance, must be designed to obey the laws that govern warfare. Carer robots must be capable of interacting with patients, who may give them trust and even affection.</blockquote><br />Allenby, a professor of engineering and law at Arizona State University, has been active in underscoring the challenges posed by emerging technologies such as geoengineering and military robots. He stresses the need for such technologies to be given more attention. 
<blockquote>By portraying robots as real-world experiments in ethics, Robot Ethics conveys an important lesson for our technological era: we must develop responses to emerging technologies in real time, rather than simply reacting to them using existing ethical frameworks.</blockquote> <br />The full review titled, <a href="http://www.nature.com/nature/journal/v481/n7379/full/481026a.html">Robotics: Morals and machines, can be accessed here.</a>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-25787075798998215962011-12-18T13:13:00.000-08:002011-12-18T13:13:50.216-08:00Nice article by Pat Lin in <a href="http://www.theatlantic.com/technology/archive/2011/12/drone-ethics-briefing-what-a-leading-robot-expert-told-the-cia/250060"><i>The Atlantic</i></a>:<br />
<br />
<blockquote><b>Drone-Ethics Briefing: What a Leading Robot Expert Told the CIA</b><br />
<br />
Robots are replacing humans on the battlefield--but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians' duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the interrogated. A robot, on the other hand, wouldn't be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.<br />
</blockquote><br />
By the way, Pat's edited volume <a href="http://www.amazon.com/Robot-Ethics-Implications-Intelligent-Autonomous/dp/0262016664"><i>Robot Ethics: The Ethical and Social Implications of Robotics</i></a> with Keith Abney and George Bekey is just out from MIT Press. Looks like a great set of chapters. (Chapter 4 is by Wendell and me, responding to some of the criticisms we've heard of our <i>Moral Machines</i> over the past 3 years.)Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-62998076520149767792011-11-01T19:16:00.000-07:002011-11-01T19:16:44.163-07:00Petman, from the makers of BigDogBoston Dynamics, the makers of the "BigDog" robot have just unveiled the "PETMAN" humanoid version. Still operating tethered, but presumably just a matter of time before it's running through a forest near you: <a href="http://www.physorg.com/news/2011-11-makers-infamous-bigdog-robot-unveil.html">http://www.physorg.com/news/2011-11-makers-infamous-bigdog-robot-unveil.html</a>Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-6321758008066205572011-10-15T08:19:00.000-07:002011-10-15T08:24:50.144-07:00Robot Caregivers and Children's Capability to PlayYvette Pearson and Jason Borenstein have an article in <span style="font-weight:bold;">Science and Engineering Ethics </span>titled, <span style="font-style:italic;">The Intervention of Robot Caregivers and the Cultivation of Children's Capability to Play.</span><br /><blockquote>Abstract: In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. 
Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.</blockquote><br />The article can be accessed <a href="http://www.springerlink.com/content/x86178111w08wl41/">here.</a>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com1tag:blogger.com,1999:blog-3436545355620692231.post-25345924498836135392011-10-15T08:12:00.000-07:002011-10-15T08:18:03.445-07:00Call for Papers: Armed Military RobotsCall for Papers for a Special Issue with Ethics and Information Technology on “Armed Military Robots”<br /><br />Ethics and Information Technology is calling for papers to be considered for inclusion in a Special Issue on the ethics of armed military robots, to be edited by Noel Sharkey, Juergen Altmann, Peter Asaro and Robert Sparrow. The need for this Special Issue became apparent at the Berlin meeting of the International Committee for Robot Arms Control in September, 2010. This meeting expressed deep concerns about the proliferation and development of armed military robots and identified a pressing need for more international discussion of the ethics of these systems: www.icrac.co.uk<br /><br />Recent armed conflicts have seen robots playing a number of important military roles, yet informed ethical discussion has, for the most part, lagged well behind. 
We therefore invite contributors from a wide range of disciplines including philosophy, law, engineering, robotics, computer science, artificial intelligence, peace studies, and policy studies, to consider the ethical issues raised by the development and deployment of remotely piloted, semi-autonomous, and autonomous robots (UXVs) for military roles.<br /><br />Will the development of sophisticated military robots make wars more likely? If so, can the proliferation and use of war robots be controlled? How might robots change the nature of modern warfare? And how should Just War Theory and International Law be applied to wars fought by robots and/or to the operations of robots in contemporary conflicts? We welcome submissions that discuss or attempt to answer these – or related – questions. Given the contemporary political and military enthusiasm for remotely operated and semi-autonomous weapons, we are especially interested to receive submissions that offer a critical perspective.<br /><br />Other suitable topics for papers for this special issue include (but are not limited to):<br /> •Is it morally permissible to grant autonomous systems authority for the use, or targeting, of lethal force? <br />•What are the implications of the just war doctrine of jus in bello for the operations of military robots and vice versa? <br />•What are the implications of military robots for jus ad bellum? Will they lower the threshold for starting wars? <br />• What should an arms control regime governing robots seek to regulate? <br />•What factors are at work in decisions by states to work for or against such arms control, and what are the commonalities with and differences from efforts and campaigns to ban other weapons? <br />•Who should be held ethically and/or legally responsible for the operations of autonomous and semi-autonomous weapons? How should we understand agency and responsibility in complex (or joint-cognitive or human-machine) systems controlling lethal force? 
<br />•How should the idea of military valor be understood in an age when war-fighters may be thousands of kilometers away from the wars they are fighting?<br />•What are the ethical and political implications of the conduct of “risk-free” warfare? <br />•What are the ethical and legal issues involved in the use of remote-operated drones for targeted killing? <br />•How might military necessity impact on the use of armed autonomous military robots?<br /><br />Submissions will be double-blind refereed for relevance to the theme as well as academic rigor and originality. High-quality articles not deemed to be sufficiently relevant to the special issue may be considered for publication in a subsequent non-themed issue of Ethics and Information Technology. Closing date for submissions: December 2, 2011.<br />To submit your paper, please use the online submission system, to be found at www.editorialmanager.com/etinWendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com2tag:blogger.com,1999:blog-3436545355620692231.post-48362165765596425202011-10-12T22:57:00.000-07:002011-10-12T22:57:38.904-07:00Japanese robot with self-organizing neural net learningNext step in <a HREF="http://news.discovery.com/tech/thinking-robot-teaches-itself-task-111011.html#mkcpgn=rssnws1">robot learning</a>?<br />
<br />
The comments on this story are all a bit apocalyptic, but it's hard to tell how sophisticated this system actually is.Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-62567721514480575812011-10-11T13:18:00.000-07:002011-10-11T13:19:12.999-07:00New Book: A Legal Theory for Autonomous Artificial AgentsA Legal Theory for Autonomous Artificial Agents by Samir Chopra and<br />
Laurence F. White, University of Michigan Press, 2011<br />
<br />
<a href="http://www.press.umich.edu/titleDetailDesc.do?id=356801">http://www.press.umich.edu/titleDetailDesc.do?id=356801</a>Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com1tag:blogger.com,1999:blog-3436545355620692231.post-8338350459498039212011-09-28T20:28:00.000-07:002011-09-28T20:53:51.080-07:00The Expanding World of DronesRalph Nader has an article today about drones at <a href="http://www.commondreams.org/view/2011/09/26-15">Commondreams.org</a> in which he mentions the ICRAC (International Committee for Robot Arms Control) meeting in Berlin last year.<br /><br /><a href="http://www.nytimes.com/2011/09/29/us/massachusetts-man-accused-of-plotting-to-bomb-washington.html?_r=1&hp"><span style="font-weight:bold;">The NYTIMES</span> and other media sources report </a>that the F.B.I. arrested a man accused of plotting to attack the Capitol and the Pentagon using remote-controlled aircraft.<br /><br />Last week (Sept. 19th) <span style="font-weight:bold;">The Washington Post</span> reported on the development of autonomous killing drones in an article titled, <a href="http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html">A Future for drones: Autonomous killing.</a><br /><blockquote>“The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of the ICRAC and a professor at the New School in New York who teaches a course on digital war. “The good technology is far off, but technology that doesn’t work well is already out there. 
The worry is that these systems are going to be pushed out too soon, and they make a lot of mistakes, and those mistakes are going to be atrocities.”<br /><br />Research into autonomy, some of it classified, is racing ahead at universities and research centers in the United States, and that effort is beginning to be replicated in other countries, particularly China.<br /></blockquote>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-89133255911759158552011-09-28T20:26:00.000-07:002011-09-28T20:27:43.820-07:00My Wife's Drone<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfDYC8Ql8qdXHSefj0_mGwn6gTjJe7h9CnvogZ3KBOoHmHdfpiWgomcrzP8SOiVI-s0yHNl7wgWXkMhEdfVQk2V1idrmHDXWe1VIy-e_BF9_rtPTdAEVqBkUHTRjjzHcucsZH72io_TXUz/s1600/111003_cartoon_058_a15994_p465.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 254px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfDYC8Ql8qdXHSefj0_mGwn6gTjJe7h9CnvogZ3KBOoHmHdfpiWgomcrzP8SOiVI-s0yHNl7wgWXkMhEdfVQk2V1idrmHDXWe1VIy-e_BF9_rtPTdAEVqBkUHTRjjzHcucsZH72io_TXUz/s320/111003_cartoon_058_a15994_p465.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5657618078811976418" /></a><br />Short post<span class="fullpost">Full post</span>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-35161815686394433362011-07-28T08:33:00.000-07:002011-07-28T08:46:38.767-07:00Advancing Ethics<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_61uSSbZdotJYGjReo-tciAU_8Pf6VTYNP_UAB6sVA5Rw8-kCLnACrda1VosLm4dyoFuDRbakyLYpop7KHgsVtwd4G4HokYaMVULkRPxA-TIqRteq39kN3wZYJp7A3DvT5BsVV62rSIu4/s1600/AIbEiAIAAABECKurj8-ZubDuwgEiC3ZjYXJkX3Bob3RvKig1YjY5NjBjZWY4NWIzMTdiNWQ0Y2I0MzRmM2EwZjdhYjAzODQ3YmMzMAHTAM-u1KVXzPd4DSB9SKDJ3xrPtg.jpeg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 96px; height: 96px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_61uSSbZdotJYGjReo-tciAU_8Pf6VTYNP_UAB6sVA5Rw8-kCLnACrda1VosLm4dyoFuDRbakyLYpop7KHgsVtwd4G4HokYaMVULkRPxA-TIqRteq39kN3wZYJp7A3DvT5BsVV62rSIu4/s320/AIbEiAIAAABECKurj8-ZubDuwgEiC3ZjYXJkX3Bob3RvKig1YjY5NjBjZWY4NWIzMTdiNWQ0Y2I0MzRmM2EwZjdhYjAzODQ3YmMzMAHTAM-u1KVXzPd4DSB9SKDJ3xrPtg.jpeg" border="0" alt=""id="BLOGGER_PHOTO_ID_5634429987396914466" /></a><br />Chris Santos-Lang, an early contributor to bottom-up theories for developing moral machines, has a new article online titled, <a href="http://knol.google.com/k/chris-santos-lang/advancing-ethics/3iue30fi4gfq9/2#">Advancing Ethics.</a><br /><blockquote>Much as we have good reason to think we can invest intelligently in science to get technological rewards, we have offered good reason to think one can invest intelligently in ethics to improve decision-making. It would be reckless and naive, in our advanced society, to continue thinking of ethics as an obscure academic interest, a mere set of intellectual games, or theological controversies far beyond our comprehension and removed from the economic realities that dominate real life. Ethics, just like transportation, agriculture, commerce, education and health, deserves our attention in a practical and future-oriented way. 
Just as a department of commerce must be careful about affiliating with any particular existing business, a department of ethics would have to be careful about affiliating with any particular religion or system of rules, but that would not stop it from monitoring the ethical ecosystem (especially warning about dramatic changes) just as we monitor commerce.</blockquote>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com0tag:blogger.com,1999:blog-3436545355620692231.post-87703512966507638422011-07-28T08:22:00.000-07:002011-07-28T08:32:52.232-07:00Machine Ethics Anthology<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgI7twkx8GckBP-3hvr5gfAYZIhfAiGsq1qRkE9AHHjFGJgDS8o90QdQa2amlQMXW8Fv1wWubydY9t4vu8xTdU1JIP6POxIIZ3R4PTHVClJK_0RvhYmhmes0SmYBBUtvnpOD7XioXjQAzhM/s1600/9780521112352.jpg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 180px; height: 284px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgI7twkx8GckBP-3hvr5gfAYZIhfAiGsq1qRkE9AHHjFGJgDS8o90QdQa2amlQMXW8Fv1wWubydY9t4vu8xTdU1JIP6POxIIZ3R4PTHVClJK_0RvhYmhmes0SmYBBUtvnpOD7XioXjQAzhM/s320/9780521112352.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5634426241813781410" /></a><br />The long-awaited anthology <span style="font-style:italic;">Machine Ethics</span>, edited by Michael and Susan Leigh Anderson, has been published by Cambridge University Press. The volume includes both classic articles and more recent material on this emerging field. The contributors are: James Moor, Susan Leigh Anderson, J. Storrs Hall, Colin Allen, Wendell Wallach, Iva Smit, Sherry Turkle, Drew McDermott, Steve Torrance, Blay Whitby, John Sullins, Deborah G. Johnson, Luciano Floridi, David J. Calverley, James Gips, Roger Clarke, Bruce McLaren, Marcello Guarini, Alan K. 
Mackworth, Selmer Bringsjord, Joshua Taylor, Bram van Heuveln, Konstantine Arkoudas, Micah Clark, Ralph Wojtowicz, Matteo Turilli, Luis Moniz Pereira, Ari Saptawijaya, Morteza Dehghani, Ken Forbus, Emmett Tomai, Matthew Klenk, Peter Danielson, Christopher Grau, Thomas M. Powers, Michael Anderson, Helen Seville, Debora G. Field, Eric Dietrich.<br /><br /><blockquote>The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.</blockquote><br /><br /><span style="font-weight:bold;">Machine Ethics</span> can be purchased from <a href="http://www.amazon.com/Machine-Ethics-Michael-Anderson/dp/0521112354/ref=sr_1_1?ie=UTF8&qid=1311866609&sr=8-1"><span style="font-weight:bold;">Amazon</span> here.</a>Wendell Wallachhttp://www.blogger.com/profile/04794830318381824688noreply@blogger.com1tag:blogger.com,1999:blog-3436545355620692231.post-1227391458370404552011-05-29T14:19:00.000-07:002011-05-29T14:19:30.352-07:00Unthinking machinesA.I. 
& Cog Sci luminaries Marvin Minsky, Patrick Winston, and Noam Chomsky, among others, weighed in at an event earlier this month celebrating MIT's 150th anniversary on why they think progress in A.I. has stalled, as reported by MIT's <a href="http://www.technologyreview.com/computing/37525/?a=f">Technology Review</a>.<br />
<br />
Peter Norvig has written an interesting commentary on why <a href="http://norvig.com/chomsky.html">Chomsky is wrong</a> to deride statistical approaches to language.Colin Allenhttp://www.blogger.com/profile/06654741102989016317noreply@blogger.com3
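For readers wondering what a "statistical approach to language" amounts to in this debate, here is a minimal sketch of a bigram language model, the simplest member of the family of data-driven techniques at issue. The function name and the tiny corpus are our own illustration, not drawn from Norvig's commentary:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Estimate P(word | previous word) from raw bigram counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        # Pad each sentence with start/end markers before counting pairs.
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            counts[prev][word] += 1
    # Normalize the counts into conditional probabilities.
    return {prev: {w: c / sum(ctr.values()) for w, c in ctr.items()}
            for prev, ctr in counts.items()}

# Toy corpus: "dog" follows "the" in 2 of 3 sentences.
model = train_bigram(["the dog barks", "the cat sleeps", "the dog sleeps"])
print(model["the"]["dog"])  # → 0.6666666666666666
```

Scaled up to billions of words and properly smoothed, models of this general kind underpin modern speech recognition and machine translation; Chomsky's complaint, roughly, is that such models capture statistical regularities without explaining the underlying competence.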