Thursday, August 2, 2012

Flood of Errant Trades Is a Black Eye for Wall Street

"An automated stock trading program suddenly flooded the market with millions of trades Wednesday morning, spreading turmoil across Wall Street and drawing renewed attention to the fragility and instability of the nation’s stock markets."


Monday, May 7, 2012

Air autonomy

Testing begins on the Anglo-French ASTREA project, which, according to a story in the Guardian, aims to replace remotely operated drones with aircraft that "will follow a set of programmed instructions, with the aim that they could fly difficult missions autonomously for days at a time." According to the story, the concept of a 'man-in-the-loop' at all times is offered as a bulwark against the planes themselves releasing the laser-guided bombs they will carry.

Wednesday, March 21, 2012

Escargots anyone?

Roboticized snails in the NY Times. The first task is to get them to cultivate their own garlic, and then to carry out the cooking algorithms from my previous post? (My thanks to Ken Pimple's Ethical PAIT blog for the tip.)

Friday, January 13, 2012

Calling all algorithmic cooks

Too late for this holiday season, but check out Stephen Miller's nicely humorous cookbook that only geeks, nerds, and, yes, robots could love: the C Food system.

My only complaint: surely those recipes (e.g., the "amaizing" cornbread) could be parallelized. I mean, what's an algorithmic chef supposed to do while Oven.PreHeat(450)?
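To indulge the joke: the idling problem has a standard answer in concurrent programming. A minimal sketch using Python's asyncio — the oven, batter, and timings here are my own invention, not anything from the C Food system:

```python
import asyncio

async def preheat_oven(temp):
    # Hypothetical stand-in for Oven.PreHeat(450): the oven warms up
    # in the background while the chef keeps working.
    await asyncio.sleep(0.2)  # pretend this takes 20 minutes
    return f"oven at {temp}F"

async def mix_batter():
    await asyncio.sleep(0.1)  # measure, stir, fold
    return "batter ready"

async def make_cornbread():
    # Run both steps concurrently instead of standing around
    # watching the oven light.
    oven, batter = await asyncio.gather(preheat_oven(450), mix_batter())
    return f"{batter}; {oven}; bake!"

print(asyncio.run(make_cornbread()))
```

The whole recipe takes only as long as its slowest step, which is presumably what any self-respecting algorithmic chef would demand.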

Friday, January 6, 2012

Wallach Article on Law, Ethics and Robotics

An article by Wendell Wallach, titled "From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies," was published in the journal Law, Innovation and Technology.
We are collectively in a dialogue directed at forging a new understanding of what it means to be human. Pressures are building to embrace, reject or regulate robots and technologies that alter the mind/body. How will we individually and collectively navigate the opportunities and perils offered by new technologies? With so many different value systems competing in the marketplace of ideas, what values should inform public policy? Which tasks is it appropriate to turn over to robots and when do humans bring qualities to tasks that no robot in the foreseeable future can emulate? When is tinkering with the human mind or body inappropriate, destructive or immoral? Is there a bottom line? Is there something essential about being human that is sacred, that we must preserve? These are not easy questions.

Among the principles that we should be careful not to compromise is that of the responsibility of the individual human agent. In the development of robots and complex technologies, those who design, market and deploy systems should not be excused from responsibility for the actions of those systems. Technologies that rob individuals of their freedom of will must be rejected. This goes for both robots and neurotechnologies.

Just as economies can stagnate or overheat, so also can technological development. The central role for ethics, law and public policy in the development of robots and neurotechnologies will be in modulating their rate of development and deployment. Compromising safety, appropriate use and responsibility is a ready formulation for inviting crises in which technology is complicit. The harms caused by disasters and the reaction to those harms can stultify technological progress in irrational ways.
It is unclear whether existing policy mechanisms provide adequate tools for managing the cumulative impact of converging technologies. Presuming that scientific discovery continues at its present relatively robust pace, there may be plenty of opportunities yet to consider new mechanisms for directing specific research trajectories. However, if the pace of technological development is truly accelerating, the need for foresight and planning becomes much more pressing.

Colin Allen on Moral Machines in the NYTimes

You can tell that we are falling behind in maintaining this blog when we fail to post that Colin Allen wrote an Opinionator column for The New York Times, published on Christmas Day. The full column, titled "The Future of Moral Machines," is available here, and is followed by 129 quite interesting comments. In this article Colin does what I consider to be an excellent job of summarizing where we are in the development of machine ethics and in what ways it does and does not make sense to talk about moral machines.
Does this talk of artificial moral agents overreach, contributing to our own dehumanization, to the reduction of human autonomy, and to lowered barriers to warfare? If so, does it grease the slope to a horrendous, dystopian future? I am sensitive to the worries, but optimistic enough to think that this kind of techno-pessimism has, over the centuries, been oversold. Luddites have always come to seem quaint, except when they were dangerous. The challenge for philosophers and engineers alike is to figure out what should and can reasonably be done in the middle space that contains somewhat autonomous, partly ethically-sensitive machines. Some may think the exploration of this space is too dangerous to allow. Prohibitionists may succeed in some areas — robot arms control, anyone? — but they will not, I believe, be able to contain the spread of increasingly autonomous robots into homes, eldercare, and public spaces, not to mention the virtual spaces in which much software already operates without a human in the loop. We want machines that do chores and errands without our having to monitor them continuously. Retailers and banks depend on software controlling all manner of operations, from credit card purchases to inventory control, freeing humans to do other things that we don’t yet know how to construct machines to do.

Google: 'At scale, everything breaks'

Jack Clark has an interesting interview with Urs Hölzle, Google's first vice president of engineering, on ZDNet, in which Hölzle acknowledges the difficulties of maintaining massively scaled systems. The full interview is available here.
Automation is key, but it's also dangerous. You can shut down all machines automatically if you have a bug. It's one of the things that is very challenging to do because you want uniformity and automation, but at the same time you can't really automate everything without lots of safeguards or you get into cascading failures.

Complexity is evil in the grand scheme of things because it makes it possible for these bugs to lurk that you see only once every two or three years, but when you see them it's a big story because it had a large, cascading effect.

Keeping things simple and yet scalable is actually the biggest challenge. It's really, really hard. Most things don't work that well at scale, so you need to introduce some complexity, but you have to keep it down.
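Hölzle's point — that you can't automate everything without safeguards or you get cascading failures — is the intuition behind patterns like the circuit breaker. A minimal sketch; the class, names, and thresholds are my own illustration, not anything Google describes:

```python
class CircuitBreaker:
    """Stops calling a failing dependency after repeated errors, so one
    bad service doesn't drag down everything automated on top of it
    (the cascading failure Hölzle warns about)."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        # Once open, the breaker refuses further calls instead of
        # hammering a service that is already down.
        return self.failures >= self.max_failures

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: refusing to call dependency")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the count
        return result
```

The design choice matches the interview's theme: the safeguard adds a little complexity, but it converts an unbounded cascade into a bounded, visible failure.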

Robot Ethics: The Ethical and Social Implications of Robotics

1 Introduction to Robot Ethics
Patrick Lin
2 Current Trends in Robotics: Technology and Ethics
George Bekey
3 Robotics, Ethical Theory, and Metaethics: A Guide for the Perplexed
Keith Abney
4 Moral Machines: Contradiction in Terms, or Abdication of Human Responsibility?
Colin Allen and Wendell Wallach
5 Compassionate AI and Selfless Robots: A Buddhist Approach
James Hughes
6 The Divine-Command Approach to Robot Ethics
Selmer Bringsjord and Joshua Taylor
7 Killing Made Easy: From Joysticks to Politics
Noel Sharkey
8 Robotic Warfare: Some Challenges in Moving from Non-Civilian to Civilian Theaters
Marcello Guarini and Paul Bello
9 Responsibility for Military Robots
Gert-Jan Lokhorst and Jeroen van den Hoven
10 Contemporary Governance Architecture Regarding Robotics Technologies: An Assessment
Richard O'Meara
11 A Body to Kick, But Still No Soul to Damn: Legal Perspectives on Robotics
Peter Asaro
12 Robots and Privacy
M. Ryan Calo
13 The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots
Matthias Scheutz
14 The Ethics of Robot Prostitutes
David Levy
15 Do You Want a Robot Lover? The Ethics of Caring Technologies
Blay Whitby
16 Robot Caregivers: Ethical Issues Across the Human Lifespan
Jason Borenstein and Yvette Pearson
17 The Rights and Wrongs of Robot Care
Noel Sharkey and Amanda Sharkey
18 Designing People to Serve
Steve Petersen
19 Can Machines Be People? Reflections on the Turing Triage Test
Rob Sparrow
20 Robots with Biological Brains
Kevin Warwick
21 Moral Machines and the Threat of Ethical Nihilism
Anthony Beavers
22 Roboethics: The Applied Ethics for a New Science
Gianmarco Veruggio and Keith Abney

Allenby reviews Robot Ethics in Nature

Braden Allenby gave the new anthology Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press, 2011), edited by Patrick Lin, Keith Abney, and George Bekey, a very good review in the January 5th issue of Nature.
Robot Ethics succeeds as a stand-alone text, with its varied contributors striving for objectivity and avoiding hyperbole. The broad spread of applications discussed is key because the ethics differ depending on the use. Military robots, for instance, must be designed to obey the laws that govern warfare. Carer robots must be capable of interacting with patients, who may give them trust and even affection.

Allenby, a professor of engineering and law at Arizona State University, has been active in underscoring the challenges posed by emerging technologies such as geothermal engineering and military robots, and he stresses the need for these technologies to be given more attention.
By portraying robots as real-world experiments in ethics, Robot Ethics conveys an important lesson for our technological era: we must develop responses to emerging technologies in real time, rather than simply reacting to them using existing ethical frameworks.

The full review titled, Robotics: Morals and machines, can be accessed here.