We are collectively in a dialogue directed at forging a new understanding of what it means to be human. Pressures are building to embrace, reject or regulate robots and technologies that alter the mind/body. How will we individually and collectively navigate the opportunities and perils offered by new technologies? With so many different value systems competing in the marketplace of ideas, what values should inform public policy? Which tasks is it appropriate to turn over to robots and when do humans bring qualities to tasks that no robot in the foreseeable future can emulate? When is tinkering with the human mind or body inappropriate, destructive or immoral? Is there a bottom line? Is there something essential about being human that is sacred, that we must preserve? These are not easy questions.
Among the principles that we should be careful not to compromise is that of the responsibility of the individual human agent. In the development of robots and complex technologies, those who design, market and deploy systems should not be excused from responsibility for the actions of those systems. Technologies that rob individuals of their freedom of will must be rejected. This goes for both robots and neurotechnologies.
Just as economies can stagnate or overheat, so too can technological development. The central role for ethics, law and public policy in the development of robots and neurotechnologies will be in modulating their rate of development and deployment. Compromising safety, appropriate use and responsibility is a ready formula for inviting crises in which technology is complicit. The harms caused by disasters, and the reaction to those harms, can stultify technological progress in irrational ways.
It is unclear whether existing policy mechanisms provide adequate tools for managing the cumulative impact of converging technologies. Presuming that scientific discovery continues at its present relatively robust pace, there may be plenty of opportunities yet to consider new mechanisms for directing specific research trajectories. However, if the pace of technological development is truly accelerating, the need for foresight and planning becomes much more pressing.
Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Friday, January 6, 2012
Wallach Article on Law, Ethics and Robotics
An article by Wendell Wallach, titled "From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies," has been published in the journal Law, Innovation and Technology.
2 comments:
I am very interested in your writing and wish to do my PhD on AI, moral agency and its implications for law, but I have tried 15 universities and they do not have staff (i.e., a Dr. or Prof.) in the field of ethics that I am interested in. Can you make any suggestions? Please let me know; my email is keyserjean@gmail.com
Thanks for the update; you have nicely covered this topic. Keep it up.