Alan E. Singer has written a most interesting review of Moral Machines for Human Systems Management.
Designers of artificial moral agents (AMAs) or ethical (ro-)bots will be informed by this book. However, it will also challenge moral philosophers and anyone involved in teaching ethics. Indeed, an alternative subtitle, "teaching ethicists right from wrong," would be quite appropriate. The book demonstrates quite convincingly that "you don't really know how something works if you can't build it," so that "roboticists are doing philosophy, whether or not they think this is so" [1]. Yet this "philosophy" is plugged: an experimental and constructive "computational philosophy" that fits well with the notion of knowledge as coordination-of-action (e.g. [2]) and the associated position that the physical and mental worlds are (becoming) one and the same. In addition, the task of AMA design and construction repeatedly spins off sharply framed questions that are both philosophical and technological.
[1] D.C. Dennett, Cog as a thought experiment, Robotics & Autonomous Systems 20(2–4) (1997), 251–256.
[2] M. Zeleny, Human Systems Management, World Scientific, London, 2007.
The full review, titled "Philosophy plugged: How robotics informs ethics," can be found here.