Saturday, August 28, 2010

Call for Papers

IEEE Transactions on Affective Computing

Special Issue on Ethics and Affective Computing

The pervasive presence of automated and autonomous systems has driven the rapid growth of a relatively new area of inquiry called machine ethics. If machines are going to be turned loose on their own to kill and heal, explore and decide, the need to design them to be moral becomes pressing. This need, in turn, penetrates to the very foundations of ethics, as robot designers strive to build systems that comply with moral norms. Fuzzy intuitions will not do when computational clarity is required. So machine ethics also asks the discipline of ethics to make itself clear. The truth is that, at present, we do not know how to make it so. Rule-based approaches are being tried despite the acknowledged difficulty of formalizing moral behavior, and it is already common to hear that introducing affects into machines may be necessary to make them behave morally. From this perspective, affective computing may be morally required by machine ethics.

On the other hand, building machines with artificial affects might carry negative ethical consequences. Designing robots and other automated computational devices to display emotion will help make humans more willing to accept them, since if we like them, we will, no doubt, be more willing to welcome them. We might even pay dearly to have them. But do artificial affects deceive? Will they catch us with our defenses down, and do we have to worry about Plato's caveat in the Republic that one of the best ways to be unjust is to appear just? Automated agents that seem like persons might appear congenial even while disregarding our moral interests, making them dangerous culprits indistinguishable from automated "friends." In this light, machine ethics might demand that we exercise great caution in using affective computing. In radical cases, it might even demand that we not use it at all.

We would seem to have a quandary here. No doubt there are others. The purpose of this special issue is to explore the range of ethical issues related to affective computing. Is affective computing necessary for making artificial agents moral? If so, why and how? Where does affective computing require moral caution? In what cases do the benefits outweigh the moral risks? And so on.

Invited Authors:
Roddy Cowie (Queen's University, Belfast)
Luciano Floridi (University of Hertfordshire and University of Oxford)
Matthias Scheutz (Tufts University)

Papers must not have been previously published, with the exception that substantial extensions of conference papers can be considered. Authors will be required to follow the Author’s Guide for manuscript submission to the IEEE Transactions on Affective Computing at http://www.computer.org/portal/web/tac/author. Papers are due by March 1, 2011, and should be submitted electronically at https://mc.manuscriptcentral.com/taffc-cs. Please select the "SI - Ethics 2011" manuscript type upon submission. For further information, please contact the guest editor, Anthony Beavers, at afbeavers@gmail.com.
