Joanna Bryson has written a piece titled Robots Should Be Slaves that, according to her website, is forthcoming as a chapter of a book edited by Yorick Wilks. Joanna is a computer scientist at the University of Bath who worked on Cog as a graduate student in Rodney Brooks' robotics lab at MIT.
More evidence that this area is heating up!
7 comments:
Interesting! It seems that she's just assuming either (1) that it's impossible to build a moral agent, or (2) that no one will. I think that for (1) the burden of proof is upon her, and if (1) is false, then (2) is obviously false.
Still, the article's a good read - I think she raises some valid concerns about AI that isn't a moral agent, and she points out the usual problems with the chain of responsibility.
I'm not sure she needs to assume that no one can or will ever build an artificial moral "agent" (although actually I think she really has moral subjects, not agents, in mind). The premise she needs for her argument is that for now and the medium term, any robots that we (will) have won't really be moral agents and it's misleading to label them as such. Now, she could be wrong even about the short term, although I'm inclined to think that the burden of proof is shifted back to the other side if the assumption is understood as restricted in this way.
Hmmm... I think she's a bit naive when it comes to the philosophical literature & terminology, but essentially she's taking the standpoint that ethical obligation emerges from social consensus, and she's trying to form a consensus around the notion that people should take responsibility for the robots they own and build.
A couple of her other papers seem to concede that robots might already be described as conscious, so she certainly seems to think they are autonomous actors and as such "are capable of acting with reference to right and wrong", but nevertheless she's arguing that, unlike humans, they cannot hold ultimate "responsibility for making moral judgments and taking actions that comport with morality" (the two definitions of moral agency from Wikipedia :-)
Not to put words in her mouth.
Wait - was that the kettle calling the kettle naive? :-)
Anyway, Joanna, I think we can agree that ethical obligation arises from social consensus, and we certainly agree that people should take responsibility for the robots they own and build. In fact, in MM we argue that some of this is likely to come about through ordinary product liability litigation.
However, I guess I'm unclear about your distinction between "acting with reference to right and wrong" and "taking actions that comport with morality." If, as you say, robots can (already) do the former, why can't they (already) do the latter?
Perhaps we just have a terminological difference. In MM, we distinguish between operational morality (all on the programmer/designer-side), functional moral agency (some autonomous capacity to use morally relevant information or principles in selecting actions), and full moral agency (whatever you think you need for human-level moral agency). In the short term, and perhaps longer, we think that full moral agency is out of sight. I guess then we might just be arguing about whether or not it's appropriate to call machines that have functional morality (autonomous capability of acting with reference to right and wrong) agents -- a political issue more than an ontological one.
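To make the middle tier concrete, here is a minimal, hypothetical sketch of functional morality in the sense just described: an action selector that consults morally relevant labels attached to candidate actions. The Action fields, the harm/deception flags, and functionally_moral_choice are all invented for illustration and are not drawn from MM or from Bryson's paper.

    # Hypothetical sketch of "functional morality": an action selector that uses
    # morally relevant information (here, simple harm/deception flags) at run time
    # when choosing what to do. The Action fields and the constraint checks are
    # invented for illustration; they are not taken from MM or from Bryson's paper.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        utility: float        # task-level benefit toward the robot's goal
        causes_harm: bool     # morally relevant information attached to the action
        deceives_user: bool

    def functionally_moral_choice(candidates):
        """Pick the highest-utility action that violates no moral constraint.

        Operational morality would bake such choices in at design time; here
        the selection happens at run time from labeled information, which is
        the middle tier. Full moral agency would need far more than this filter.
        """
        permissible = [a for a in candidates
                       if not a.causes_harm and not a.deceives_user]
        if not permissible:
            return None  # refuse to act rather than violate a constraint
        return max(permissible, key=lambda a: a.utility)

    if __name__ == "__main__":
        options = [
            Action("shortcut through crowd", utility=0.9, causes_harm=True, deceives_user=False),
            Action("take the longer route", utility=0.6, causes_harm=False, deceives_user=False),
        ]
        chosen = functionally_moral_choice(options)
        print(chosen.name if chosen else "no permissible action")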
Hi Colin -- I should certainly read your book if I'm going to make this more than a hobby / quad-annual publication topic.
The distinction I am trying to draw is really pretty simple, and only two-part. The first case is the ability of AI to perceive, and therefore act on, categories with moral associations. To the extent we can agree on these categories ourselves, I don't see an in-principle reason that AI should be any poorer at perceiving them than the average human. See e.g. the King & Lowe (2003) evaluation of an automatic system for coding Reuters-reported outcomes from the Balkan wars.
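To make that first point concrete, a toy labeler of the following sort shows what it means for a machine to map event descriptions onto morally loaded categories. It is not the system King & Lowe (2003) evaluated; the category names and cue phrases below are invented purely for illustration.

    # Toy illustration of machines "perceiving" categories with moral associations.
    # This is NOT the system evaluated by King & Lowe (2003); the categories and
    # cue phrases below are invented purely to make the general idea concrete.

    MORAL_CATEGORIES = {
        "attack":      ["shelled", "bombed", "fired on", "attacked"],
        "aid":         ["delivered food", "evacuated civilians", "provided medical"],
        "negotiation": ["ceasefire", "peace talks", "signed an agreement"],
    }

    def label_event(description):
        """Return every category whose cue phrases appear in the description."""
        text = description.lower()
        return [category
                for category, cues in MORAL_CATEGORIES.items()
                if any(cue in text for cue in cues)]

    if __name__ == "__main__":
        report = "Government forces shelled the town hours after the ceasefire collapsed."
        print(label_event(report))  # -> ['attack', 'negotiation']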
But the second is purely credit / blame assignment. There is a huge temptation, for many reasons, to allocate emotional investment & with it ethical responsibility to robots. These reasons range from the commercial (it sells) through the legal (it might limit liability, though I agree the courts will probably sort that out; I worry more about the court of public opinion, e.g. with military robots) to self-concept (we like to be godlike). Negative outcomes from this are 1) the wasted time / lives / resources of those spending time "pleasing" robots and 2) individuals, companies & governments getting off the hook by passing responsibility to their artifacts. In my opinion, both of these concerns would be addressed by acknowledging that these artifacts are essentially a part of us. This is sort of a moral extension of the Extended Mind hypothesis (Clark & Chalmers 1998).
The book is out now: http://www.benjamins.com/cgi-bin/t_bookview.cgi?bookid=NLP%208