Our critique reveals that robots need two key capabilities: responsiveness and smooth transfer of control. Our proposed alternative laws remind robotics researchers and developers of their legal and professional responsibilities. They suggest how people can conduct human–robot interaction research safely, and they identify critical research questions.
Ironically, Asimov’s laws are robot-centric: most of the initiative for safety and efficacy lies with the robot as an autonomous agent. The alternative laws are human-centered because they take a systems approach. They emphasize that
• responsibility for the consequences of robots’ successes and failures lies with the human groups that have a stake in the robots’ activities, and
• capable robotic agents still exist in a web of dynamic social and cognitive relationships.
Paradoxically, meeting the requirements of the alternative laws demands that robots become more capable agents, that is, more responsive to others and better at interacting with them.