Yesterday on NPR there was an "All Tech Considered" piece about the latest generation of smart elevator controllers, which can compute in real time the most efficient allocation of stops to floors so as to minimize passenger waiting time. The story dwelled quite a bit on the loss of human operators, but it also mentioned in passing that a company is developing a smartphone application that will communicate with the elevator controller, so that the controller is "aware" you will be arriving at the elevator within a few minutes and can schedule accordingly. This raises a number of interesting issues, quite aside from the surveillance opportunities it affords. For instance, how will the system know whether you are just leaving work to run an errand, or whether you are facing a particular situation (e.g., a medical emergency at home) that might warrant a "less efficient" decision, transporting you before other people who have been waiting longer? Could such machines be designed to detect such contingencies and respond flexibly to them? I don't see why not, but what are the dangers of going down the route toward autonomous machines making decisions that are sensitive to the ethically relevant features of not entirely predictable situations?
Letters: Football, Elevator Technology
January 12, 2010 ... SIEGEL: Yesterday in our All Tech Considered segment, we heard about the latest in elevator technology. ...
http://www.npr.org/templates/story/story.php?storyId=122498267
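As a rough illustration of the trade-off in question (purely a sketch in Python, with invented names, weights, and an assumed "urgent" flag, not anything a real controller or the NPR story describes), here is how a dispatcher that normally minimizes estimated waiting time might let an urgency signal justify a "less efficient" assignment:

    from dataclasses import dataclass

    @dataclass
    class HallCall:
        floor: int
        seconds_waiting: float
        urgent: bool = False      # e.g., flagged by the hypothetical phone app

    @dataclass
    class Car:
        position: int
        stops: list               # floors the car is already committed to

    SECONDS_PER_FLOOR = 2.0       # invented constants, for illustration only
    SECONDS_PER_STOP = 10.0
    URGENCY_WEIGHT = 5.0          # how heavily an urgent call outweighs efficiency

    def estimated_wait(car, call):
        """Crude estimate: travel distance plus time lost to committed stops."""
        travel = abs(car.position - call.floor) * SECONDS_PER_FLOOR
        return travel + len(car.stops) * SECONDS_PER_STOP

    def assignment_cost(car, call):
        """Lower is better; urgency multiplies the effective waiting time."""
        weight = URGENCY_WEIGHT if call.urgent else 1.0
        return weight * (call.seconds_waiting + estimated_wait(car, call))

    def dispatch(cars, call):
        """Send the call to whichever car minimizes the weighted cost."""
        return min(cars, key=lambda car: assignment_cost(car, call))

    cars = [Car(position=1, stops=[3, 7]), Car(position=9, stops=[])]
    print(dispatch(cars, HallCall(floor=8, seconds_waiting=20, urgent=True)).position)

The point of the made-up URGENCY_WEIGHT knob is only that "efficiency" becomes one weighted value among others; nothing in the machinery tells the controller when a rider's claim to urgency is legitimate.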
3 comments:
Whenever someone suggests that it's scary that machines might be making decisions that are sensitive to ethics, it occurs to me that the scary situation is the one where the machine fails to do so.
Right, but those who find any degree of ethical sensitivity threatening would not prefer ethically insensitive machines; they would rather see no machine in a position to make the decision at all. I'm enough of a pessimist to think that we can't roll back automation that far, however.
Come on - this doesn't remind anyone of Douglas Adams on elevators? I can't resist it, then.
In more serious response to the post, the first generation will presumably not know the difference between urgent and non-urgent elevatoring. But that is no argument against smarter elevators, as you acknowledge, since the current ones certainly can't tell the difference either, and on average the elevatoring will be faster for everyone, so both urgent and non-urgent passengers should prefer it.
As for options later, until the elevators have person-level intelligence, they are likely to have options like "obey overrides by passengers, and resolve conflicts by ID'd corporate rank." In other words, as with most sub-person-level intelligent technology, the ethics will still lie squarely with the users rather than with the machine itself; who gets override priority with the elevator resource would probably be decided much as it's decided now who gets priority with the copy machine. Presumably some would abuse such options, and some would not.
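To make the override-by-rank option concrete, a minimal sketch might look like the following (the fields and the rank rule are illustrative assumptions, not any vendor's actual policy); note that the machine merely applies whatever priority rule its owners configure, which is the commenter's point about the ethics staying with the users:

    from dataclasses import dataclass

    @dataclass
    class OverrideRequest:
        badge_id: str
        rank: int                 # seniority read from the ID badge; higher wins
        requested_floor: int

    def resolve_overrides(requests):
        """Honor the highest-ranked override; ties go to whoever asked first."""
        if not requests:
            return None
        return max(requests, key=lambda r: r.rank)

    queue = [OverrideRequest("A123", rank=2, requested_floor=4),
             OverrideRequest("B456", rank=7, requested_floor=12)]
    print(resolve_overrides(queue).requested_floor)   # prints 12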