Kit Eaton reports in Quartz:
It may seem obvious that a robot should do precisely what a human orders it to do at all times. But researchers in Massachusetts are trying something that many a science fiction movie has already anticipated: they're teaching robots to say "no" to some instructions.
For robots wielding tools that are potentially dangerous to humans on a car production line, it's pretty clear that the robot should always follow its programming precisely. But we're building more clever robots every day, and we're giving them the power to decide what to do all by themselves. This leads to a tricky issue: how exactly do you program a robot to think through its orders and overrule them if it decides they're wrong or dangerous to either a human or itself?
This is what researchers at Tufts University's Human-Robot Interaction Lab are tackling, and they've come up with at least one strategy for intelligently rejecting human orders.
The strategy works much like the process human brains carry out when we're given spoken orders. It's all about a list of trust and ethics questions that we think through when asked to do something. The questions start with "do I know how to do that?" and move through others like "do I have to do that based on my job?" before ending with "does it violate any sort of normal principle if I do that?" This last question is the key, of course, since it's "normal" not to hurt people or damage things.
The Tufts team has simplified this sort of inner human monologue into a set of logical arguments that a robot's software can understand, and the results seem reassuring. For example, the team's experimental android said "no" when instructed to walk forward through a wall it could easily smash, because the person telling it to try this potentially dangerous trick wasn't trusted.
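To make the idea concrete, here is a rough Python sketch of that kind of check chain. It is purely illustrative and not the Tufts code: the Robot and Command classes, the specific conditions, and the trust check are all assumptions made up for the example, standing in for the lab's logical framework.

# A minimal sketch of chaining "do I know how / must I / should I"
# style checks before a robot accepts a command. Hypothetical classes,
# not the Tufts implementation.

from dataclasses import dataclass


@dataclass
class Command:
    action: str                    # e.g. "walk_forward"
    speaker: str                   # who issued the order
    endangers_human: bool = False
    endangers_robot: bool = False


class Robot:
    def __init__(self, known_actions, trusted_speakers):
        self.known_actions = set(known_actions)
        self.trusted_speakers = set(trusted_speakers)

    def evaluate(self, cmd: Command) -> str:
        # 1. "Do I know how to do that?"
        if cmd.action not in self.known_actions:
            return "No: I don't know how to do that."
        # 2. "Do I have to do that based on my job?" Here, only a
        #    trusted speaker can obligate the robot to act.
        if cmd.speaker not in self.trusted_speakers:
            return "No: I'm not obligated to follow your orders."
        # 3. "Does it violate any normal principle if I do that?"
        if cmd.endangers_human:
            return "No: that could hurt someone."
        if cmd.endangers_robot:
            return "No: that would damage me."
        return f"OK: executing {cmd.action}."


# Example: an untrusted speaker orders the robot to walk into a wall.
robot = Robot(known_actions={"walk_forward"}, trusted_speakers={"operator"})
print(robot.evaluate(Command("walk_forward", speaker="visitor",
                             endangers_robot=True)))
# -> "No: I'm not obligated to follow your orders."

The point of the sketch is only that the checks run in the same order as the inner monologue described above, and that refusal comes with a stated reason rather than a silent failure.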
Machine ethics like this are becoming a serious matter, as recent news about Google's self-driving cars shows. These cars are in effect robots, and on the open road they're likely to encounter complex situations that could put their riders in danger if they blindly follow instructions.
Imagine yourself working alongside a 6-foot version of the Tufts android in the future. You can see how it would be better for the robot to think and say "I'm sorry, I can't do that" if its orders seemed likely to break it or hurt a human (such as fleshy, vulnerable you).