Developing the First Law of Robotics

wabrandsma sends this article from New Scientist: In an experiment, Alan Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov's fictional First Law of Robotics – a robot must not allow a human being to come to harm. At first, the robot was successful in its task: as a human proxy moved toward the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time dithering over its decision that both humans fell into the hole. Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions.
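New Scientist doesn't reproduce Winfield's controller, but the behaviour it describes (pick the most urgent rescue, re-evaluate constantly, dither when two rescues compete) is easy to illustrate. The Python below is a toy sketch under invented assumptions, not Winfield's code: the 1-D track, the speed constants, and the "safety margin" heuristic are all hypothetical, and the counts it prints won't match the article's 14-of-33 figure. What it does show is how a rule-following harm minimizer that re-plans every control cycle can flip between targets as their predicted margins cross, sometimes losing both.

```python
import random

HOLE = 0.0          # hole position on a hypothetical 1-D track
ROBOT_SPEED = 2.0   # assume the robot is twice as fast as a proxy
PROXY_SPEED = 1.0
DT = 0.1            # control-cycle length in seconds

class Proxy:
    """A stand-in 'human' that drifts toward the hole unless rescued."""
    def __init__(self, pos):
        self.pos = pos
        self.fallen = False
        self.safe = False

    def step(self):
        if not (self.fallen or self.safe):
            self.pos -= PROXY_SPEED * DT
            if self.pos <= HOLE:
                self.fallen = True

def margin(robot_pos, p):
    """Predicted seconds to spare if the robot commits to proxy p now:
    time until p falls, minus time for the robot to reach p."""
    time_to_fall = (p.pos - HOLE) / PROXY_SPEED
    time_to_reach = abs(robot_pos - p.pos) / ROBOT_SPEED
    return time_to_fall - time_to_reach

def run_trial(seed):
    random.seed(seed)
    # One proxy on each side of the robot, both headed for the hole.
    proxies = [Proxy(random.uniform(1.0, 2.5)),
               Proxy(random.uniform(3.5, 5.0))]
    robot_pos, t = 3.0, 0.0

    while t < 10.0:
        at_risk = [p for p in proxies if not (p.fallen or p.safe)]
        if not at_risk:
            break
        # Re-decide every cycle: chase whichever proxy has the smallest
        # safety margin. Because the margins shift as everyone moves,
        # the robot can flip between targets and save neither.
        target = min(at_risk, key=lambda p: margin(robot_pos, p))
        robot_pos += ROBOT_SPEED * DT if target.pos > robot_pos else -ROBOT_SPEED * DT
        if abs(robot_pos - target.pos) < 0.1:
            target.safe = True   # close enough to push it clear
        for p in proxies:
            p.step()
        t += DT
    return sum(p.safe for p in proxies)

if __name__ == "__main__":
    saves = [run_trial(s) for s in range(33)]
    print("saved both:", sum(s == 2 for s in saves),
          "saved one:", sum(s == 1 for s in saves),
          "saved none:", sum(s == 0 for s in saves))
```

The dithering falls out of the geometry rather than any explicit indecision: while the robot closes on one proxy, the other proxy's margin shrinks faster, so the targets can swap priority mid-chase and the robot reverses course, over and over, until one or both fall.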

Read more of this story at Slashdot.

from Slashdot http://ift.tt/1u2VPRb
