Robot Frets Over Moral Puzzle, Humans Die

We can teach robots to do just about anything, it seems. But can we teach them moral imperatives?

That’s the intriguing question behind a series of experiments by roboticists in Britain, who devised an “ethical trap” based on sci-fi author Isaac Asimov’s famous “Three Laws of Robotics.”

Sci-fi nerds will recall that the first of Asimov’s Three Laws goes like this: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Researcher Alan Winfield, of Bristol Robotics Laboratory in the U.K., devised an experiment to put that first law to the test. The test robot was programmed as an “ethical zombie” with a single mission: to save the life of another robot — standing in for a human — by preventing it from falling into a hole.

The robot acted virtuously enough until it was presented with a new dilemma — two “humans” heading toward the same hole at the same time.

The results were, well, mixed. In a few instances, the test robot was able to figure out a way to save both “humans.” Other times, it could only save one. And in 14 out of 33 trials, the robot fretted so much over its decision that both “humans” fell into the hole. You can check out the video of the trials at New Scientist.
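To get a feel for how dithering can be worse than picking badly, here is a purely illustrative toy simulation (not the Bristol team’s actual code; the rules, probabilities, and names below are invented for this sketch). It models two “humans” walking toward a hole while a rescuer re-picks its target every step, and a rescue only succeeds if the rescuer sticks with the same target two steps in a row.

```python
# Toy sketch only -- not Winfield's consequence engine. It mimics the failure
# mode described above: a rescuer that keeps flipping between two equally
# endangered targets never commits long enough to save either one.
import random
from collections import Counter


def rescue_trial(steps_to_hole: int = 6) -> str:
    """One trial: two 'humans' (A and B) walk toward a hole. A rescue succeeds
    only if the robot stays locked on the same target for two consecutive
    steps (a stand-in for physically intercepting it)."""
    distance = {"A": steps_to_hole, "B": steps_to_hole}
    rescued = set()
    last_target = None

    for _ in range(steps_to_hole):
        at_risk = [h for h in ("A", "B") if h not in rescued and distance[h] > 0]
        if not at_risk:
            break
        # Pick whoever is closest to the hole; ties are broken at random, which
        # is where the "fretting" comes from -- the robot may flip targets.
        target = min(at_risk, key=lambda h: (distance[h], random.random()))
        if target == last_target:
            rescued.add(target)      # stayed committed long enough to save it
        last_target = target
        for h in at_risk:
            distance[h] -= 1         # everyone keeps walking toward the hole

    saved = len(rescued)
    return ("both fell in", "saved one", "saved both")[saved]


if __name__ == "__main__":
    # Many trials give a mix of all three outcomes, echoing the mixed results.
    print(Counter(rescue_trial() for _ in range(1000)))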

The study was designed, in part, to address issues with emerging technologies like self-driving cars, in which a robot may have to weigh the safety of its passengers against that of other motorists or pedestrians.

Results from the study were presented at the 2014 Towards Autonomous Robotic Systems (TAROS 14) meeting in Birmingham, U.K.

For more details, visit news.discovery.com
