Filed under: Artificial Intelligence, Ethical Issues, Self-Driving Cars.

Let’s say you’re driving down Main Street and your brakes give out. As the terror hits, a gaggle of children spills out into the road. Do you A) swerve into Keith’s Frozen Yogurt Emporium, killing yourself, covering your car in toppings, and sparing the kids or B) assume they’re the Children of the Corn and just power through, killing them and saving your own life? Any decent human would choose the former, of course, because even murderous kiddie farmers have rights.

Source: Wired Magazine

Date: March 15th, 2017

Link (article and video): https://www.wired.com/2017/03/make-us-safer-robocars-will-sometimes-kill/

Discussion

1) “Not only will robocars fail to completely eliminate traffic deaths, but on very, very rare occasions, they’ll be choosing who to sacrifice—all to make the roads of tomorrow a far safer place.” What other technologies, or uses of technology, put the machine rather than a human in the position of deciding who lives and who dies?

2) How would you program a car to make these choices about who lives and who dies? One toy way to frame it is sketched below.
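One common framing in discussions of this problem is to treat the choice as cost minimization: the planner scores each feasible emergency maneuver by its expected harm and picks the minimum. The Python sketch below illustrates that framing only; the Maneuver fields, the probabilities, and the equal weighting of all lives are assumptions invented for this example, not how any real autonomous vehicle is actually programmed.

```python
# Hypothetical sketch only: a toy "expected harm" cost for ranking emergency
# maneuvers. Every class, field, probability, and weight here is an assumption
# made up for discussion, not real vehicle logic.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    p_pedestrian_harm: float  # assumed probability this maneuver harms pedestrians
    p_occupant_harm: float    # assumed probability this maneuver harms occupants
    pedestrians_at_risk: int
    occupants_at_risk: int


def expected_harm(m: Maneuver) -> float:
    """Expected number of people harmed, weighting every life equally."""
    return (m.p_pedestrian_harm * m.pedestrians_at_risk
            + m.p_occupant_harm * m.occupants_at_risk)


# The two options from the scenario above, with made-up numbers:
# five children in the road, one occupant in the car.
options = [
    Maneuver("brake and stay in lane", 0.9, 0.1,
             pedestrians_at_risk=5, occupants_at_risk=1),
    Maneuver("swerve into the storefront", 0.05, 0.6,
             pedestrians_at_risk=5, occupants_at_risk=1),
]

best = min(options, key=expected_harm)  # choose the lowest expected harm
print(f"chosen: {best.name} (expected harm = {expected_harm(best):.2f})")
```

Even this toy makes the discussion question concrete: the ethics live in the numbers. Weight occupants more heavily than pedestrians, or change the assumed probabilities, and the “right” maneuver flips.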
