We are not used to the idea of machines making ethical decisions, but the day when they will routinely do this – by themselves – is fast approaching. So how, asks the BBC’s David Edmonds, will we teach them to do the right thing?
Source: BBC Technology
Date: October 17th, 2017
1) “The best way to teach a robot ethics, they believe, is to first programme in certain principles (‘avoid suffering’, ‘promote happiness’), and then have the machine learn from particular scenarios how to apply the principles to new situations.” Do you agree with this approach?
2) Who gets to decide that “avoid suffering” is the correct principle, and how do you define “suffering”?