Courts, schools and other public agencies that make decisions using artificial intelligence should refrain from using “black box” algorithms that aren’t subject to outside scrutiny, a group of prominent AI researchers says.
The concern is that, as algorithms become increasingly responsible for critical decisions affecting our lives, it has become harder to understand and challenge how those decisions, which in some cases have been found to carry racist or sexist biases, are made.
Source: CBC News
Date: October 20th, 2017
1) Did the biases mentioned in the article come from the AI itself or from its programmers?
2) Could you set up a cyber-audit company to provide services to audit AIs for bias?
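On question 2, one concrete service such an audit company could offer is an outcome-level bias check that needs no access to the model's internals, which matters when the system is a "black box." Below is a minimal Python sketch of one such check, the "four-fifths rule" (disparate impact ratio); the group labels and decision log are hypothetical, invented for illustration.

```python
# A minimal sketch of one bias-audit check an auditing service might run:
# the "four-fifths rule" (disparate impact ratio), which compares rates of
# favourable outcomes across demographic groups. The audit only needs the
# model's inputs and outputs, not its internals.

def selection_rates(decisions):
    """Compute the favourable-decision rate per group.

    decisions: list of (group, outcome) pairs, where outcome is
    1 (favourable) or 0 (unfavourable).
    """
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    By convention, a ratio below 0.8 is a red flag (the four-fifths rule).
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of a black-box model's decisions.
audit_log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3/4 approved
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 1/4 approved
ratio = disparate_impact_ratio(audit_log)
print(f"Disparate impact ratio: {ratio:.2f}")
```

This kind of test can be run entirely from outside the system, which is one way outside scrutiny remains possible even when the algorithm itself is proprietary.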