Facebook has begun using artificial intelligence to identify members who may be at risk of killing themselves. The social network has developed algorithms that spot warning signs in users’ posts and in the comments their friends leave in response. After confirmation by Facebook’s human review team, the company contacts those thought to be at risk of self-harm to suggest ways they can seek help. A suicide helpline chief said the move was “not just helpful but critical”.
Source: BBC Technology News
Date: 1 March 2017
1) What are some of the ethical issues surrounding the use of technology in this way?
2) What other types of behaviour do you think Facebook tracks using this sort of technology?