Filed under AI/Artificial Intelligence, Civil Liberties, Ethical issues.

Police in Durham, England are preparing to go live with an artificial intelligence (AI) system designed to help officers decide whether a suspect should be kept in custody. The system, which has been tested by the force, classifies suspects as low, medium or high risk of offending and was trained on five years of offending-history data.

Source: BBC Technology News

Date: May 11th, 2017


1) “Forecasts that a suspect was low risk turned out to be accurate 98% of the time, while forecasts that they were high risk were accurate 88% of the time. This reflects the tool’s built-in predisposition – it is designed to be more likely to classify someone as medium or high risk, in order to err on the side of caution and avoid releasing suspects who may commit a crime.” Discuss the “err on the side of caution” design decision in this particular AI.
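The trade-off in the quote can be pictured as a cost-sensitive decision rule: if missing a genuinely high-risk suspect is treated as far costlier than an unnecessary detention, the cut-offs between risk bands get pushed down, so more people land in the medium and high bands. The sketch below is purely illustrative, with invented scores and thresholds, and is not the actual Durham (HART) model.

```python
# Illustrative sketch only -- invented thresholds, not the real HART model.
# Lower cut-offs mirror the tool's built-in predisposition to label
# suspects medium or high risk rather than risk a wrong "low" label.

def classify(score, cautious=True):
    """Map a reoffending score in [0, 1] to a risk band.

    With cautious=True the band boundaries are lowered, so borderline
    scores are pushed into the medium/high bands.
    """
    low_cut, high_cut = (0.2, 0.5) if cautious else (0.4, 0.7)
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

scores = [0.1, 0.3, 0.45, 0.6, 0.9]
print([classify(s, cautious=True) for s in scores])
# -> ['low', 'medium', 'medium', 'high', 'high']
print([classify(s, cautious=False) for s in scores])
# -> ['low', 'low', 'medium', 'medium', 'high']
```

A score of 0.3 is "low" under neutral thresholds but "medium" under the cautious ones: the same person, classified differently purely because of a design choice about which error is worse.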

2) “The investigation suggested that the algorithm amplified racial biases, including making overly negative forecasts about black versus white suspects.” The opposite argument has also been made: that AI actually reduces biases, including racial bias, because it is a machine. How might biases enter into AI decision-making processes?
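One common entry point for bias is the training data itself: if one group has historically been policed more heavily, its members appear more often in the offending records, and a model trained on those records reproduces the disparity, machine or not. The toy example below uses invented numbers (the group names and rates are hypothetical) to show how a trivially simple "model" inherits a skew that exists only in the historical labels.

```python
# Hypothetical data illustrating bias entering via training records.
# The groups, counts and rates below are invented for illustration.
from collections import defaultdict

# (group, reoffended) pairs from a skewed historical record.
history = ([("A", 1)] * 60 + [("A", 0)] * 40 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def fit_base_rates(records):
    """A trivial 'model': predicted risk = the group's historical rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [reoffended, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

model = fit_base_rates(history)
print(model)  # group A is scored twice as risky, purely from the records
```

The model contains no explicit reference to race or group identity beyond the label column, yet its scores encode the skew of the data it was fitted on; the same mechanism operates, less visibly, when group membership is only correlated with other input features.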
