Posted by & filed under AI/Artificial Intelligence, Machine Learning.

As banks increasingly deploy artificial intelligence tools to make credit decisions, they are having to revisit an unwelcome fact about the practice of lending: Historically, it has been riddled with biases against protected characteristics, such as race, gender, and sexual orientation. Such biases are evident in institutions’ decisions about who gets credit and on what terms. In this context, relying on algorithms to make credit decisions instead of deferring to human judgment seems like an obvious fix. What machines lack in warmth, they surely make up for in objectivity, right?
Sadly, what’s true in theory has not been borne out in practice. Lenders often find that artificial-intelligence-based engines exhibit many of the same biases as humans, because they have been fed a diet of biased credit-decision data drawn from decades of inequities in housing and lending markets. Left unchecked, these systems threaten to perpetuate prejudice in financial decisions and widen the world’s wealth gaps.

Source: Forbes

Date: November 9th, 2022



  1. “In our work with financial services companies, we find the key lies in building AI-driven systems designed to encourage less historic accuracy but greater equity. That means training and testing them not merely on the loans or mortgages issued in the past, but instead on how the money should have been lent in a more equitable world.”
    It is worth making sure that students know how AIs are trained. The usual approach is supervised machine learning: you feed the system many labeled examples, let it make a prediction for each one, and then tell it whether its prediction was correct. Over many passes through the data it adjusts itself until it reliably reproduces the answers it was shown. This is exactly why biased training data is a problem: the model learns to reproduce whatever decisions are in the data, fair or not.
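    That feedback loop can be shown in a toy sketch. The applicant data and the model (a one-layer perceptron) below are invented for illustration; real credit models are far more complex, but the predict-check-correct cycle is the same.

```python
# Invented toy data. Each applicant: (income in $k, existing debt in $k);
# label 1 = loan repaid, 0 = default.
data = [
    ((80, 10), 1), ((30, 25), 0), ((60, 5), 1), ((25, 30), 0),
    ((90, 20), 1), ((40, 35), 0), ((70, 8), 1), ((35, 28), 0),
]

w = [0.0, 0.0]  # learned weights for income and debt
b = 0.0         # learned bias term
lr = 0.01       # learning rate: size of each correction step

def predict(x):
    # The model "does its work": combine the inputs into a yes/no answer.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training: show each example, check whether the model got the answer
# correct, and nudge the weights whenever it was wrong (perceptron update).
for _ in range(20):
    for x, y in data:
        error = y - predict(x)      # 0 if correct, +1 or -1 if wrong
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

    The key point for the bias discussion: the model only learns to match the labels it is shown, so if historical labels encode discrimination, the trained weights will too.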
  2. “by using AI, one lender discovered that, historically, women would need to earn 30% more than men on average for equivalent-sized loans to be approved. It used AI to retroactively balance the data that went into developing and testing its AI-driven credit decision model by shifting the female distribution, moving the proportion of loans previously made to women to be closer to the same amount as for men with an equivalent risk profile, while retaining the relative ranking. As a result of the fairer representation of how loan decisions should have been made, the algorithm developed was able to approve loans more in line with how the bank wished to extend credit more equitably in the future.”
    Use this example to explain how biased training data can be rebalanced before a machine-learning model is built, so the model learns how credit should have been extended rather than how it actually was.
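    The correction described in the quote can be sketched as a relabeling step. Everything below (names, risk scores, group sizes, the `rebalance` helper) is invented to illustrate the idea, not the lender's actual method: approve women at the same rate as men with equivalent risk, lowest-risk first, so each woman's relative ranking is retained.

```python
# Invented historical records: (name, gender, risk_score, approved);
# lower risk_score = safer borrower. Note the bias: Bea and Cam were
# denied despite being lower-risk than some approved male applicants.
history = [
    ("Ana", "F", 0.10, 1), ("Bea", "F", 0.20, 0),
    ("Cam", "F", 0.30, 0), ("Dee", "F", 0.60, 0),
    ("Eli", "M", 0.15, 1), ("Gus", "M", 0.25, 1),
    ("Hal", "M", 0.35, 1), ("Ike", "M", 0.65, 0),
]

def rebalance(records):
    """Relabel women's historical outcomes so their approval rate matches
    men's, approving the lowest-risk women first (ranking preserved)."""
    males = [r for r in records if r[1] == "M"]
    females = sorted((r for r in records if r[1] == "F"), key=lambda r: r[2])
    male_rate = sum(r[3] for r in males) / len(males)
    target = round(male_rate * len(females))  # approvals women "should" have
    fixed = [(name, g, score, 1 if i < target else 0)
             for i, (name, g, score, _) in enumerate(females)]
    return fixed + males  # men's labels are left unchanged

fair = rebalance(history)
f_rate = sum(r[3] for r in fair if r[1] == "F") / 4
m_rate = sum(r[3] for r in fair if r[1] == "M") / 4
print(f"female approval rate: {f_rate}, male approval rate: {m_rate}")
```

    The model is then trained and tested on the relabeled history rather than the raw history, so it learns the bank's intended, more equitable lending policy instead of the biased past.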
