
AI algorithms are increasingly responsible for a variety of today’s interactions and innovations—from personalized product recommendations and customer service experiences to banks’ lending decisions and even police response.
But for all the benefits they offer, AI algorithms come with big risks if they aren’t effectively monitored and evaluated for resilience, fairness, explainability and integrity. The study referenced above shows that a growing number of business leaders want the government to regulate AI so that organizations can invest in the right technology and business processes with confidence. For the necessary support and oversight, it’s wise to consider external assessments from a service provider experienced in delivering them.
Source: Forbes
Date: November 2nd, 2022
Discussion
- “AI algorithms come with big risks if they aren’t effectively monitored and evaluated for resilience, fairness, explainability and integrity.”
This is an excellent list of potential issues with AI algorithms, and it would be worth discussing each of the four with the class.
Resilience = the ability to keep working correctly in different situations. What happens when bad data goes in?
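To make the "bad data in" question concrete, here is a minimal sketch of a resilient wrapper around a prediction function. The scoring formula and function names are made up for illustration, not a real system: the point is that malformed or implausible input gets a safe fallback instead of a silent, garbage prediction.

```python
# Toy credit-score model (formula invented for illustration only).
def predict_credit_score(income):
    return min(850, 300 + income / 200)

def resilient_predict(raw_income, default=None):
    """Validate input before predicting; return a safe default on bad data."""
    try:
        income = float(raw_income)
    except (TypeError, ValueError):
        return default            # bad data in -> safe fallback out
    if income < 0 or income > 10_000_000:
        return default            # implausible value: refuse to guess
    return predict_credit_score(income)

print(resilient_predict(60_000))          # 600.0
print(resilient_predict("not a number"))  # None
```

The design choice worth discussing: a resilient system knows which inputs it should refuse to answer, rather than producing a confident-looking number for any input at all.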
Fairness = shows no bias. I usually cover the example of Google, which used machine-learning AI to hire new software engineers. The machine learning looked at who was a successful software engineer at Google and concluded (correctly, based on the historical data it was fed) that you needed to have a software engineering degree from Stanford and be a white male. Instead of reducing bias in hiring, it actually increased it.
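The mechanism behind that story can be shown in a few lines. This is a toy illustration with made-up data (not the real hiring system): a naive model "learns" who to hire purely from past hiring decisions, so it reproduces whatever bias the historical data contains.

```python
from collections import Counter

# Hypothetical historical hires, each tagged with a demographic profile.
past_hires = ["profile_A"] * 90 + ["profile_B"] * 10  # 90% one profile

def train_biased_model(history):
    """'Learn' the most common profile among past hires."""
    majority_profile, _ = Counter(history).most_common(1)[0]
    # The model simply favors candidates matching the historical majority.
    return lambda candidate: candidate == majority_profile

model = train_biased_model(past_hires)

print(model("profile_A"))  # True  -- matches the historical majority
print(model("profile_B"))  # False -- equally qualified, rejected anyway
```

Nothing in the training step is malicious; the bias comes entirely from treating past decisions as ground truth for future ones.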
Explainability = we know how the AI came to its conclusion(s).
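One way to make "we know how the AI came to its conclusion" tangible is a model that can itemize its own reasoning. This is a minimal sketch with invented feature names and weights: a linear scoring model whose per-feature contributions can be listed, so we can say exactly why it reached its score.

```python
# Hypothetical weights for a resume-scoring model (illustrative only).
weights = {"years_experience": 2.0, "referral": 1.5, "typos_in_resume": -3.0}

def score_with_explanation(candidate):
    """Return the total score plus each feature's contribution to it."""
    contributions = {f: weights[f] * candidate.get(f, 0) for f in weights}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"years_experience": 3, "referral": 1, "typos_in_resume": 2}
)
print(total)  # 2.0*3 + 1.5*1 - 3.0*2 = 1.5
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {contribution:+.1f}")
```

Contrast this with a deep neural network, where no such itemized breakdown falls out naturally; that gap is what the explainability debate is about.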
Integrity = making sure you can validate that the algorithm is actually doing what you claim it does. Discussion question: What role can an MIS major play as an “Algorithm assessor”?