Filed under AI/Artificial Intelligence, Machine Learning.

[Image: A black woman looks directly into the camera lens, with overlaid graphics indicating "scanning".]

In September last year, a university employee noticed that when he posted two photos – one of himself and one of a colleague – Twitter’s preview consistently showed the white man over the black man, no matter which photo was added to the tweet first.

Other users discovered the pattern held true for images of former US President Barack Obama and Senator Mitch McConnell, and for stock images of businessmen of different racial backgrounds. When both were in the same image, the preview crop appeared to favour white faces, hiding the black faces until users clicked through to the full photo.

Source: BBC Technology News

Date: May 26th, 2021

Link: https://www.bbc.com/news/technology-57192898

Discussion

  1. Most AI racial bias seems to stem from how the algorithms are trained, using a technique called machine learning. In machine learning you feed the machine (the AI) a set of known examples – in this case, images of people’s faces – and let the AI make a decision (here, “this is a face to show”). You then score the AI against the labels you already have for those images: if it gets an example right you say “correct”, and if not you say “incorrect”. The AI “learns” from this feedback which decisions were right and which were wrong, and adjusts (or should adjust) accordingly. What could be going wrong here with Twitter’s AI?
  2. How do you design AI better?
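The correct/incorrect scoring loop in question 1 can be sketched in a few lines. The example below is a deliberately tiny, hypothetical model (nothing like Twitter's actual cropping algorithm): each image crop is reduced to a single made-up feature value, "learning" is just choosing the decision threshold that makes the fewest training mistakes, and the numbers are invented to show one way bias can creep in. When one group of faces is heavily over-represented in the training data, the threshold with the lowest overall error is one that detects the majority group and sacrifices the minority group.

```python
# Toy sketch (all values hypothetical): a "saliency" model reduced to one
# learned threshold t, predicting "face to show" when a feature x exceeds t.
# The imbalance (90 vs 10 face examples) stands in for a skewed dataset.

# (feature value, label) pairs: 1 = face to show, 0 = background crop.
train = ([(0.8, 1)] * 90 +   # faces from the over-represented group
         [(0.3, 1)] * 10 +   # faces from the under-represented group
         [(0.4, 0)] * 100)   # background crops, not faces

def errors(t):
    """Count training mistakes for threshold t (predict face iff x > t)."""
    return sum((x > t) != bool(label) for x, label in train)

# "Learning" here is just picking the candidate threshold with the fewest
# mistakes -- the correct/incorrect scoring described in the question.
best_t = min([0.2, 0.35, 0.5], key=errors)

print(best_t)         # 0.5: detects the majority group, misses the minority
print(errors(0.2))    # 100: letting all background through costs too much
print(errors(best_t)) # 10: misclassifying the minority group is "cheapest"
```

The point of the sketch is that nothing in the loop is explicitly racist: the model simply minimises total error, and with imbalanced training data the cheapest errors to make are the ones affecting the under-represented group.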
