Rarely a week goes by without Toronto tech worker Karthik Ramakrishnan seeing another example of artificial intelligence gone wrong.
Systems built with the technology have included a French medical chatbot that suggested someone commit suicide, a Microsoft chatbot that tweeted a 9/11 conspiracy theory and an Amazon.com Inc. recruiting tool that downgraded resumes with references to women.
But Ramakrishnan is convinced this pattern can be eased and many of the problems stemming from AI — machine-based technologies that learn from data — can be prevented.
That’s why he, Dan Adamson and Rahm Hafiz co-founded Armilla AI, which launched Thursday with $1.5 million in financial backing from investors including AI godfather Yoshua Bengio and Two Small Fish Ventures, a fund run by Wattpad’s Alan and Eva Lau.
Source: Toronto Daily Star
Date: October 22nd, 2021
Discussion
- What is it about AI that means it can get to a place where “the technology ha[s] led to a French medical chatbot suggesting someone commit suicide”?
- “a situation where an AI system for detecting cancerous skin lesions was less likely to pick up cancers in dark-skinned people because it had been developed from a database comprised of mostly light-skinned populations.” How can this be fixed?