Filed under AI/Artificial Intelligence, Augmented Reality.


A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements. The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds. Just 24 hours after the artificial intelligence, named Tay, was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.
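To see why "learns from conversations" is the crux of the story, consider a deliberately naive sketch of that idea. The code below is not Tay's actual architecture (Microsoft has not published it); it is a hypothetical toy bot that simply stores what users say and replays it, which illustrates how unfiltered learning from user input lets coordinated users steer a bot's output.

```python
import random

class NaiveChatbot:
    """A toy bot whose entire repertoire comes from user messages.
    This is an illustrative assumption, not how Tay actually worked."""

    def __init__(self):
        self.learned_phrases = ["Hello!"]

    def respond(self, user_message: str) -> str:
        # Every incoming message is added to the repertoire, unfiltered.
        # Coordinated users can therefore teach the bot anything,
        # including abusive or inflammatory language.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
print(bot.respond("Teach me something nice"))
# After enough hostile input, most of the bot's possible replies are hostile.
```

Even with a far more sophisticated learning system, the same dynamic applies: if the training signal is whatever the public types at the bot, the public controls what the bot learns.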

Source: BBC Technology News

Date: March 25th, 2016

Link: http://www.bbc.com/news/technology-35890188

Discussion

1) The article describes how Microsoft released an artificial intelligence bot onto Twitter, one trained on millions of conversations, and how within 24 hours it had become racist. How could Microsoft have got this so wrong after presumably thousands of hours of development?

2) What do you make of the fact that Microsoft is stepping in to edit some of the AI chatbot's comments?
