The instance of bias in machine learning I found is Microsoft's millennial AI chatbot, "Tay".
Tay was supposed to be a chatbot that mimicked the conversational style of a teenage girl. It learned from people's tweets on Twitter and responded to them. However, after just one day it started publishing offensive, racist comments.
"Garbage in, garbage out." Microsoft probably just underestimated the volume of hateful comments on Twitter and the influence they would have on Tay's behavior.
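The "garbage in, garbage out" failure mode can be sketched with a toy example (this is a hypothetical, drastically simplified learner, not how Tay actually worked): a bot that learns replies from user messages with no filtering will quickly be dominated by coordinated hostile input.

```python
from collections import Counter

class ToyChatbot:
    """Hypothetical, highly simplified learner: it replies with whatever
    phrase it has seen most often, with no content filtering at all."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, message):
        # Every incoming message becomes training data, unvetted.
        self.seen[message] += 1

    def reply(self):
        if not self.seen:
            return "hello!"
        # The most frequent learned phrase wins.
        return self.seen.most_common(1)[0][0]

bot = ToyChatbot()
bot.learn("have a nice day")
# A small group repeating a hostile message quickly outweighs normal input.
for _ in range(10):
    bot.learn("<hostile message>")

print(bot.reply())  # the repeated hostile input now dominates the bot's output
```

Even this crude sketch shows why unfiltered learning from an open platform is risky: the loudest (or most coordinated) voices, not the most representative ones, shape the model's behavior.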