Artificial intelligence systems are developed by humans to take humans out of the decision-making process.
So what happens when those systems make decisions that humans find to be objectionable?
Understanding how that happens is at the heart of ethical – or responsible – AI, and of Rumman Chowdhury’s work.
She talked about the different ways bias can creep into AI systems. There is data bias, which arises when the data isn’t representative of the population it is meant to represent. There is response or reporting bias, which arises when respondents don’t provide honest answers. And there is design bias, which emerges when the model isn’t actually structured to do what it’s supposed to do.
That could result in autonomous systems that, for example, favor one racial group over another. Is the AI system racist or biased? Or just poorly constructed?
“People will talk about a racist algorithm or a sexist algorithm, but a human created that,” she said. “It’s really just a bunch of technology. It’s a code.”
Chowdhury introduced fellows to the terminology of the ethics-in-AI world, including “the filter bubble,” which has emerged over the last decade as people increasingly live only in the world created by AI algorithms and their own preferences. In some cases, the filter bubble is harmless, such as when Netflix suggests movies you might like based on your previous viewing. But the bubble can be harmful to society, such as when people see only news and views that conform to their existing biases.
“We have been in the filter bubble for 10 years,” she said. “Human beings are lazy. We like being right. And we love confirmation bias.”
This program is funded by IBM. NPF is solely responsible for the content.