Artificial intelligence has begun to infiltrate our daily lives, influencing our experience as consumers, determining what we view on social media and impacting the legal system. But how do we know if these data-driven decisions are fair?
Solon Barocas, assistant professor of computing and information science, has spent years researching bias and discrimination in machine learning and is sharing that knowledge in the course Ethics and Policy in Data Science.
The class focuses on issues stemming from algorithms making important decisions in people’s lives – who gets hired, who gets into college and even who gets released on parole.
“There’s the possibility that even though these decisions are data driven, they are discriminatory in some way,” said Barocas. “While many of us were excited to push high-stakes decision-making onto a more rigorous and reliable empirical foundation, we have quickly learned that data themselves can be the mechanism by which bias is relearned by the algorithms.”
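The mechanism Barocas describes can be illustrated with a minimal sketch using entirely hypothetical hiring data: a model trained on biased historical decisions reproduces that bias, even when the protected attribute itself is hidden, because a correlated feature (here, an invented "school X" indicator) acts as a proxy. The data, the feature names, and the toy majority-vote "model" are all assumptions for illustration, not anything from the course or conference.

```python
# Toy illustration (hypothetical data): a model trained on biased historical
# hiring decisions reproduces that bias via a proxy feature, without ever
# seeing the protected attribute directly.
from collections import Counter

# Each applicant: (years_experience, attended_school_X, hired_historically).
# In this invented history, applicants from school X (correlated with a
# protected group) were rejected regardless of experience.
history = [
    (5, False, True), (3, False, True), (2, False, True),
    (5, True, False), (3, True, False), (2, True, False),
]

def train_majority(rows):
    """Learn the majority historical outcome for each school-X value."""
    votes = {}
    for exp, school_x, hired in rows:
        votes.setdefault(school_x, []).append(hired)
    return {key: Counter(v).most_common(1)[0][0] for key, v in votes.items()}

model = train_majority(history)

# Two equally qualified applicants: the learned rule still rejects
# the school-X applicant, because the training data encoded the bias.
print(model[False])  # True  -> hired
print(model[True])   # False -> rejected
```

Nothing in the training step refers to a protected group, yet the historical pattern alone is enough for the learned rule to carry the discrimination forward; this is the sense in which "data themselves can be the mechanism by which bias is relearned."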
As the use of algorithms grows, so do concerns about privacy.
“Machine learning makes it possible to infer deeply personal information about you from unlikely sources – determining sensitive health conditions from seemingly benign social media posts, for example. The policy world is unprepared to deal with the possibilities arising from these computational techniques,” said Barocas.
Barocas is the lead organizer behind the first-ever conference on fairness, accountability and transparency in algorithmic systems. More than 450 attendees are expected in New York City in mid-February to explore research addressing the dangers of inadvertently encoding bias into automated decisions. The conference builds on several years of workshops on the topics of ethics and fairness in machine learning.
Leslie Morris is director of communications for Computing and Information Science.