Tip Sheets

‘Fairness’ can have multiple meanings in audits of AI in insurance

Media Contact

Kaitlyn Serrao

The public comment period on New York’s proposed AI guidance, aimed at heading off discrimination in insurance, wraps up this week. The guidance builds on AI policy put forward by Governor Kathy Hochul.

Allison Koenecke, assistant professor of information science at Cornell University, studies fairness in algorithmic systems.

Koenecke says:

“Implementing AI audits is a step in the right direction to highlight and ameliorate potential biases in machine learning-based systems used in domains from hiring to criminal justice to insurance. But ‘fairness’ has multiple meanings, and not all of them can be simultaneously satisfied mathematically.

“Audits can be subject to ‘null compliance’: for example, if insurers can claim that their systems are not ‘artificial intelligence systems,’ they may not be beholden to the expected audit. As such, fairness audits will only have teeth if reasonable domain-specific metrics are reported transparently with a nuanced understanding of the trade-offs between different definitions of fairness, and there are disincentives against audit null compliance.”
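The mathematical trade-off Koenecke describes can be illustrated with a small sketch (hypothetical data, not from the tip sheet): when two groups have different base rates of the true outcome, even a perfectly accurate classifier satisfies equalized odds (matching true- and false-positive rates across groups) while violating demographic parity (equal positive-prediction rates).

```python
def rates(y_true, y_pred):
    """Return (positive-prediction rate, TPR, FPR) for one group."""
    n = len(y_true)
    pos = sum(y_true)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pred_rate = sum(y_pred) / n
    tpr = tp / pos if pos else 0.0
    fpr = fp / (n - pos) if n - pos else 0.0
    return pred_rate, tpr, fpr

# Hypothetical groups with different base rates of the true outcome.
group_a_true = [1, 1, 1, 0, 0]   # base rate 0.6
group_b_true = [1, 0, 0, 0, 0]   # base rate 0.2

# A perfectly accurate classifier predicts the truth exactly.
a = rates(group_a_true, group_a_true)
b = rates(group_b_true, group_b_true)

print(a)  # (0.6, 1.0, 0.0)
print(b)  # (0.2, 1.0, 0.0)
# Equalized odds holds (TPR = 1.0, FPR = 0.0 for both groups),
# but demographic parity fails (positive rates 0.6 vs. 0.2).
```

This is one concrete instance of why an audit must state which fairness definition it measures: enforcing parity of prediction rates here would require making deliberately inaccurate predictions for one group.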

Cornell University has television, ISDN and dedicated Skype/Google+ Hangout studios available for media interviews.