New analysis from Cornell University researchers shows few companies have disclosed how algorithms influence their hiring decisions since New York City began enforcing the nation’s first-of-its-kind AI transparency law six months ago.
Lucas Wright, a PhD candidate, co-authored the study. With the help of 155 undergraduate investigators, the researchers found only 18 bias audits and 11 transparency notices among the nearly 400 employers analyzed.
Wright says:
“We found that, because the law gives discretion to employers to determine whether the law applies to their use of automated employment decision tools (AEDTs), we can only make claims about compliance with the law, not non-compliance. While the law has created a market for independent algorithm auditors and led to some transparency, it has actually created incentives for employers to avoid auditing.
“From the outside, we cannot tell the difference between an employer that has not posted an audit because they are out of compliance with the law and an employer that has not posted an audit because they simply do not use AEDTs in the way the law describes. An employer could conduct an audit and, if the results fall below the cutoff used by the federal government for enforcing anti-discrimination law, never post it, and the public, regulators and job seekers would never know.
“We asked our student investigators to provide us with feedback on the experience of searching for this information in order to understand what a potential job seeker might have to go through in order to find this information. We found that audits and transparency notices are typically hard to find and that, when found, they are often very difficult for a non-expert to understand.
“These findings are important because lawmakers around the world are actively exploring the best ways to create accountability in the fast-moving AI industry. Audits are one tool in their toolkit, and this law was the first to attempt to mandate them. We think this law was an important first step that reveals how future AI policies can close loopholes in accountability.”