Race-blind college admissions harm diversity without improving quality
By Patricia Waldron
Critics of affirmative action in higher education have argued that the policy deprives more qualified students of a spot at a university or college. A new study by Cornell researchers finds that ignoring race leads to an admitted class that is much less diverse, but with similar academic credentials.
The Cornell team used data from an unnamed university to simulate the impacts of the 2023 Supreme Court ruling in Students for Fair Admissions (SFFA) v. Harvard, which prohibits colleges and universities from considering race in admissions. They found that the number of top-ranked applicants who identified as underrepresented minorities (URM) dropped by 62% when race was removed as a factor from the school's applicant-ranking algorithm. At the same time, the test scores of top-ranked applicants did not meaningfully increase.
“We see no evidence that would support the narrative that Black and Hispanic applicants are admitted even though there are more qualified applicants in the pool,” said senior author René Kizilcec, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science.
Jinsook Lee and Emma Harvey, both doctoral students in the field of information science and co-first authors, presented the study, “Ending Affirmative Action Harms Diversity Without Improving Academic Merit,” at the ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’24), Oct. 31 in San Luis Potosí, Mexico.
With standardized tests becoming less accessible during the pandemic, selective universities have increasingly turned to artificial intelligence-based approaches to help prioritize which of the tens of thousands of applications to review first. The Cornell research team realized these algorithms presented an opportunity to test out potential changes in admissions policies before they are implemented.
In the new study, the researchers started by building an AI-based ranking algorithm for the university, which they trained on past admissions decisions to predict the likelihood of a candidate's acceptance based on their Common Application. Then they retrained the algorithm without features related to race and rescored the applicants to see how the recommendations changed.
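In outline, that retrain-and-rescore experiment can be expressed in a few lines of Python. The sketch below is illustrative only: the logistic-regression model, the feature names (`race_ethnicity`, `urm_flag`, `admitted`), the file `past_applications.csv` and the top-pool cutoff are all hypothetical stand-ins, since the study's actual model, features and data are not public.

```python
# Minimal sketch of the retrain-and-rescore experiment described above.
# Model choice, feature names, and data file are illustrative assumptions;
# the study's actual pipeline is not public.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.read_csv("past_applications.csv")  # hypothetical file
RACE_FEATURES = ["race_ethnicity", "urm_flag"]       # assumed feature names
OTHER_FEATURES = [c for c in applications.columns
                  if c not in RACE_FEATURES + ["admitted"]]

X_full = pd.get_dummies(applications[OTHER_FEATURES + RACE_FEATURES])
X_blind = pd.get_dummies(applications[OTHER_FEATURES])  # race removed
y = applications["admitted"]

def rank_scores(X, y):
    """Train on past decisions, then score applicants by predicted
    admission probability (higher score = reviewed earlier)."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.predict_proba(X)[:, 1]

score_race_aware = rank_scores(X_full, y)
score_race_blind = rank_scores(X_blind, y)

# Compare the demographic makeup of each model's top-ranked pool.
# (A real pipeline would score a new cycle's applicants; here, for
# brevity, we rescore the training pool. Assumes a default index.)
top_k = int(0.10 * len(applications))  # assumed cutoff
for name, scores in [("race-aware", score_race_aware),
                     ("race-blind", score_race_blind)]:
    top = applications.loc[pd.Series(scores).nlargest(top_k).index]
    print(name, "top-pool URM share:", top["urm_flag"].mean())
```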
“There’s a huge drop in the URM students when you look at the top-ranked pool of applicants,” Lee said. In the original algorithm, 53% of the top group consisted of URM students, which is similar to the composition of the admitted class before the SFFA ruling. After they removed race, the top-ranked group had only 20% URM students.
Taking race out of the equation did result in a tiny increase in the average standardized test scores among the top-ranked students. But the change was negligible – equivalent to the difference between scoring a 1480 and a 1490 on the SAT.
Additional analysis showed that which qualified students ended up in the top-ranked pool under the original algorithm was somewhat arbitrary: because there were so many excellent applicants, the ranking changed substantially when the algorithm was trained on different random subsets of the data. The rankings became even more arbitrary when race was removed from consideration.
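One way to quantify that arbitrariness is a bootstrap-style stability check: retrain the ranking model on random subsets of the past decisions and measure how much the top-ranked pool changes between retrains. The sketch below (reusing the hypothetical `X_full`, `X_blind`, and `top_k` from the earlier snippet) assumes a Jaccard-overlap metric, an 80% subset fraction, and 20 trials; these are illustrative choices, not the paper's exact protocol.

```python
# Sketch of a ranking-stability check: retrain on random subsets of the
# past decisions and measure overlap between the resulting top-k pools.
# The overlap metric, subset fraction, and trial count are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def top_pool(X_train, y_train, X_score, k):
    """Indices of the k highest-scored applicants under one retrain."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores = model.predict_proba(X_score)[:, 1]
    return set(np.argsort(scores)[-k:])

def ranking_stability(X, y, k, n_trials=20, frac=0.8, seed=0):
    """Average pairwise Jaccard overlap of top-k pools across retrains;
    1.0 means identical pools, lower values mean more arbitrary rankings."""
    rng = np.random.default_rng(seed)
    pools = []
    for _ in range(n_trials):
        idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
        pools.append(top_pool(X[idx], y[idx], X, k))
    overlaps = [len(a & b) / len(a | b)
                for i, a in enumerate(pools) for b in pools[i + 1:]]
    return float(np.mean(overlaps))

# e.g., compare stability with and without the race features:
# ranking_stability(X_full.values, y.values, k=top_k)
# ranking_stability(X_blind.values, y.values, k=top_k)
```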
When the study university announced the demographics of the fall 2024 incoming class, the team's predictions largely held up. "When we saw the actual numbers, our approach had done a pretty good job of predicting the decline in URM students," said Harvey.
She noted, however, that not all schools saw a drop in diversity following the SFFA ruling; a few even saw increases. "Looking at how different schools responded to the SFFA decision will be very interesting for future research – to see why so many schools had such different outcomes," she said.
Kizilcec expects that the use of AI to support admissions staff in reviewing applications will continue to grow at colleges and universities. “This work is critical to make sure that AI is used responsibly in admissions,” he said.
Nikhil Garg, assistant professor of operations research and information engineering (ORIE) at Cornell Tech and the Jacobs Technion-Cornell Institute, and Thorsten Joachims, the Jacob Gould Schurman Professor of computer science and information science, are co-authors of the study.
The researchers received support from the National Science Foundation, an Amazon Research Award, the Graduate Fellowships for STEM Diversity program, the Urban Tech Hub at Cornell Tech and a seed grant from the Cornell Center for Social Sciences.
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.