ITHACA, N.Y. – A new study by Cornell University researchers finds that ignoring race in admissions leads to an admitted class that is much less diverse but has similar academic credentials.
The team used data from an unnamed university to simulate the impacts of the 2023 Supreme Court ruling in Students for Fair Admissions (SFFA) v. Harvard, which prohibits colleges and universities from considering race in admissions. They found that the number of top-ranked applicants who identified as underrepresented minorities (URM) dropped by 62% when removing race as a factor from the school's applicant-ranking algorithm. At the same time, the test scores of top-ranked applicants did not meaningfully increase.
"We see no evidence that would support the narrative that Black and Hispanic applicants are admitted even though there are more qualified applicants in the pool," said senior author René Kizilcec , associate professor of information science.
Jinsook Lee and Emma Harvey, both doctoral students in the field of information science and co-first authors, presented the study, "Ending Affirmative Action Harms Diversity Without Improving Academic Merit," at the ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO '24).
In the new study, the researchers started by building an AI-based ranking algorithm for the university, which they trained on past admissions decisions to predict the likelihood of a candidate's acceptance based on their Common Application. Then they retrained the algorithm without features related to race and rescored the applicants to see how the recommendations changed.
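In broad strokes, that procedure amounts to training a classifier on labeled past decisions, then retraining it with race-related features dropped and comparing the two top-ranked pools. Below is a minimal sketch of the idea; the column names, data file, model choice (gradient boosting) and top-k cutoff are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch of the race-feature ablation described above.
# Column names, file name, model, and k are hypothetical; categorical
# columns are assumed to be numerically encoded already. For simplicity
# the model is trained and scored on one pool, whereas the study
# trained on past admissions cycles.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def rank_applicants(df, feature_cols, k=1000):
    """Train on past decisions, score the pool, and return the top k."""
    model = GradientBoostingClassifier()
    model.fit(df[feature_cols], df["admitted"])
    scores = model.predict_proba(df[feature_cols])[:, 1]
    return df.assign(score=scores).nlargest(k, "score")

applications = pd.read_csv("applications.csv")               # hypothetical data
all_features = ["gpa", "test_score", "essay_score", "race"]  # illustrative
race_blind = [c for c in all_features if c != "race"]

top_aware = rank_applicants(applications, all_features)
top_blind = rank_applicants(applications, race_blind)

# Compare the URM share of the two top-ranked pools (assumes a
# boolean "urm" column indicating underrepresented-minority status).
print("URM share, race-aware:", top_aware["urm"].mean())
print("URM share, race-blind:", top_blind["urm"].mean())
```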
"There's a huge drop in the URM students when you look at the top-ranked pool of applicants," Lee said. In the original algorithm, 53% of the top group consisted of URM students, which is similar to the composition of the admitted class before the SFFA ruling. After they removed race, the top-ranked group had only 20% URM students.
Taking race out of the equation did produce a tiny increase in the average standardized test scores of the top-ranked students, but the change was negligible – equivalent to the difference between scoring a 1480 and a 1490 on the SAT.
Additional analysis showed that the subset of qualified students in the top-ranked pool under the original algorithm was somewhat arbitrary, because there were so many excellent applicants – the ranking changed substantially when the algorithm was trained with different random subsets of the data. But the rankings became even more arbitrary when race was removed from consideration.
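One natural way to quantify that arbitrariness, consistent with the retraining described above, is to refit the model on random subsets of the data and measure how much the top-ranked pool changes between runs. The sketch below reuses the `applications` data and column lists from the earlier sketch; the Jaccard overlap metric and the 80% subset fraction are assumptions for illustration, not the paper's exact procedure.

```python
# A hedged sketch of a ranking-stability check: retrain on random
# subsets of the data and measure the overlap of the resulting top-k
# pools. Metric and subset fraction are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def topk_stability(df, feature_cols, k=1000, runs=10, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    top_sets = []
    for _ in range(runs):
        # Train on a random 80% subset, then score the full pool.
        train = df.sample(frac=frac, random_state=int(rng.integers(2**31 - 1)))
        model = GradientBoostingClassifier()
        model.fit(train[feature_cols], train["admitted"])
        scores = model.predict_proba(df[feature_cols])[:, 1]
        top_sets.append(set(df.assign(score=scores).nlargest(k, "score").index))
    # Mean pairwise Jaccard overlap: 1.0 means the top-k pool is stable;
    # lower values mean it depends heavily on which data the model saw.
    overlaps = [len(a & b) / len(a | b)
                for i, a in enumerate(top_sets) for b in top_sets[i + 1:]]
    return float(np.mean(overlaps))
```

Under this framing, the finding above corresponds to the overlap score being low for the full feature set and dropping further when race-related features are excluded.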
The researchers received support from the National Science Foundation, an Amazon Research Award, the Graduate Fellowships for STEM Diversity, the Urban Tech Hub at Cornell Tech and a seed grant from the Cornell Center for Social Sciences.