Carlotta Rigotti and Eduard Fosch-Villaronga have published a new article that offers an insightful and critical literature review on fairness and AI in the labor market as part of the BIAS project.
The growing adoption of AI technologies in hiring to enhance human resources efficiency raises questions about the implications of algorithmic decision-making in employment, especially for job applicants, including those at higher risk of social discrimination. Alongside concepts such as transparency and accountability, fairness has become central to debates on AI in recruitment because these systems can reproduce bias and discrimination that disproportionately affect certain vulnerable groups. However, the ideals and ambitions of fairness may mean different things to different stakeholders.
Fairness, AI & Recruitment
To address this gap, Carlotta Rigotti and Eduard Fosch-Villaronga worked on a European endeavor exploring the intersections between 'Fairness, AI & Recruitment,' and published the result in the prestigious Computer Law & Security Review journal. This piece, part of the HE BIAS project, provides a critical literature review on the intersection of fairness and AI in the labor market.
Conceptualizing fairness is critical because it can provide a clear benchmark for evaluating and mitigating biases, ensuring that AI systems do not perpetuate existing imbalances and instead promote, in this case, equitable opportunities for all candidates in the job market.
Scoping literature review
To that end, Carlotta and Eduard conducted a scoping literature review on fairness in AI applications for recruitment and selection purposes, with special emphasis on its definition, categorization, and practical implementation. They began by explaining how AI applications are increasingly used in the hiring process, especially to increase the efficiency of HR teams. They then turned to the limitations of this technological innovation, which carries a high risk of privacy violations and social discrimination.
Against this backdrop, Carlotta and Eduard focused on defining and operationalizing fairness in AI applications for recruitment and selection purposes through cross-disciplinary lenses. Although the applicable legal frameworks and some existing research address the issue only piecemeal, they observe and welcome the emergence of cross-disciplinary efforts aimed at tackling this multifaceted challenge.
They conclude the article with some brief recommendations to guide and shape future research and action on the fairness of AI applications in the hiring process for the better.
Link to the publication
The publication is open access and available here.
eLaw and the BIAS Project
This work is an outcome of Carlotta and Eduard's desk research within the HE BIAS project, particularly within WP2 focusing on stakeholder involvement, needs assessment, and co-creation. If you are interested in exploring the preliminary research that informed this article and supports participatory project activities aimed at enhancing understanding of fairness and diversity in AI applications within the labor market, you can access the report.
Stay tuned!
If you would like to stay updated on the BIAS project activities or participate in them in various capacities, we invite you to join our national communities of stakeholders from diverse ecosystems, including HR officers, AI developers, scholars, policymakers, trade union representatives, workers, and civil society organizations. To join our national labs, please click on this link.