BIAS Project Joins Law And Language School at Pavia

Carlotta Rigotti, postdoctoral researcher at eLaw, and Eduard Fosch-Villaronga, associate professor at eLaw, delivered a lecture on AI and non-discrimination, engaging students with the Debiaser demo.

The eLaw Center for Law and Digital Technologies at Leiden University recently took part in the summer school on 'Law and Language,' co-organised by the University of Pavia, Würzburg University, Poitiers University, and Pázmány Péter Catholic University. Held in Pavia (Italy) from 16-20 September, the summer school brought together experts and students to explore the intersections of law and language. Topics ranged from developing English legal skills and understanding language rules within EU institutions to examining AI as a new 'language' that the law must adapt to and regulate.

Representing the BIAS project, which aims to mitigate diversity biases of AI systems in the labor market, Carlotta Rigotti and Eduard Fosch-Villaronga led a two-day lecture on AI and non-discrimination during the summer school.

Eduard Fosch-Villaronga and Carlotta Rigotti

The second day focused on the legal and ethical requirements laid out by the High-Level Expert Group on AI (AI HLEG) for achieving trustworthy AI. After discussing their potentially diverse interpretations, Carlotta and Eduard introduced the BIAS project and its Debiaser demo, inviting students to actively engage with this tool designed to help users rank job applicants for a given vacancy and justify that ranking. In this interactive exercise, students were divided into groups and asked to role-play as human resources (HR) personnel in a fictitious hiring scenario. They ranked candidates both manually and with the help of the Debiaser demo, allowing them to reflect on diversity biases and on ethical considerations such as transparency and autonomy in both approaches. The lecture concluded with a general discussion on the opportunities and limitations of AI in the labor market.

The BIAS Project: Get involved!

The BIAS project aims to identify and mitigate diversity biases (e.g. related to gender and race) of artificial intelligence (AI) applications in the labor market, especially in human resources (HR) management.

To gain new and consolidated knowledge about diversity biases and fairness in AI and HR, the BIAS Consortium is currently engaged in several activities that you might be interested in discovering and joining, such as capacity-building sessions and ethnographic fieldwork.
