HUDERIA: Tool Evaluates AI Impact on Human Rights

Council of Europe

A new Council of Europe tool provides guidance and a structured approach for carrying out risk and impact assessments of Artificial Intelligence (AI) systems. The HUDERIA Methodology is specifically tailored to protect and promote human rights, democracy and the rule of law. It can be used by both public and private actors to help identify and address risks to, and impacts on, human rights, democracy and the rule of law throughout the lifecycle of AI systems.

The methodology provides for the creation of a risk mitigation plan to minimise or eliminate the identified risks, protecting the public from potential harm. If, for example, an AI system used in hiring is found to be biased against certain demographic groups, the mitigation plan might involve adjusting the algorithm or introducing human oversight.
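To make the hiring example concrete, the sketch below shows one common way such bias might be surfaced during an assessment: comparing selection rates across demographic groups against the "four-fifths" disparate-impact heuristic. The data, group labels and threshold are illustrative assumptions for this article; HUDERIA itself does not prescribe any particular test.

```python
# Minimal sketch of a disparate-impact check a reviewer might run on a
# hiring model's outputs. The groups, data and the 0.8 threshold (the
# "four-fifths rule") are illustrative assumptions, not part of HUDERIA.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's selection rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical model outputs: (demographic group, hired?)
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)

print(selection_rates(decisions))        # {'A': 0.4, 'B': 0.2}
print(disparate_impact_flags(decisions)) # {'A': False, 'B': True}
```

In this hypothetical run, group B's selection rate (0.2) is half of group A's (0.4), falling below the four-fifths threshold; a mitigation plan would then need to address the flagged disparity, for instance through algorithmic adjustment or human review of the affected decisions.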

The methodology requires regular reassessments to ensure that the AI system continues operating safely and ethically as the context and technology evolve. This approach ensures that the public is protected from emerging risks throughout the AI system's lifecycle.

The HUDERIA Methodology was adopted by the Council of Europe's Committee on Artificial Intelligence (CAI) at its 12th plenary meeting, held in Strasbourg on 26-28 November 2024. It will be complemented in 2025 by the HUDERIA Model, which will provide supporting materials and resources, including flexible tools and scalable recommendations.
