The Commission and the European Board for Digital Services welcome the integration of the revised 'Code of conduct on countering illegal hate speech online+' into the framework of the Digital Services Act (DSA), which encourages voluntary codes of conduct to tackle risks online.
The Code of conduct+, which builds on the initial 2016 Code of conduct on countering illegal hate speech online, was signed by Dailymotion, Facebook, Instagram, Jeuxvideo.com, LinkedIn, Microsoft hosted consumer services, Snapchat, Rakuten Viber, TikTok, Twitch, X and YouTube.
The Code of conduct+ will strengthen the way online platforms deal with content that EU and national laws define as illegal hate speech. The integrated Code of conduct will facilitate compliance with, and the effective enforcement of, the DSA when it comes to risks of the dissemination of illegal content on platforms' services.
Following this integration, online platforms that are designated under the DSA can adhere to the Code of conduct+ to demonstrate their compliance with the DSA obligation to mitigate the risk of the dissemination of illegal content on their services. Compliance with the Code of conduct+ commitments will be part of the annual independent audit to which these platforms are subject under the DSA, which contributes to reinforcing the platforms' transparency and accountability.
Concretely, the signatories of the Code of conduct+ commit to, among other things:
- Allow a network of 'Monitoring Reporters', which are not-for-profit or public entities with expertise on illegal hate speech, to regularly monitor how the signatories are reviewing hate speech notices. Monitoring Reporters may include entities designated as 'Trusted Flaggers' under the DSA.
- Undertake best efforts to review at least two thirds of hate speech notices received from Monitoring Reporters within 24 hours.
- Engage with well-defined and specific transparency commitments as regards measures to reduce the prevalence of hate speech on their services, including through automatic detection tools.
- Participate in structured multi-stakeholder cooperation with experts and civil society organisations that can flag the trends and developments of hate speech that they observe, helping to prevent waves of hate speech from going viral.
- Raise, in cooperation with civil society organisations, users' awareness about illegal hate speech and the procedures to flag illegal content online.
As part of their respective assessments of the Code of conduct+, the Commission and the European Board for Digital Services encourage the signatory platforms to take into account several recommendations when implementing the Code of conduct+, including:
- Providing information, as part of their reporting, on the outcome of the measures taken, as well as additional data related to hate speech on their platforms. This may include, for example, the role of recommender systems and the organic and algorithmic reach of illegal content prior to its removal.
- Presenting country-level data broken down by the internal classification of hate speech (such as race, ethnicity, religion, gender identity or sexual orientation) and ensuring adequate follow-up to input derived from multi-stakeholder cooperation.
Next Steps
The Commission and the Board will monitor and evaluate the achievement of the Code of conduct+ objectives, as well as the follow-up to their recommendations, and facilitate the regular review and adaptation of the Code. This process will be part of the continuous monitoring of platforms' compliance with existing rules.
Background
Freedom of expression is a fundamental right and a cherished value which, as enshrined in human rights law, must not be exploited to incite hatred and violence. Illegal hate speech represents a systemic risk for democracy and fundamental rights and a threat to the common values of respect for human dignity, freedom, democracy and equality enshrined in Article 2 of the Treaty on European Union.
The 2008 Framework Decision on combating racism and xenophobia requires Member States to criminalise the public incitement to violence or hatred against a group of persons, or a member of such a group, on grounds of race, colour, religion, descent or national or ethnic origin. Several Member States have expanded the criminal law definition of hate speech under the Framework Decision to include additional grounds, such as sexual orientation, gender identity or disability. The recently adopted Directive on combating violence against women and domestic violence establishes that cyber incitement to violence or hatred on grounds of gender is a criminal offence. The Commission has also proposed to extend the list of EU crimes in Article 83 of the Treaty on the Functioning of the EU to hate crime and hate speech. It is now for Member States to act by reaching unanimity.
All conduct defined as hate speech, both in the national laws transposing the Framework Decision and in any other provisions of national law, that takes place online constitutes hate speech for the purposes of the Code. It is the prerogative of national courts and other relevant judicial or administrative authorities to issue orders to act against illegal hate speech, based on applicable EU or national law. Under the DSA, online platforms have the obligation to inform the relevant authority of the effect given to such an order without undue delay.
While remaining voluntary instruments, codes of conduct under the DSA can play an important role in the wider system of enforcement. However, participating in and implementing a given code of conduct does not in itself create a presumption of compliance with the DSA and is without prejudice to the Commission's assessment on a case-by-case basis.