Seeking to limit “viral hate speech,” EU and social sites announce Code of Conduct
Facebook, YouTube, Twitter and Microsoft have agreed to the measures.
The European Commission, Facebook, Google (YouTube), Twitter and Microsoft have announced a “Code of Conduct” governing “illegal hate speech” online in the EU. Earlier today, the various involved entities issued a statement explaining the new framework:
In order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as in the offline environment. While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated.
In principle, this framework is not unlike a copyright-related takedown procedure. Upon notice of the prohibited hate speech, the tech companies mentioned will either remove the targeted content or disable access to it within 24 hours.
Beyond removal, the companies have also pledged a series of commitments to promote education and counter “hateful rhetoric and prejudice.”
Illegal hate speech is defined under European law as follows:
- public incitement to violence or hatred directed against a group of persons or a member of such a group defined on the basis of race, colour, descent, religion or belief, or national or ethnic origin . . .
- publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes . . .