
Going beyond regulation to tackle online hate

Online hate speech plays a significant role in young people’s online media experience, and policy makers and legislators seek, both nationally and internationally, to address and resolve increasing concerns in this regard.

In Europe, the Code of Conduct on countering illegal hate speech online is the first major attempt to address how online companies should respond to online hate speech. The Code of Conduct was agreed on in 2016 by the European Commission and four leading IT companies and social media platforms (Facebook, Twitter, YouTube and Microsoft), on a voluntary basis. In this framework, industry stakeholders are encouraged to take quick action to assess the “majority of valid notifications for removal of illegal hate speech” in less than 24 hours.

The European Commission has preferred a voluntary approach over regulation, as the latter can be seen as hampering the development of the Digital Single Market – whereas the voluntary approach seeks to establish the possibility of addressing digital innovations’ potentially negative side-effects, while preserving the conditions to encourage innovation in the digital economy.

The Code of Conduct has had some positive results, as shown in the figures of the fourth monitoring report, made available in February 2019: tech companies were then assessing 89 per cent (compared to 40 per cent in 2016) of flagged content within 24 hours, and 72 per cent (compared to 28 per cent in 2016) of content deemed to be illegal hate speech was removed, according to European Commission estimates.

Yet, after this reporting exercise, Věra Jourová, Commissioner for Justice, Consumers and Gender Equality, warned that “if it slows down or it stops delivering the results, we will consider some kind of regulation”. And it now seems that the European Commission is following in the footsteps of some big member states such as Germany, France and the United Kingdom, which are passing national legislation to target hate speech and illegal content. Indeed, the European Union Digital Services Act, due to be unveiled at the end of 2020, will compel online companies to remove illegal content, including illegal online hate speech, or be sanctioned.

Some observers have criticised the Code of Conduct for stating that online companies are “taking the lead on countering the spread of illegal hate speech online”. David Kaye, the United Nations’ Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, asks: “do we want companies that enjoy ever-expanding control of our access to information and public debate to also have the power and responsibility to make decisions about where to draw the lines on content?” Indeed, by forcing online companies to take on ever more responsibility for moderating hate speech on their platforms, public authorities may actually entrust them with more powers – which is all the more a cause for concern given that “the only real gap that remains [in the EU Code of Conduct] is transparency and the feedback to users who sent notifications [of hate speech]”, in the words of Commissioner Jourová.

The SELMA approach

Beyond this debate, one should bear in mind that, even in their most effective form, top-down initiatives to regulate, monitor or report online hate speech only scratch the surface of the broader culture of online hate, and fail to address the root causes of this phenomenon. This moves us well beyond narrow legal rules or procedures. Rather, it touches upon the nature of online hate, its causes and consequences.

SELMA holds the view that a more pro-active awareness and education effort is needed, one driving the multi-stakeholder approach full circle. Indeed, to effect change, a more systemic shift is necessary: one which informs and prepares children and young people for online hate by talking about it, in dialogue with their teachers, parents or other professionals or carers. This greater focus on education, on younger generations and on addressing the fundamental causes of hate speech has been echoed by the United Nations, through its Strategy and Plan of Action on Hate Speech.

For further information on the SELMA approach, browse the SELMA research report “Hacking Online Hate: Building an Evidence Base for Educators”, and read “The SELMA approach”. If you want to become part of multi-stakeholder discussions on tackling online hate speech, you may want to join the "Drive change, hack online hate" SELMA conference in Brussels, Belgium, on Thursday, 10 October 2019.
