
What is hate speech?

Context


Why this theme?

Hate crimes have been investigated around the world since the 1940s but, over recent years, these cases have grown with the opportunities for abuse facilitated by internet technologies.

The Council of Europe Convention on Cybercrime (also known as the Budapest Convention) created the first international guidelines for national cybercrime legislation and was followed by an Additional Protocol concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. These two instruments require signatory countries to punish hate speech conducted over the internet. But the term hate speech means different things to different people and, with no single definition, it can be challenging to understand what content can be considered hateful.

When viewing online content, users naturally interpret it in the light of their own personal experiences. As danah boyd put it:

“Why do people from different worldviews interpret the same piece of content differently? Rather than thinking about the intention behind the production, let’s analyze the contradictions in the interpretation. This requires developing a strong sense of how others think and where the differences in perspective lie.”

Research Evidence

Hate speech is not a new phenomenon. It is as old as the formation of human societies and the organisation of people into groups. Historically, it did not always affect the same people, nor was it always expressed in the same way. Yet, concerns about its impact on individuals and society have largely been the same: hate speech may affect individuals’ wellbeing; it may weaken, disintegrate or even destroy social cohesion; it might lead to violence between individuals, groups and communities, sometimes even threatening peace. In a digital world, these kinds of concerns have only increased, given the rapid pace at which online hate speech messages can be uttered and spread.

Over the years, a large number of academic and policy publications have emerged which define hate speech in relation to these kinds of societal concerns. Reference is often made to the historical or socio-economic context in which hate speech occurs, and possible solutions are explored. Due to the field’s applied and multidisciplinary nature, an eclectic mix of concepts and ideas is used, sometimes interchangeably, to describe its nature and dimensions. Is hate speech similar to dangerous speech, inflammatory speech or even bullying? And how does this all translate to the online environment in which children and young people are nowadays growing up? If researchers and policy makers struggle to agree on what hate speech is, how can we possibly expect teenagers to come to terms with how online hate speech may affect their everyday lives?

In the SELMA research report, Hacking Online Hate: Building an Evidence Base for Educators, we recognise this diversity of perspectives, but we also aim to arrive at a comprehensive definition of online hate speech. From an education point of view, children and young people need a meaningful starting point from which to explore and reflect upon their own views and experiences.

For clarity and consistency, we therefore define online hate speech as:

  • “Every form of expression, including text messages, images, music, videos, games, or other symbols and signs;
  • Disseminated by any possible form of digital media, including websites, forums, blogs, email, social media platforms, or other online communication channels;
  • Targeting an individual or a group of people based on protected characteristics, such as race or ethnic origin, gender or gender identity, sexual orientation, religious affiliation, or disability;
  • With the intention of inciting, spreading or promoting hatred or other forms of discrimination, or when the message can be reasonably understood as likely to produce that effect.”

Or, in simplified form:

  • “Any online content targeting someone based on protected characteristics with the intent or likely effect of inciting, spreading or promoting hatred or other forms of discrimination.”

Prompt Questions

These questions are provided as examples to initiate and guide discussions around the topics in this theme.

  • What things online make you happy/feel positive?
  • What things online make you feel upset/worried/sad/negative?
  • What do you think hate speech is?
    • Who is it targeted at?
    • Why might people behave in this way?
    • How does it make you feel?
    • How does it make the person/people targeted feel?
    • How might it make the perpetrator feel?
    • Do some forms of hate speech affect you more than others? Why?
    • Is something still considered hate speech if the intended target isn’t affected by it? Why/why not?
  • Is there a difference between being hateful/hurtful and hate speech online?
    • Why?