
Are my people really using hate speech?

Media Production

The SEL activities supported the learners in identifying the importance of context in framing the viewer's emotional response. In other words: what was the intended meaning behind the statement, and why is it important to understand the surrounding conversation, context and commentary?

The Media analysis unit provided learners with an opportunity to explore hate speech statements, distinguishing fact from opinion, and opinion from hate speech.

Prompt Questions

These questions are provided as examples to initiate and guide discussions around the topic in this focus area.

  • What are the characteristics of hate speech?
  • How do social media companies identify hate speech in the millions of posts each day?
  • How do social media companies refine their systems to identify hate speech?
  • What are the challenges in identifying online hate speech?
  • As an online user, how would you know if social media companies were successfully identifying and removing hate speech?

Main Activity

The SELMA project's short definition of hate speech is:

“Any online content targeting someone based on protected characteristics with the intent or likely effect of inciting, spreading or promoting hatred or other forms of discrimination.”

Spotting hate: creating an algorithm

This activity is designed to help learners further understand the characteristics of hate speech and the complexity of identifying hate speech in online content. The aim of this activity is for your learners to create and test a “hate speech” algorithm.

Some learners will understand what an algorithm is, while others may not. While an understanding of computing is helpful, it is not necessary for this activity. You can find out more about coding at the All You Need is Code site.

An algorithm can be defined as:

“a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”

In this activity, your learners need to create an algorithm using a set of given test statements. A final set of statements will then be provided to try out on their algorithm.

Further useful information about algorithms can be found on the BBC Bitesize website.

Share the definition of an algorithm and hate speech from the slides.

Discuss the task and ask if the learners have any questions, queries or misunderstandings. You might find it helpful to review the information on the #wecounterhate site, particularly the Technology and Progress pages. Exercise care if using this with your group, as it is likely that offensive statements will be represented there.

Now share the sample hate speech statements. Again, for this exercise, Xenovia should be treated as a real country. Explain that these sample sentences have been designed to represent hate speech statements, but do not use real protected characteristics or bad language, as these could make them offensive to some people.

Ask the learners to identify some common characteristics in the statements. Guide them towards the following list:

  • All caps
  • Bad language
  • Using the country name or name of a specific group
  • Using the protected characteristic
  • Making threats
  • Shooting/killing/hurting
  • Likening people to animals.
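To make the idea concrete for you as facilitator, the characteristics above could be expressed as a minimal rule-based check. This is a hypothetical sketch in Python, not the SELMA materials themselves: the word lists are invented placeholders (mirroring the fictional "Xenovia" used in the activity), and a real system would be far more sophisticated.

```python
# A minimal sketch of a rule-based "hate speech" check. All word lists
# are invented placeholders built around the fictional country Xenovia.

# Hypothetical databases (word lists) the algorithm needs to make decisions.
BAD_LANGUAGE = {"badword1", "badword2"}           # placeholder bad-language list
TARGET_NAMES = {"xenovia", "xenovians"}           # fictional country/group names
THREAT_WORDS = {"shoot", "kill", "hurt", "attack"}
ANIMAL_WORDS = {"rats", "dogs", "vermin", "pigs"}

def is_hate_speech(statement: str) -> bool:
    """Run each test in turn; flag the statement if any test fires."""
    words = set(statement.lower().split())

    if statement.isupper():                       # Test 1: all caps
        return True
    if words & BAD_LANGUAGE:                      # Test 2: bad language
        return True
    # Test 3: names the target group AND threatens or dehumanises it
    if words & TARGET_NAMES and (words & THREAT_WORDS or words & ANIMAL_WORDS):
        return True
    return False
```

Each `if` corresponds to one test on a learner's flowchart, run in order, just as they would "run" a statement through a drawn algorithm or the operator cards.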

Now split the learners up into smaller groups and then set the challenge:

Produce an algorithm to identify hate speech using a series of tests. The purpose of the algorithm is to identify if a statement is, or is not, hate speech. For example: Does the statement include bad language?

Encourage the learners to consider what their algorithm will need to know to help make decisions; for example, to understand what bad language is, the algorithm will need a list (database) of words that are considered bad language.

They should repeatedly test their algorithm with the sample set of statements provided to make sure it works.
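The test-and-refine loop itself can also be sketched: run every labelled sample statement through a candidate algorithm and count how many it labels correctly. The statements and the deliberately simple `check` function below are invented stand-ins for the slide materials, not the actual SELMA samples.

```python
# Sketch of the testing loop: score a candidate algorithm against
# labelled sample statements. Statements are invented placeholders.

def check(statement: str) -> bool:
    """A deliberately simple candidate algorithm: flag all-caps text
    or any mention of the fictional group 'Xenovians'."""
    return statement.isupper() or "xenovians" in statement.lower()

# (statement, expected_label) pairs standing in for the sample slides
SAMPLES = [
    ("XENOVIANS GO HOME", True),
    ("Xenovians are not welcome here", True),
    ("People from Xenovia are vermin", True),   # this one slips past check()
    ("Xenovia has beautiful mountains", False),
    ("I disagree with Xenovia's government", False),
]

def score(algorithm, samples) -> int:
    """Return how many sample statements the algorithm labels correctly."""
    return sum(algorithm(text) == expected for text, expected in samples)

print(score(check, SAMPLES), "of", len(SAMPLES), "correct")
```

The statement that slips through (dehumanising language without the exact group name) is exactly the kind of miss that should prompt learners to refine their tests.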

Once they have completed and tested their algorithms, a new, final set of statements will be provided to see how effective each algorithm is.

Ensure that the learners use the statements provided as they develop their algorithm, and that they test it repeatedly. They can draw/sketch their algorithm by hand, use online flowchart tools, or use the provided operator cards to physically construct their algorithm.

Once their algorithms are well developed or complete, draw the learners back together and explain that you’re now going to test their algorithms with some new statements.

Group by group, share the final statements from the slides and ask the learners to “run” the statements through their algorithm and, based on their algorithm, decide which are/are not hate speech. After each group, discuss anything that went wrong and invite the learners to refine their algorithm, providing them with a copy of the second set of statements for testing.

Once each group has tested their algorithms you may wish to discuss some of the challenges that this exercise identified.

  • What were the issues with their algorithm?
  • What was different about the second set of statements?
  • What had they missed?
  • How far could algorithms go in identifying hate speech?

Depending on the learners in your group, you may wish to set this task:

Find some comments from your community that you think might be hate speech and run these through your algorithm.

Should you choose to set this task you should agree some parameters to ensure the learners do not put themselves at risk.

Outcome Criteria

  • Develop a suitable way to categorise hateful content.
  • Understand the complexity and limitations of capturing hate speech.




Spotting hate: creating an algorithm

Developing a suitable way to categorise hateful content.


Operator cards

Cards to support developing a suitable way to categorise hateful content.


Sample hate speech statements

Sample statements for the 'creating an algorithm' activity


Sample hate speech statements 2

A second set of sample statements for the 'creating an algorithm' activity