The SEL activities supported the learners in identifying the importance of context in framing the emotional response of the viewer. In other words, they considered the intended meaning behind a statement and the importance of understanding the surrounding conversation, context and commentary.
The media analysis unit provided learners with an opportunity to explore hate speech statements, distinguishing fact from opinion and opinion from hate speech.
These questions are provided as examples to initiate and guide discussions around the topic in this focus area.
- What are the characteristics of hate speech?
- How do social media companies identify hate speech in the millions of posts each day?
- How do social media companies refine their systems to identify hate speech?
- What are the challenges in identifying online hate speech?
- As an online user, how would you know if social media companies were successfully identifying and removing hate speech?
The SELMA project's short definition of hate speech is:
“Any online content targeting someone based on protected characteristics with the intent or likely effect of inciting, spreading or promoting hatred or other forms of discrimination.”
Spotting hate: creating an algorithm
This activity is designed to help learners further understand the characteristics of hate speech and the complexity of identifying hate speech in online content. The aim of this activity is for your learners to create and test a “hate speech” algorithm.
Some learners will understand what an algorithm is, while others may not. While an understanding of computing is helpful, it is not necessary for this activity. You can find out more about coding at the All You Need is Code site.
An algorithm can be defined as:
“a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”
In this activity, your learners need to create an algorithm using a set of given test statements. There will then be a final set of statements to try out on their algorithm.
Further useful information about algorithms can be found on the BBC Bitesize website.
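If it helps to make the definition concrete, a very simple algorithm can be written out as a short piece of code. The following Python sketch is purely illustrative (it is not part of the SELMA materials): it answers a single yes/no question about a statement, one of the checks learners may use later in the activity.

```python
# Illustrative sketch: an algorithm as a set of rules a computer follows.
# This one answers a single yes/no question about a statement.

def is_all_caps(statement):
    """Return True if every letter in the statement is a capital letter."""
    letters = [ch for ch in statement if ch.isalpha()]
    return len(letters) > 0 and all(ch.isupper() for ch in letters)

print(is_all_caps("GO BACK WHERE YOU CAME FROM"))  # True
print(is_all_caps("Welcome to our country"))       # False
```

Even this tiny rule shows the shape of the task: the computer follows the steps mechanically, so the quality of the result depends entirely on how well the rules are chosen.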
Share the definition of an algorithm and hate speech from the slides.
Discuss the task and ask if the learners have any questions, queries or misunderstandings. You might find it helpful to review the information on the #wecounterhate site, particularly the Technology and Progress pages. Care should be exercised if using this with your group, as it is likely that offensive statements will be represented there.
Now share the sample hate speech statements. Again, for this exercise, Xenovia should be treated as a real country. Explain that these sample sentences have been designed to represent hate speech statements but avoid real protected characteristics and bad language, which could make them offensive to some people.
Ask the learners to identify some common characteristics in the statements. Guide them towards the following list:
- All caps
- Bad language
- Using the country name or name of a specific group
- Using the protected characteristic
- Making threats
- Likening people to animals
Now split the learners up into smaller groups and then set the challenge:
Produce an algorithm to identify hate speech using a series of tests. The purpose of the algorithm is to identify if a statement is, or is not, hate speech. For example: Does the statement include bad language?
Encourage the learners to consider what their algorithm will need to know to help make decisions; for example, to understand what bad language is, the algorithm will need a list (database) of words that are considered bad language.
They should repeatedly test their algorithm with the sample set of statements provided to make sure it works.
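The kind of algorithm described above can also be sketched in code. The Python sketch below is one possible illustration, not the activity's required solution: the word lists and sample statements are invented placeholders standing in for the real materials on the slides, and each test mirrors one of the characteristics the learners identified.

```python
# Hypothetical sketch of a rule-based "hate speech" algorithm of the kind
# learners build. All words and statements here are invented placeholders.

BAD_WORDS = {"scum", "vermin"}            # stand-in "database" of bad language
TARGET_TERMS = {"xenovia", "xenovians"}   # fictional country/group names
THREAT_WORDS = {"destroy", "attack"}

def is_hate_speech(statement):
    words = set(statement.lower().split())
    # Each test mirrors one characteristic from the learners' list.
    targets_group = bool(words & TARGET_TERMS)
    uses_bad_language = bool(words & BAD_WORDS)
    makes_threat = bool(words & THREAT_WORDS)
    all_caps = statement.isupper()
    # Example rule: the statement must target the group AND show at
    # least one other marker of hate speech.
    return targets_group and (uses_bad_language or makes_threat or all_caps)

# Repeatedly test the algorithm against sample statements with known answers.
samples = [
    ("Xenovians are vermin", True),
    ("I visited Xenovia last summer", False),
    ("ATTACK XENOVIA NOW", True),
]
for statement, expected in samples:
    result = is_hate_speech(statement)
    print(statement, "->", result, "(expected:", expected, ")")
```

Working through a sketch like this can also surface the limitations the learners will meet later: simple word matching misses misspellings, sarcasm and context, which is exactly the discussion the second set of statements is designed to provoke.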
Once they have completed and tested their algorithms, a new, final set of statements will be provided to see how effective their algorithms are.
Ensure that the learners develop and repeatedly test their algorithms against the statements provided. They can draw/sketch their algorithm by hand, use online flowchart tools or use the provided operator cards to physically construct their algorithm.
Once their algorithms are well developed or complete, draw the learners back together and explain that you’re now going to test their algorithms with some new statements.
Group by group, share the final statements from the slides and ask the learners to “run” the statements through their algorithm and decide, based on the result, which are and are not hate speech. After each group, discuss anything that went wrong and invite the learners to refine their algorithm, providing them with a copy of the second set of statements for testing.
Once each group has tested their algorithms you may wish to discuss some of the challenges that this exercise identified.
- What were the issues with their algorithm?
- What was different about the second set of statements?
- What had they missed?
- How far could algorithms go in identifying hate speech?
Depending on the learners in your group, you may wish to set this task:
Find some comments from your community that you think might be hate speech and run these through your algorithm.
Should you choose to set this task you should agree some parameters to ensure the learners do not put themselves at risk.
These tools allow for quick creation of flowcharts and algorithms (note: some require creation of a free account):
- Develop a suitable way to categorise hateful content.
- Understand the complexity and limitations of capturing hate speech.