#wecounterhate site, particularly the Technology and Progress pages. Care should be exercised if using this with your group, as it is likely that offensive statements will be represented there.
Now share the sample hate speech statements. Again, for this exercise, Xenovia should be treated as a real country. Explain that these sample sentences have been designed to represent hate speech statements without using real protected characteristics or bad language, as doing so could make them offensive to some people.
Ask the learners to identify some common characteristics in the statements. Guide them towards the following list:
Now split the learners up into smaller groups and then set the challenge:
Produce an algorithm that uses a series of tests to decide whether a statement is, or is not, hate speech. For example: does the statement include bad language?
Encourage the learners to consider what their algorithm will need to know to help make decisions; for example, to understand what bad language is, the algorithm will need a list (database) of words that are considered bad language.
They should repeatedly test their algorithm with the sample set of statements provided to make sure it works.
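If your learners (or you) are comfortable with a little code, the same idea can also be sketched in a few lines of Python alongside the flowchart. This is only an illustrative sketch: the word lists, the is_hate_speech function and the sample statements below are invented placeholders for this guide, not part of the official resources, and a rule-based checker like this is deliberately simplistic.

```python
# A minimal sketch of a rule-based "series of tests" algorithm.
# All of the word lists below are hypothetical placeholders; a real group
# would build them from the common characteristics spotted in the samples.

BAD_LANGUAGE = {"placeholderswearword"}                # database of bad language
GENERALISATIONS = {"all", "every", "always", "never"}  # sweeping claims
TARGET_GROUPS = {"xenovia", "xenovians"}               # groups named in the samples
CALLS_TO_ACTION = {"ban", "deport", "exclude"}         # calls to exclude or harm


def is_hate_speech(statement: str) -> bool:
    """Run a statement through a series of yes/no tests."""
    words = [w.strip(".,!?").lower() for w in statement.split()]

    # Test 1: does the statement mention a targeted group at all?
    if not any(w in TARGET_GROUPS for w in words):
        return False

    # Test 2: does it include bad language?
    if any(w in BAD_LANGUAGE for w in words):
        return True

    # Test 3: does it make a sweeping generalisation about the group?
    if any(w in GENERALISATIONS for w in words):
        return True

    # Test 4: does it call for the group to be excluded or harmed?
    if any(w in CALLS_TO_ACTION for w in words):
        return True

    return False


# Repeatedly test against sample statements (placeholders, not the real set).
samples = [
    ("All Xenovians are bad at queuing.", True),
    ("I visited Xenovia last year.", False),
]
for text, expected in samples:
    print(f"{text!r}: flagged={is_hate_speech(text)}, expected={expected}")
```

Each if statement corresponds to one decision box in a flowchart, which can help learners see that their paper algorithm and a coded one are the same thing.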
Once they have completed and tested their algorithms, a new, final set of statements will be provided to see how effective their algorithms are.
Ensure that the learners use the statements provided as they develop their algorithm, and that they test it repeatedly. They can draw or sketch their algorithm by hand, use online flowchart tools, or use the provided operator cards to physically construct their algorithm.
Once their algorithms are well developed or complete, draw the learners back together and explain that you’re now going to test their algorithms with some new statements.
Group by group, share the final statements from the slides and ask the learners to “run” each statement through their algorithm to decide which are, and are not, hate speech. After each group, discuss anything that went wrong and invite the learners to refine their algorithm, providing them with a copy of the second set of statements for testing.
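For groups who coded their algorithm, this “run and refine” step can be mirrored with a short evaluation loop. The final_statements list below is a hypothetical stand-in for the statements on the slides, and it reuses the is_hate_speech sketch from earlier; a miss (for example, the singular “Xenovian” not being in the word list) is exactly the kind of gap the follow-up discussion should surface.

```python
# Hypothetical stand-in for the final statements from the slides, each
# paired with whether the group expects it to be flagged as hate speech.
# Assumes the is_hate_speech sketch from earlier is defined in the same file.
final_statements = [
    ("Every Xenovian should be banned from the library.", True),
    ("Xenovian food is my favourite.", False),
]

misses = 0
for text, expected in final_statements:
    result = is_hate_speech(text)
    status = "OK" if result == expected else "NEEDS REFINING"
    print(f"{status}: {text!r} -> flagged={result}, expected={expected}")
    if result != expected:
        misses += 1

print(f"{misses} statement(s) show where the algorithm needs refining.")
```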
Once each group has tested their algorithms you may wish to discuss some of the challenges that this exercise identified.
Depending on the learners in your group, you may wish to set this task:
Find some comments from your community that you think might be hate speech and run these through your algorithm.
Should you choose to set this task, agree some parameters with the group to ensure the learners do not put themselves at risk.
These tools allow for quick creation of flowcharts and algorithms (note: some require creation of a free account):
Develop a suitable way to categorise hateful content. Understand the complexity and limitations of capturing hate speech.
Licensed under a Creative Commons Attribution-NonCommercial 4.0 International License