Remind participants of the earlier activity in which they explored and labelled positive and negative feelings using the RULER quadrant, and of the feelings that arise when they imagine being on the receiving end of different types of comments and speech (including hate speech) online.
Recognising how we feel about things expressed online is key to understanding how hate speech can affect us and others. The next step is to identify some of the features of media that evoke these feelings and constitute hate speech, and to identify the protected characteristics that are targeted.
These questions are provided as examples to initiate and guide discussions around the topic in this focus area.
- What do you think hate speech online is? Other than what is said/expressed, what else do you think might be important in deciding if something is “hate” (e.g. context, time period, societal norms/attitudes (online and offline), legislation)?
- What does online speech have to contain to be classed as “hate”?
- Is hate speech always just words or what people say? Can it be other types of online content? Ask for examples.
- What do you understand as “freedom of expression”? Can we say whatever we want, whenever we want?
- How does hate speech differ from freedom of expression?
- Who does hate speech target? Why are people/groups targeted?
- What are “protected characteristics”? Give some examples.
- Do you think hate speech is illegal? Why/why not?
- What makes it illegal?
The SELMA project short definition of hate speech is:
“Any online content targeting someone based on protected characteristics with the intent or likely effect of inciting, spreading or promoting hatred or other forms of discrimination.”
Xorg the Xenovian
Using the statements made against Xorg the Xenovian (in the SEL main activity) and the sorting headings, ask participants to work in small groups to sort the statements in different ways over three rounds:
- Positive or negative? - Sort the statements into three piles/groups: positive, neutral (or unclear as to whether it is positive/negative), negative.
- Crossing the line - Using the worksheet titled “Crossing the line”, sort the statements into two groups on either side of the line: legal and illegal. Where it is unclear or ambiguous whether the law may have been broken, the cards can be placed on the line itself, or close to the line on either the legal or illegal side, to show how close participants think a statement is to being unlawful.
- Protected characteristics - Using the heading cards, sort the statements based on the protected characteristic(s) they mention or attack. Some statements may involve more than one protected characteristic; in this case (if possible) a judgement should be made on which characteristic is attacked most severely, and the statement sorted under that heading card.
After each round, encourage participants to feed back their thoughts on how they have sorted different statements.
For Round 1 (Positive or negative), ask the following questions:
- Which statements were clearly positive?
- Which were clearly negative?
- Which statements were unclear?
- Were there any statements that appeared positive/neutral but may have been intended as negative (e.g. sarcasm, passive-aggressive comments)?
For Round 2 (Crossing the line), ask the following questions:
- Were there any statements that were clearly illegal? How did you know?
- For the statements you were unsure about, what made it difficult to decide whether they were legal or illegal?
- What other information might you need to make a firmer decision? (e.g. context, more information about your country’s laws, other content created by the sender, etc.)
For Round 3 (Protected characteristics), ask participants to consider the following questions:
- Which of these characteristics have you seen attacked or targeted by “hate” online?
- From your own experience, are there some characteristics that are targeted more than others online?
- Do the characteristics targeted vary between different types of online media/activity? (e.g. across different social media services, games, online forums/message boards, communities.)
- What is the response to online hate speech across different online media/services? Does the response differ depending on the service/app/community? (e.g. ignored, challenged, amplified, automatically removed, often reported but remains live, often reported and is removed, user is banned/blocked/warned.)
For details about why these are the protected characteristics in the UK, please see the Research Report, section 4.4, page 44.
Using the social media profile for Xynthia the Xenovian, ask participants to consider and record their ideas for the following questions:
- What aspects/characteristics of Xynthia could be attacked by someone online?
- Which of the characteristics you have identified are protected characteristics?
- What characteristics could be targeted by hateful/hurtful comments but would not constitute hate speech?
- What advice would you give to Xynthia if she/he received hate speech online?
A person's "characteristics" are parts of what make them who they are (such as their gender or their country of birth). These characteristics are "protected" in the sense that, as a society, we prohibit (by law or other means) anyone from discriminating against someone on the basis of them. As we have established, hate speech is any form of expression that incites violence or other forms of discrimination against a person or group of people on the basis of one or more protected characteristics. The list of protected characteristics as it relates to hate speech varies by country and by social media platform, and evolves over time. Common protected characteristics include race, nationality/ethnicity, sexual orientation, gender, transgender identity, and disability.
Recognise the protected characteristics targeted by hate speech. Make judgements on whether content is hate speech.