The SEL activities supported the learners in identifying the importance of context in framing the emotional response of the viewer: in other words, what was the intended meaning behind a statement, and why understanding the surrounding conversations, context and commentary matters.
The Media analysis unit provided learners with an opportunity to explore hate speech statements, distinguishing fact from opinion, and opinion from hate speech.
The Media production unit provided learners with an opportunity to explore the characteristics of online hate speech, categorising sample statements into hate speech and not hate speech.
This Citizenship unit provides learners with the opportunity to “unplug” their algorithm and present it to their school/community to help explain the characteristics of hate speech.
These questions are provided as examples to initiate and guide discussions around the topic in this focus area.
- What are the main characteristics of hate speech?
- What challenges does the online context present when identifying hate speech?
- What opportunities does the online context present when dealing with hate speech content?
- What do you think your school community/peers understand hate speech to be?
- What methods could you use to educate your peers?
The SELMA project short definition of hate speech is:
“Any online content targeting someone based on protected characteristics with the intent or likely effect of inciting, spreading or promoting hatred or other forms of discrimination.”
Learners present their algorithms using “unplugged” techniques. This means “running” the code physically rather than on a computer. For example, each group could write their code (decisions) onto a series of cards or pieces of paper and give these to learners to hold, arranged as a flowchart in a large space. Other learners could then take either provided hate speech statements or statements they have found, and approach each stage of the algorithm in turn. The learner holding the decision card asks the question; the learner holding the statement answers and moves to the relevant next step in the algorithm. By the end of this activity, you are aiming for the statements to be sorted into two sets: “hate speech” and “not hate speech”.
A good follow-up activity would be to review all the sorted statements, looking for errors and debugging the algorithm to work out why those errors occurred.
Call to action
After presenting their “unplugged” algorithm, learners could consider ways of inviting their community to get involved in developing the algorithm further. This may be to test it, suggest alternative methods or to engage in discussions around online hate speech.
Whatever the method used, the key aim is to engage with the school community/local community to raise awareness about how hate speech is used, and to start discussions about hate speech that may appear in the community.
Code.org has a variety of unplugged lessons available.
Challenge learners with an understanding of coding to convert their algorithm into a coding language (Scratch, Python, C++, etc.). You may need to enlist the help of some coding experts. You might be able to convince a local IT teacher to help, or you could encourage and support learners to share their code via GitHub: a developer platform that is free to use for public repositories, where you share your code and others can comment on and improve it. Who knows what could happen if you share this online?
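As a starting point for that conversion, the unplugged flowchart can be sketched in Python. This is only an illustrative sketch: the decision questions and the keyword lists below are hypothetical placeholders, and each class would substitute the decisions from their own card-based algorithm.

```python
# A minimal sketch of an unplugged "hate speech / not hate speech" sorting
# algorithm translated into Python. The decision functions mirror the
# decision cards learners held in the flowchart activity.
# NOTE: the keyword lists are hypothetical examples, not a real definition.

PROTECTED_TERMS = {"religion", "ethnicity", "nationality", "gender", "disability"}
HOSTILE_TERMS = {"hate", "attack", "get rid of"}

def targets_protected_characteristic(statement: str) -> bool:
    """Decision card 1: does the statement target a protected characteristic?"""
    return any(term in statement.lower() for term in PROTECTED_TERMS)

def expresses_hostility(statement: str) -> bool:
    """Decision card 2: does the statement incite or promote hatred?"""
    return any(term in statement.lower() for term in HOSTILE_TERMS)

def classify(statement: str) -> str:
    """Walk the flowchart: each 'if' is one learner holding a decision card."""
    if targets_protected_characteristic(statement) and expresses_hostility(statement):
        return "hate speech"
    return "not hate speech"

# Feed sample statements through the algorithm, as in the classroom activity.
statements = [
    "We should get rid of people of that religion.",
    "I disagree with the new school timetable.",
]
for s in statements:
    print(f"{s!r} -> {classify(s)}")
```

A follow-up debugging step maps naturally onto this sketch too: statements the code mis-sorts point to decision questions (or keyword lists) that need refining, just as in the unplugged version.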
Engage with your own community to raise awareness about how hate speech is used, and move the community from rules and procedures to shared values.