
Why should social media providers care about SELMA?

As explained in the SELMA research report Hacking Online Hate: Building an Evidence Base for Educators, online companies and intermediaries can exercise their power over hate speech in various ways.

Transparency is key, with online media platforms being clear and specific in Terms of Service agreements or Community Guidelines about the harms that their hate speech policies address, as well as the consequences of policy violations. Even if online intermediaries deem it inappropriate to remove certain types of offensive content, because it is not illegal in a certain country, they can still help with countering hate speech through different means, for instance by providing an alternative narrative.

The role of online intermediaries goes well beyond content removal and counter-narrative strategies. They should educate and empower users to respond to hate speech on their platforms and sites, while creating contexts in which users feel compelled to reflect on their own rights and responsibilities. While this is in part a matter of putting in place clear Community Guidelines and reporting tools and processes, these kinds of civic norms should also be encouraged through architectural choices which discourage speakers from evading responsibility for their own hateful expressions.

For example, while anonymity is valuable when it enables speakers to avoid retaliation, it can also simply encourage users to avoid responsibility for socially destructive behaviour. Online intermediaries can shape these kinds of norms, for instance by permitting anonymity by default but revoking it when a user violates the Terms of Service. Likewise, systems might be designed to slow down the posting or sharing process in certain circumstances, requiring a waiting or cool-off period that prompts the user to consider more carefully the possible impact of what is about to be communicated. Liking, sharing and commenting features might be redesigned so that controversy or polarisation does not become the implicit norm for popularity on a social media platform. Meanwhile, more respectful and tolerant types of behaviour could be modelled and rewarded.
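To make the cool-off idea above concrete, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical moderation flow in which a draft post matching a simple watchlist is held for a reflection period before it can be published; the names (`FLAG_TERMS`, `COOL_OFF_SECONDS`, `review_post`) are invented for this example and do not describe any real platform's system.

```python
# Hypothetical sketch of a "cool-off period" before publishing.
# A post that matches a watchlist is held for a reflection window,
# prompting the author to reconsider before it goes live.

FLAG_TERMS = {"insult", "slur"}   # placeholder watchlist, not a real policy
COOL_OFF_SECONDS = 600            # assumed 10-minute reflection window

def review_post(text: str, submitted_at: float, now: float) -> str:
    """Return 'publish' if the post is unflagged or the cool-off period
    has elapsed; otherwise 'hold' so the user is prompted to reconsider."""
    flagged = any(term in text.lower() for term in FLAG_TERMS)
    if not flagged:
        return "publish"
    if now - submitted_at >= COOL_OFF_SECONDS:
        return "publish"
    return "hold"
```

A real system would of course use far more sophisticated detection than keyword matching; the point here is only the design choice of inserting friction between drafting and publishing.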

In our view, SELMA can play a role in facilitating these kinds of discussions. If social media providers are developing services specifically for children or young people, they should be mindful of how the content, app or service can be empowering, engaging, stimulating and safe. The popularity of social media services among teenagers largely corresponds to social and emotional development processes, in particular their desire to explore and develop their identity; to connect with peers anywhere and at any time; and to stay in touch, express themselves and share experiences while having fun together.

Social media providers need to take these elements into account in order to make informed decisions in terms of the online media experiences they provide. This is exactly what the SELMA Toolkit is about!