Where’s the harm in online hate speech?

The SELMA project sits down with Professor Victoria Nash of the Oxford Internet Institute to unpack what we mean by ‘online harm’, and discuss some of the promises – and limitations – of both policy and education when it comes to dealing with online hate speech.

The SELMA project: Can you start by giving us a little insight into your background and areas of interest?

Victoria: I am Associate Professor and Policy Fellow here at the Oxford Internet Institute, and as a department we spend a lot of our time thinking about the societal implications of digital technologies. My own research and policy interests have traditionally focused on questions about how you regulate and govern digital spaces, which on the flipside has also led me to spend a lot of time thinking about the types of content and behaviour that are problematic.

I use the word “problematic” rather than “harmful”, for reasons that I am sure I will come to discuss. I have had a particular fascination with how children are treated by policies relating to the internet. Typically, these focus very much on protecting them [children] from certain harms and rather less on empowering them or enabling their flourishing. Broadly, I have spent the last ten to fifteen years working on these sorts of questions.
 

The SELMA project: I’d like to pick up on the point you just made about using the word “problematic” instead of “harmful”. In policy circles there is often talk of online “harms”, and some of your work has looked at this word “harm” and what we mean by it in discussions about the online world, sometimes challenging researchers and policymakers to be clearer about its definition. Can you tell us a bit more about this?

Victoria: As a political theorist, I am very aware that if you look back at justifications for government policy and intervention, people often refer to figures like John Stuart Mill, who argued that the only reason for interfering in the lives of others was to prevent harm. This sounds like a very plausible concept and it has often formed the basis of regulatory interventions, but I would say it is most easily justifiable in the sphere of physical harms. So, for example, clearly we stop individuals from attacking others physically, we have health and safety regulation, and we have laws governing industries like the tobacco industry, largely because there is very clear evidence not only that harm is caused to individuals but that this harm can be generalised across a wide population.

You might want to see, therefore, the same sorts of justification for regulatory interventions in relation to the internet, and for that reason I have spent quite a while thinking about what evidence we have for harms resulting from online experiences and online behaviour.

Now, one of the first things you have to think about is what you can actually measure and what you can find evidence for. Some of the best research we have in the UK includes things like the EU Kids Online project and the amazing survey research that has been collected. Certainly, in the early days, what was a lot easier to measure, and more commonly measured, was exposure to risk. It was common to ask how many children might have encountered violent content online or pornographic content online, or might have been subject to bullying, et cetera.

But what is typically much harder to get at, particularly through survey research, is what harms result from those exposures. This is why I am quite careful in terms of how I talk about this, and why I tend to refer to “problematic” behaviour and “problematic” content rather than “harmful”: when you look at the evidence base, it is very rare that you can show that a particular type of content is necessarily or intrinsically harmful across a wide population. It is much more likely that you might occasionally observe specific harms to specific individuals in particular types of contexts, or you might find that certain types of behaviour or content raise alarm bells morally or ethically, or that individuals express subjective dissatisfaction or upset at seeing them - but you have almost no evidence of there being wider sorts of harms. This is why, I suppose, I am quite careful with my language.

Now, in the context of things like hate speech, it’s important to note that we don’t actually require external empirical evidence of a harm being created, because this is speech which we have already debated politically and determined to be unacceptable in our society for a variety of reasons, not least that it does often create harms to specific groups. But yes, for me there is typically a distinction between what is termed ‘illegal’ content and ‘legal but harmful’ content. It’s in the latter context that I much prefer to say ‘legal but problematic’: it may well be harmful for specific individuals, or we may think it could be harmful but have no evidence yet… that’s the reason for the sort of conceptual prevarication!

I guess the other thing that I would just flag up is one of the difficulties specific to online content and experience: we may sometimes find evidence that content is beneficial to some individuals but harmful to others, which leaves researchers struggling to decide what to judge overall. An example would often be something like eating disorder content, where specific pieces of content may feel very alarming and worrying and may ultimately be linked with harm, but if you look at the patterns overall, you may find that some individuals are triggered by exposure to that sort of content while, for others, it offers the only chance they have to talk about matters which really concern them and cause them a great deal of pain and upset. If you were to base a decision on whether or not to ban that sort of content, how would you weigh up those two scenarios?

So that’s the other reason why this notion of “harm” is so problematic: unlike physical injuries, we are dealing with very complex interactions between online content and behaviour, individuals’ particular responses, and particular contexts.


The SELMA project: You pointed out that our focus on online “harms” from a policy perspective is partly rooted in a legal philosophy which may be more obviously applicable in the sphere of physical harm. Sometimes there is a sense, in discussions around hate speech, that what we are really looking for is this link between online hate speech and then a “real world” physical attack. Is that the right way to think about the harm hate speech does?

Victoria: In terms of hate speech, you might seek to justify it by saying “I have said all these things and, look, nothing’s happened!” (although here I disagree with the idea of there somehow being a “real world” that’s separate from the online world). But yes, if what you were interested in was physical harm, the absence of clear evidence of a link between something said and a physical act could be a purported justification for using online hate speech.

However, there are at the very least two other types of harm. One would certainly be emotional harm - offence, for example - again to specific individuals and specific groups. You could probably also point to symbolic harms, which would result from the undermining of, or demonstrable lack of respect for, particular groups in society - harms which go above and beyond their effect on any particular individual at any particular moment in time. So certainly, if I think about it from a political theory and democracy perspective, one of the reasons why hate speech is - and should be - illegal is because it demonstrably undermines the equal respect and concern that is due to every member of society, regardless of their background.


The SELMA project: In the case of online hate speech, what is the “harm” that it causes and how concerned should we be about it, relative to other potential harms?

Victoria: As a researcher, part of me would want to say that the big empirical question here - which, as far as I can see, we don’t know the answer to - is: if you set aside the fact that this is illegal, largely offensive and disrespectful of large portions of your population, what other effects follow on from it? And I don’t think that we know that answer. There is lots of speculation that it fuels populism, that it fuels uncivil political debate, that it fuels divides in society, but again, as a researcher, I don’t think you would ever be able to prove that - it’s empirically impossible.

However, I don’t think that this is a scenario in which you need to further justify intervention by pointing to those harms, except, I suppose, that they might inform your strategies for dealing with online hate speech. So, for example, if you thought that the only effects of hate speech that really mattered were whether it immediately caused civil unrest or attacks, one option would simply be to put more police on the streets, and that would deal with that concern. However, for me, that would be an inadequate response, because I would still worry immensely about the broader social divisions that are created, and the changes in public understanding due to the demeaning of particular groups, which no amount of police and law and order could prevent. So I think that theoretically, conceptually, we might quite reasonably imagine these other types of effects beyond civil unrest or attacks, and that might be one reason why we might like to see hate speech online removed or minimised, or alternatively to have strategies that tackle it when we see it online, to ensure that there is a positive story too. Whilst I recognise you can’t measure those other effects, I think you have to keep them in mind when thinking about strategies to tackle it.

I suppose that, for me, I would focus less on how you define what is harmful and what you lump under that, and more on what I would call procedural accountability: how you enable these [social media] platforms to be both more transparent and more accountable for their decisions, and how you can ensure that they are meeting their public responsibilities, particularly when it comes to illegal content - of which hate speech is obviously one of the important types.


The SELMA project: When we think of policies around online hate speech and striking the balance between moderation and censorship on the one hand and upholding free speech on the other, have policies tended to think about young people in a particular way? Clearly, the need for self-expression is strong at this age, but young people are also vulnerable and impressionable - so do you think specific policies are needed, and have they been developed?

Victoria: My impression would be that, in policies relating to young people’s expression online, we have actually focused much more on their sensitivity and developmental needs than we have on their rights of expression and perhaps their desire to find their own political identity. It is a period where we know individuals are autonomously developing their own views and their own values… they may be playing with aspects of their identity.

So, I think on the one hand, it is often recognised that they are still morphing and changing. For example, the Prevent strategy obviously very much recognises the vulnerability of young people to radicalisation. Equally, the Online Harms White Paper is very focused on the vulnerability and welfare needs of people under 18. However, I think that does sometimes ignore the flipside, which is that you are developing your own identity, values and views; you may still be learning what is acceptable to say and what’s not; you may be finding your vocabulary; you may still want to engage and play with different political positions.

I think that there is a general lack of understanding of young people’s political and personal autonomy. Many of the ways in which young people are framed in policies surrounding the internet are perhaps better suited to very young children and tweens than to teens. And I think that’s just because it’s very difficult to create policies that meet the needs of all different types of groups. It is just very difficult from a policy perspective to balance those responsibilities of protection and empowerment at a time when teens in particular are going through a stage that might mean wrestling with all sorts of quite difficult behavioural decisions and choices.


The SELMA project: What do you think the main gaps or areas for development are in terms of how we are thinking about the problem of online hate speech at a policy level, both generally and as it relates to young people specifically?

Victoria: I think in general for me there are two big problems, one which I think is decreasing and one which is not going away.

The one that is decreasing is the difficulty of automatically identifying examples of hate speech and making sure they are taken down quickly. If you compare hate speech to other types of illegal online content, like child sexual abuse imagery, the latter can obviously quite easily be identified using hashing and matching technologies, and there is also perhaps less at stake in terms of removal and false positives. With hate speech, again, we want to see it removed quickly wherever and whenever it is posted, but it has been technically much more difficult to ensure that those decisions are made accurately.

I think that is improving. I don’t think that we will ever get to a position of it being 100% accurate, but we might get to the point where people can live with the levels of accuracy that we have. Equally, a lot of the big tech companies in particular are getting much better at prompts and nudging technologies which might enable us to make better decisions - asking “do you really want to post this?”, for example.

I think the bigger problem is the fact that there are margins to what counts as illegal hate speech, and those blend into what might be legitimate but dislikeable political views. Insofar as we live in a democracy, I think we probably need a bigger societal discussion about how we deal with that, quite apart from the online side. There is obviously a very big question at the moment about how we deal with, maybe not far-right, but more extreme right politics, which seems to have undercurrents - for example, in views towards immigration - which sometimes border, it seems to me, on hate speech or racism, but where we know that there have been very robust defences of it being a case of freedom of speech and political speech. So, I think that is going to be a fairly absorbing problem for the next few years.

How you deal with that socially comes first of all, and then, secondly, it will have its online implications. How do you police that? Can platforms make decisions about what is happening at the margins of illegal hate speech? How do they choose to police that, how transparent should they be, and how open to rights of appeal, et cetera? I think that’s particularly important for a project like SELMA because, insofar as there is an education-based element, it has to be able to have those difficult conversations.


The SELMA project: What role do you see for education specifically in tackling online hate speech, as opposed to the role that something like policy or regulation might play?

Victoria: I feel that there are three pillars to this. One is education, one is policy and the other is much wider public norms and public debate.

I think that education is really vital on this type of issue. In the first instance, we know that some of the things that alarm and upset young people online the most are related to violent content and hate content - we know that is not an uncommon experience - so at the very least we ought to be teaching them how to protect themselves from it and what strategies to employ if they see it.

But what SELMA does is go beyond that to try and, hopefully, nip these sorts of behaviours in the bud: to help young people understand a bit about why hate speech is so dangerous and damaging, and also to recognise the truth of how it arrives online. I think that’s what is so interesting about the project: it recognises that this is not an individual thing where one person simply decides to say something really mean online… it recognises network effects, peer pressure, context. So, I think there is a really crucial role both in protecting young people, as I say, who are exposed to it and who see it, but also in enabling individuals to understand where these behaviours come from and therefore, hopefully, to step back from them.

I do think, though, that there are limits to that approach. On the one hand you obviously have policy around the removal of that sort of hate speech and even, in the worst circumstances, prosecution. However, the bigger problem, I think, is understanding the social prompts and why these individuals feel the need to talk about others in these terms - what is driving that? Is it a mix of poverty? Is it economic exclusion? Where is that anger coming from? And I suppose if I am sometimes pessimistic about the ability of both education and policy to make great achievements, it is simply because I don’t think we are anywhere near understanding how to root out those much deeper social causes.
 

__________________

Prof Victoria Nash’s replies have been very lightly edited due to space constraints.

The title of this article is a play on the title of Jeremy Waldron’s famous book ‘The Harm in Hate Speech’ (Harvard University Press, 2012).
