Chloe Colliver

Chloe Colliver is a leading voice against disinformation and online hate speech.

As Head of Digital Policy and Strategy at the Institute for Strategic Dialogue (ISD), Chloe studies extremist messaging and conspiracy theories online to equip organisations with an eye for disinformation.

Via The Motivational Speakers Agency, we interviewed Chloe to hear her insight into how Covid-19 messaging and political campaigns are influenced by disinformation. Build resilience against online harassment and manipulation with our latest interview.

You have spoken about disinformation and the Covid-19 vaccine. What was the biggest source of disinformation?

When we talk about disinformation, we're specifically thinking about intentionally false information. So, that's separate from misinformation: the general false information you might share with your friends and family, or see on your social media from people in your networks.

With disinformation, it's quite hard to compare the reach of disinformation from, say, foreign states like China with that from politicians, celebrities or influencers who might be shouting it from their own accounts or profiles. There's some research this year that shows, particularly around the Covid-19 pandemic, that disinformation was disproportionately shared by celebrities online, which is a very interesting finding.

It shows the importance of public figures in disseminating false narratives and conspiracy theories around really important issues like public health.

Why are unfounded conspiracy theories so quickly believed? Is there a social or psychological element to it?

Conspiracy theories are as old as time, really, and they've always thrived in times of crisis and social upheaval. That is partly due to the way our brains work: we are naturally curious about conspiracy theories.

There are a few reasons for this. People are drawn to conspiracy theories in part to satisfy a psychological motive: the need for knowledge or certainty. We sometimes see that lower levels of education correlate with susceptibility to conspiracy theories.

That’s not because those people are silly. It’s because they’re actually in search of knowledge, often in places that aren’t reliable, and they don’t have the tools or the people around them to help them know where reliable and trusted sources of knowledge might come from.

But then there's also a psychological need to feel safe and secure in the world. That's also part of why people seek out, and believe, conspiracy theories. [Covid-19] is a really good example of this, because a genuine external threat can affect how people interpret information.

And then finally, the other psychological aspect to this is that the science suggests people have a desire to feel good about themselves as individuals, but also about the groups they're part of.

So there's a bit of an in-group dynamic that conspiracy theorists often promote to enhance the sense of belonging to a community, or of opposition to a supposed 'bad guy'. And we see that a lot with conspiracy theories that overlap with extremist propaganda, for example.

How can organisations ensure clear and concise communication with their consumers to avoid disinformation?

So, our team at ISD have done quite a lot of work, both with businesses and in schools and youth communities, on building resilience against disinformation and other kinds of online harms. Transparency and clarity of communication with peers and networks are really critical to that.

We're really thinking about getting ahead of the curve on these issues: building resilience and helping people think critically about information, rather than debunking it after the fact, which can often be counterproductive or very difficult to do successfully.

The advice that I would give to businesses or organisations working with large audiences or consumers is to always consider transparency and clarity in your messaging, and to make sure you’re directing people to sources of information about your products or your organisation that are clear and trustworthy.

That’s really the first step we can take to make sure we’re all taking part in a much more open information system that doesn’t promote these kinds of disinformation or conspiracy theories.

What role does disinformation play in political campaigns, like Brexit?

Disinformation is often most heavily publicised around elections or referendums. We saw this in a big way during the Brexit campaign, as well as recent elections like the US presidential election in 2020. Disinformation has always been part of the toolkit of political operations, and that’s no different these days.

But what we see now is that the social media revolution has made it much more accessible to many more people, so the bar for creating, promoting and targeting disinformation is much lower than it has been in the past.

We no longer rely on leaflets or TV campaigns to get those messages out; instead, very cheap, targeted ads on Facebook do the job for you. So we see disinformation at a much greater scale and at a hyper-targeted level, which means people are receiving very personalised disinformation.

How has the digital revolution increased hate speech, and what more must social media sites do to clamp down on online bullying?

It's difficult to tell whether social media has created more hate or more hate speech, but what it's certainly done is make it much more accessible to many more people. The visibility and accessibility of hate speech mean that the victims of this kind of content are manifold, and they're receiving [hate] not just in the streets, but also in their bedrooms, on their phones and all around them. So, we really need to apply existing laws better when it comes to hate and harassment in the online space.

That's one aspect of this. We're not really set up very well to apply existing legal parameters in a very fast-paced information world. But we also need to adjust those laws and expectations, given that we have a whole new way of communicating with one another.

There are a number of developments looking at whether online platforms should take responsibility for some of the content on their sites, including hate speech, terrorist content and disinformation. There's a really fine line between avoiding censorship, or the expectation of censorship, from these platforms, and keeping people safe and secure at the same time.

What we can see is that platforms need to enforce their own existing terms of service much more effectively to protect people from hate speech and targeted harassment.