Humankind has a long history with misleading information. The spread of misunderstood or fabricated information for personal gain was described over 2,000 years ago by Aristotle, and the practice acquired its negative connotation around the time of the World Wars (Lasswell, 1934).
The modern information environment seems harder to navigate than ever: concepts like the post-truth era generate friction, and empirical data show how misleading information affects democracies (Desigaud, 2017; Guess, Nyhan and Reifler, 2018; Machado and Vanderbiest, 2017; Silverman, 2015). So what exactly are we dealing with, and what should we keep in mind as educators?
Information disorder is essentially a more inclusive term for the phenomena that used to be referred to as fake news. The latter term was widely popularised in 2016, during and after the United States presidential election and the UK Brexit referendum, as well as the 2017 French presidential election, but has since been appropriated by politicians to dismiss news organisations they find disagreeable (Wardle and Derakhshan, 2017); for this reason, information disorder is preferred instead.
There are different types of information disorder, and the intent of the creator or multiplier of the information becomes key when trying to differentiate between them.
Misinformation is false, but it is not created with the intent to be harmful (Wardle and Derakhshan, 2017); it can therefore include everyday interactions in which miscommunication takes place, as well as content that could be understood in several ways.
For example, after the fatal police shooting of 29-year-old Mark Duggan, which sparked the 2011 London riots, misinformation and panic spread from social to mainstream media. More than 2.6 million tweets were published about the riots, while official information was released slowly, so journalists used tweets as sources, some of which were naturally exaggerated or made up (Himma-Kadakas, 2017; Richards and Lewis, 2011). The tweeters probably did not intend to mislead; rather, they wanted to be helpful and share what they knew in the midst of the mayhem, but they might not have assessed the information adequately in the first place (Wardle and Derakhshan, 2017).
Disinformation is false and is created with the intent to harm a person, group or country. It can include imposter content, false context, manipulated content and fabricated content (Wardle and Derakhshan, 2017). Misinformation becomes disinformation when the creator or multiplier of the information has the intent to mislead the recipient (Karlova and Fisher, 2013).
The word derives from the Russian dezinformatsiya, a propaganda tool that does not necessarily aim to persuade the recipient that the information is true, but simply to create doubt and distrust in institutions, organisations or countries (Morgan, 2018; Wardle and Derakhshan, 2017). Scholars began to analyse disinformation campaigns as a distinct phenomenon within the framework of information disorder around 2014, when Crimea was annexed by the Russian Federation. As disinformation was spread by both Western and Russian media, it could no longer be seen as a one-sided tool in hybrid warfare (Dougherty, 2014).
Mal-information is fact-based, but it is used with the intent to harm, and includes leaks, harassment and hate speech (Wardle and Derakhshan, 2017). Leaks timed to land right before an important election are a good example, such as the photos of Canada’s Prime Minister Justin Trudeau wearing blackface as a 29-year-old teacher, which surfaced a month before the 2019 federal election (Howard, 2019). The photos were not manipulated and the event did happen, but the leak was timed by Trudeau’s opponents to harm his chances of being re-elected.
Actors therefore also play a central role in understanding information disorder through the analysis of intent: who is trying to dis-, mis- or mal-inform, and what is their motivation?
There are official agents, such as states or companies that handle public relations. In Western democracies, disinformation campaigns are usually conducted overtly by PR agencies, while state actors tend to be the governments of authoritarian regimes. One of the biggest state actors is China, whose government has been estimated to fabricate and post more than 400 million social media comments per year in order to distract the public from matters that might lead to protests or unrest against the regime (King, Pan and Roberts, 2016). Besides governments and public relations companies, official actors also include powerful families trying to influence the political sphere (Wardle and Derakhshan, 2017), as well as organised political groups of citizens.
Unofficial agents, on the other hand, work alone or in loose networks, with varying incentives (Wardle and Derakhshan, 2017). Some want to entertain, some want to harm a specific person, and some create false content to influence public opinion on an issue important to them (Wardle and Derakhshan, 2017). Others do it for the money, as advertising revenue has moved from mass media outlets to online communities and social media. For example, in the weeks leading up to the 2016 US presidential election, the small Macedonian town of Veles hosted more than 100 so-called fake news factories: groups of friends ran pro-Trump pages and fabricated stories according to what earned them the most traffic and ad revenue, with no regard for the content itself (Subramanian, 2017). And, of course, some people are unpaid trolls, expressing controversial or offensive views in order to start arguments and provoke others (Collins, 2019). Disinforming is part of individual information behaviour: a troll wants to deceive and will find a way to do so (Karlova and Fisher, 2013).
The problem is that the agents who create, produce and distribute falsified content are not the only ones who spread it. Digital content can be reproduced over and over again through likes and shares by agents who have no intent to harm or disinform whatsoever. Regardless of our ability to assess the quality of information, our social media habits make us all potential agents: many people share news articles on Twitter based on the title, photo or caption alone, without even opening the link (Gabielkov, Ramachandran, Chaintreau and Legout, 2016). As the volume of information grows, we spend less time analysing each piece of it. In the midst of digital pollution, it is hard to stay vigilant all the time.
We humans are faulty in that, unlike computers, which can be programmed and reprogrammed, we are more receptive to information we already believe to be true (Lewandowsky, Ecker, Seifert, Schwarz and Cook, 2012). We also tend to remember the first piece of information we hear on a matter even if it is corrected later, and homogeneous social networks reinforce such beliefs (Vicario, 2017).
So what can we do as educators? Increasing students’ media literacy, and our own understanding of the digital ecosystem, is key. If we cannot be misinformed in the first place, there is nothing to overwrite.