Understanding the proposed solutions to the problem of online mis/disinformation

Anya Schiffrin | Columbia University’s School of International and Public Affairs

In 2016, the votes for Brexit and Donald Trump and the later Cambridge Analytica scandal made the public aware of the prevalence of online disinformation (Wardle 2016; Tandoc et al. 2018). Outrage grew as information trickled out about the role of the Russian government, the lies spread by Stephen Bannon and the far-right Breitbart News Network, as well as Fox News. Attention turned to Facebook and Twitter, which were blamed for spreading lies in the relentless quest for clicks and likes. As journalists began writing about the spread of disinformation, the public and policy makers came to understand that the platforms’ business model was based on generating outrage and anger (Bell 2016; Angwin and Grassegger 2017). The problem was systemic.

It was a shocking wake-up call. The consequences went well beyond interference in democratic elections. Fury at the platforms intensified as it became clear that rumors and hate spread on Facebook and WhatsApp had fueled attacks on Muslims and ethnic minorities in India, Myanmar, and other places (Ingram 2018). By 2019, the anti-vaxxer movement had grown so large that measles had returned in New York, the Philippines, and Italy, and polio had made a comeback in Pakistan (Masood 2019; Shahzad and Ahmad 2019; McNeil Jr 2013). During the Covid pandemic of 2020, conspiracy theories and vaccine disinformation spread widely online, further fueling worries about the power of big tech to spread false information. After the 2020 US presidential election, the January 6th storming of the US Capitol reinforced the view that a “weaponization of the digital influence machine” had taken place (Nadler, Crain & Donovan 2018).

In the wake of 2016, policy makers, the platforms, entrepreneurs, journalists, and educators mobilized, setting up committees, commissions, and research groups and searching for new ways – and even new laws and regulations – to tackle the problem of online disinformation. These steps were taken while the academic research was still underway, so the proposed solutions were often not informed by evidence about what would work, or even by a deeper analysis of the problem. It was a case of needing to do something, so actions were taken before all the needed information was in (Engelke 2019; Nelson 2018).

This paper outlines a taxonomy of solutions that covers many of the different initiatives aimed at solving the problem of online mis/disinformation, providing a brief account of the rationales and an update as to where things currently stand. Our original area of study was the post-2016 period. Now, five years later, it is clear that the European Union is far ahead of the US in the regulation of big tech, and some of the EU’s policies have implications for platform practices globally. It is also clear that the spread of mis/disinformation during the Covid pandemic, including anti-vaccination mis/disinformation, has accelerated the desire to take action.

Why such different ideas about solutions?

Yet beyond a lack of political will, there is disagreement as to what actions should be taken. Why do so many thoughtful and experienced people come up with such radically different solutions to the problem of online mis/disinformation? One obvious reason is that there are very different financial interests involved. A second has to do with the underlying beliefs of the groups proposing the solutions, including the US aversion to government regulation.

The third reason could be viewed as the exposure effect: repeated exposure to an idea breeds support for it (Zajonc 1968). Organizations do what they are used to doing, and this familiarity makes them think they are doing the right thing. Journalists believe in journalism and so think that more and better journalism is the solution. Wedded to the belief that trust in the media is somehow related to journalism practice, journalists also hope to improve standards and build trust through engagement and fact-checking (Ferrucci 2017; Wenzel 2019; Nelson 2018; Graves 2016). Fact-checkers believe that supporting a culture of truth may save not just journalism but also democracy (Ferrucci 2017; Graves 2016; Wenzel 2019; Cheruiyot and Ferrer-Conill 2018; Amazeen 2018a). Journalists also believe that engaging with audiences can restore journalism to its rightful role in society (Robinson 2019; Ferrucci 2017). Groups that teach media literacy believe that it is the answer (Mihailidis and Viotty 2017). The large platforms and tech entrepreneurs seek to suppress disinformation by doing what they know how to do: hiring content moderators, changing platform algorithms, and blocking certain kinds of false or incendiary content (Dreyfuss and Lapowsky 2019). Similarly, regulators seek regulation. This innate bias towards what is familiar is part of why different actors have backed different solutions.

The demand for disinformation and the supply of it

This paper proposes an analytical framework with which we can assess different solutions and which, we believe, clarifies the limitations of each. We find that the economics terms “supply side” and “demand side” provide a useful framework for understanding the belief systems of the different groups involved in promoting solutions to the mis/disinformation problem. Guy Berger notes that the creation and dissemination of information lies on a continuum that includes production, transmission, reception, and reproduction, and many of the efforts aimed at fixing the problem emphasize one part of the continuum over another (Posetti & Bontcheva 2020; Author interview, Guy Berger 2019).

Those regulators who focus on supply and transmission understand, of course, that there has always been some mis/disinformation – a point frequently made by those focused on audience consumption patterns. Societies can cope with small amounts of limited reach (such as a niche magazine with low circulation), but an excessive supply of false information and rumors seeps into mainstream conversations, overwhelms audiences, results in cognitive fatigue, and makes it hard to distinguish true information from false. Repeated exposure may aggravate the problem: the more audiences see something, the more they believe it (Pennycook et al. 2018), even if it is factually incorrect and later discredited. Corrections may not be seen by the people who originally saw the false information and may not be persuasive when someone’s mind is made up and they want to see their ideas confirmed (Kolbert 2017). Indeed, corrections, rather than having the intended effect, may only enhance distrust (Karlsson et al. 2017).

The regulators who focus on the prevalence of mis/disinformation see the problem as one of excess supply. They focus on the incentives to supply it and the consequences of that excess, and they ask how changing incentives, by introducing regulations, codes of conduct, and the like, can lessen the supply of mis/disinformation. The supply siders want Facebook, WhatsApp, and Twitter to limit what they circulate and promote and to stop allowing people to make money off producing and disseminating false information. Another way to change the platforms’ incentives would be to make them liable for what appears on their platforms. To the extent that such changes in incentives do not suffice, some regulators believe regulations are necessary, including laws against hate speech or limits on the ability to make certain messages go viral.

By contrast, others focus on improving the ability of consumers to evaluate the information with which they are confronted. They may be relatively unconcerned, arguing that “fake news” and mis/disinformation have always existed, that there is little evidence audiences are persuaded by what they see online, and that, accordingly, there is no reason to panic (Allcott, Gentzkow and Yu 2019). The tech companies fall into this category, expressing the view that they should not be blamed and that responsibility lies with society more generally and with individual users. Some, including Facebook and various foundations (Murgia 2017), fund the teaching of media literacy in schools so that audiences will become more discerning consumers. Others believe in labeling non-verified news in the hope that this will get audiences to stop circulating it. Facebook is funding fact-checking efforts throughout the world (Funke 2019). Many free expression groups, particularly in the US, oppose hasty government responses that broaden censorship and platform liability, arguing that these could do long-term harm.

The role of motivated reasoning, financial incentives and ideology

Incentives and ideology help us understand the positions taken by various parties on which measures to deal with mis/disinformation are desirable. A term that originated in social psychology and is used in economics to understand different perspectives is “motivated reasoning”, or “reasoning in the service of belief” (Epley and Gilovich 2016; Kunda 1990).

Unsurprisingly, many of the beliefs about solutions to the problem of online mis/disinformation often correspond with the financial incentives particular to each belief-holder. As US muckraking journalist Upton Sinclair is quoted as saying: “It is difficult to get a man to understand something when his salary depends on his not understanding it”.

In the case of the tech companies, there is a vast amount of money at stake. Facebook and Twitter don’t want to be regulated or change their business models, so they would rather off-load responsibility for fixing the problem and donate small amounts of money to help solve it (author interview, anonymous, May 2019). Their ideology is often that of techno-libertarianism, so they reject regulation, or at least regulation that is likely to affect their revenues.

Financial incentives underpin the belief systems of the tech giants, but belief in certain solutions over others also stems from underlying ideology and belief in what one does: “If you have a hammer then everything looks like a nail” (Maslow 1966). Journalists believe in journalism and so are more likely than others to believe that more and better journalism is the solution; wedded to the belief that trust in the media is somehow related to journalism practice, they also hope to improve standards and build trust through engagement and fact-checking. Foundations, similarly, are accustomed to giving grants, so they see the problem as one they can help solve by funding organizations trying to research and fix it.

Solutions that focus on reducing the supply of false information online are controversial and difficult to implement. Fixes that focus on audience demand may seem more do-able in the short term. It takes years of complicated negotiations to pass a law about online hate speech or transparency of political advertising. Giving a grant to a pre-existing news literacy NGO or a fact-checking organization can be done in a matter of weeks. The appeal of short-term solutions to the tech companies is obvious. Offloading the problem of mis/disinformation takes the onus away from the platforms and puts it on journalists and consumers (Bell 2016). It would be simple and convenient if these ideas worked, but they were implemented at a time when evidence was lacking. Moreover, they are expensive, hard to scale, and slow (Schiffrin et al. 2017). 

The role of national bias: US focuses on individual responsibility, Europe is more supportive of regulation. Repressive regimes are repressive

In looking around the world at the different solutions proposed, it is clear that national bias and ideology play an important, if unspoken, role. The US is more suspicious of government regulation than Europe and less likely than Germany to push for government-led solutions. Differences within the EU Commission as to how to solve the problem stem in part from the ideologies of Commission officials, with members from former Communist countries less likely to support government regulation and more likely to skew towards voluntary efforts by the platforms (author interviews, Brussels, March 2019).

Governments with less open, or downright repressive, attitudes toward freedom of expression have little compunction in cracking down on the platforms and using the fear of fake news as a reason to practice censorship online. Cuba, China, Singapore, Turkey, and Vietnam are all examples that come to mind. For instance, in Singapore, journalists face potential jail time if they publish stories that are perceived as “falsehoods with malicious intent or going against Singapore’s public interest” under the 2019 law intended to combat mis/disinformation (Vaswani 2019).

Many of the US responses highlight the individual responsibility of audience members, exhorting people not to circulate or forward information that is false and to learn how to tell the difference between true and false information. Alan Miller (2019), the founder of the US educational nonprofit the News Literacy Project, explains, “We need a change in consciousness to counteract this fog of confusion and mistrust. First, we must understand – and take responsibility for – our roles in the 21st-century information ecosystem. Misinformation can’t spread virally unless we infect others with it. We need to slow down before we hit ‘share’ or ‘retweet’ or ‘like,’ and ask ourselves if doing so will mislead, misinform or do harm”. But without regulation this “slowing down” is unlikely to occur. Those spreading information often have reasons for doing so beyond mere carelessness: political disinformation and anti-scientific beliefs such as those of the anti-vaxxer movement are just two examples.

Defining our terms

There are many kinds of mis/disinformation, and several attempts have been made to provide typologies. Tandoc, Lim, and Ling (2017) reviewed 34 scholarly articles published between 2003 and 2017 and came up with a typology that included satire, parody, false images, and advertising and public relations, which sometimes overlap with propaganda. For our purposes we will consider, in a following chapter, the relationship between propaganda and disinformation and focus too on what Tandoc, Lim, and Ling describe as “news fabrication”. This is often done with the intention to deceive, and the false news can be difficult to identify because it presents as a traditional piece of news with similar format and conventions.

As Tandoc, Lim, and Ling (2017) note:

As with the case of parody, a successful fabricated news item, at least from the perspective of the author, is an item that draws on pre-existing memes or partialities. It weaves these into a narrative, often with a political bias, that the reader accepts as legitimate. The reader faces further difficulty in verification since fabricated news is also published by non-news organizations or individuals under a veneer of authenticity by adhering to news styles and presentations. The items can also be shared on social media and thus further gain legitimacy since the individual is receiving them from people they trust.

The authors also note that “facticity” is another question in the determination of false news, as the false information might be derived from, or rely on, something that is true or partially true: for example, the right-wing website that slaps a false headline on an article from a reputable media outlet. Audience matters as well because under certain conditions, audiences are more receptive to false news.

Another strand of discussion around the problem of false news has been the recent interest in disinformation, which is false information spread deliberately to deceive. The English word disinformation resembles the Russian word “dezinformatsiya”, derived from the title of a KGB black propaganda department. The typology created by Claire Wardle, executive director of First Draft, discusses this phenomenon and has been widely used. In her influential papers and reports Wardle argued that the term “fake news” is misleading, and in 2017 she released her rubric “Fake News, It’s Complicated”, which is now a standard reference in the discussion of the problem. In this paper, Wardle describes the types of mis/disinformation as satire and parody, misleading content, imposter content and fabricated content, false connection, false context, and manipulated content (Wardle & Derakhshan 2017). Her paper with Hossein Derakhshan also included a rubric of the different actors and targets, such as states targeting states, states targeting private actors, and corporations targeting consumers (Wardle & Derakhshan 2017).

They further make the point that the intentions of the person creating and/or amplifying the false information are relevant to the definition.

  • Misinformation is when false information is shared, but no harm is meant.
  • Disinformation is when false information is knowingly shared to cause harm.
  • Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.

Of course, disinformation disguised as parody can spread into conversations.[1]

A word about trust

Many of the “demand-side” solutions proposed for the problem of mis/disinformation are grounded in discussions about trust and credibility. Both are part of the larger question of persuasion, and fears of persuasion underlie the anxiety about mis/disinformation. Societies worry that repeated exposure to an argument (whether right or wrong) will influence behavior, and, indeed, the rise in hate crimes, the election of demagogues in parts of the world, and the drop in vaccination rates in many countries all suggest that online mis/disinformation is a powerful persuader. Parsing the impact of information on the human psyche is extremely difficult, if not impossible, but understanding what we know about trust in media is part of thinking about the impact of information.

There are, of course, historical precedents. Some of the early discussion about media trust took place in the period around World War II when intellectuals in Europe and the US (particularly those in the Frankfurt School) tried to understand how citizens could be susceptible to Nazi propaganda (Jeffries 2016). In the US there was worry about the influence of demagogues, such as Father Coughlin, who used radio to get their messages across. Fear of new kinds of technology has often contributed to fears of mis/disinformation (Tucher 2013) but we believe that it’s a mistake to dismiss such fears as mere Luddism. Rather, there were objective political catastrophes taking place, which were to have global consequences, and worrying about that and trying to understand the role of propaganda, including how it was created, disseminated, and had influence, was a necessary response to the times.

Demand Side efforts: media literacy, building trust and engagement with media

The rise of Fascism and Communism in the 1920s and 1930s provides a backdrop to the rise of media literacy. One of the earliest efforts was former journalist Clyde Miller’s attempt in the 1930s to teach U.S. schoolchildren how to understand and resist propaganda. Miller worked at Columbia Teachers College for 10 years, during which time he raised one million dollars from Boston businessman Edward A. Filene for the Institute for Propaganda Analysis (IPA). Miller’s story has not been fully told and provides important insights for current debates. His taxonomies of propaganda techniques and his work analyzing examples of disinformation anticipated many of the techniques used today. Miller and his colleague, Violet Edwards, worked closely with teachers and provided material for them to use, as well as weekly mailings for school children.

As well as media literacy efforts, there are other efforts that focus on audience demand for and trust in quality journalism. These efforts try to build trust in journalism by establishing journalism as a force for truth-telling (in the case of fact-checking) or by trying to make media outlets relevant to audiences (through community engagement efforts). After 2016, foundations and the platforms reached for fixes and funded efforts that tackled the demand side for a range of reasons. Facebook wanted to avoid regulation and, further, believing in the importance of free dissemination of ideas and information online, thought that helping audiences become more educated would be a suitable fix. We examine the efforts to combat mis/disinformation online by building trust in journalism and the ability of audiences to distinguish good from bad information.

Supply side solutions: dissemination, deplatforming, regulation, provision of quality information

Our taxonomy includes solutions related to the supply, transmission, and reproduction of online mis/disinformation: attempts to use algorithms and machine learning to block and suppress content, and the possibilities for regulation. This includes the use of artificial intelligence and natural language processing, and we assess the likelihood of these approaches being effective and able to scale. There are also a few initiatives (such as NewsGuard, the Trust Project, and the Journalism Trust Initiative) that rate news outlets and propose standards for journalists to follow; we suggest that these may be faster to scale than the tech solutions of small start-ups.
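
To make concrete what “using machine learning to flag content” can look like in practice, the sketch below shows a toy text classifier that scores posts against previously fact-checked examples and queues high-scoring items for human review. It is a minimal illustration only: the training examples, the threshold, and the pipeline are invented for this sketch and do not describe any platform’s actual moderation system, which relies on far larger datasets and far more sophisticated models.

```python
# Illustrative sketch only: a toy classifier that flags posts for human review.
# The labeled examples and the 0.5 threshold below are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = previously fact-checked as false, 0 = not.
posts = [
    "Vaccines contain microchips that track you",
    "Local clinic extends weekend vaccination hours",
    "Drinking bleach cures the virus, doctors confirm",
    "Health ministry publishes updated case counts",
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score new content; items above the (arbitrary) threshold are queued for
# human moderators rather than removed automatically.
new_posts = ["Miracle cure suppressed by doctors", "Polling stations open at 8am"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    if score > 0.5:  # illustrative threshold only
        print(f"FLAG FOR REVIEW ({score:.2f}): {post}")
    else:
        print(f"OK ({score:.2f}): {post}")
```

Even this toy example hints at why such systems are hard to scale responsibly: a classifier only knows the patterns in its labeled data, so novel falsehoods, satire, and legitimate reporting about falsehoods are easily mis-scored, which is why most proposals pair automated flagging with human review.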

Other supply side ideas include the provision of quality information by governments, the private sector, and foundations. The tech companies have also begun doing this by providing, for example, trusted information about elections or Covid-19 vaccines. Defamation lawsuits brought by people who have been injured by online mis/disinformation are another attempt at affecting the supply, as is the deplatforming of those who break the platforms’ rules. Disclosure of who is putting out mis/disinformation is another solution: journalists cover the influencers and the troll farms. Some governments regulate political advertising.

Another set of supply side laws are those that affect the liability of the tech companies and thus incentivize them to remove illegal speech. In the US there has been an extensive debate about modifying Section 230, which could allow people who have been harmed by online speech to sue the tech companies. However, the US commitment to the First Amendment and free expression would preclude many possible solutions that other countries might be willing to undertake (Benkler, Faris, Roberts 2018), and so far Section 230 has not been modified.

Germany was the first European country to pass a law making the tech companies liable for illegal speech. The so-called NetzDG law made the platforms responsible for repeated violations, levying fines on them.

Many “copycat laws” that explicitly reference NetzDG have popped up in at least 13 countries (according to Freedom House): Turkey, Singapore, Russia, Venezuela, Malaysia, the Philippines, Honduras, Vietnam, Belarus, Kenya, India, France, and Australia. In many of these countries, critics say the law is used to censor online speech.

In June 2020, the EU announced a forthcoming Digital Services Act, which will consider how much liability tech giants should face for the content they host; a draft was released on December 15, 2020. The rules set limits on content removal and allow users to challenge censorship decisions but do not address user control over data or establish requirements that the mega platforms work towards interoperability. The other half of this legislative package is the Digital Markets Act, which targets “gatekeepers”: core platforms that act as a gateway between business users and customers and whose unfair practices and lack of contestability lead to higher prices, lower quality, and less innovation in the digital economy.

The United Kingdom is also planning an expanded government role and has broadened the discussion to cover harms that result from online mis/disinformation as well as illegal speech. Proposals include a new regulatory unit, a ‘Duty of Care’ for platforms, liability and fines for the social media platforms, codes for political advertising, mandated disclosure by social media platforms, auditing and scrutiny of the platforms and their algorithms, protection of user data, and antitrust measures. On April 7, 2021 the Competition and Markets Authority (CMA) launched the Digital Markets Unit (DMU) to introduce and enforce a new code of conduct to improve the balance of power between the platforms and news publishers. The DMU will coordinate with other UK organisations, such as Ofcom, and with international partners grappling with tech regulation, but it is still awaiting government legislation to give it the powers it requires.

Moves by the tech companies

Since we began our research, the tech companies have stepped up their efforts to control mis/disinformation, though not to the satisfaction of their critics. They’ve expanded their labeling, fact-checking and removal efforts. They’ve also dramatically increased the amount of money given to support journalism and journalism outlets, including extra grant making in 2020 during the Covid pandemic.

Facebook’s long-awaited oversight board[2] (its supreme court for content moderation decisions) launched in May 2020 and in spring 2021 handed down a much-publicized but inconclusive decision about the deplatforming of Donald Trump.

During the November 2020 US election, Facebook promoted its election integrity measures[3] including: removing misleading voting information, blocking political and issue ads the week prior to the election, and partnering with Reuters and the National Election Pool to provide “authoritative information about election results”.

Conclusion

In short, since 2016 there have been many attempts at curbing the spread of online mis/disinformation or at making recipients less susceptible.

The challenge is to find solutions that work, that do not threaten free expression, and, above all, that cannot be gamed by interest groups such as the tech giants or politicians who are not acting in good faith. Finding laws that cannot be abused by those in power will be difficult. But there is a lack of evidence showing that fixes like fact-checking, media literacy, community engagement, and tweaking algorithms are sufficient. Further, the tech companies have demonstrated that voluntary codes of conduct are not enough. It is the threat of regulation that propels them to act. For this reason, we argue in the final parts of this dissertation that regulations – most likely originating in Europe – will be an essential part of fixing the problem. We further support the initiatives underway to create large funds that will support public-interest media whether by expanding public-service broadcasting or by supporting small, community-run news outlets.

What is clear is that, while the problem is urgent, there is disagreement as to the solutions and unwillingness by Facebook, Twitter, and Google to change their business models. Given how serious the problems of online mis/disinformation are, it is hard to see how they will be solved. More likely we will continue to see fragmented and piecemeal measures.

Bibliography

Allcott, Hunt; Matthew Gentzkow and Chuan Yu. “Trends in the diffusion of misinformation on social media”. Research & Politics (2019): 1-8

Amazeen, Michelle. “Practitioner perceptions: Critical junctures and the global emergence and challenges of fact-checking”. International Communication Gazette (2018a): 1-21.

Angwin, Julia & Hannes Grassegger. “Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children”. ProPublica, 28 June 2017.

Bell, Emily. “Facebook is eating the world”. Columbia Journalism Review, 7 March 2016.

Benkler, Yochai; Robert Faris and Hal Roberts. Network Propaganda. Oxford: Oxford University Press, 2018.

Cheruiyot, David and Raul Ferrer-Conill. “Fact-checking Africa, epistemologies, data and the expansion of journalistic discourse”. Digital Journalism 6.8 (2018): 964-975.

Dreyfuss, Emily and Issie Lapowsky. “Facebook is changing newsfeed (again) to stop fake news”. Wired, 10 April 2019.

Edwards, Violet. Group leader’s guide to propaganda analysis. New York City, Institute for Propaganda Analysis, 1938.

Engelke, Katherine. “Audience Participation in Professional Journalism: A Systematic Literature Review”. Draft paper for the 2019 ICA.

Epley, Nicholas and Thomas Gilovich. “The Mechanics of Motivated Reasoning”. Journal of Economic Perspectives 30.3 (2016): 133-140.

Ferrucci, Patrick. “Exploring public service journalism: Digitally native news nonprofits and engagement”. Journalism & Mass Communication Quarterly 94.1 (2017): 355-370.

Funke, Daniel. “In the past year, Facebook has quadrupled its fact-checking partners”. Poynter, 29 April 2019.

Graves, Lucas. Deciding what’s true: The rise of political fact-checking in American journalism. Columbia University Press, 2016.

Ingram, Matthew. “Facebook now linked to violence in the Philippines, Libya, Germany, Myanmar, and India”. Columbia Journalism Review, 5 September 2018.

Jeffries, Stuart. Grand Hotel Abyss: The Lives of the Frankfurt School. Verso, 2016.

Karlsson, Michael, Christer Clerwall and Lars Nord. “Do Not Stand Corrected: Transparency and Users’ Attitudes to Inaccurate News and Corrections in Online Journalism”. Journalism & Mass Communication Quarterly 94.1 (2017): 148-167.

Kunda, Ziva. “The case for motivated reasoning”. Psychological Bulletin 108.3 (1990): 480-498.

Lapowsky, Issie and Caitlin Kelly. “FTC reportedly hits Facebook with record $5 billion settlement”. Wired, 12 July 2019.

Masood, Salman. “Pakistan’s War on Polio Falters Amid Attacks on Health Workers and Mistrust”. The New York Times, 29 April 2019.

McNeil Jr, Donald. “Pakistan battles polio, and its people’s mistrust”. The New York Times, 21 July 2013.

Mihailidis, Paul and Samantha Viotty. “Spreadable spectacle in digital culture: Civic expression, fake news, and the role of media literacies in ‘post-fact’ society”. American Behavioral Scientist 61.4 (2017): 441-454.

Miller, Alan. “Stop the misinformation virus: Don’t be a carrier”. Medium, 22 April 2019a.

Murgia, Madhumita. “Facebook launches $14m collaborative news literacy project”. Financial Times, 3 April 2017.

Nadler, Anthony; Matthew Crain & Joan Donovan. “Weaponizing The Digital Influence Machine: The Political Perils of Online Ad Tech”. Data & Society, 2018. Retrieved 4/30/20: https://datasociety.net/library/weaponizing-the-digital-influence-machine/

Nelson, Jacob. “Partnering with the Public: The Pursuit of ‘Audience Engagement’ in Journalism”. Conference presentation. Association of Internet Researchers, Montréal, Canada, October 10-13, 2018.    

Pennycook, Gordon; Tyrone Cannon and David Rand. “Prior exposure increases perceived accuracy of fake news”. Journal of Experimental Psychology: General 147.12 (2018): 1865-1880.

Pomerantsev, Peter. “Authoritarianism goes global (II): The Kremlin’s information war”. Journal of Democracy 26.4 (2015): 40-50.

Posetti, Julie & Kalina Bontcheva. “Disinfodemic: Deciphering COVID-19 disinformation”. UNESCO Policy brief, 2020. Retrieved 4/30/20: https://en.unesco.org/sites/default/files/disinfodemic_deciphering_covid19_disinformation.pdf

Robinson, Sue. “Crisis of shared public discourses: Journalism and how it all begins and ends with trust”. Journalism 20.1 (2019): 56-59.

Schiffrin, Anya et al. Bridging the Gap: Rebuilding citizen trust in the media. Global Investigative Journalism Network, 2017.

Shahzad, Asif & Jibrad Ahmad. “Monstrous rumors stoke hostility to Pakistan’s anti-polio drive”. Reuters, 2 May 2019.

Tandoc, Edson; Zheng Wei Lim and Richard Ling. “Defining ‘Fake News’”. Digital Journalism 6.2 (2017): 1-17.

Tucher, Andie. The True, the False, and the “Not Exactly Lying”. In M. Canada (Ed.), Literature and Journalism: Inspirations, Intersections, and Inventions from Ben Franklin to Stephen Colbert. New York: Palgrave Macmillan (2013): 91-118.

Vaswani, Karishma. “Concern over Singapore’s anti-fake news law”. BBC, 4 April 2019.

Wardle, Claire & Hossein Derakhshan. Information Disorder: Toward an interdisciplinary framework for research and policy making (Council of Europe Report DGI). Strasbourg: Council of Europe, 2017.

Wenzel, Andrea. “To verify or disengage: Coping with ‘Fake News’ and ambiguity”. International Journal of Communications 13 (2019): 1977-1995.

Zajonc, Robert. “The attitudinal effects of mere exposure”. Journal of Personality and Social Psychology, 9 (Monograph Suppl. no. 2, Pt. 2, 1968), 1-27.

 

END NOTES

[1] A relatively harmless example is the letter purportedly written by F. Scott Fitzgerald about the Spanish Flu which circulated widely in March 2020.
https://www.reuters.com/article/uk-factcheck-quarantine-fitzgerald-lette/false-claim-this-is-a-1920-letter-from-scott-fitzgerald-in-quarantine-during-the-spanish-influenza-idUSKBN21733X
More serious examples can be found in the works by Peter Pomerantsev and Yochai Benkler, Robert Faris and Hal Roberts which we cite frequently in this dissertation.
[2] https://www.theguardian.com/technology/2020/sep/24/facebook-oversight-board-launch-us-election
[3] https://www.facebook.com/zuck/posts/10112270823363411