Automated Disinformation Campaigns

Kirsi Helkala | Norwegian Defence University College / Norwegian Defence Cyber Academy



This short paper addresses automated disinformation campaigns: what they are, who runs them, and how they are carried out. The text is based on a presentation given at the “Pacem in Terris: War and other Obstacles to Peace” workshop, 19-20 September 2023.

1.    What is disinformation?

Merriam-Webster defines disinformation as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth”. Figure 1 presents an illustration of disinformation.

2.    Who spreads disinformation?

In the field of cybersecurity, we often use categories for threat actors.

I have used the threat-actor pyramid of Telenor, a Norwegian telecommunications service provider (Telenor, 2018), to illustrate these; see the left side of Figure 2.

On the right side of Figure 2, the actors that spread disinformation are described in more detail and placed at an estimated threat level based on their followers, resources, and individual skills. Governments and their governmental units belong to the state level. Examples of contractors are troll[1] farms, disinformation-as-a-service[2] providers, and collections of bots[3] supervised by contractors. Disinformation spreaders whose followers, resources, and/or skills are comparable to those of organized crime include fake-news websites, highly partisan media outlets, mainstream media, political parties, public relations firms, and, again, bots supervised by these. At the level of politically motivated hacktivists, the corresponding spreaders are, for example, conspiracy theorists, politicians, and influencers. And at the individual level, there are independent trolls and ordinary citizens like yourself. Examples are drawn from the following sources: (Tucker et al., 2018), (ENISA, 2022), and (Helkala & Rønnfeldt, 2022).

The first three levels comprise the actors with the resources to run long-lasting, carefully planned information campaigns, just as they are able to carry out Advanced Persistent Threats (Telenor, 2018).

3.    Drivers of false beliefs

You have already seen hints of the automated parts of disinformation campaigns, as I have mentioned bots. Before we go deeper into those, let us first look at the drivers of false beliefs. These can be, and indeed are, used to build up disinformation campaigns.

Ecker et al., in their article “The psychological drivers of misinformation belief and its resistance to correction”, categorised the drivers of false beliefs into two main blocks: cognitive drivers and socio-affective drivers (Ecker et al., 2022). The cognitive drivers are further divided into three categories: intuitive thinking (lack of analytical thinking), cognitive failures (neglecting or forgetting source cues and/or counter-evidence), and illusory truth (familiarity, fluency). The socio-affective drivers, on the other hand, include source cues (elite, in-group, attractive), emotion (emotive information), and world view (personal views, partisanship).

These findings align well with those of the report “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature” (Tucker et al., 2018), which found the following.

  1. Elite behavior is driving political polarization. Partisan cues can also encourage partisans to accept and propagate inaccurate information.
  2. Group cues and usage of stereotypes can make the acceptance of mis/disinformation easier.
  3. Emotions are important. Anger makes people more likely to trust inaccurate information that supports their views; anxiety can have the opposite effect.
  4. People are more likely to be affected by inaccurate information if they see more, and more recent, messages reporting it as fact, irrespective of whether it is true.
  5. Viral mass-scale diffusion of messages is relatively rare.
  6. There is reason to believe that audio-visual messages can be both more persuasive and more easily spread than textual messages, but these dynamics need further study.

Regarding the last point, illustrations and videos are used to raise emotions and get attention. A disinformation campaign’s main narrative can be brought out and supported with selective illustration. In addition, visual disinformation can be hard for machine learning algorithms, as well as humans, to detect, especially if the image itself is not modified after the picture is taken. See some illustrative examples on the Quora[4] website showing deceiving pictures based on the perspective of the camera.

4.    Tactics for Spreading Disinformation

The report “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature” considered four tactics for spreading disinformation (Tucker et al., 2018).

  1. Selective censorship involves removing some content (opinions) from online platforms while leaving other content (opinions) alone, for example removing an opposing party’s points of view from discussions.
  2. Manipulation of search algorithms aims to make certain news stories (disinformation) more likely to appear in search results. Tactics include, for example, adding popular keywords to promote websites in search engine rankings and grouping websites with links pointing to each other. On social media platforms, this can be done through trending topics and hashtags on Twitter and Facebook.
  3. A third tactic is hacking sensitive and damaging information and then leaking it in either its real or a manipulated form.
  4. The fourth, and perhaps most important, tactic is directly introducing disinformation onto social media platforms and then helping to spread it.

Both bots and paid trolls can play important roles in manipulating search rankings, sharing hacked information, and directly introducing and sharing disinformation on social media platforms.
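The search-ranking manipulation in the second tactic can be illustrated with a toy PageRank-style computation. The sketch below is a simplified model under my own assumptions, not any real search engine’s algorithm: a “link ring” of sites pointing only at one target page inflates that page’s score relative to ordinary sites.

```python
# Toy PageRank sketch: a link farm (farm1..farm3) all pointing at one
# "disinfo" page inflates its score. Illustration only -- real search
# engines use many more signals and actively defend against this.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each site to the list of sites it links to."""
    sites = list(links)
    rank = {s: 1.0 / len(sites) for s in sites}
    for _ in range(iterations):
        # Restart probability spread evenly over all sites
        new = {s: (1 - damping) / len(sites) for s in sites}
        for s, outs in links.items():
            share = damping * rank[s]
            targets = outs if outs else sites  # dangling page: spread evenly
            for t in targets:
                new[t] += share / len(targets)
        rank = new
    return rank

links = {
    "honest": ["other"],    # ordinary sites linking naturally
    "other": ["honest"],
    "disinfo": ["farm1", "farm2", "farm3"],
    "farm1": ["disinfo"],   # farm pages exist only to boost "disinfo"
    "farm2": ["disinfo"],
    "farm3": ["disinfo"],
}

ranks = pagerank(links)
# "disinfo" ends up with the highest score despite having no organic links
for site, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{site}: {score:.3f}")
```

The farm pages contribute nothing of their own, yet the link structure alone roughly doubles the target’s score compared with the honestly linked sites.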

5.    Automation in disinformation campaigns

We now finally get to the automation part of disinformation campaigns. Let us concentrate on the fourth tactic, “directly introducing disinformation onto social media platforms and then helping to spread it”, and on how automation factors into it. I have divided this into three areas:

a)    Finding people to target

b)    Spreading the disinformation

c)    Writing the content

5.1 Targeting people

We receive targeted information (including advertisements, disinformation, and social engineering attacks) based on our demographics (race, sex, age, geographical location) and psychographics (activities, interests, and opinions) (Hayes, 2023; Hiller, 2021).

The social media platforms collect these data through access to our profiles and by archiving our likes, posts, comments, retweets, and so on. Other applications on our phones and PCs can carry out similar data collection.

Some social media platforms also offer advertisement services. A customer selects the targeting criteria and the period of the advertisement campaign. Once the bill is paid, the platform sends the advertisement to the people matching the criteria.
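In essence, such audience selection is a filter over user profiles. The sketch below illustrates the idea; the profile fields and criteria names are my own illustrative assumptions, not any real platform’s API.

```python
# Sketch of ad-platform audience selection: filter user profiles
# against a customer's targeting criteria. Field names are
# hypothetical, not drawn from any real advertising API.

profiles = [
    {"age": 34, "location": "Oslo", "interests": {"politics", "hiking"}},
    {"age": 62, "location": "Bergen", "interests": {"gardening"}},
    {"age": 29, "location": "Oslo", "interests": {"politics", "gaming"}},
]

criteria = {
    "min_age": 25,
    "max_age": 45,
    "location": "Oslo",
    "interest": "politics",
}

def matches(profile, c):
    """True if a profile satisfies all demographic/psychographic criteria."""
    return (c["min_age"] <= profile["age"] <= c["max_age"]
            and profile["location"] == c["location"]
            and c["interest"] in profile["interests"])

audience = [p for p in profiles if matches(p, criteria)]
print(len(audience))  # both Oslo profiles match -> 2
```

A disinformation customer uses exactly the same mechanism as an ordinary advertiser; only the payload differs.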

Some social media platforms share their data with other platforms or simply sell it. The platforms to which we knowingly give our data are called first-party data brokers. However, there are also third-party companies whose real business model is based on collecting and selling customer data. Let us focus on those third-party companies next.

Websites can contain scripts called trackers and web bugs, run by data supplier companies. When a user accesses a website that contains trackers, information about the site and the user is collected. You might also have answered a few online quizzes just for fun; these, too, can be used for data collection. Once data suppliers have collected this data, they usually sell it on to data brokers. Data brokers then take the purchased data, process it, and create new data objects. Data brokers further sell the data to other companies (for example, advertisers and political parties) but also to individuals, who can directly target the audiences they want to reach with their campaigns (Latto, 2020; Newberry, 2022; Usercentrics, 2021; Zawadziński, 2020).

Collecting and selling data is automated, and it happens fast. You can assume that by the time you lift your finger from the keyboard or screen after liking, sharing, or browsing a new site, your information has already been sold for further use.

5.2 Spreading disinformation

As shown earlier among the drivers of false beliefs, one factor was source cues. This means that if information comes from sources that are elite, belong to our in-groups, or are otherwise attractive, we believe the message more easily. This is where the bots and the trolls come in.

Bots can be built to impersonate an elite person or a group with plenty of followers, providing an easy distribution channel for disinformation. Bots or paid trolls can also gain access to closed groups, where they can spread disinformation and stir up emotions as trusted members. Several bots can also spread similar disinformation, creating more volume, which can give the receiver a false sense of truth.
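The volume effect can be sketched as simple template-based amplification: one claim, many bot accounts, slight surface variation so the posts do not look identical. This is a toy illustration under my own assumptions; it does not model any real platform’s API or its bot-detection defences.

```python
# Sketch of volume-based amplification: many bot accounts post slight
# variations of one claim. Template texts and account names are
# illustrative; no real platform API is modeled.

import random

TEMPLATES = [
    "Unbelievable: {claim}",
    "Why is nobody talking about this? {claim}",
    "My friend confirmed it: {claim}",
]

def generate_posts(claim, n_bots, seed=0):
    """Produce one varied post per bot account for the given claim."""
    rng = random.Random(seed)  # seeded for reproducibility
    posts = []
    for bot_id in range(n_bots):
        text = rng.choice(TEMPLATES).format(claim=claim)
        posts.append({"bot": f"bot-{bot_id:03d}", "text": text})
    return posts

posts = generate_posts("the water supply report was faked", n_bots=5)
for p in posts:
    print(p["bot"], "->", p["text"])
```

Note how cheaply volume is produced: the marginal cost of the fifth (or five-thousandth) post is essentially zero, which is precisely what makes the illusory-truth effect exploitable at scale.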

5.3 Writing content

Generative AI models, like ChatGPT, can, of course, produce disinformation.

The report “Truth, Lies and Automation: How language models could change disinformation” presents an analysis of GPT-3 (Buchanan, Musser, Lohn, & Sedova, 2021). The report concludes as follows.

“GPT-3 has clear potential applications to content generation for disinformation campaigns, especially as part of a human-machine team and especially when an actor is capable of wielding the technology effectively.”

However, the researchers behind the report also note that, for the text to be usable in disinformation campaigns:

  • It must support a defined narrative.
  • It must not produce material that is clearly illegal.
  • It cannot contain details that would easily reveal that the information is incorrect.

Thus, producing “credible disinformation” will be an iterative process between a human and a machine (Buchanan et al., 2021).
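That iterative human-machine process can be sketched as a simple generate-review loop: the model proposes drafts, and a human reviewer rejects any that fail the requirements above. In the sketch below, both the generator and the reviewer are stand-in stubs of my own; no real language-model API is called.

```python
# Sketch of the human-machine iteration: a model proposes drafts and
# a human reviewer accepts or rejects them. Both functions here are
# illustrative stubs, not a real LLM call or a real review process.

def model_generate(prompt, attempt):
    """Stand-in for a language-model call; returns canned drafts."""
    drafts = [
        "Draft mentioning a verifiably wrong date.",
        "Draft that supports the narrative with vague claims.",
    ]
    return drafts[attempt % len(drafts)]

def human_review(text):
    """Toy stand-in for the human check: reject drafts containing
    details that would be easy to fact-check and disprove."""
    return "wrong date" not in text

def iterate(prompt, max_attempts=5):
    """Loop until the reviewer accepts a draft or attempts run out."""
    for attempt in range(max_attempts):
        draft = model_generate(prompt, attempt)
        if human_review(draft):
            return draft, attempt + 1
    return None, max_attempts

draft, attempts = iterate("support narrative X")
print(attempts, "attempt(s):", draft)
```

The loop structure, rather than any single generation, is the point: the human supplies the narrative discipline and fact-check filtering that the model lacks.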

6.    Summary

Disinformation campaigns are built up like any other campaign. One needs a goal, a target audience, and proper information channels.

Automation is not the main cost of a campaign. The cost lies in planning the campaign and getting the needed pieces into the right places before it starts, as well as in monitoring the campaign’s success:

  • Define the narrative and the target group.
  • Find out how the target group can be reached.
  • Build up “trusted sites” that can, in the long run, serve as disinformation sources.
  • Program or buy bots based on the volume the campaign needs.
  • Place trolls and bots in the correct channels and closed rooms.
  • Coordinate the bots and trolls during the campaign.
  • Monitor the campaign’s success and adjust the execution plan as needed.

The resources of the campaign buyer determine how much time and how many human resources can be devoted to strategic planning and execution. Therefore, the top-level threat actors (states, contractors, and organised criminals) are the ones with the resources to plan and run wide-scale campaigns.

Countermeasures and mitigation measures exist, such as finding and removing bots, content analysis by fact-checkers, and users’ media literacy skills. However, that is a topic for another time.


Buchanan, B., Musser, M., Lohn, A., & Sedova, K. (2021). Truth, Lies, and Automation: How Language Models Could Change Disinformation.

Cloudflare. (2023). What is a bot?

Ecker, U.K.H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L.K., Brashier, N., . . . Amazeen, M.A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1, 13-29. doi:10.1038/s44159-021-00006-y

ENISA. (2022). Threat Landscape 2022

GCFGlobal. (2023).

Hayes, A. (2023). Demographics: How to Collect, Analyze, and Use Demographic Data

Helkala, K.M., & Rønnfeldt, C.F. (2022). Understanding and Gaining Human Resilience Against Negative Effects of Digitalization. In M. Lehto & P. Neittaanmäki (Eds.), Cyber Security. Computational Methods in Applied Sciences (Vol. 56): Springer, Cham.

Hiller, W. (2021). What Are Psychographics and How Are They Used in Marketing? 

Latto, N. (2020). Data Brokers: Everything You Need to Know

Newberry, C. (2022). Social Media Data Collection: Why and How You Should Do It

Telenor. (2018). Trusselforståelse [Threat understanding].

Tucker, J., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., . . . Nyhan, B. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature

Usercentrics. (2021). Data is the new gold – how and why it is collected and sold

Zawadziński, M. (2020). The Truth About Online Privacy: How Your Data is Collected, Shared, and Sold. 


[1] A troll is a person who intentionally tries to stir up conflict, hostility, or arguments on online social media by posting messages that provoke emotional responses (GCFGlobal, 2023).

[2] Disinformation-as-a-Service (aka disinformation-for-hire) is a service provided by a third party (contractor) that carries out targeted attacks based on the client’s wishes. These services are used not only by governments but also by non-state actors and private commercial organizations (ENISA, 2022).

[3] A bot is a software application that is programmed to do certain tasks. Bots often imitate or replace a human user’s behavior. A bot itself is not malicious. The user decides how the bot is used. Examples of useful bots are bots that index web content for search, or customer service bots. An example of a bad bot is a bot that breaks into users’ accounts (Cloudflare, 2023).