
Information trolls and democracy: A qualitative examination of disinformation campaigns in Canada


Published on Jan 19, 2024

ABSTRACT

This research explores disinformation delivered during the 2019 Canadian Federal election, focusing on the methods and techniques used by the perpetrators of disinformation campaigns in the context of Canadian elections. It also examines whether the disinformation discovered during the election falls within the scope of criminal foreign interference. Critical discourse analysis is used to qualitatively analyze 26 articles published by a Northeastern U.S.-based independent news website within the ten months leading up to the election. This research finds that the use of unnamed sources, hyperbolic statements that misrepresent facts, and strategies to de-legitimize reputable institutions were the most common tools used by the source to engage in a disinformation campaign targeting Canadian democracy. The analysis reveals evidence that raises questions about the potential involvement of foreign entities in the 2019 Canadian Federal election.

Keywords: disinformation; propaganda; threats to democracy; election interference; foreign interference; criminal interference; social media; qualitative analysis; fake news; critical discourse analysis; fact checking


Online communication platforms have evolved into indispensable and powerful tools employed by a myriad of individuals, groups, businesses, agencies, and political parties to disseminate information. Through online communication tools, we can connect with friends and family, stay updated on the latest products and gadgets, converse with others, and maintain current knowledge of local, national, and global events. This evolving technology connects people through the rapid dissemination of information. Older offline communication tools are still commonly used, but they cannot compete with the streamlined capabilities of online platforms for distributing information. Online tools, such as social media platforms, can help bridge the geographical gaps that hinder social cohesion, yet the same technology can be an accessory to social divisiveness. The nature of the content being shared and circulated can have significant consequences for the people who consume information published online.

The main focus of this research is to explore the presence of disinformation campaigns in Canada, the techniques and mechanisms used to create and deploy false pieces of information, and whether any of the actions involved in distributing disinformation fall within the realm of criminal activity. Stories published by an independent news website based in the Northeastern U.S. (hereinafter referred to by the pseudonym The Northeastern Independent) quickly became an issue of concern during the 2019 Canadian Federal Election. Viral stories with minimal or zero factual basis spread rapidly on social media platforms. Despite being debunked by a variety of fact-checking sources, the stories by the less-than-reputable source were liked, commented on, and shared by Facebook users over 200,000 times, and the outlet’s Facebook page had accumulated 4.4 million fans by the time of the election (Oved, Lytvynenko, & Silverman, 2019, Oct 18).

The Northeastern Independent purports to cover both U.S. and Canadian politics; however, eight of the ten most popular articles on Facebook from the source related to Canadian politics and were published on its website within the eight months before the Canadian election (Oved, Lytvynenko, & Silverman, 2019, Oct 18). Many stories are published without naming an author and present unsubstantiated claims attributed to anonymous sources. The methods used to construct these false information pieces can be examined and analyzed by employing a qualitative critical discourse analysis of articles published by The Northeastern Independent. Through this examination, each article’s scope, target, and purpose can be inferred, which may be useful in developing future countermeasures to address disinformation.

There is a lack of academic research on disinformation within the Canadian context. A growing area of research is available and is being further developed on disinformation and election interference within the scope of the United States; however, literature specifically on Canadian disinformation campaigns is underdeveloped. Canada and the United States share similarities in structure as Western developed countries, but there are political, social, and economic deviations that can impact the transferability of research findings between the two countries. This highlights the importance of developing Canadian-specific disinformation research to bridge the knowledge gap and assist in efforts to respond to future disinformation threats against Canada.

Literature Review

As media consumers engage with an endless stream of information, inferring the accuracy, quality, and legitimacy of the messages delivered to them becomes more burdensome. Social media platforms, online blogs, and internet-based journalism occupy a unique role due to the constant presentation of information delivered to the user. As users become overloaded with information, the risk of accepting false information as real increases (Bawden & Robinson, 2020). It is important to acknowledge that not all readers will critically assess the validity of everything they read; this can help foster an environment of purposeful deception (McGregor, 2010). Online communication platforms have become a tool used by many to find and access important information related to democratic elections. There is a level of danger when a consumer accepts false information as factual and then uses it to guide their political decisions.

Information Disorder

While they are not synonymous terms, misinformation, malinformation, disinformation, propaganda, and fake news all share one common element – the presence of inaccurate or misinterpreted information. Derakhshan and Wardle (2017) refer to this presence of inaccurate or misinterpreted information as an ‘information disorder’, which consists of three overarching sub-categories – misinformation, malinformation, and disinformation. Conspiracy theories, rumors, fake news articles, doctored images, clickbait, hoaxes, satire, propaganda, scientifically disproven academic papers, fake reviews, imposter websites, misleading statistics, and so on, all contribute to information disorder (Derakhshan & Wardle, 2017; Santos-D’Amorim & Miranda, 2021).

Misinformation

Misinformation is information that is introduced as accurate but is later discovered to be inaccurate. Misinformation involves getting the facts wrong, or misinterpreting the facts, without the intention to do so (Lewandowsky, Stritzke, Freund, Oberauer, & Krueger, 2013; Derakhshan & Wardle, 2017; American Psychological Association, 2022).

Malinformation

Malinformation intentionally publicizes flawed information to achieve personal or corporate goals (Derakhshan & Wardle, 2017). It is not intended to serve any public interest or influence public opinion. For example, revenge porn1 fits within the scope of malinformation as it serves to provide the aggressor with personal gratification in humiliating the victim (Sirianni & Vishwanath, 2016).

Disinformation

Disinformation is inaccurate information that is purposefully created and disseminated with the malicious intent to mislead targets (Bradshaw, Bailey, & Howard, 2021). In many cases, disinformation is spread to advance political goals (Bennett & Livingston, 2018; American Psychological Association, 2022). Disinformation is not limited to fabricated news articles. Manipulated media can include imposter websites, memes, hoaxes, clickbait, propaganda, doctored images, edited videos, and deepfakes (Santos-D’Amorim & Miranda, 2021). Disinformation also includes any conspiracy theories or rumors that have been purposefully created to mislead or influence opinion (Derakhshan & Wardle, 2017). Disinformation can often be so carefully crafted that it is difficult to determine whether the information and the source of the information are credible (Bennett & Livingston, 2018).

Propaganda and Fake News

Propaganda

Propaganda is a subsection of disinformation. Propaganda can be best understood as politically motivated information that is often biased, misleading, or completely fabricated (Kumar & Krishna, 2014). Propaganda is not new. In Europe during the 15th century, anti-Semitic propaganda was circulated to support the Roman Catholic Church’s narrative of divine retribution and demonize followers of Judaism (Schudson & Zelizer, 2017). Propagandistic practices carried through World War I and World War II, where governments and agencies created content intended to shape public opinion, encourage hatred for adversaries, justify barbaric acts committed in the name of a ‘greater good’, encourage atrocities, increase the number of enlisted soldiers, and strengthen support from allies (Murphy & White, 2007; Evans, 2014; Schudson & Zelizer, 2017). During the Cold War, propaganda continued to flourish with state-sponsored Russian disinformation campaigns specifically designed to sow doubt and mistrust in targeted institutions and governments (Schudson & Zelizer, 2017; Zuckerman, 2017).

Traditionally, the use of propaganda, and the use of flawed information in general to achieve defined objectives, has been reserved for nation-states; however, in the digital age, citizens and private companies can also significantly influence public opinion through creating, disseminating, and circulating propagandistic materials online (Murphy & White, 2007; Bradshaw et al., 2021). Today, the technologies used to manufacture propaganda and other materials are easily accessible, simple, and affordable (Barclay, 2018).

Fake News

Fake news fits within the realm of disinformation. Fletcher and Nielsen (2017) suggest five distinct sub-categories of fake news: 1) satire – information presented in a manner that is humourous and entertaining; 2) poor journalism – information that is poorly researched and constructed, which tends to be overly sensationalized and inaccurate; 3) propaganda – partisan content that takes an extremely biased stance to influence opinion, as well as statements made by politicians that are untrue, misleading, or taken out of context; 4) some forms of advertising – sponsored content that may mislead potential customers about the quality and utility of products; and 5) false news – malicious hoaxes, politically motivated stories, and fabricated narratives that are intended to generate profit.

Egelhofer and Lecheler (2019) cautioned journalists and academics against applying the term ‘fake news’ to unverified articles, as doing so may prevent those narratives from being heard without sufficient evidence to prove the accounts are false. To label a piece of information as ‘fake news,’ the intent and verifiability must be known; fake news should only be considered as such when malicious intent is known and the information conveyed can be verified as inaccurate by other, more reputable sources (Bondielli & Marcelloni, 2019). Additionally, using the term ‘fake news’ as an all-encompassing label fails to appreciate the key distinctions between misinformation and disinformation. Without acknowledging the difference, we limit our ability to understand the motivation and impact of the information and to differentiate those who intend to harm from those who do not (Derakhshan & Wardle, 2017).

The frameworks above show that the public and academic community have different understandings of where fake news fits within the broader subjects of misinformation and disinformation. One all-encompassing definition conceptualizes fake news as “distorted signals uncorrelated with the truth” (Allcott & Gentzkow, 2017, p. 212). This definition can be applied to all information pieces: memes, GIFs, news articles (printed and online), videos, shared posts, tweets, forum posts, editorial pieces, and so on. Each of these forms can propagate signals at odds with authentic reality.

Disinformation, misinformation, malinformation, and everything they encompass show how intricate information disorder is (Derakhshan & Wardle, 2017). With each form of inaccurate information come different considerations and implications about the intent, threat, and harm posed. For this paper, we will refer to any information that is created for malicious purposes and circulated through technological means as technology-facilitated disinformation (TFD). This includes all forms of disinformation, including fake news and propaganda, circulating online and across social media.

Information Trolls

Cybervillains (Waschke, 2017), moral entrepreneurs (Cohen, 1972; Carlson, 2018), and information warriors (Libicki, 2007) are all apt terms for those who create fake news. This paper proposes the term ‘information trolls’ to denote the perpetrators of TFD. Internet trolling refers to making “inflammatory, rude, or upsetting statements to elicit strong emotional responses in people” (Vicente, 2020, Jan 21, para. 2). Trolls may have many motivations for their actions. They may be creating chaos in a forum for their own entertainment, or they may have a very specific agenda that can be attained by eliciting a desired emotional response among their audience (Vicente, 2020, Jan 21). Fake news writers use various techniques to push a specific agenda and can have a major impact on the views of media consumers (Guo & Vargo, 2018). The main commodity in their agenda-setting ventures is the dispersion of information. Information is useful when striving to achieve a desired outcome within the public domain. Thus, ‘information troll’ is a fitting term for the perpetrators of fake news specifically.

Disinformation and Democratic Elections (United States and Canada)

The intention of disinformation and propaganda attacks is to erode public trust in legitimate authorities and governments while undermining the cultural values that promote a cohesive citizenry (Ivan, Chiru, & Arcos, 2021). The cyber world has established an environment in which information attacks thrive. Political information attacks are strengthened by the tools available through social media, which aid in orchestrating attacks against democratic processes with greater sophistication than in the pre-social media era (Boyd-Barrett, 2019). Political interference accomplished through TFD was a significant issue in recent U.S. elections.

United States

During the 2016 U.S. presidential election, there was considerable concern regarding Russian interference authored by the Russian Internet Research Agency (IRA) and other Russian hacking groups (Berghel, 2017; Haataja, 2019). The IRA played a very significant role in influencing opinions around the election. The U.S. Senate Select Committee on Intelligence (2019) describes the IRA as a highly organized Russian-based organization that uses social media platforms to sow discord among the public. The IRA employed people located in Russia to pose as U.S. citizens on social media, create and operate social media pages, and promote conversation about divisive social and political issues. The IRA often disseminated conspiracy theories, fear-mongering, populist appeals, and polarizing content (Bastos & Farkas, 2019). The main objective was to drive citizens with left-leaning political beliefs apart from the right-leaning population while eliminating common ground. In dividing a nation, manipulation and coercion are more easily attainable. Disinformation techniques observed during the Cold War persist in today’s political climate, and the goals remain the same (Shultz & Godson, 1984; Tolz & Hutchings, 2021).

Canada

While no disinformation attack in Canada has been found to match the caliber of the attacks launched against the United States during the 2016 Presidential election, Canada has not been completely immune to Russian political interference. However, research in this area continues to be limited. Al-Rawi (2021) examined Russian Twitter trolls and their role in spreading disinformation during the 2015 Canadian Federal election. The study found that Russian Twitter trolls showed significant support for Stephen Harper (the Conservative Party representative) and were hyper-critical of Justin Trudeau (the Liberal Party representative). Russian trolls also constructed weaponized narratives promoting the false idea that the Canadian government covered up the true perpetrator of the Quebec mosque shooting in 2017. Trustworthy sources widely reported that the perpetrator of this shooting was a White right-wing extremist; however, Russian trolls alleged that a Muslim man attacked the mosque and that the government concealed this fact in a cover-up attempt. Al-Rawi’s (2021) findings highlight that Canada’s democratic processes have been the target of information attacks and that further research is necessary to better understand the scope of this problem.

Information Trolls During the 2019 Canadian Federal Election

Within the context of the 2019 Canadian Federal election, there is evidence to suggest that, much like the influence of the Russian IRA in the 2016 U.S. presidential election, a disinformation campaign was at play. The Northeastern Independent became a significant concern to the legitimate Canadian news media and the government in the period leading up to the election. Viral, politically charged stories published on The Northeastern Independent’s website contained information with minimal or zero factual basis (Oved, Lytvynenko, & Silverman, 2019, Oct 18). It was confirmed that the person behind the website had previously offered his services to individuals and businesses to publish positive or negative coverage of political candidates for a price.

Despite being debunked by various fact-checking sources, The Northeastern Independent’s stories were liked, commented on, and shared by Facebook users over 200,000 times, and the outlet had accumulated 4.4 million Facebook fans. Of further interest, while the U.S.-based website purports to cover both U.S. and Canadian political content, eight of the ten most popular articles on Facebook created by the source related to Canadian politics and were published within the eight months leading up to the Canadian election. Many stories are published without naming an author and present unsubstantiated claims attributed to anonymous sources (Oved, Lytvynenko, & Silverman, 2019, Oct 18).

Another considerable red flag is that The Northeastern Independent’s website provides a contact address for the business; however, the address associated with the website belongs to an abandoned building (Merrifield, 2019, June 24). Given the minimal transparency in both company ownership and journalist identity, a history of publishing false stories, and previous transactions with political advocacy committees, it is possible that a foreign entity played a role in influencing the 2019 Canadian election.

Creation and Dissemination Strategies

Emotional Content

In creating disinformation, the ability to trigger a person’s emotions is a powerful strategy. Martel, Pennycook, and Rand (2020) found that when people’s emotions are heightened, they are likely to make decisions guided by those emotions. Emotional thinking limits a person’s ability to discern fact from fiction. Anger is one of the most powerful emotions that can be triggered. Once triggered, anger can lower someone’s ability to think critically about the information they are engaging with (Barclay, 2018). Additionally, through the incitement of anger, the reader is directed to feel strong negative emotions towards those on the other side of the issue, inciting division. Anger can compel a person to stand firmly on one end of the political spectrum while condemning the perspectives of those on the other side (Haataja, 2019).

Fear is another emotion that can be manipulated when crafting TFD. Barclay (2018) notes that fear is a powerful tool that is often used to justify violence and aggression. Within the scope of propaganda, fear is manipulated by dictators to oppress citizens of the state and to convince people that violence and aggression against the enemy is justified. In the Russian-Ukrainian war, the Russian Federation is promoting fear of the enemy to justify the violent infiltration onto Ukrainian soil in efforts to ‘liberate Ukrainians from tyranny’ (Sopilko et al., 2022). It is this fear that has been manipulated to justify acts of war, violence, and the killing of any person who stands in the way of ‘liberation’ (Sopilko et al., 2022; Diaz Ruiz & Nilsson, 2023).

Exploiting emotions to increase the effectiveness of a TFD campaign is a common feature of information warfare. Posts that elicit an emotional response capture the attention of more users and spread more quickly and widely (Cheung-Blunden et al., 2021). Emotional manipulation used in politically targeted TFD campaigns impairs citizens’ abilities to make democratic decisions free from the influence of nefarious external actors. Even when disinformation is corrected, the psychological harm caused by the emotionally charged narrative has lasting impacts on a person’s political and ideological beliefs (Manfredi, Amado, & Gomez-Iniesta, 2022).

Divisive Content

When people think sharing an article will generate support for the issue presented or help achieve vengeance for an injustice, they are likelier to believe, engage with, and share that content (Southwell & Boudewyns, 2017). This relies on encouraging a sympathetic response from the user. From the information troll’s perspective, eliciting sympathy can be achieved through various techniques. The us-versus-them technique is a common strategy used in TFD to manipulate someone’s beliefs (Barclay, 2018). By exaggerating differences with the ‘other’ and highlighting commonalities within one’s own group, sympathy is gained for the group the user feels they belong to and reduced for the ‘other,’ causing polarization within the population. In sharing or retweeting a post that highlights the struggles of a group they sympathize with, or a post that emphasizes the evil of the dangerous other, the social support the person believes they are generating for their cause can encourage further engagement and belief in TFD (Southwell & Boudewyns, 2017).

Higher exposure to deceptive partisan media increases polarized beliefs in the reader (Stroud, 2010). Additionally, polarized political alignment contributes to engagement and belief in disinformation due to increased confirmation bias (Miller, Menard, Bourrie, & Sittig, 2022). The findings of both Stroud (2010) and Miller et al. (2022) suggest that exposure to divisive partisan media can promote political polarization. Once a person has adopted a polarized belief system, they become further entrenched in those beliefs. As a result, they engage in less skepticism and more readily accept the narratives perpetuated by information trolls.

Clickbait

In some situations, creating an enticing headline is all that is needed for TFD content to be shared and re-tweeted (Southwell & Boudewyns, 2017). Clickbait headlines have become a common strategy used among pseudo-journalism outlets to encourage conspiracy theories, propaganda, and inaccurate information to circulate rapidly on social media outlets. Due to the psychological nature of these headlines, users feel motivated to share deceptive news, causing these pieces to permeate thousands of newsfeeds with little delay (Glenski, Weninger & Volkova, 2018). It is common for clickbait articles to encourage polarization and divisiveness by harshly criticizing political actors, forcing unrelated ideas to be contrasted with politically charged issues, and using manipulative language to encourage adverse emotional reactions (Palau-Sampio, 2023).

False Legitimacy

Dissemination is critical for TFD campaigns. Bondielli and Marcelloni (2019) found that fake news articles are most often found on malicious websites, some of which are designed to look legitimate. Spoofing a domain and/or mimicking legitimate news sources is a standard method to deceive readers. For example, Bondielli and Marcelloni (2019) found that to mimic a website such as ABCnews.com, simply adding a secondary level domain can effectively trick users into believing in the credibility of that source (e.g., ABCnews.com.co). In addition to mimicking legitimate sources, TFD dissemination strategies often involve sharing factual, biased, and non-factual content on the same website (Allcott & Gentzkow, 2017). In doing this, the reader’s trust may be earned through the factual articles, and that trust may then transfer to the TFD articles published by the same source. Using communication channels under the cloak of deception is advantageous in disseminating TFD across information outlets (Kumar & Krishna, 2014).
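To make the spoofing tactic concrete, the minimal sketch below flags URLs whose host merely embeds a well-known outlet’s registered name, as in the ABCnews.com.co example above. The trusted-domain list and test URLs are illustrative assumptions, not drawn from the studies cited.

```python
# Minimal sketch of lookalike-domain detection, assuming a curated
# list of trusted outlets; the list and URLs here are illustrative.
from urllib.parse import urlparse

TRUSTED = {"abcnews.com", "theglobeandmail.com", "cbc.ca"}

def looks_spoofed(url: str) -> bool:
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in TRUSTED:
        return False  # exact match with a known outlet
    # A trusted outlet's name buried inside a longer host is suspicious,
    # e.g., "abcnews" inside "abcnews.com.co".
    return any(t.split(".")[0] in host for t in TRUSTED)

print(looks_spoofed("https://abcnews.com.co/story"))  # True
print(looks_spoofed("https://abcnews.com/story"))     # False
```

A production checker would need registered-domain parsing and fuzzy matching; this sketch only illustrates the deception pattern the literature describes.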

Exploiting Algorithms

In the ever-evolving landscape of the internet, algorithms significantly influence the information environments we engage with. Algorithms challenge the independent agency of the internet user to control the content they consume. This opens the door to external influences that can interfere with our production of opinions and beliefs about the world and ultimately influence our behavior (Beer, 2009; Beer, 2017). Algorithms possess the power to shape truths, yet they remain concealed by obfuscation, limiting our ability to understand the nature and scope of their role in our daily lives (Beer, 2009).

Recommendation algorithms are exceptionally effective at spreading disinformation online. Algorithms on social media platforms such as Facebook, X (formerly known as Twitter), and YouTube are designed to present users with information they are most likely to engage with, because the platforms profit from user engagement by maximizing advertisement revenues. Previous studies have found that the items an algorithm deems worthy of engagement are often divisive, contain notions of conspiracy, and aim to generate an emotional response in the user – all common features of TFD (Sun, 2023). Corporate interests are thus another factor to consider in understanding the actors responsible for the spread of disinformation.
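The sketch below is a deliberate caricature of the engagement logic described above; the posts and weights are invented for illustration and do not reflect any platform’s actual ranking system.

```python
# Toy engagement-first feed ranking: reaction-provoking items outrank
# sober ones. Weights and posts are invented; no real platform data.
posts = [
    {"title": "Budget committee publishes annual report",
     "reactions": 40, "shares": 2},
    {"title": "SHOCKING scandal the media won't report!",
     "reactions": 900, "shares": 310},
]

def engagement_score(post: dict) -> float:
    # Shares weighted heavier: they propagate content to new feeds.
    return post["reactions"] + 5.0 * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post['title']}")
```

Under any scoring of this shape, emotionally provocative items dominate the feed, which is the dynamic the literature links to TFD amplification.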

While algorithms are not all bad, as they can also be employed to target and take down disinformation online (Cartwright et al., 2022; Sun, 2023), the power they possess to control the information environment is often exploited by information trolls and social media capitalism (Sun, 2023). The exploitation of algorithmic tools to spread disinformation coupled with the corporate goals of social media networks to maximize profits illustrates the online information environment's immense power in influencing public opinion.

Methods

This research examines all articles that The Northeastern Independent published between January 1, 2019, and the Canadian Federal election date (October 21, 2019). The data was collected by filtering all stories related to Canada published on the outlet’s website within the 10-month window. A total of 26 articles were identified and collected. This study implemented a critical discourse analysis (CDA) approach to analyze the information collected.

Critical Discourse Analysis

CDA prompts researchers to examine the messages conveyed in the data within broader historical, social, and political settings. It encourages researchers to consider the implied meaning of texts and to assess them critically within the scope of power dynamics, social structures, and political ideologies (Catalano & Waugh, 2020). Assessing power, politics, and ideologies is essential to analyzing TFD content effectively. CDA considers influential power in social discourse inevitable; those who control the methods of communication wield the power to influence (Locke, 2004). Given the virality of articles published by The Northeastern Independent (Oved, Lytvynenko, & Silverman, 2019, Oct 18), this outlet’s power to influence readers can be inferred. CDA is therefore an effective methodological approach for analyzing TFD and garnering critical insight into the contextual underpinnings of politically charged narratives.

Before conducting a CDA, researchers must examine the broader contexts relevant to the study area (Mullet, 2008). In preparation, the researchers thoroughly examined the social, political, and historical contexts pertinent to the scope of this research; the highlights of this exploration are outlined in the literature review. Next, familiarity with the data was established by first reading each article in its entirety. Each article was then read and analyzed multiple times to develop a coding scheme. The codes were developed inductively based on the themes and sub-themes that emerged. Once the coding scheme was completed, it was tested on five articles to ensure it could be applied consistently across each text. Upon completion of the coding process, the researchers implemented CDA techniques to facilitate a better understanding of the contextual foundations of the themes identified. The following techniques were employed during the critical discourse analysis (McGregor, 2010):

  • Topicalization was implemented to understand how the writer has framed the sentence to draw attention to, or away from, a specific topic or influence a particular perspective.

  • Power relations were examined to understand how power is being framed within the context of the article.

  • Statements where the author appears to support a specific narrative were examined. In some statements, the writer may try to push a persuasive rhetoric on the readers that discourages them from exploring alternative perspectives.

  • The language was analyzed for insinuations. Insinuations often allude to suggestive ideas and double meanings without explicitly acknowledging them.

  • The text’s tone was examined to uncover specific words that convey the certainty of the message. For example, words such as possibly, could, maybe, might, and potentially can minimize an idea’s certainty and reduce the writer’s accountability for getting the facts wrong (a minimal detection sketch follows this list).

  • The register of the message was explored to understand how particular language is used in specific situations. Formal language may be used to support the legitimacy of an idea or actor. In contrast, informal language may be used to convey questionable trust and authority of another idea or actor.
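As referenced in the tone item above, the hedging technique can be illustrated computationally. The sketch below counts certainty-minimizing hedge words in a passage; the word list extends the examples in the text, and the sample sentence is invented, not drawn from the study’s corpus.

```python
# Minimal sketch of the 'tone' technique: count certainty-minimizing
# hedge words in a passage. The word list mirrors the examples in the
# text; tokenization and the sample sentence are illustrative only.
import re

HEDGES = {"possibly", "could", "maybe", "might", "potentially", "can",
          "reportedly", "allegedly", "rumored"}

def hedge_density(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for word in words if word in HEDGES)
    return hits / max(len(words), 1)

sample = ("Trudeau is rumored to be in talks and could potentially "
          "resign, observers suggest.")
print(f"hedge density: {hedge_density(sample):.2f}")  # ~0.23
```

A high density of such words can signal a writer distancing themselves from their own claims, which is the pattern the CDA looked for.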

By employing the techniques outlined by McGregor (2010), the researchers were able to interpret the meanings of the major discursive themes uncovered by the CDA.

Data

Table 1 provides information on all 26 articles collected from The Northeastern Independent. The headline, publication date, news type, main issue of the article, number of times the article was shared from The Northeastern Independent website to Facebook, and the number of comments made on the article on the outlet’s website were recorded.


Table 1: The Northeastern Independent Articles

| Headline | Publication Date | News Type | Main Issue | Facebook Shares | Comments |
| --- | --- | --- | --- | --- | --- |
| Trudeau’s West Grey accuser was much younger than first thought | 10/19/2019 | Fake | Sex Scandal | 41k | 301 |
| RCMP plans to charge Trudeau with obstruction in SNC Lavalin affair, following federal elections | 10/17/2019 | Fake | SNC Lavalin | 44k | 28 |
| Tipped off by PMO, SNC Lavalin CEO left Canada fewer than 36 hours before he was to be arrested | 10/16/2019 | Fake | SNC Lavalin | 24k | 23 |
| RCMP source says ‘security risk’ against Trudeau was contrived by PMO staffers | 10/15/2019 | Fake | Ethical Integrity | 29k | 41 |
| With landslide loss for Trudeau expected, Wilson-Raybould is likely to run for Leader | 10/14/2019 | Fake | SNC Lavalin | 5.4k | 14 |
| Trudeau accusor lands a seven-figure NDA to keep quiet about West Grey departure | 10/10/2019 | Fake | Sex Scandal | 51k | 183 |
| Trudeau is rumored to be in talks with an accusor to suppress an explosive sex scandal | 10/7/2019 | Fake | Sex Scandal | 32k | 258 |
| Bellegarde marches with climate activist Greta Thunberg in Montreal | 10/3/2019 | Real | Indigenous Focus | 11 | 1 |
| Federal spending on foreign abortions increased to $700m, infuriating Canada’s indigenous people | 9/3/2019 | Fake | Indigenous Focus | 3.4k | 7 |
| Will Mark Hill keep his pledge to reject Ken Hill’s financial support in Six Nations election? | 7/25/2019 | Mostly Real | Indigenous Focus | 146 | 3 |
| Federal agent charged with criminal assault against an Indigenous woman from Six Nations | 7/18/2019 | Mostly Real | Indigenous Focus | 1.7k | 8 |
| Hydro One injunction sparks political unrest at Six Nations, as corruption allegations emerge against Reserve official | 7/9/2019 | Mostly Real | Indigenous Focus | 1.6k | 1 |
| Stormy Daniels denied entry to Canada for Sundowner appearance — politics suspected | 6/22/2019 | Mostly Real | Entertainment | 1.9k | 6 |
| Chrystia Freeland is angling to succeed Trudeau as Prime Minister | 4/2/2019 | Fake | Liberal Leadership | 13k | 32 |
| SNC Lavalin CEO’s wife, employed by Stephen Bronfman, to leave Canada imminently | 3/26/2019 | Fake | SNC Lavalin | 224 | 3 |
| Jacques Bougie, SNC Lavalin board member, informally lobbied Morneau’s wife | 3/25/2019 | Fake | SNC Lavalin | 4.9k | 7 |
| Fearing violations of Foreign Corrupt Practices Act, Lynch threatened SNC job losses | 3/22/2019 | Fake | SNC Lavalin | 1.9k | 5 |
| Ottawa political operative threatens conservative publisher with ‘fake news’ in The Globe and Mail | 3/18/2019 | Fake | Fake News | 1.6k | 24 |
| At Trudeau’s behest, Gould instructed Google News to limit Canadian access to foreign press | 3/16/2019 | Fake | Fake News | 42k | 50 |
| ‘Political grandmaster’ Frank Iacobucci is at the center of SNC Lavalin, Kinder Morgan scandals | 3/11/2019 | Fake | SNC Lavalin | 9.4k | 18 |
| ‘Deep and penetrating’ relationship may taint Butts’ testimony | 3/6/2019 | Fake | SNC Lavalin | 10k | 24 |
| MPs want Wilson-Raybould to talk about Kinder Morgan at a second House hearing | 2/27/2019 | Mostly Real | SNC Lavalin | 8.6k | 7 |
| Scheer wants BMO and Treasury official questioned in SNC-Lavalin probe | 2/19/2019 | Mostly Real | SNC Lavalin | 6.2k | 18 |
| With Liberals in crisis mode, NDP wants Wilson-Raybould as Leader | 2/16/2019 | Fake | SNC Lavalin | 4k | 6 |
| Corruption scandal prompts calls on Trudeau to resign | 2/12/2019 | Real | SNC Lavalin | 1.6k | 12 |
| Akwesasne pushes the boundaries on Indigenous self-governance | 1/31/2019 | Real | Indigenous Focus | 18 | 0 |

Fact-Checking

A fact-checking method was employed to determine whether each article presented real and reliable information or whether the information was fake. First, the main points from each article were collected. For each main point, the fact-checking process was completed by asking the following questions (Gray, 2017):

  • Who are the sources within the story, and are they reputable subject-matter authority figures? Are these sources named or anonymous?

  • Is there only one side of the story being presented? What biases are present?

  • Does this story show up on any main fact-checking services2? Does this same information show up on a trusted news media site?

  • Are other reliable information sources challenging these facts?

  • Are any important facts left out that are needed to understand the story fully?

  • Do they cite sources of their information, and are these sources legitimate?

  • Is my confirmation bias acting in a way that causes me to reaffirm what I already believe about this article?

Based on the findings of the fact-checking process, the articles were coded as fake, real, or mostly real. A designation of ‘fake’ means that the article contains inaccurate information that misleads the reader and distorts the truth. Articles coded as ‘real’ contain information that can be verified by other sources and present no misleading or inaccurate information. For articles defined as ‘mostly real,’ most of the content is verifiable and contains reliable information, but bias, speculation, and/or unverifiable facts are also present. To err on the side of caution, if a fact within the story could not be verified, it was not coded as ‘fake.’ While unverifiable facts may be fake, labeling all unverifiable information as fake may produce biased results.
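The decision rule described above can be summarized in a short sketch. The claim statuses are hypothetical inputs, and the function is a simplification of the coding process, not the instrument itself.

```python
# Simplified restatement of the coding rule: any false claim makes the
# article 'fake'; unverifiable claims or bias alone yield 'mostly real'
# (unverifiable facts are deliberately not coded as fake); otherwise
# 'real'. Claim statuses below are hypothetical inputs.
def code_article(claim_statuses: list[str], biased: bool = False) -> str:
    if "false" in claim_statuses:
        return "fake"
    if "unverifiable" in claim_statuses or biased:
        return "mostly real"
    return "real"

print(code_article(["verified", "false"]))        # fake
print(code_article(["verified", "unverifiable"])) # mostly real
print(code_article(["verified", "verified"]))     # real
```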

Findings

Preliminary findings based on the data in Table 1 illustrate that 65% (N = 17) of the articles published by The Northeastern Independent during the ten months leading up to the 2019 Canadian Federal Election are ‘fake.’ Stories designated as ‘mostly real’ account for 23% (N = 6) of all articles, and ‘real’ stories only account for 12% (N = 3).

The top three stories with the most shares on Facebook were all considered fake and were published during the same month as the election (see Figures 1 – 3). These three stories were shared 136,000 times combined. Two of these top articles related to false claims that Justin Trudeau engaged in sexual relationships with underage students during his tenure as a high school teacher; combined, these two stories were shared on Facebook 92,000 times. The third fake story stated that an anonymous source within the RCMP reached out to The Northeastern Independent to disclose the police’s plans to lay charges against Justin Trudeau for corruption, with the charges to be laid two weeks after the election for the Prime Minister’s role in the SNC Lavalin scandal. This story received 44,000 Facebook shares.
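These aggregates follow directly from Table 1, as the short computation below shows. Only the fields needed are included, and share counts are in thousands.

```python
# Reproduce the reported aggregates from Table 1: label distribution
# and combined Facebook shares of the most-shared stories.
from collections import Counter

labels = ["fake"] * 17 + ["mostly real"] * 6 + ["real"] * 3
top_three_k = [51, 44, 41]   # three most-shared stories, all fake
scandal_pair_k = [51, 41]    # the two West Grey sex-scandal stories

for label, count in Counter(labels).items():
    print(f"{label}: {count} ({count / len(labels):.0%})")
print(f"top three combined: {sum(top_three_k)}k shares")        # 136k
print(f"scandal pair combined: {sum(scandal_pair_k)}k shares")  # 92k
```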


Figure 1

An article published on October 10, 2019, by The Northeastern Independent received 51k shares on Facebook.

Figure 2

An article published on October 19, 2019, by The Northeastern Independent received 41k shares on Facebook.

Figure 3

An article published on October 17, 2019, by The Northeastern Independent received 44k shares on Facebook.


Both false narratives aim to erode trust in Trudeau and the Liberal government. Those who internalize these messages are influenced to believe that Trudeau is a sex criminal who engages in pedophilia and that he will be arrested for his involvement in corruption within the month. Both of these narratives have the potential to influence how the reader may decide to vote in the upcoming election and can ultimately undermine an effective democracy. Additionally, all three of the most popular articles were published within the final two weeks before the election date. Given both the popularity of these articles and the proximity of their publication to the election date, there is concern that the disinformation represented within these articles could have influenced voting decisions.

Additional insights can be found by examining the least popular articles posted by The Northeastern Independent (see Figures 4 and 5). The least popular article was published on October 3, 2019, and received only 11 Facebook shares and one comment; it was determined to be factual and discussed climate change with a focus on Indigenous activism. The second least popular article was published on January 31, 2019, and received only 18 Facebook shares and no comments. This was also the oldest of the pre-election articles. The Northeastern Independent saw a surge in readership attributed to the virality of its subsequent less-than-truthful election coverage. This suggests that readers of the outlet tended to favor fake stories over real ones, which ultimately helped the outlet gain traction and visibility online.


Figure 4

The article published on October 3, 2019, by The Northeastern Independent received 11 shares on Facebook.

Figure 5

An article published on January 31, 2019, by The Northeastern Independent received 18 shares on Facebook.


Through exploring the most shared and the least shared articles, inferences can be drawn about what is considered newsworthy and what is not – or rather, about what type of information has been purposefully crafted to appeal to and titillate readers. Real news, in comparison, is less shocking and, therefore, less newsworthy. Pseudo-journalism platforms use clickbait headlines to circulate conspiracy theories, propaganda, and inaccurate information with minimal delay on social media outlets (Glenski, Weninger & Volkova, 2018). News implying that Justin Trudeau is a pedophile who will be charged with government corruption exemplifies how clickbait headlines are designed to encourage sharing and re-posting. As articles spread across social media platforms, more readership can be recruited, increasing the platform’s power to influence public opinion (Beer, 2009; Beer, 2017).

Unnamed Sources to Enhance False Legitimacy

Of the 26 articles analyzed, 16 used an unnamed, anonymous source to substantiate the claims made. In the quote below, one article claims that a retired RCMP officer who still has access to inside information within the police force provided The Northeastern Independent with privileged information about the Liberals fabricating a story about a security threat at an event to earn public attention and sympathy (Figure 6). The story states that,

A retired officer of the Royal Canadian Mounted Police, who remains in frequent communication with senior figures inside the federal agency, is telling [The Northeastern Independent] that the widely publicized ‘security threat’ against Prime Minister Justin Trudeau on Saturday night in Mississauga was “largely contrived by political staffers” inside the PMO.

Hoping to elicit sympathy in the national political discourse, senior PMO staffers actively instructed the RCMP on which equipment and weaponry to have present, which uniforms protective officers would wear, and which security precautions would be taken at the private invitation-only Liberal Party event.

The implication for the reader is that The Northeastern Independent has special, secretive insider sources who choose to disclose this information only to their trusted connections within The Northeastern Independent.


Figure 6

An article published by The Northeastern Independent on October 15, 2019, uses unnamed sources as supporting evidence for false claims made in the article.


In another Northeastern Independent article, the outlet claims that Justin Trudeau was involved in a sex scandal with a minor during his tenure as a high school teacher (Figure 7). The quote below highlights an additional example where unnamed sources are used to legitimize the information conveyed in the article. The article states that,

Less than two weeks ahead of federal elections that have already been looking conclusively grim for Prime Minister Justin Trudeau, rumors of an explosive sex scandal are percolating at the highest echelons of Canada’s media establishment.

Ottawa’s longest-tenured political observers had been expecting a career-ending expose in Saturday’s edition of The Globe and Mail — but that story never came. Sources are now telling [The Northeastern Independent] that Trudeau is in private talks with the principal source of that piece to suppress explosive sex allegations that, if made public, would likely force Trudeau to resign his office.

Trudeau’s accuser is said to be a former student at West Point Grey Academy and the daughter of a wealthy Canadian businessman.  Sources tell [The Northeastern Independent] that counsel is representing her and is being offered monetary compensation in exchange for a pending, but not yet signed, non-disclosure agreement.

Interestingly, the writer does note that these allegations are rumors. However, the use of an unnamed source adds a misleading sense of legitimacy to the position that this sex scandal goes beyond fiction or rumor. This is an effective strategy to minimize certainty and reduce the writer’s accountability for getting the story wrong while still influencing the reader to consider these claims as a potential truth (McGregor, 2010).

The article also suggests that this information is circulating among the more well-known media sources and is growing into an explosive story that has not yet been leaked. The way this information is presented insinuates that a ground-breaking story is about to explode into the Canadian media’s attention and that The Northeastern Independent was the first news outlet brave enough to report it. At the same time, this may influence the reader’s voting decision, as it is suggested that Trudeau must resign from office soon after the election once this information sees the light of day. This story has been debunked, and The Northeastern Independent’s role in circulating this disinformation has been called out by fact-checkers at Logically AI3 (Khandelwal, 2022).


Figure 7

An article published by The Northeastern Independent on October 7, 2019, uses unnamed sources as supporting evidence for false claims made in the article.


Of the three articles determined to be ‘real,’ all used named sources. Of the six articles determined to be ‘mostly real,’ three used named sources, two used unnamed sources, and one used a combination of named and unnamed sources. ‘Mostly real’ stories that used unnamed sources typically presented facts that could be verified through fact-checking and used the unnamed source to enhance a specific stance within the story or to cite an unverifiable fact. In a mostly real article discussing the SNC Lavalin scandal, facts are presented about the ongoing investigation and Andrew Scheer’s beliefs about the scandal (Figure 8). The quote below shows unverifiable claims implying that the Bank of Montreal (BMO) was involved in the scandal by bribing elected Indian Affairs officials to approve plans for a pipeline. The article states that,

Sources close to the Conservative opposition leader have told [The Northeastern Independent], on the condition of anonymity, that Scheer has reason to suspect that Brison’s resignation on January 10th was part of a wider effort to shield the government and Bank of Montreal executives from wide-ranging improprieties related to the former Kinder Morgan pipeline and its subsequent acquisition.

The Bank of Montreal has been using intermediary business service providers to ‘effectively bribe’ the elected officials of Indian Act-governments in British Columbia in exchange for pipeline approvals, the longtime government relations executive alleges. That practice has been a source of outrage among indigenous people in the Province.

An unnamed source is used to substantiate the stance that the Liberal government and BMO are covertly collaborating to cover up wrongdoing, enhancing the narrative of government corruption. Combining truth with an undercurrent of speculation is an effective method of manipulating a reader’s beliefs. A quick reference to speculative ideas substantiated by an unnamed source is sandwiched between facts that other, more reputable media sources have reported. Truth is used to establish trust, while flawed information is used to persuade; this is a common strategy among TFD creators (Allcott & Gentzkow, 2017). Once the reader’s guard has been lowered by known information, a specifically constructed narrative that influences the reader to see a political entity unfavorably is encouraged. These findings suggest that unnamed sources are likely an indicator of the presence of disinformation.


Figure 8

An article published by The Northeastern Independent on February 19, 2019, provides verifiable facts about the SNC Lavalin story and uses unnamed sources to enhance a narrative about Liberal corruption.


Hyperbole and the Misrepresentation of Facts

Of the 26 collected and analyzed articles, seven were found to contain exaggerations that misrepresented the facts. Analyzing these exaggerations revealed that hyperbole is used to increase a reader’s outrage while simultaneously distorting the facts. While these claims are embedded in some truth, the truth is obscured by misrepresentation. For example, in the quote below, federal spending on sexual health initiatives in third-world countries is misrepresented as the government committing genocide against Indigenous peoples (Figure 9). The article states that,

When Prime Minister Justin Trudeau announced earlier this summer that his government would spend more than $700 million a year to fund abortions overseas, Canada’s indigenous people were stunned. Now, that shock is beginning to turn into anger and outrage.

“Canada is paying $700 million to kill the babies of indigenous people around the world,” explains one First Nations activist. “While most indigenous babies here in Canada don’t have access to clean drinking water.”

“If that’s not genocide, I don’t know what is,” she adds.

This article depicts a sensitive topic that remains an issue of concern for many Canadians across the country. It is indeed true that the federal government declared that it would allocate $700 million in foreign aid to assist in sexual, reproductive, maternal, and child health initiatives in countries where women’s rights are under attack (Carber & Woo, 2019, June 19). This fact is then exaggerated to promote a convoluted message that the Canadian government is spending $700 million to directly kill Indigenous babies. The tone of this message also appears to take on a pro-life stance on women’s reproductive rights in framing abortions as genocide.


Figure 9

An article published by The Northeastern Independent on September 3, 2019, uses hyperbole to distort facts about federal spending for women’s health in third-world countries.


The article later takes a more factual turn by noting that many Indigenous communities in Canada still do not have access to clean drinking water. Rather than spending $700 million in foreign aid, it is argued, the money could instead be spent locally to increase access to drinkable water among remote Indigenous communities. The hyperbole in this article undermines efforts to achieve justice for Indigenous communities across Canada. The Northeastern Independent has trivialized real Indigenous issues, such as the Canadian genocide against Indigenous peoples and the lack of access to clean drinking water that many Indigenous communities still face, by erroneously conflating these issues with misrepresented facts about women’s sexual health spending. The source’s respect for Indigenous peoples and the marginalization they face appears disingenuous and deceptive. The factual information about Indigenous issues may be used to attract the reader and gain the reader’s trust; then, through hyperbole and the bending of truth, the real issues are undervalued in service of unrelated political mind games.

In another example of hyperbole, The Northeastern Independent published a story encouraging outrage over allegations that Justin Trudeau has bought out the media. The story suggests that Trudeau will be paying $600 million in federal funds to Canadian media giants, which violates the expectations for freedom of the press (Figure 10). The article states that,

Critics fear that a curtailment of access to foreign press reports would be violative of Canadian norms and the expectations of a free society. At the same time, the Trudeau government plans to offer Canada’s media giants $600 million in operating subsidies, which has unnerved free speech advocates across all party affiliations.

This article also mentions that Trudeau pressured Google to prevent criticisms of the Trudeau government from being referenced in Google searches. Trudeau did indeed say that he had paid the media $600 million to promote positive headlines; however, the statement was entirely satirical, made at a Parliamentary Press Gallery dinner in 2019, where political figures often deliver comical speeches (Reuters Fact Check, 2022). While the $600 million figure was satirical, the outlet latched onto it to stoke further panic about Trudeau placing limits on the freedom of the press to suppress negative stories about the Liberals.

Additionally, the claim that the Trudeau government pressured Google into taking down anti-Liberal news articles is a significant exaggeration that stems from government efforts to combat disinformation. The government has collaborated with Google to uphold electoral integrity by cracking down on junk news and disinformation related to the Canadian election (Boutilier, 2019, May 27). While it is likely that junk media sources, such as The Northeastern Independent, shine a negative light on the Liberal government, their stories are targeted by Google’s fake news AI due to indicators conveying the presence of disinformation; they are not targeted based on whom they report about.


Figure 10

An article published by The Northeastern Independent on March 16, 2019, uses hyperbole to distort facts about the government’s role in combating disinformation during elections.


Both hyperbolic examples use real facts tainted with exaggeration to encourage outrage. The real issues in these cases become overshadowed by manipulative narratives aimed at influencing the reader’s political leanings. The coverage of injustices against Indigenous communities by The Northeastern Independent likely has malicious motives. Southwell and Boudewyns (2017) suggest that when disinformation pieces present a story in which injustices have been committed against a population, the desire for vengeance is promoted and a sympathetic response from the user is encouraged. The reader’s emotional response is exploited to decrease their ability to think critically about the content presented (Barclay, 2018). Due to the emotional outrage and desire for justice, people are likelier to believe, engage with, and share this content. By using real Indigenous injustices that elicit empathy in conjunction with distorted facts, information trolls can establish legitimacy, increase audience size, and manipulate readers into believing an artificial, politically motivated narrative.

Both examples appear to encourage outrage and anger in the reader. The incitement of anger encourages strong negative reactions to people who stand on the other side of an issue, which sows hate and division (Haataja, 2019). This finding provides evidence of an attempt to divide the population into polarized camps to achieve politically motivated goals.

De-legitimizing Reputable Media and Naysayers

The analysis of the articles posted by The Northeastern Independent in the period leading up to the 2019 Canadian Federal election reveals an effort by the pseudo-journalism source to prove its legitimacy by attempting to de-legitimize other reputable news sources and actors. In the previously explored article in which the outlet suggests that Trudeau was involved in a sex scandal with a student during his time as a teacher (Figure 7), the article states that the mainstream media knew about the story but never reported it; this can be observed when the article states that,

Ottawa’s longest-tenured political observers had been expecting a career-ending expose in Saturday’s edition of The Globe and Mail — but that story never came. Sources are now telling [The Northeastern Independent] that Trudeau is in private talks with the principal source of that piece to suppress explosive sex allegations that, if made public, would likely force Trudeau to resign his office.

Claiming that The Globe and Mail knew about the sex allegations made against Trudeau but never reported them can encourage the reader to develop distrust in reputable media sources while increasing The Northeastern Independent’s perceived trustworthiness. The reader may become more inclined to reject information that has been well researched, well sourced, and scrutinized by editors. Instead, readers may be more open to accepting flawed information published by an online independent journal and written by an anonymous source who is not required to abide by professional journalistic standards.

In another notable attempt to de-legitimize reputable news outlets, The Northeastern Independent published an article responding to allegations that it is a fake news platform. The article begins by explaining that a journalist from The Globe and Mail, Justin Ling, contacted The Northeastern Independent and threatened to report it as a fake news platform (Figure 11). The article also alludes to the idea that Ling is a political operative attempting to intimidate and extort The Northeastern Independent. The outlet alleges that this coercion is motivated by kickbacks The Globe and Mail received from Trudeau’s $600 million payment to Canadian mainstream media, a claim that was proven false and derived from a satirical speech (Reuters Fact Check, 2022). The article also includes a series of screenshots in which Ling states that he is working on a story about The Northeastern Independent and requests that the outlet send him the names of its reporters, the supporting evidence for baseless claims it has made, and documentation showing that it has paid the appropriate licensing fees for the content it reposts. In the screenshots of the alleged email chain, the owner of The Northeastern Independent responds to Ling’s email,

This is my only statement for your report:

Unfortunately, due to our Statement on Corporate Social Responsibility, we are unable to engage with State-sponsored newsmedia or to, in any way condone the State-sponsorship of news journalism. It is our understanding that The Globe & Mail is expected to participate in the Trudeau government’s program to offer funding and subsidiaries to major media outlets. We are, therefore, unable to participate in that interview in good conscious.

If The Globe & Mail can offer public assurance that it will – under no circumstances – allow for the State-sponsorship of its journalism or journalists, we would very much enjoy speaking with you and your colleagues. We are sorry that we could not accommodate your request

[The Northeastern Independent] takes corporate citizenship seriously. If in refusing to take The Globe & Mail’s interview we can meaningfully protest the increasing concentration of media power in the hands of the government, then Canada’s seemingly imperiled freedom of speech demands that we do so.

We also understand that journalists in the United States enjoy far more sweeping legal protections relating to political speech. We will continue to protect the names of journalists who fear the political retribution of any government, anywhere in the world — most especially when it is in defense of the rights and freedoms of our friends and neighbors

This quote further vilifies The Globe and Mail as an arm of the government that is not to be trusted. It implies that the government is another evil to be fought, one actively impeding the free speech of Canadians through its control of mainstream media. Instead of trusting the mainstream media, readers are encouraged to put their blind faith in The Northeastern Independent if they want accurate information uninfluenced by state sponsorship.

An interesting statement is made regarding the outlet’s use of anonymous journalists. A major criticism of The Northeastern Independent is its lack of transparency about its sources and writers. This is explained away by a supposed need to protect writers’ identities so they will not face retribution from a tyrannical government for ‘speaking the truth.’ By refusing to disclose evidence for dubious claims or the names of its journalists, the outlet escapes accountability. Under the guise of a ‘Statement on Corporate Social Responsibility,’ any fake news allegation can be met with a refusal to engage with ‘state-sponsored’ media.


Figure 11

An article published by The Northeastern Independent on March 18, 2019, uses techniques to de-legitimize mainstream media to reduce trust in reputable sources.


Similar patterns can be seen in the Russian IRA’s campaign to influence American politics. Throughout the IRA’s disinformation campaign, divisive social and political issues were exploited, distrust in the mainstream media was encouraged, conspiracy theories were circulated, and fear was stoked (Bastos & Farkas, 2019). The goal was to divide the population, remove common ground, and coerce an entire population into consuming constructed narratives. Comparing The Northeastern Independent to these more thoroughly researched TFD attacks lends further support to the finding that the 2019 Canadian Federal election was the target of a disinformation campaign.

Discussion

The findings of this study reveal that The Northeastern Independent: 1) frequently used unnamed sources to deceitfully enhance the perceived legitimacy of its claims; 2) employed hyperbolic claims to distort the fragments of truth embedded within divisive media narratives; and 3) attempted to de-legitimize the mainstream media to encourage distrust in more reputable information sources.

The findings of this study align with what is known about disinformation and propaganda attacks. A common feature of TFD involves harshly criticizing political actors and forcing unrelated narratives onto political issues (Palau-Sampio, 2023). Enticing, click-worthy headlines are another common tactic junk news sources use to encourage content to be shared and re-tweeted widely (Southwell & Boudewyns, 2017). Allcott and Gentzkow (2017) also reveal that information trolls often publish factual, biased, and non-factual content on the same website. The reader comes to trust the source through its factual stories, and that trust is then exploited through the introduction of fake information and constructed narratives. Each of these features from the previous literature on disinformation can be observed in The Northeastern Independent. This study’s findings therefore support the outlet’s designation as a disinformation source.

Disinformation campaigns are designed to erode public trust in legitimate authorities and undermine a fully informed, cohesive citizenry (Ivan, Chiru, & Arcos, 2021). This highlights the potential danger The Northeastern Independent poses to Canadian institutions. However, what do these findings truly mean for Canada? Is it possible to know if these disinformation attacks influenced voters' decisions, and if so, is it possible to estimate the amount of harm done? Can Canada hold The Northeastern Independent criminally liable? Would criminal prosecution be feasible or worthwhile? While the former questions cannot be answered within the scope of this study, the latter two questions about criminal liability can be assessed.

Criminal Interference in Canadian Democracy

While the notion of potential foreign interference in Canada’s democratic processes does not derive directly from the results of this study, the implications of The Northeastern Independent’s meddling in Canadian political affairs need to be critically assessed. More importantly, a central question needs to be answered: does The Northeastern Independent’s reporting of manipulative and false information related to the 2019 Canadian Federal Election constitute foreign interference? If so, are there any actionable consequences for this interference, and are they worthwhile to pursue? The findings of this study are framed within this discussion of foreign interference.

Foreign Interference

When foreign entities become involved in the democratic process of a country to which they do not belong, the democracy’s ability to function as it should is challenged. Foreign bodies may inject their influence into the political affairs of another country to encourage an outcome that benefits foreign interests. Section 282.4 of the Canada Elections Act (2000) states that no person who is not a Canadian citizen or permanent resident may unduly influence an elector to vote or refrain from voting, or to vote or refrain from voting for a particular candidate or party. Parties external to Canada who are found to be unduly influencing a voter’s election decision therefore violate this section.

The findings of this research indicate that foreign involvement in the 2019 Canadian Federal election was at play during the pre-election period. The Northeastern Independent is a non-Canadian online independent journal that published significant (and flawed) information about Canadian politics in the lead-up to the election. On this basis, the conclusion that foreign interference occurred is not irrational. However, it is important to determine whether Canadian election law was breached and whether prosecution is even a viable option under these circumstances.

Application of Section 282.4 (Canada Elections Act)

In considering the application of section 282.4 of the Canada Elections Act (2000), it becomes necessary to consider whether the TFD created by the online journal had an undue influence on voters to vote or not vote for a particular party.

Merriam-Webster (n.d.) defines undue influence as influence that “deprives a person of freedom of choice or substitutes another's choice or desire for the person's own.” Influence is considered undue when it is exerted in an improper manner or to an excessive degree.

Within Canadian common law, ‘unduly’ is a vague concept; there is no precise interpretation that can be applied to determine a purely technical meaning. Instead, the term often denotes the seriousness of the act (R. v. Nova Scotia Pharmaceutical Society, 1992). Three questions arise in applying this section to The Northeastern Independent’s coverage of Canadian politics during the pre-election period:

  1. Is there any evidence to suggest that The Northeastern Independent influenced Canadian voters to vote or not vote for a particular party or candidate?

  2. If such evidence exists, was the influence on Canadian voters undue?

  3. Are Canadian legal responses to The Northeastern Independent’s foreign interference actionable if undue influence is confirmed?

Question 1 – Evidence of Influence

In considering the scope of this influence, it is worthwhile to consider the number of shares each article received. Given that the top three most-shared articles were disinformation pieces, it is possible that the inaccurate information presented persuaded at least some readers. However, the magnitude of that influence cannot be measured within the scope of this study. If evidence can be found that The Northeastern Independent did, in fact, influence the public, the Canadian legal threshold for foreign influence becomes more likely to be met.

The significant level of engagement with and sharing of the published articles raises the suspicion that more than a select handful of Canadian citizens were likely persuaded by the false information pieces. If this is accepted, the next criterion for the application of section 282.4 must be addressed: can this influence be proven to be undue?

Question 2 – Undue Influence

In the current age of information media, consumers are exposed to informational messages whenever they open their personal electronic devices to engage with their social media accounts. Given the constant stream of signals targeted at individuals, a user sorting through each piece of information may respond in a few ways: they may simply scroll past it, read and disbelieve it, or read and believe it. When it comes to disinformation constructed by information trolls, it does not initially appear that the definition of undue influence can apply to people who find truth in a fake story; the person who chose to believe the story was not deprived of their freedom of choice. But the question is, did the person choose to believe the false information through exercising their freedom of choice, or did willful deceit manipulate them into believing it?

Psychological Manipulation: Curated Content & Exploitation of Emotions

Facebook’s algorithms weigh many factors when predicting which content a user is most likely to engage with. Collected user data is valuable for serving content that keeps the user engaged for long periods. Information including the groups users join, the pages they like, the content they dwell on, the profiles of their friends and followers, their browsing history, their purchase history, and the time of day they are active on social media is collected and fed into the algorithm to produce a personalized output on each user’s newsfeed (Bakir & McStay, 2018). As the material becomes more specifically curated to suit the user’s unique profile, the information presented becomes more psychologically tailored to appeal to their personality.

Algorithms personalize the specific materials a person will read and minimize content unsuited to the user’s profile. Based on this understanding of newsfeed algorithms, users who fit certain archetypes are likely being selectively exposed to fake news articles at a higher rate than users who do not match the same criteria. When a significant level of psychological control is ceded to social media consumption, what level of personal choice can be exerted in believing fake news content curated to one’s unique user profile?
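To make this curation mechanism concrete, the following is a minimal, hypothetical sketch of profile-based feed ranking. All profile fields, weights, and names here are invented for illustration; real platform ranking systems are proprietary and vastly more complex, but the selective-exposure dynamic is the same: a single pool of posts is ordered differently for each user.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class UserProfile:
    liked_pages: Set[str] = field(default_factory=set)   # pages the user follows
    group_topics: Set[str] = field(default_factory=set)  # topics of their groups
    active_hours: Set[int] = field(default_factory=set)  # hours they are online

@dataclass
class Post:
    topic: str
    source_page: str
    hour_posted: int

def relevance_score(user: UserProfile, post: Post) -> float:
    """Toy scoring rule: the more a post matches the user's recorded
    interests and habits, the higher it ranks in their feed."""
    score = 0.0
    if post.source_page in user.liked_pages:
        score += 2.0   # content from pages the user already follows
    if post.topic in user.group_topics:
        score += 1.5   # topics tied to the user's group memberships
    if post.hour_posted in user.active_hours:
        score += 0.5   # timed to when the user is typically online
    return score

def rank_feed(user: UserProfile, posts: List[Post]) -> List[Post]:
    # Highest-scoring (best profile-matched) content surfaces first;
    # poorly matched content is effectively minimized, producing
    # the selective exposure described above.
    return sorted(posts, key=lambda p: relevance_score(user, p), reverse=True)
```

Under such a rule, a user whose profile already matches a junk news page’s topics and posting habits will see its content surfaced repeatedly, while a user with a different profile may never encounter it at all.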

The findings regarding The Northeastern Independent suggest that its stories often presented a real and concrete issue, with false information added to cushion the delivery of anti-Liberal messages. Considering the psychological manipulation used in constructing the fake news pieces, combined with the role of algorithms in directly targeting that content toward certain consumers, it can be argued that the TFD created by The Northeastern Independent does meet the threshold of undue influence. This study’s findings have uncovered several ways readers are manipulated into believing in the credibility of the information provided. Using unnamed, and likely fictional, sources enhances a story’s believability. Exaggerating real facts bends the truth. De-legitimizing reputable news media discourages readers from believing information presented in the mainstream news. Each technique could influence the political beliefs and opinions of those who read The Northeastern Independent on the basis of flawed information.

Given the vast amount of research indicating the presence of emotional manipulation in TFD (Barclay, 2018; Haataja, 2019; Sopilko et al., 2022; Cheung-Blunden et al., 2021; Manfredi, Amado, & Gómez-Iniesta, 2022), the emotional content seen within the articles is unsurprising. When false narratives are presented in a way that encourages outrage, the reader’s guard drops and their capacity for critical thought is reduced (Barclay, 2018). Using a person’s emotions to achieve a malicious objective further illustrates the presence of psychological manipulation as an undue influence.

The non-legal definition of undue influence is met through the targeting of fake information at specific consumers based on their psychological profiles. Algorithms and exploited emotions work in conjunction to remove a degree of free choice in accepting the validity of disinformation material; the content is deliberately delivered to the person because they fit a profile that is likely to engage with and support it. The legal definition of undue influence is also met: information is used as a weapon to serve a specific purpose. This can be seen as an attack on Canada’s sovereignty and a potential threat to its democratic system. The findings of psychological manipulation in this study also speak to the severity of the issue. Strategic deception is a common feature when TFD is used as a weapon against States (Libicki, 2007).

It is now important to establish whether Canadian legal responses are actionable under these circumstances. Section 3.7 of the Criminal Code of Canada (1985) determines Canadian legal jurisdiction. This section states that any person outside of Canada who commits an act against a Canadian citizen that would constitute a criminal offence if committed within Canada is deemed to have committed that act within Canada and is subject to punishment within the Canadian justice system. It has been established that foreign interference violates section 282.4 of the Canada Elections Act. Section 491.2(1)(q) of the same act notes that intent must be proven to establish guilt for collusion under section 282.4.

The man alleged to be behind the website has a history of offering his services to publish positive or negative coverage of political candidates for a fee (Oved, Lytvynenko, & Silverman, 2019, Oct 18). If an investigation were to uncover evidence that a transaction was involved in the disinformation attack against the Liberal party, intent would be established and a guilty verdict might be reached. The issue then becomes whether prosecuting this case is feasible. Unfortunately, the outlook is bleak.

Establishing guilt requires knowing who was responsible for the act. Would the guilty party be the man behind the site, or the person(s) who crafted the disinformation articles? Answering these questions would require an investigation into the circumstances of the crime, and the capacity of police to conduct complex investigations within the intricate bounds of technology may be limited by tools and technological expertise (Brown, 2015). It is currently unknown whether the website’s owner authored each piece of disinformation or simply approved and uploaded each article. The authors of the articles in question remain anonymous; identifying them would require considerable investigative effort.

Complex cybercrime investigations are often beyond the abilities of police; they require a high level of expert knowledge and significant resource allocation. For example, an investigation may yield an IP address of interest, which may assist in identifying a suspect. However, an IP address alone is insufficient evidence, as a user with technological know-how can circumvent detection by manipulating their IP details. This makes identifying the suspect behind the computer increasingly difficult without an expert understanding of cyber investigative methods (Kao & Wang, 2009).

If an investigation were successfully carried out and a suspect or suspects were identified, the next step would be extraditing the person(s) from the host country to Canada to participate in the adversarial process. This requires the cooperation of the country where the suspect is located. The Northeastern Independent is said to be located in the U.S.; however, evidence suggests that this may not be true (Merrifield, 2019, June 24). If the U.S. location is accurate and the suspects reside there, extraditing them to Canada for prosecution is feasible given the existing extradition treaty. If those responsible instead reside in a country with which Canada has no extradition treaty, the likelihood of conviction decreases significantly due to the probable absence of international cooperation with a Canadian investigation. Investigations that transcend borders require the country of interest to allocate resources to a crime not committed against its own citizens.

While it may be legally provable that foreign interference occurred in the 2019 Canadian Federal election, the barriers described above make criminal prosecution an unlikely avenue, and such a response would not address the societal harms created by disinformation campaigns.

Recommendations

So, where do we go from here? How can sovereign States protect their democratic integrity from outside interference? Unfortunately, there is no simple answer.

As social media becomes more ingrained in culture, the ability of information trolls to use social media platforms to deliver TFD becomes a more powerful threat. Disinformation campaigns can threaten the entire system of democracy by injecting false narratives into the public arena while actively engaging in deception to encourage acceptance among audience members (Haataja, 2019). When developing counter-strategies, Kumar and Krishna (2014) note that it is far more advantageous to prevent fake news from circulating than to try to address it after it has infected the public domain. However, strategies must also focus on containing the disinformation threat if it does bleed into public discourse. Measures based on protection, prevention, and pre-emption should work in concert to minimize the harms of information attacks, but realistic expectations are required, as complete protection against disinformation campaigns is not feasible (Devost, Houghton, & Pollard, 1997).

Algorithms play a substantial role in circulating disinformation. Beyond the algorithm’s role in circulating TFD, the pursuit of engagement-driven profits on social media platforms further encourages the spread of disinformation because of its ‘engage-worthy’ design (Sun, 2023). For most social media users, algorithms are a quiet and invisible force, which limits their ability to understand the extent to which algorithms have shaped their opinions and beliefs (Beer, 2009). As information spreads rapidly online and is curated according to profitable engagement, the risk that users will accept misleading or false information increases (Bawden & Robinson, 2020). There is therefore a growing need for social media platforms to be transparent about how algorithms use information to moderate users’ content. When users know more about how information is curated in their newsfeeds and have more control over the content they engage with, they regain informational autonomy. This empowerment improves their ability to make informed decisions and promotes resilience against accepting disinformation narratives (Lorenz-Spreen et al., 2020).

While the criminal justice system may not be adequately prepared to respond to foreign threats of disinformation and election interference, governments and non-partisan media can play an important role in countering disinformation. Truth-telling disarms weapons of disinformation (U.S. Department of State, 2022). For example, the Canadian government has requested assistance from reputable news media sources to review the narratives presented by Russian forces about the conflict in Ukraine. To counter the false narratives detected by news media fact-checkers, the government has actively published accurate information to debunk false messages (Government of Canada, 2022). The Canadian government has also created a countering-disinformation tool that allows the public to access content published by Russian sources, describes how the information is false, and supplements it with correct information about the issue (Government of Canada, n.d.). However, given that disinformation sources such as The Northeastern Independent actively undermine the trustworthiness and legitimacy of government and media institutions, this measure may not reach the most at-risk demographic.

One method that may limit the negative consequences of TFD is encouraging cooperation among journalists, academics, and technology professionals, creating multi-disciplinary and coordinated efforts to respond to disinformation online. While each profession contributes substantial efforts to combat information warfare, there is a disconnect between the collected data and how it can be operationalized within cyberspace (Dias, 2017). Efforts that bridge the gap between academic understanding and technological know-how can be seen in the creation of artificial intelligence tools to identify and combat TFD (Cartwright, Weir, & Frank, 2019), the general shape of which is sketched below.
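As an illustration of this direction (and not a description of the specific system built by Cartwright, Weir, and Frank, 2019), the following is a minimal sketch of a supervised disinformation classifier, assuming a small hand-labeled corpus. Operational systems train on far larger fact-checked datasets and use richer features than the toy lexical cues invented here.

```python
# A minimal sketch of a supervised text classifier for flagging suspected
# disinformation. The training examples and labels are invented for
# illustration; real systems train on large fact-checked corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = suspected disinformation, 0 = reliable.
texts = [
    "SHOCKING: unnamed insiders reveal the scandal the media is hiding",
    "The committee released its findings after a six-month public inquiry",
    "They don't want you to know the REAL truth about the election",
    "Officials announced the program; details appear in the budget report",
]
labels = [1, 0, 1, 0]

# TF-IDF over word unigrams and bigrams captures lexical cues (hyperbole,
# vague sourcing language) similar to the indicators fact-checkers flag.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new headline; the output is a label plus a probability of the
# "disinformation" class that a human reviewer could use for triage.
headline = ["Sources say a career-ending expose is being suppressed"]
print(model.predict(headline), model.predict_proba(headline)[0][1])
```

Even this toy version highlights the design questions journalists, academics, and technologists must resolve together: which labels count as ground truth, which cues generalize across outlets, and how classifier output should be surfaced to human fact-checkers.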

Implications and Conclusion

Within the cyber world, propaganda and disinformation campaigns thrive. Social media and online platforms have enhanced the ability of information trolls to reach greater audiences with greater sophistication, posing a significant risk that larger segments of the population will give credence to harmful and false narratives (Boyd-Barrett, 2019). Currently, little is understood about disinformation campaigns within the Canadian context. This research has established that The Northeastern Independent engaged in a disinformation campaign targeting the 2019 Canadian Federal Election and that criminal foreign interference could have been at play. Disinformation and foreign interference in democratic elections can have grave implications for the integrity of democratic institutions and the States they represent.

However, the findings of this study lead to further unanswered questions. Was this interference sufficient to affect the election results? Have other, unidentified disinformation campaigns targeted Canada? What are potential ways to minimize the harms of disinformation, and could they be applied within the Canadian legal framework? To develop a more substantial knowledge base on Canadian disinformation, future research should investigate these questions. To respond more effectively to the threats of disinformation, criminal justice policies and frameworks will need to evolve to accommodate responses that minimize the harms of TFD. This will not be a simple endeavor; it will require consistent, ongoing efforts in research, policy evaluation, and legal reform.

References

Al-Rawi, A. (2021). How did Russian and Iranian trolls’ disinformation toward Canadian issues diverge and converge? Digital War, 2, 21–34.

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.

American Psychological Association. (2022). Misinformation and disinformation. Psychology Topics.

Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions: Problems, causes, solutions. Digital Journalism, 6(2), 154-175.

Barclay, D. A. (2018). Fake news, propaganda, and plain old lies: How to find trustworthy information in the digital age. Lanham: Rowman & Littlefield.

Bastos, M., & Farkas, J. (2019). “Donald Trump is my president!”: The internet research agency propaganda machine. Social Media & Society, 5(3), 205630511986546.

Bawden, D., & Robinson, L. (2020). Information overload: An introduction. Oxford Research Encyclopedia of Politics.

Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002.

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1-13.

Bennett, L. & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication (London), 33(2), 122–139.

Berghel, H. (2017). Oh, what a tangled web: Russian hacking, fake news, and the 2016 US presidential election. Computer, 50(9), 87-91.

Bondielli, A., & Marcelloni, F. (2019). A survey on fake news and rumour detection techniques. Information Sciences, 497, 38–55.

Boutilier, A. (2019, May 27). Facebook, Microsoft and Google agree to fight disinformation and fake news for federal election. Toronto Star.

Boyd-Barrett, O. (2019). RussiaGate and propaganda: Disinformation in the age of social media. Routledge.

Bradshaw, S., Bailey, H., & Howard, P. N. (2021). Industrialized disinformation: 2020 global inventory of organized social media manipulation. Oxford Internet Institute.

Brown, C. S. D. (2015). Investigating and prosecuting cyber crime: Forensic dependencies and barriers to justice. International Journal of Cyber Criminology, 9(1), 55.

Canada Elections Act, SC 2000, c 9, s 282.4.

Carber, M. & Woo, A. (2019, June 19). Trudeau announces hundreds of millions in foreign aid for women’s health, amid ‘attacks’ on abortion rights. The Globe and Mail.

Carlson, M. (2018). Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Information, Communication & Society, 1–15.

Cartwright, B., Weir, G. R. S., & Frank, R. (2019). Fighting disinformation warfare with artificial intelligence: Identifying and combating disinformation attacks. Tenth International Conference on Cloud Computing, GRIDS, and Virtualization, May 2019, 67-72.

Cartwright, B., Frank, R., Weir, G., & Padda, K. (2022). Detecting and responding to hostile disinformation activities on social media using machine learning and deep neural networks. Neural Computing & Applications, 34(18), 15141–15163.

Catalano, T., & Waugh, L. R. (2020). Critical discourse analysis, critical discourse studies and beyond. Springer International Publishing.

Cheung‐Blunden, V., Sonar, K. U., Zhou, E. A., & Tan, C. (2021). Foreign disinformation operation's affective engagement: Valence versus discrete emotions as drivers of tweet popularity. Analyses of Social Issues and Public Policy, 21(1), 980–997.

Cohen, S. (1972). Folk devils and moral panics. London: MacGibbon and Kee.

Criminal Code, RSC 1985, c C-46, s 342(1).

Derakhshan, H. & Wardle, C. (2017). Information disorder: Definitions. In Understanding and Addressing the Disinformation Ecosystem. Annenberg School for Communication.

Devost, M. G., Houghton, B. K., & Pollard, N. A. (1997) Information terrorism: Political violence in the information age. Terrorism and Political Violence, 9(1), 72-83.

Dias, N. (2017). How academics can help platforms tackle disinformation. In Understanding and Addressing the Disinformation Ecosystem. Annenberg School for Communication.

Diaz Ruiz, C., & Nilsson, T. (2023). Disinformation and echo chambers: How disinformation circulates on social media through identity-driven controversies. Journal of Public Policy & Marketing, 42(1), 18–35.

Egelhofer, J.L., & Lecheler, S. (2019). Fake news as a two-dimensional phenomenon: A framework and research agenda. Annals of the International Communication Association 43(2), 97–116.

Evans, G. (2014). Propaganda: World War 1 usages. History Class Publications. 10.

Fletcher, R. & Nielsen, R. (2017). People don’t trust news media – and this is key to the global misinformation debate. In Understanding and Addressing the Disinformation Ecosystem. Annenberg School for Communication.

Glenski, M., Weninger, T., & Volkova, S. (2018). Propagation from deceptive news sources: Who shares, how much, how evenly, and how quickly? IEEE Transactions on Computational Social Systems, 5(4), 1071–1082.

Government of Canada. (n.d.). Countering disinformation with facts - Russian invasion of Ukraine. Global Affairs Canada.

Government of Canada. (2022). Canada’s response to the Russian invasion of Ukraine. Global Affairs Canada.

Gray, B. (2017). 10 tips for fighting fake news: How to fact check like a pro. Lexis Nexis.

Guo, L., & Vargo, C. (2018). “Fake news” and emerging online media ecosystem: An integrated intermedia agenda-setting analysis of the 2016 U.S. presidential election. Communication Research, 47(2), 178–200.

Haataja, S. (2019). Cyber attacks and international law on the use of force: the turn to information ethics. New York, NY: Routledge.

Ivan, C., Chiru, I., & Arcos, R. (2021). A whole of society intelligence approach: Critical reassessment of the tools and means used to counter information warfare in the digital age. Intelligence and National Security, 36(4), 495–511.

Kao, D.Y., & Wang, S.J. (2009). The IP address and time in cyber-crime investigation. Policing: An International Journal of Police Strategies & Management, 32(2), 194–208.

Khandelwal, D. (2022). Canadian Prime Minister Justin Trudeau had sexual relations with a minor and had her sign a two and a quarter million dollar non-disclosure agreement so no one would find out about it. Fact Check Library: Logically AI.

Kumar, K. & Krishna, G. (2014). Detecting misinformation in online social networks using cognitive psychology. Human-Centric Computing and Information Sciences, 4(1), 1–22.

Lewandowsky, S., Stritzke, W. G. K., Freund, A. M., Oberauer, K., & Krueger, J. I. (2013). Misinformation, disinformation, and violent conflict. American Psychologist, 68(7), 487–501.

Libicki, M.C. (2007). Conquest in cyberspace: National security and information warfare. Cambridge University Press.

Locke, T. (2004). Critical discourse analysis. Bloomsbury Publishing.

Logically AI. (2023). About us. https://www.logically.ai/about-us

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4(11), 1102–1109.

Manfredi, J.L., Amado, A., & Gómez-Iniesta, P. (2022). State disinformation: Emotions at the service of the cause. Communication & Society, 35(2), 205–221.

Martel, C., Pennycook, G., & Rand, D. G. (2020). Reliance on emotion promotes belief in fake news. Cognitive Research: Principles and Implications, 5(1), 47–47.

McGregor, S. (2010). Critical discourse analysis: A primer. Kappa Omicron Nu Forum, 15(1), 1546-2676.

Media Bias Fact Check. (2022). The Globe and Mail - Bias and credibility. https://mediabiasfactcheck.com/the-globe-and-mail/

Merriam-Webster. (n.d.). Unduly. Retrieved from: https://www.merriam-webster.com/dictionary/unduly

Merrifield, C. (2019, June 24). Federal election easy prey for social media manipulators, experts warn. CBC News.

Miller, S., Menard, P., Bourrie, D., & Sittig, S. (2022). Integrating truth bias and elaboration likelihood to understand how political polarisation impacts disinformation engagement on social media [Special issue]. Information Systems Journal. https://doi.org/10.1111/isj.12418

Mullet, D. R. (2018). A general critical discourse analysis framework for educational research. Journal of Advanced Academics, 29(2), 116–142.

Murphy, D. M., & White, J. F. (2007). Propaganda: Can a word decide a war? The US Army War College Quarterly: Parameters, 37(3), 15–27.

Oved, M., Lytvynenko, J. & Silverman, C. (2019, Oct 18). A [redacted] website is publishing 'false' viral stories about Justin Trudeau - and there's nothing Canada can do about it. Toronto Star.

Palau-Sampio, D. (2023). Pseudo-media disinformation patterns: Polarised discourse, clickbait and twisted journalistic mimicry. Journalism Practice, 17(10), 2140–2158.

R. v. Nova Scotia Pharmaceutical Society, 1992 SCR 606.

Reuters Fact Check. (2022). Fact check-Trudeau’s joke about paying media is taken out of context. Reuters.

Santos-d'Amorim, K., & Miranda, M. (2021). Misinformation, disinformation, and malinformation: Clarifying the definitions and examples in disinfodemic times. Encontros Bibli Revista Eletrônica de Biblioteconomia e Ciência da Informação, 26, 1–23. https://doi.org/10.5007/1518-2924.2021.e76900

Schudson, M., & Zelizer, B. (2017). Fake news in context. In Understanding and Addressing the Disinformation Ecosystem. Annenberg School for Communication.

Shultz, R., & Godson, R. (1984). Dezinformatsia: Active measures in Soviet strategy. Pergamon-Brassey's.

Sirianni, J., & Vishwanath, A. (2016). Bad romance: Exploring the factors that influence revenge porn sharing amongst romantic partners. Online Journal of Communication and Media Technologies, 6(4), 42-73.

Sopilko, I., Svintsytskyi, A., Krasovska, Y., Padalka, A., & Lyseiuk, A. (2022). Information wars as a threat to the information security of Ukraine. Conflict Resolution Quarterly, 39(3), 333–347.

Southwell, B. & Boudewyns, V. (2017). Using behavioral theory to curb misinformation sharing. In Understanding and Addressing the Disinformation Ecosystem. Annenberg School for Communication.

Stroud, N. J. (2010). Polarization and partisan selective exposure. Journal of Communication, 60(3), 556–576.

Sun, H. (2023). Regulating algorithmic disinformation. The Columbia Journal of Law & the Arts, 46(4), 367-417.

Tolz, V., & Hutchings, S. (2018). Performing disinformation: A muddled history and its consequences. The London School of Economics and Political Science.

U.S. Department of State. (2022). Disarming disinformation: Our shared responsibility. Policy Issues.

U.S. Senate Select Committee on Intelligence. (2019). Russian active measures campaigns and interference in the 2016 U.S. election. 116th Congress Senate Report: 1st session.

Vicente, V. (2020, Jan 21). What is an internet troll? (And how to handle trolls). How to Geek.

Waschke, M. (2017). Personal cybersecurity: How to avoid and recover from cybercrime. Berkeley, CA: Apress.

Zuckerman, E. (2017, Jan 30). Stop saying fake news, it’s not helping. Ethan Zuckerman Blog.

Contributor Bios

Rachelle Louden is a PhD student in the School of Criminology at Simon Fraser University (SFU). Her area of research includes disinformation, information warfare, hybrid warfare, cybercrime, cyberterrorism, influence operations, technology-facilitated political violence, and policing. She is a research assistant for the International CyberCrime Research Centre (ICCRC) and an instructor in Law Enforcement Studies at the Justice Institute of British Columbia.

Richard Frank is a Professor in the School of Criminology at Simon Fraser University (SFU), Canada, and Director of the International CyberCrime Research Centre (ICCRC). Richard completed a PhD in Computing Science (2010) and another PhD in Criminology (2013) at SFU. His main research interest is Cybercrime. Specifically, he's interested in researching hackers and security issues, the dark web, online terrorism and warfare, eLaundering and cryptocurrencies, and online child exploitation. He is the creator of The Dark Crawler, a tool for collecting and analyzing data from the open Internet, dark web, and online discussion forums. Through this tool, the ICCRC has collected ~150 million posts from various right-wing, left-wing, gender-based, and religiously-motivated extremist communities, leading to several projects and publications. Dr. Frank has publications in top-level data mining outlets, such as in Knowledge Discovery in Databases, and security conferences, such as Intelligence and Security Informatics (ISI). His research can also be found in Criminology and Criminal Justice, the Journal of Research in Crime and Delinquency, and the Canadian Journal of Criminology and Criminal Justice, to name a few.
