Recent technological advances have transformed the way information is created, consumed, and manipulated. They have also paved the way for sophisticated disinformation campaigns in which AI-generated content blurs the line between fact and fabrication, leaving unsuspecting audiences vulnerable to deception. Much has happened on this front since the well-known Cambridge Analytica scandal and the cases of election interference in Western democracies, as technologies designed to sow discord and confuse populations through digital communication have advanced rapidly. Recent years have seen an ever-growing repertoire of information-manipulation techniques deployed well beyond the most frequently studied Western countries, driven by advances in artificial intelligence (AI), generative adversarial networks (GANs), and deepfakes.
For example, the 2022 national election in Brazil was undermined by the malicious use of AI-generated deepfakes: fabricated images and videos that spread misinformation by showing leading candidates in scandalous and compromising situations. The deception leveraged advanced AI and sophisticated editing tools to mimic the candidates' voices and facial expressions, producing highly convincing material that fed an entirely fictitious ecosystem of narratives, destabilizing public trust and skewing the electoral environment. In Myanmar, military personnel led a systematic campaign on Facebook to spread hate speech and fake news against the Rohingya Muslim minority. The campaign involved setting up seemingly innocuous lifestyle pages that gradually began posting anti-Rohingya content, capitalizing on algorithmic amplification to reach a wider audience. This disinformation fed an atmosphere of hostility that culminated in acts of ethnic cleansing. Facebook subsequently faced international criticism, prompting more substantial efforts to combat hate speech and misinformation on the platform.
In India, a country rich in languages and dialects, natural language processing (NLP) was deployed to manipulate political debate at scale. During 2019–20, competing AI systems generated disinformation, predominantly on Facebook, tailored to different regions and attuned to region-specific cultural and linguistic tones. The systems used local dialects and culturally nuanced narratives that fit the regional context, playing on regional biases and presenting fabricated news in a manner that seemed authentic and relatable to local communities; the tactic proved disconcertingly effective. In a more insidious example, Russian AI-based information-manipulation systems were used to sway public sentiment in Ukraine on a broad scale. An extensive disinformation campaign uncovered in 2022 revealed that NLP-powered bots were impersonating real users across a variety of social media platforms. These AI entities joined public discussions, stirred up conflict, and propagated counterfeit news stories, deepening political divisions and contributing to social unrest in the run-up to Russia's full-scale invasion of Ukraine.
Since President Rodrigo Duterte assumed office in the Philippines on June 30, 2016, there has been a marked increase in state-sponsored disinformation campaigns, particularly on the pervasive social media platform Facebook. The government has purportedly mobilized 'troll armies', an alarming strategy in which large groups of individuals or automated bots are deployed to systematically spread propaganda. Their primary objectives appear to be manipulating public sentiment in favor of the administration, discrediting and harassing the opposition, and diverting attention from controversial issues. These disinformation campaigns are neither haphazard nor spontaneous; they involve targeted, calculated dissemination of content based on intricate data profiling. Their focus is generally on critics of the administration and influential opposition figures, including the journalist Maria Ressa, who has faced online abuse and legal harassment since her news organization, Rappler, began reporting on the disinformation tactics and alleged extrajudicial killings under Duterte's administration.
Meanwhile, in Australia, amid the catastrophic bushfires of late 2019 to early 2020, a surge of misinformation spread across social media platforms. It falsely asserted that the bushfires had been predominantly set by arsonists, even though official reports found that most fires were caused by lightning and only around 1% were intentionally set. This wave of false information was propagated mainly by bots and trolls, which used big-data profiling to amplify the misinformation among susceptible demographics, including climate change skeptics and anti-government groups. In Sweden, in the summer of 2023, the Ministry of Defense was targeted by a large-scale, automated information-manipulation operation. The operation used deepfake technology (artificial intelligence capable of creating hyper-realistic fake images or videos) to attribute a series of Quran burnings that occurred in June and July 2023 to the Swedish government. The ministry was compelled to launch an organized counter-campaign to fact-check and refute the accusations in real time, adamantly denying government involvement in the provocations and underscoring the sophistication and potential harm of such manipulative digital tactics.
However, while technology has fueled the rise of disinformation, it also holds the key to potential solutions. AI and NLP, once harnessed by disinformation creators, can now be wielded as powerful tools for detection. Machine learning and data analytics empower researchers and fact-checkers to sift through vast amounts of information, uncovering patterns that reveal traces of disinformation. For instance, the emergence of GPT-powered AI fact-checkers shows how AI can rapidly cross-reference claims against reliable sources and debunk falsehoods in real time.
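To make the pattern-detection idea concrete, the sketch below trains a toy naive Bayes text classifier over word counts. It is only a minimal illustration of the statistical principle: the labeled examples are invented for this sketch, and real detection systems use far larger corpora and richer features.

```python
import math
from collections import Counter

# Invented toy corpus -- purely illustrative, not real data.
TRAIN = [
    ("shocking secret cure doctors hate revealed", "disinfo"),
    ("anonymous sources claim rigged ballots destroyed", "disinfo"),
    ("miracle trick exposes hidden government plot", "disinfo"),
    ("officials publish audited election results report", "credible"),
    ("study released in peer reviewed journal", "credible"),
    ("agency confirms figures after independent review", "credible"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    word_counts = {"disinfo": Counter(), "credible": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the higher log posterior, with Laplace smoothing."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total = sum(label_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        score = math.log(label_counts[label] / total)  # prior
        denom = sum(counts.values()) + len(vocab)      # smoothed denominator
        for word in text.split():
            score += math.log((counts[word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

wc, lc = train(TRAIN)
print(classify("secret plot to rig ballots revealed", wc, lc))  # "disinfo" on this toy data
```

Production classifiers differ in scale rather than kind: they score texts against learned word and phrase distributions, which is why formulaic disinformation phrasing is statistically detectable.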
Blockchain technology also presents a promising path toward tamper-resistant information dissemination, safeguarding content from unauthorized alteration and establishing its provenance. Because each record commits cryptographically to the one before it, any modification of published content is detectable, which helps counter misinformation and enhances trust in online information sources. By analyzing networks and user behavior, researchers can reverse-engineer orchestrated disinformation campaigns and the role of automated bot accounts, enabling the development of targeted counter-strategies. In the aftermath of the 2016 Brexit referendum, for example, researchers identified an extensive network of Twitter bots that disseminated misleading information and manipulated public sentiment. Exposing this orchestrated campaign proved instrumental in understanding the mechanics of disinformation propagation and devising strategies to disrupt such coordinated efforts.
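The tamper-evidence property can be sketched with a minimal hash chain: each record's hash covers both its content and the previous record's hash, so editing any earlier record invalidates everything after it. The field names and content strings here are illustrative, not any real platform's format.

```python
import hashlib
import json

def _digest(content, prev_hash, timestamp):
    """SHA-256 over the canonical JSON of a record's fields."""
    payload = json.dumps(
        {"content": content, "prev_hash": prev_hash, "timestamp": timestamp},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(content, prev_hash, timestamp=0):
    """Create a record whose hash commits to its content and its predecessor."""
    return {
        "content": content,
        "prev_hash": prev_hash,
        "timestamp": timestamp,
        "hash": _digest(content, prev_hash, timestamp),
    }

def verify_chain(chain):
    """Recompute every hash and check the links; any edit breaks verification."""
    for i, block in enumerate(chain):
        if block["hash"] != _digest(
            block["content"], block["prev_hash"], block["timestamp"]
        ):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Publish two records, then show that silently rewriting history is detectable.
chain = [make_block("Article v1 published", "0" * 64)]
chain.append(make_block("Correction appended", chain[-1]["hash"]))
assert verify_chain(chain)
chain[0]["content"] = "Article silently rewritten"
assert not verify_chain(chain)  # the tampering is caught
```

A real blockchain adds distributed consensus and digital signatures on top of this linking, but the hash chain alone is what makes retroactive edits to published content evident rather than invisible.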