An exposition on foreign information manipulation and interference

A blog on foreign information manipulation and interference (FIMI) in the 2024 elections

Authors: Dr David Wright, Trilateral Research; Dr Richa Kumar, Trilateral Research; and Dr Evangelos Markatos, FORTH

Introduction

This year, 2024, is witnessing more elections than any other in history: more than 41 per cent of the world’s population is entitled to vote. Yet even where elections are “free”, they can be manipulated in many ways, from bribery to ballot-stuffing, from cyber attacks to foreign information manipulation and interference (FIMI). In this year’s elections, the online ecosystem will play a major role in shaping campaigns, information flows and election outcomes, putting the freedom and fairness of these elections at stake. There are many examples of electoral processes disrupted by information manipulation: the Russian social media disinformation campaigns targeting the 2016 US presidential election; the deepfake audio circulated just before Slovakia’s September 2023 parliamentary elections; and the fake news and computational propaganda in India’s 2019 general elections, which helped deliver a historic win for the extreme right-wing BJP, with deepfake AI forecast to play a role in this year’s parliamentary elections as well.

Examples of election interference and information manipulation

With about 80 elections this year, and given the potential for technology to be exploited in them, attackers have many opportunities. The recent past is one indicator of the growing scale of the challenges to democracy, but new technologies such as deepfakes and robocalls (AI-generated voice calls) pose an unprecedented kind of accelerated and amplified threat. EU chief diplomat Josep Borrell warned in January 2024 that elections across the globe this year will be a “prime target” for countries, such as Russia, looking to spread disinformation and undermine democracy. Here are a few examples of how AI-generated FIMI is disrupting democratic practices and raising concerns about the sanctity of our democratic processes:

As Slovakia’s September 2023 parliamentary election approached, an audio recording of the pro-EU candidate, Michal Šimečka, was posted to Facebook. In the recording, Šimečka appeared to discuss plans to rig the election by buying votes from marginalised communities. The audio was fake, and the news agency AFP said it showed signs of having been manipulated using AI. Even though fact-checkers identified the recording as false, it was shared across social media as if it were real. The election was won by the populist Robert Fico, who, as prime minister, went on to end the country’s military aid for Ukraine.

In Argentina’s 2023 presidential election, a few days before the first round of voting, scandalous audio recordings circulated featuring Carlos Melconian, the economist whom presidential candidate Patricia Bullrich had named as her prospective economy minister, speaking crudely about women and offering government positions in exchange for sexual favours. Bullrich and her party swiftly came to Melconian’s defence, dismissing the recordings as fabricated, but the damage was done. The incident highlights how AI-generated content can shape the contours of an electoral contest.

Other instances include Russian information manipulation in its war against Ukraine aimed at discrediting the Ukrainian president: in 2023, social media was rife with false claims that President Volodymyr Zelenskyy had bought a mansion in Florida, suggesting that he was looking to flee to the United States. In June 2023, the French government agency Viginum announced that it had uncovered a wide-ranging Russian disinformation campaign aiming to undermine Western support for Ukraine. Although not an example of election interference per se, Viginum’s report is instructive in detailing the scope and methods of the campaign: spreading pro-Russian content; impersonating media outlets such as Le Monde, Le Figaro and Le Parisien, as well as government websites, including that of France’s ministry for Europe and foreign affairs; creating francophone “news” websites with polarising angles; and coordinating fake accounts to spread the content created. The same techniques are being used in elections.
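
The domain impersonation that Viginum describes can be screened for mechanically. As a rough illustration, the Python sketch below flags domains whose registered name imitates a known outlet, using edit distance; the outlet list, the threshold and the sample domains are illustrative assumptions on our part, not data from the Viginum report.

# Sketch: flag lookalike domains that imitate legitimate news outlets.
# The outlet list, threshold and sample domains are illustrative
# assumptions, not data from the Viginum report.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

LEGITIMATE = ["lemonde.fr", "lefigaro.fr", "leparisien.fr"]

def flag_lookalikes(domain: str, max_distance: int = 2) -> list[str]:
    """Return the legitimate outlets whose name this domain imitates."""
    name = domain.split(".")[0]
    return [real for real in LEGITIMATE
            if domain != real
            and levenshtein(name, real.split(".")[0]) <= max_distance]

for candidate in ["lemonde.ltd", "lefigaro.fr", "leparisien.top"]:
    print(candidate, "->", flag_lookalikes(candidate))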

In the Indian parliamentary elections of 2019, widespread disinformation shaped the election campaigns along lines of Hindu-Muslim polarisation and fomented anti-Pakistan sentiment. Ahead of the 2024 Indian parliamentary elections, there are concerns that deepfake AI-generated voice calls will be used to manipulate the vote, “seriously destabilizing the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes”.

Interfering in elections with new technologies

Elections are the cornerstone of democracies, and generative AI risks transforming the electoral process in multiple ways. Generative AI tools such as ChatGPT, Midjourney, Stable Diffusion and DALL-E can produce plausible disinformation at scale. And social media is great at making it go viral.

Some of the FIMI tools most frequently used by foreign agents include fake news, deepfakes, robocalls, spear phishing, non-consensual image sharing, and voice and video cloning. AI fakes have already been used in elections in Slovakia, Taiwan and Indonesia, and will further erode the public’s already-low trust in politicians, institutions and the media. Despite OpenAI’s ban on using its tools in political campaigning, Reuters reported that its products were used widely in the recent election in Indonesia – to create campaign art, track social media sentiment, build interactive chatbots and target voters.

The rise of TikTok and other less-regulated platforms such as Telegram, Truth Social and Gab has also created more information silos online where baseless claims can spread. Some apps that are particularly popular among communities of colour and immigrants, such as WhatsApp and WeChat, rely on private chats, making it hard for outside groups to see the misinformation that may be spreading.

In addition to their malicious use, some new technologies are disruptive on their own. New research from civil society organisations AI Forensics and AlgorithmWatch found that Microsoft’s Bing AI chatbot, recently rebranded as Microsoft Copilot, gave inaccurate answers to one out of every three basic questions about candidates, polls, scandals and voting in a pair of recent election cycles in Germany and Switzerland. In many cases, the chatbot misquoted its sources.
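
Audits of this kind boil down to comparing a chatbot’s answers against a set of known facts. The minimal Python sketch below shows the shape of such a check; the ask_chatbot function is a hypothetical placeholder, and the question set and simple substring matching are simplifications on our part, not the methodology AI Forensics and AlgorithmWatch actually used.

# Sketch: estimate how often a chatbot answers basic election questions
# incorrectly, in the spirit of the AI Forensics / AlgorithmWatch audit.
# ask_chatbot is a hypothetical stand-in; the questions and substring
# matching are simplifications, not the study's actual methodology.

QUESTIONS = [
    # (question, fact a correct answer must contain) -- illustrative only
    ("When is the next German federal election?", "2025"),
    ("Who is the incumbent German chancellor?", "Scholz"),
    ("What is the minimum voting age in Germany?", "18"),
]

def ask_chatbot(question: str) -> str:
    """Placeholder: replace with a call to the chatbot under test."""
    canned = {
        "When is the next German federal election?": "It is due in 2025.",
        "Who is the incumbent German chancellor?": "Angela Merkel.",  # stale
        "What is the minimum voting age in Germany?": "18 years.",
    }
    return canned[question]

wrong = sum(1 for q, fact in QUESTIONS
            if fact.lower() not in ask_chatbot(q).lower())
print(f"Inaccurate answers: {wrong}/{len(QUESTIONS)}")  # 1/3 here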

Generative artificial intelligence tools have made it far cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in correcting the record have shifted their priorities. Bing, ChatGPT and Bard all carry disclaimers noting that their chatbots can make mistakes and encourage users to double-check their answers.

Broadly, the threats posed by generative AI content include the following:

    • Robocalls, social media posts and fabricated evidence can be generated to demobilise or deceive voters, for example by depicting a politician or influencer saying or doing the opposite of what they would be expected to say or do. These tools pose new challenges to democratic practices.

    • Social media posts, images, comments and engagement can be generated to amplify or algorithmically boost topics, manufacturing a perception of consensus on a political issue, shrinking the space for dissenting opinion and thereby manipulating voters (see the detection sketch below).

    • By the same token, social media can be, and is, used to inflame social divisions and to disseminate controversial views about an issue, supposedly from candidates or other influential figures, with the aim of manipulating voters.

    • Fabricated recordings or images can be leaked to undermine trust in electoral processes through claims of election rigging. Citizens often cannot tell what is real and what is AI-generated, which creates uncertainty and anxiety.


Thus, generative AI poses the risk of degrading the overall information environment; it can be “used to target existing vulnerabilities in election operations or voter engagement by scaling tried and tested interference playbooks”.
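
The amplification threat above also hints at a countermeasure, since coordinated campaigns leave statistical fingerprints. As a minimal illustration, the Python sketch below flags clusters of accounts posting near-identical text within a short time window, one of the simplest signals of coordinated inauthentic behaviour; the sample posts and the thresholds are invented for illustration.

# Sketch: one observable signal of coordinated amplification is many
# accounts posting near-identical text within minutes of each other.
# The sample posts and thresholds below are invented for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

POSTS = [  # (account, timestamp, text) -- fabricated sample data
    ("acct_a", datetime(2024, 3, 1, 9, 0), "Candidate X RIGGED the vote!!"),
    ("acct_b", datetime(2024, 3, 1, 9, 2), "candidate x rigged the vote"),
    ("acct_c", datetime(2024, 3, 1, 9, 3), "Candidate X rigged the vote."),
    ("acct_d", datetime(2024, 3, 1, 14, 0), "Lovely turnout at my station."),
]

def normalise(text: str) -> str:
    """Lower-case and strip punctuation so trivial edits still collide."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

def coordinated(posts, min_accounts=3, window=timedelta(minutes=10)):
    """Yield texts posted by at least min_accounts within one time window."""
    groups = defaultdict(list)
    for account, ts, text in posts:
        groups[normalise(text)].append((ts, account))
    for text, hits in groups.items():
        hits.sort()
        accounts = {acc for _, acc in hits}
        if len(accounts) >= min_accounts and hits[-1][0] - hits[0][0] <= window:
            yield text, sorted(accounts)

for text, accounts in coordinated(POSTS):
    print(f"Possible coordination by {accounts}: {text!r}")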

What can be done to combat FIMI?

How much of an impact, if any, inaccurate answers from Bing or other AI chatbots could actually have on elections is unclear. But with several elections in Europe – in Portugal and Slovakia in March, Lithuania in May, Belgium and the European Parliament elections in June, Croatia in September, Austria and Lithuania again in October, and Romania and the UK in November (note: some of these dates may change) – citizens need to be vigilant.

New laws are needed to combat information manipulation in elections, and legal and policy interventions are already underway. For example, a new law in Minnesota will protect election workers from threats and harassment, bar people from knowingly distributing misinformation ahead of elections, and criminalise the non-consensual sharing of deepfake images intended to hurt a political candidate or influence an election. Canada has banned the use of the Chinese-owned social media app TikTok on government-issued devices, citing privacy and security risks. Other countries are also considering what measures can be taken to counter the explosive spread of FIMI as a clear and present danger to democracies.

With 2024 the biggest year for elections worldwide, in which nearly 4.2 billion people are eligible to vote, information manipulation is one of the key risks threatening the cornerstone of our democracies: free and fair elections. As we navigate these threats in the 2024 elections, concerted research is required to build robust democracies. The EU-funded ATHENA project is contributing to this vigilance by investigating 30 case studies of FIMI and the tactics, techniques and procedures (TTPs) used by FIMI attackers in their efforts to undermine democratic practices.

Trilateral Research Ireland is leading a consortium of 14 partners from 11 European countries in the €3.2 million, three-year project. The consortium is inviting stakeholders to help identify instances of FIMI and any countermeasures taken against them. The project includes the development of a FIMI toolbox and dashboard for stakeholders, and a foresight task to “scan the horizon” for emerging developments in FIMI. The consortium expects to recommend improvements to existing legal, regulatory and knowledge-sharing frameworks to better counter the huge rise in FIMI.
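
To give a concrete feel for what structured FIMI case data can look like, here is a minimal Python sketch of a record pairing an incident with the kind of TTP labels used in FIMI threat analysis (for example, the DISARM framework); the schema and the technique labels are illustrative assumptions, not the design of the ATHENA toolbox or dashboard.

# Sketch: a structured record for one FIMI case study, loosely inspired
# by how EU FIMI threat reports pair incidents with DISARM-style TTPs.
# The schema and the technique labels are illustrative assumptions, not
# the ATHENA toolbox design.
import json

incident = {
    "id": "fimi-case-001",
    "title": "Deepfake audio released ahead of a national election",
    "observed": "2023-09",
    "channels": ["facebook"],
    "ttps": [  # hypothetical DISARM-style technique entries
        {"id": "T-AUDIO", "name": "Develop audio-based content"},
        {"id": "T-FLOOD", "name": "Flood the information space"},
    ],
    "countermeasures": ["fact-check published", "platform label applied"],
}

print(json.dumps(incident, indent=2))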

The consortium’s research and findings will support EU efforts under the Code of Practice on Disinformation and the new Digital Services Act to make election integrity a top priority. The ATHENA consortium is responding to the EC’s public consultation on its Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes. The partners will also examine measures taken by governments outside the EU, e.g., laws requiring deepfakes to be labelled or banning those that misrepresent candidates. Some social media companies, including YouTube and Meta, which owns Facebook and Instagram, have already introduced AI labelling policies.

To stay up-to-date with the ATHENA project’s progress, please follow us on LinkedIn and X, and sign up for our newsletter. If you’d like to get in touch to find out more about the project, please email us at athena.comms@trilateralresearch.com.
