
Elections are Primed for ‘Wave of Misinformation’

Updated: Feb 12

Patrick Weninger

Former CIA Senior Intelligence Officer

OPINION — As we enter 2024, concern is mounting over the threat generative AI poses to elections through misinformation and media manipulation. The proliferation of generative AI over the past year in the form of “deepfake” audio, images, and videos, coupled with its easy accessibility, means that nation-state actors and individuals working from their garages now have access to nearly the same level of technology. Experts have expressed significant concern about deepfakes and AI algorithms because they blur the line between truth and fiction in public discourse.

This amplification of misinformation due to generative AI technology poses significant threats to democratic processes. Various online platforms and applications enable anyone to generate AI deepfake audio, images, or videos, leading to viral misinformation events.

Today, more people are armed with better technology to engage in malicious activities. Combine that with a growing number of polarizing global issues, geopolitical tensions, active hostilities, and numerous elections in 2024, and it is evident that we are primed for a wave of misinformation.

We are only scratching the surface of AI technology and are likely to witness further advances over the next 2-5 years, giving the public even more user-friendly tools for media manipulation. Some have characterized 2024 as the “year of elections,” with more than 50 national elections worldwide, including in the US, Mexico, the UK, India, and Pakistan. Many of these elections will be contentious given the surplus of polarizing issues affecting social and cultural norms globally, including religion, political identity, economic factors, immigration, and national security issues such as the conflicts in the Middle East and Ukraine.


A World Economic Forum survey named AI-driven misinformation and disinformation as the top global risk over the next two years, ahead of climate change and war. The confluence of these polarizing issues, generative AI technology, and the intent of bad actors makes these elections ripe for misinformation.

To underscore the threat to elections, recent examples highlight the use of generative AI technology to influence voters. In late January 2024, ahead of the New Hampshire primary, the New Hampshire attorney general’s office investigated a possible “unlawful attempt” at voter suppression. Media outlets reported a robocall with the voice of President Joe Biden, seemingly artificially generated, advising recipients to “save their vote for November” and avoid voting in the New Hampshire presidential primary.

In October 2023, just two days before Slovakia’s national elections, an AI-generated audio recording was posted on Meta’s Facebook platform. In the recording, the pro-NATO candidate was purportedly heard discussing how to rig the election by buying votes from a smaller party. The pro-NATO candidate ultimately lost to the opposing candidate, who had campaigned on withdrawing military support from Ukraine, echoing a Russian narrative. It remains unclear how much this deepfake audio affected the vote, but it underscores the threat generative AI poses to elections.

People generally believe what they read and hear, and generative AI allows manipulation of images, video, audio, and text, making it increasingly challenging to separate truth from fiction, especially in elections.


We know that Russia uses active measures, a combination of overt and covert mechanisms to shape public opinion and influence political processes in support of its strategic objectives. By using disinformation in its active measures campaigns, Russia seeks to undermine democratic institutions, manipulate public sentiment, and sow discord by exploiting existing social, political, and economic fault lines.

In the last two US presidential elections, Russia leveraged its understanding of our culture and politics to exploit existing divisions in our society on wedge issues like race, nationalism, patriotism, immigration, LGBTQ rights, and gun control. With the addition of generative AI, there is little doubt that Russia will employ even more innovative methods in its active measures campaigns against the 2024 US presidential election.

Over time, we can expect Russia’s misinformation tradecraft to improve, potentially enhancing its ability to undermine elections.


Many believe that social media platforms will be unprepared for the onslaught of misinformation generated by AI in the 2024 US presidential elections. This election will be the first where manipulated videos or images can be instantly created and distributed to the public.

AI-generated ads have already appeared in the current presidential campaigns. For instance, the campaign of former presidential candidate Ron DeSantis released an AI-generated ad showing former President Donald Trump hugging Dr. Anthony Fauci. Meta has implemented a new policy, effective January 1, 2024, that displays labels acknowledging the use of AI when users click on political ads. Google and Microsoft have instituted similar policies requiring campaigns or candidates to disclose the use of AI-altered voices or images, and content violating these policies is removed. However, safeguards to detect and filter fake content struggle to keep up with the AI tools and systems that create and spread manipulated media. The challenge lies in how social media companies can police their platforms, especially against bad actors who do not disclose the use of AI-generated content.

The primary challenge in countering AI-generated manipulated media is identification. Detection technology is not keeping pace with generative AI technology, making it more challenging for social media platforms to monitor real-time activities on their platforms.

President Biden recently signed an Executive Order to strengthen AI safety and security, including proposed government guidance on digital watermarking to help determine whether an image is real or fake. However, these guidelines lack enforcement mechanisms, and watermarking is not foolproof.


Data governance in the era of generative AI is becoming more challenging, requiring additional legislation, something social media platforms generally oppose. The European Union (EU) has recently passed draft legislation to ensure AI systems are safe and respect the fundamental rights of citizens. The EU’s risk-based approach imposes stricter rules on high-risk AI systems that could cause harm. The agreement covers governmental use of AI in biometric surveillance, how to regulate AI systems like ChatGPT, and the transparency rules to follow before market entry. The legislation does not apply outside the EU, and the US government has expressed concern that adopting similar language in the US could put US firms at a competitive disadvantage with China. Congress is now examining this issue, recognizing the need for anticipatory legislation to keep pace with the rapid development of generative AI. However, any legislation will need to address AI strategic risk management, promoting safe use and transparency, without stifling US innovation.

Countering the use of generative AI by bad actors for misinformation requires innovation in technologies that can detect and mitigate threats in real time or near real time. Detection platforms should focus not only on determining whether content is real or fake but also on understanding how it was manipulated. This approach can support attribution and help identify and expose bad actors. Currently, deepfake detection tools are not widely available, and even when they are, they struggle to scale to the volume of media being processed. Congress should explore ways to fund innovation in media manipulation detection through specific legislation and funding programs.

Some argue that 2024 will be the most crucial year in history for preserving democracy. How we legislate, manage, prevent, and detect threats associated with generative AI for misinformation purposes will likely have a significant impact on how democracies govern and conduct elections in the future.

This critical time in our history requires finding ways to leverage innovation to protect our democratic processes and institutions from generative AI threats while allowing AI technology to thrive through transparency.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals. 

Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.


Patrick Weninger is a retired CIA Senior Intelligence Service Case Officer with over 28 years of experience in the Central Intelligence Agency and U.S. Air Force. He served executive-level headquarters assignments overseeing clandestine operations, tradecraft, and technical operations and served as a Senior Executive Operations Officer and Manager, with senior field assignments in the Near East, South Asia, and Europe, including multiple Chief of Station and war zone assignments. He led and oversaw the CIA’s global program for Russia.

