
Warning Signs for How Generative AI Will Impact the Next Election



Dr. Sean Guillory

Lead Scientist for Booz Allen Hamilton’s Cognitive Domain/Dimension Products and Capabilities


OPINION – As state and non-state actors aim to disrupt elections by any means available while still maintaining plausible deniability, one of the newest tools we expect to see widely adopted is generative AI.


Generative AI can produce text, audio, imagery, and even synthetic data far faster than humans traditionally could. With hundreds of millions of users and the cost of building large language models dropping rapidly (OpenAI reported that GPT-3 cost an estimated $4 million to build, and since March 2023 some experts have claimed it is possible to create better models for under $100,000), it is highly likely these tools will be used against our democracy at a scale we have not experienced in any election cycle yet. With this in mind, here are three big issues with generative AI's use in trying to influence the next election cycle that are important for national security experts to consider, followed by three suggestions that could help address them.


The first issue is that many people have anticipated generative AI's manipulative effects would primarily target the national level, when they are more likely to be felt at the local community level. Attempts to propagate deepfakes of President Biden or Ukraine's President Zelenskyy are caught and publicized quickly. The public is already well aware that the voices of famous people can be manipulated to say different things through AI parody content.


The default position of many people is to be skeptical about whether a piece of content featuring a famous person is authentic. This phenomenon can unfortunately be weaponized into what is called "The Liar's Dividend," where people confronted with genuine incriminating evidence can now plausibly claim it was AI generated. Instead of influencing at the national level, where the chance of being caught is extremely high, the issues we will see with generative AI will be more at the local level. For example, when high school students in Carmel, New York, made a deepfake of their principal spouting racist comments, much of the town believed it was real, causing serious problems.


Local communities not only lack the sheer quantity of eyes on a piece of content that would make a fake easy to spot, they also are unlikely to have people with both the expertise to prove a piece of content is AI generated and the ability to limit that content's reach. Therefore, we can expect attempts at national-level campaigns to be caught easily, while multiple microtargeted campaigns at the local level are where we will see the most influence.


The next issue is that AI-generated content in a language and culture different from the audience's will propagate farther before it is stopped. Content where the audience is familiar with the language, the speaker's typical messages, and even their various speech idiosyncrasies will be detected more quickly if someone tries to present AI-generated content to them.


We have already seen actors present deepfake content featuring English speakers (one purporting to be President Biden) to audiences in African countries. It was not immediately stopped because it is harder to tell that content is fake when one does not speak the language or knows of the person primarily through secondary sources.


The last issue of concern is what happens when actors build new generative AI models to support their operations. Model Collapse is a recently documented phenomenon in which the more a large language model (LLM), such as ChatGPT, is trained on AI-generated data (versus human-created data), the worse the model performs. Training new LLMs on data produced by other models causes them to forget the true underlying data distribution and to learn from compounded mistakes. The study's authors were surprised by how quickly the models collapsed.
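
To make the mechanism concrete, here is a minimal toy sketch of my own, not the study's method: a simple Gaussian "model" stands in for an LLM and is repeatedly refit on its own generated output, with rare tail samples dropped each round as a rough stand-in for the sampling and approximation errors the model-collapse research describes. The learned distribution narrows generation after generation, which is the "forgetting" of the true data distribution described above.

```python
import random
import statistics

def fit(samples):
    # "Train" the stand-in model: estimate mean and standard deviation.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n, clip=2.0):
    # "Generate content" from the model, under-producing rare tail events
    # (a simplification standing in for sampling/approximation errors).
    out = []
    while len(out) < n:
        x = random.gauss(mean, stdev)
        if abs(x - mean) <= clip * stdev:
            out.append(x)
    return out

random.seed(0)
human_data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # the "real" data

mean, stdev = fit(human_data)
print(f"generation  0: stdev={stdev:.3f}")
for gen in range(1, 11):
    synthetic = generate(mean, stdev, 2000)  # train only on model output
    mean, stdev = fit(synthetic)
    print(f"generation {gen:2d}: stdev={stdev:.3f}")
# The printed spread decays toward zero: each model inherits and compounds
# the previous model's errors instead of relearning the true distribution.
```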


Another point that makes Model Collapse a near-term concern is the estimate from some generative AI experts that by 2025, 90% of all online content will be AI generated, and online content is currently the source of most of these models' training data. While we won't necessarily reach that 90% figure during this upcoming election cycle, knowing that there will likely be a significant increase in AI-generated content this cycle (much of it generated because of the election) is cause for concern about any new generative AI models built during or after that period.


While these issues may sound incredibly bleak for how AI could influence our democracy, I want to be clear that not only will AI be a net benefit for humanity in many areas (e.g., how AI was used to accelerate research and development of COVID-19 vaccines), there will also be dimensions where AI could prove crucial for our survival. When a person has less than a minute to make a decision after the discovery of a hypersonic missile traveling at Mach 10, having AI support for detection, data analysis, and even decision making and deployment will be critical for saving lives. As these adversarial actors continue to invest in the type of AI that aims to damage our democracy, our first suggestion is that it is critical for our national security to invest in ethical and responsible AI solutions across the entire lifecycle of AI model development.


Some might at first see this as a disadvantage that limits our options to fight back, but by empowering accountability, oversight, and transparency, we gain the ability to address issues and possible manipulation faster. It is also important to make these algorithms as inclusive as possible, so that every citizen is properly represented in the data and no one is treated as an outlier because of their identity; those are the individuals most frequently targeted by inciting messages.


Our second suggestion is that, more than studying how to build tools that detect "fake" content, we should study how humans decide whether they believe something is real (work now often grouped under "cyberpsychology," though this kind of detection was traditionally studied under "psychophysics"). Detection tools will be locked in a never-ending cat-and-mouse game with continually updating generative AI tools. Ultimately, however, it is the human perceiving the content and using the tool who decides whether the content is real, so the variables humans use to make that decision should be further researched.


Our final suggestion is that our cyber and information operations workforces must collaborate to address the issues described in this article, or we could lose this fight. CIG Principals Group member the Honorable Sue Gordon has noted in previous writing how the Russians laughed at and exploited the fact that U.S. Cyber Command at the time did not have an information operations mission, and, more than ever, many modern cyber attacks have both cyber and information operations elements. Democracy can only win if we fight for it, and it can only win as much as we can fight together.


Dr. Sean Guillory is the Lead Scientist for Booz Allen Hamilton's Cognitive Domain/Dimension Products and Capabilities. Sean specializes in applying neuroscience, psychology, social science, and machine learning to develop concepts and products for defense and national security. He holds a Ph.D. in cognitive neuroscience from Dartmouth College.
