As we approach the 2024 Presidential Election, the landscape of political campaigning and election security is rapidly evolving. Among the many challenges elections face today, one drawing growing attention is the disruptive and ethically fraught threat posed by Artificial Intelligence (AI). Once confined to the realm of science fiction, AI now creeps into spaces the average citizen might never have considered. Democratic elections depend on communication among citizens, campaigns, the media, and the government. With AI tools poised to change how important election information is created and disseminated, we confront a shift in the status quo and questions of informational integrity amid an ongoing conversation about the spread of disinformation.
Proliferation of Disinformation
Disinformation is defined as false or misleading content intentionally planted or spread for a specific purpose, in some cases political gain (Starbird et al., 2023). One instance of politically motivated disinformation that has gained attention in both academic and popular literature is the sustained effort to construct and spread a false narrative of widespread voter fraud in the 2020 United States Presidential Election (Sharma et al., 2022; Starbird et al., 2023). In understanding how disinformation gains traction, it is important to note that a single narrative or piece of content is rarely enough to constitute widespread manipulation. Rather, it is a campaign sparked by a single actor and propagated through audience and media participation that cultivates the unfounded, misleading conversations we call disinformation (Starbird et al., 2023).
With 21st-century campaigns and news media turning to social media platforms like Instagram, Facebook, and Twitter (now "X") to disseminate statements, electoral information, and other related content, citizens are left to fact-check their own feeds and to differentiate among reputable sources, social media influencers, and run-of-the-mill "bedroom tweeters." For an audience ill equipped to separate fact from fiction, the distortion of facts used to promote disinformation and political propaganda has been shown to greatly reduce trust in online systems because of its ability to influence both individual opinions and social dynamics (Sharma et al., 2022). Moreover, with polarized, partisan narratives strewn across media platforms, the opportunity for exacerbated prejudices and ideological divides only grows (Sharma et al., 2022). So what awaits the electoral landscape when AI joins these conversations as a computer-generated participant, able to inject new modes of disinformation and influence into electoral discourse?
AI-Powered Disinformation Campaigns
A decade ago, the question posed above would have required insight into the future, with ample time to develop a legislative and technological course of action. In February 2023, however, a video went viral just before Chicago's mayoral primary, in which viewers heard a voice seemingly belonging to mayoral candidate Paul Vallas say that, "in my day," a police officer could kill as many as 17 or 18 people and "no one would bat an eye," endorsing a laissez-faire approach to police brutality (Giansiracusa & Panditharatne, 2023). The audio clip, released on Twitter by an account called "Chicago Lake Front News," was not the result of a leak or a comment caught on a hot mic, nor was it the work of a talented Paul Vallas impersonator. Rather, the video was a digital fabrication, or "deepfake," made using generative AI audio technology for the sole purpose of misleading voters with emotionally charged, false information. While the account responsible for posting the video was deleted the next day, the damage had been done: thousands retweeted the video, and Vallas ultimately lost the mayoral race to a candidate who had voiced support for defunding the police (Concha, 2023).
Though the circulation of convincing fake images, videos, and audio clips is not new, recent developments in generative AI tools have made their production cheaper, easier, hyper-realistic, and more likely to manipulate public perception (Swenson, 2023). Now, facing the political battle that is the 2024 Presidential Election, AI technology sits at the forefront of the tools to be used in campaigns for the most sought-after seat of power in our democracy. Several campaign tactics employing generative AI have already bolstered false narratives spread by parties and campaigns alike in the Republican Party's presidential primary race. In April 2023, the Republican National Committee (RNC) acknowledged using AI to produce a video warning voters about the "potential dystopian crises" that would arise under a second term under President Biden. The RNC's video showed President Biden declaring a national draft to aid Ukraine's war effort, boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic (Giansiracusa & Panditharatne, 2023; Swenson, 2023).
Image: AI-generated images clipped from the GOP's "Beat Biden" attack ad (April 2023)
In another instance of internet virality, a deepfake video posted on Twitter in February 2023 appeared to show Senator Elizabeth Warren (D-MA) claiming in an MSNBC interview that Republicans should be barred from voting in the 2024 Presidential Election. The video, made using AI technology to make it appear as though the Democratic senator was genuinely making the claims, amassed around 189,000 views in a week (Phillips, 2023). Comparing these two uses of AI in the political arena raises the dilemma of whose responsibility it is to identify content as AI-generated and how to combat the negative effects such content can have on its audience. The RNC's video attacking a second Biden term included a small white-text disclaimer noting the use of AI, one quickly lost in the conversations the ad sparked. In the case of the Warren deepfake seemingly targeting Republican voters, the burden of disclosure ultimately fell upon Twitter to warn its users of the "altered audio" (Phillips, 2023). Does the public deserve to be informed about what material is produced using AI? If so, does the burden of disclosure fall upon the content's author or the platform's fact-checkers?
What Now?
As we broach the topic of enumerating AI regulations in the laws of our nation and states, we face the reality of entering unknown territory. While this discussion has highlighted only a brief selection of scenarios in which AI has furthered the spread of disinformation in U.S. elections and the broader political arena, the truth remains that the relationship between AI and elections is understudied and underassessed. With AI serving as both tool and threat, legislators are inevitably being called upon to take regulatory action. Want to learn more about how deepfakes work and the possible steps legislators could take to regulate disinformation in elections? Read the second article in this series, "Deepfakes and Democracy: Combatting Disinformation in the 2024 Election," by Katherine McCormick, where she takes a "deep" dive into that conversation.
Sources
Concha, J. (2023, April 23). The impending nightmare that AI poses for media, elections. The Hill. https://thehill.com/opinion/technology/3964141-the-impending-nightmare-that-ai-poses-for-media-elections/
Giansiracusa, N., & Panditharatne, M. (2023, July 21). How AI puts elections at risk—and the needed safeguards. Brennan Center for Justice. https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards
Phillips, A. (2023, February 27). Deepfake video shows Elizabeth Warren saying Republicans shouldn’t vote. Newsweek. https://www.newsweek.com/elizabeth-warren-msnbc-republicans-vote-deep-fake-video-1784117
Sharma, K., Ferrara, E., & Liu, Y. (2022). Characterizing Online Engagement with Disinformation and Conspiracies in the 2020 U.S. Presidential Election. Proceedings of the International AAAI Conference on Web and Social Media, 16, 908–919. https://doi.org/10.1609/icwsm.v16i1.19345
Starbird, K., DiResta, R., & DeButts, M. (2023). Influence and improvisation: Participatory disinformation during the 2020 US election. Social Media + Society, 9(2), 20563051231177943. https://doi.org/10.1177/20563051231177943
Swenson, A. (2023, August 10). FEC moves toward potentially regulating AI deepfakes in campaign ads. AP News. https://apnews.com/article/fec-artificial-intelligence-deepfakes-election-2024-95399e640bd1e41182f6c631717cc826