Missed the first article in our Deepfakes and Democracy series? Click here to read Danny Nickel’s take on the current use of deepfakes in American politics.
False information on social media has already taken its toll on American democracy, shaping both the 2016 and 2020 presidential elections. As artificial intelligence (AI) becomes more sophisticated and more widely available, the fake news ecosystem will become more widespread and harder to spot. Deepfakes in particular pose a significant threat to the 2024 election, as the digital likeness of a candidate can readily be created and manipulated by outside forces.
A Deep Dive on Deepfakes
In simple terms, deepfakes are the video equivalent of photoshopped or computer-generated images. Deepfakes use an artificial intelligence technique called “deep learning” to generate fake video and audio of real people. Because deepfakes are trained on online images of an individual, public figures and other people who post frequently on social media are especially vulnerable to having their likeness replicated. There is no federal criminal law against deepfakes, so they are extremely difficult to regulate or track.
Deepfakes, like all AI-generated content, can be political or non-political in nature. AI video generation technology is frequently used to produce memes and promotional content, most of it harmless. The real issue comes with impersonation. Non-consensual pornography has been a growing problem in recent years, with celebrities including Scarlett Johansson and Margot Robbie having their faces digitally imposed onto pornographic performers. Although celebrities are the primary targets of deepfakes, it is becoming increasingly common for ordinary women with a social media presence to be targeted as well. The greatest potential for harm involves the digital impersonation of a political figure. In 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy appeared to show the leader calling on his citizens to stop fighting Russian soldiers and surrender their weapons. Zelenskyy’s digital likeness proclaimed that he had already fled Kyiv.
Deepfakes, including the digital recreation of Zelenskyy, differ from other image-manipulation tools like Photoshop in several key ways. First, deepfakes change the game in terms of response time and accessibility. Previously, political organizations and other content creators needed hours or days to respond to breaking news, because video editing and photo manipulation take time and skill to do properly. AI puts those content-creation abilities in anyone’s hands, making it easy to produce disinformation deliberately. This poses an additional challenge in itself. Consumers may be instinctively more critical of ads from campaigns and other political organizations because they understand them to be purposeful political content with a clear goal. That will be less obvious with AI-generated content from an unknown source: the public will be less able to critically examine content when they won’t always know a creator’s biases and goals. Additionally, this technology eliminates the language barrier in creating deceptive content. Foreign trolls no longer need to be fluent to influence an English-speaking audience; they have language models to do it for them. U.S. officials have previously warned that Russia and China are trying to influence American elections.
There is already an enormous disinformation problem in U.S. elections. In 2021, fake news and conspiracy theories about the legitimacy of Joe Biden’s electoral victory helped fuel an insurrection at the U.S. Capitol. This disinformation, pushed mainly by President Donald Trump, gained traction thanks to several misleading viral videos that appeared to show election workers destroying or modifying ballots. The video disinformation campaign for the 2024 cycle has already started. In June, former President Trump’s campaign released deceptively edited videos of President Biden slurring his words to make him seem mentally unfit for office. AI-generated deepfakes will only make this problem worse, and they will affect leaders on both sides of the aisle. In August, AI-generated photos of Trump apparently resisting arrest went viral in some corners of social media. Other AI-generated images, including one of Trump praying on one knee and many fabricated mugshots, have also garnered attention surrounding the former president’s arrest. Ben Winters, senior counsel at the Electronic Privacy Information Center, says that AI-generated information will “have no positive effects on the information ecosystem” and expects it to make journalists’ jobs significantly harder in the upcoming election.
The Fight Against Disinformation
The United States needs immediate policy reform requiring all campaign-related organizations, including PACs, nonprofits, and associated individuals, to disclose any AI-generated content; many politicians across the aisle are currently pushing for bold AI policy reform. At the same time, other political organizations are openly embracing AI. Recently, the RNC released an AI-generated video warning of an apocalyptic world should President Biden win a second term. The video’s description includes a disclaimer that it was made with AI-generated content, but this sort of disclosure is not currently required. Naturally, individual use of openly available AI tools will be much harder to regulate than professional and organizational use. Senator Amy Klobuchar introduced a bill that would “prohibit generative AI from being used to create deceptive political ads.” Unfortunately, if the record of previous Congresses is any indication, effective policy may be passed far more slowly than AI technology advances. The easiest path forward for flagging this content would be updating the terms and conditions of social media apps to require users to disclose or flag posts that contain AI-generated content. Methods similar to the existing “community notes” on Twitter (now known as X) could also allow users to flag suspected misleading AI content. Perhaps in the future AI itself could be used to identify other AI-generated content, but for now that appears to be a long way off.
Additionally, there is an enormous need in the United States and other parts of the developed world to teach digital literacy. A 2020 study found that older people were the most susceptible to believing fake information on social media due to their relative inexperience with the platforms. The younger generations, while generally more adept at navigating misleading content, are still vulnerable and are significantly more likely to believe certain online conspiracy theories.
Widespread public education campaigns, reminiscent of those during the early stages of COVID-19, could help keep the issue at the forefront of the public’s mind and teach basic media literacy skills for recognizing deepfakes and other forms of fake and manipulated news. Unfortunately, that is not a cure-all: even the most skeptical digital natives can be fooled by well-crafted deepfakes, and it is not reasonable to expect social media users to critically analyze all the content they consume.
While Silicon Valley executives tout AI as the dawn of a new era and a fix-all for everything from driving to manufacturing and policing, its potential for harm is just as great as, if not greater than, its potential benefits. AI, being the product of the same society it aims to “fix,” will exacerbate existing problems in our society and hand unchecked content-creation ability to the political fringes. Political disinformation already poses a major threat to democracy, and AI will only make it worse. Deepfakes are only one form of artificial intelligence posing this threat. The use of AI must be continually examined ahead of and during the 2024 election to understand its dangers and effects.
Sources:
Allyn, B. (2022, March 17). Deepfake video of Zelenskyy could be “tip of the iceberg” in Info War, experts warn. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
Brashier, N. (2020, May 19). Aging in an era of fake news. Sage Journals. https://journals.sagepub.com/doi/epdf/10.1177/096372142091587
Dunn, S. (2021, March 3). Women, not politicians, are targeted most often by deepfake videos. Centre for International Governance Innovation. https://www.cigionline.org/articles/women-not-politicians-are-targeted-most-often-deepfake-videos/
Fung, B. (2023, September 13). Bill Gates, Elon Musk and Mark Zuckerberg meeting in Washington to discuss future AI regulations. CNN Business. https://www.cnn.com/2023/09/13/tech/schumer-tech-companies-ai-regulations/index.html
Gallagher, F. (2020, November 13). Election 2020: Debunking false and misleading videos claiming to show voter fraud. ABC News. https://abcnews.go.com/Politics/election-2020-debunking-false-misleading-videos-claiming-show/story?id=74148233
Johnson, A. (2023, April 26). Republicans share an apocalyptic AI-powered attack ad against Biden: Here’s how to spot a deepfake. Forbes. https://www.forbes.com/sites/ariannajohnson/2023/04/25/republicans-share-an-apocalyptic-ai-powered-attack-ad-against-biden-heres-how-to-spot-a-deepfake/?sh=5878a6547753
Panditharatne, M. (2023, September 6). How AI puts elections at risk – and the needed safeguards. Brennan Center for Justice. https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards
Paul, K. (2023, August 17). Teens much more likely to believe online conspiracy claims than adults – US study. The Guardian. https://www.theguardian.com/us-news/2023/aug/16/teens-online-conspiracies-study
Robins-Early, N. (2023, July 19). Disinformation reimagined: How AI could erode democracy in the 2024 US elections. The Guardian. https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections
Russell, D. (2023, June 23). Is deepfake pornography illegal? It depends. Endless Thread. https://www.wbur.org/endlessthread/2023/06/23/deepfake-pornography-law
Tucker, E., & Merchant, N. (2022, October 4). US warns about foreign efforts to sway American voters. AP News. https://apnews.com/article/2022-midterm-elections-russia-ukraine-campaigns-presidential-ea913f2b3b818651a9db1327adaa330a
West, D. M. (2023, June 12). How AI will transform the 2024 elections. Brookings. https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/