By: Urvi Mysore
You may have seen a viral video of former Speaker of the House Nancy Pelosi appearing to be inebriated during a meeting, slurring her speech, or a clip of former President Richard Nixon mourning a failed Apollo 11 mission in 1969. The events in these two videos, however genuine they seem, never actually happened. The videos are deepfakes: manipulated digital content crafted with advanced artificial intelligence (AI) algorithms to alter a person's appearance or voice, typically with the aim of misleading viewers. Some deepfake videos are humorous and lighthearted. But in a political context, these fast-spreading clips fuel misinformation that can dangerously sway the outcome of important elections.
The year 2024, set to be the biggest global election year in history, is undeniably crucial in the political realm. More than 50 countries with a combined population of around 4.2 billion will hold national and regional elections this year, including the United States and India. Unfortunately, with AI tools now at our fingertips, deepfake audio and video clips increasingly threaten the fairness and integrity of elections. Deepfakes spread hand in hand with misinformation, and social media sites like TikTok, Instagram, and X (formerly known as Twitter) only accelerate the dissemination of this false information. Messaging apps such as WhatsApp and WeChat, meanwhile, rely on private chats, making it difficult for outside groups to monitor potential misinformation. Though YouTube and Meta have introduced policies requiring creators to label AI-manipulated content, it is unclear whether they can catch violators consistently. Moreover, these same companies have laid off thousands of employees and contractors since 2020, including content moderators. With big-tech companies reluctant to take decisive measures against harmful deepfakes and AI technologies, some government officials are stepping in ahead of this critical election year.
While the Federal Election Commission and congressional leaders are taking steps to regulate the technology, they have not yet finalized any rules or legislation. Officials at the state and local levels, however, are acting to ensure that the elections they oversee do not fall prey to misinformation narratives online. For example, Minnesota Secretary of State Steve Simon’s office is leading #TrustedInfo2024, a new online public education effort by the National Association of Secretaries of State to promote the most credible source of election information: election officials themselves. His office also organizes meetings with county and city election officials and updates a “Fact and Fiction” page on its website as false claims emerge. Recent Minnesota legislation also protects election workers from harassment, forbids intentional pre-election misinformation, and criminalizes the non-consensual sharing of damaging deepfake images for political purposes. Elsewhere, in rural Wisconsin, Oconto County Clerk Kim Pytleski has traveled the region giving talks and presentations to small groups about how elections work in order to boost voters’ trust.
Though these community and state initiatives are making a positive impact, individual social media users like us also share the important responsibility of preventing the spread of misinformation online. We must be critical of the news we encounter on social media and research claims before sharing posts, regardless of whether they are political in nature. The dissemination of false information threatens the very future of democracy. As Americans head to the polls this year, we must safeguard election integrity by seeking out trusted, verifiable news sources and remaining skeptical of deepfakes and AI-generated media.
This article was edited by Grace Hur.