AI poses risks, opportunities for election, campaign security

Election-related misinformation is on the rise — fueled by the upcoming US presidential election and the increasing use of generative AI — forcing cybersecurity experts, campaign workers and election officials to figure out how best to sort fact from fiction online.

While misinformation on the internet is not new, it is becoming more pervasive. NewsGuard, a company that tracks online misinformation, identified 1,075 unreliable AI-generated news and information websites, more than twice the number counted a year ago. The sites span over a dozen languages and typically have generic names that reference common news or business concepts. While some of them feature relatively innocuous content — sometimes presenting old events as new ones — others are more targeted and political.

The AI-fueled proliferation of this content has raised several questions, including who is responsible for preventing its spread and how big a risk it represents, especially in an election year. Lawmakers and tech executives have emphasized the importance of spotting and labeling malicious or misleading content and offered varying perspectives on the role that government and Big Tech platforms should play in content moderation.

"Everyday disinformation or misinformation has very small effects, and most people don't see it," said Erik Nisbet, a professor of communication at Northwestern University. "What we need to think about are these low-incidence, high-impact events. ... We need to be prepared."

Election risk

US elections tend to attract attention from foreign interests, potentially posing a risk of sabotage.

In one recent example, the US Justice Department on Sept. 27 unsealed an indictment charging Iranian nationals and Islamic Revolutionary Guard Corps employees with working to hack into accounts associated with current and former US officials, political campaign workers and others. The intent, according to the DOJ, was to access non-public information and then leak it to stoke discord and erode confidence in the US electoral process.

The Trump campaign earlier acknowledged that Iranian hackers had breached the former president's campaign.

"People gravitate towards things like elections," said Wasim Khaled, CEO and co-founder of Blackbird.AI, a platform that aims to protect organizations from targeted disinformation campaigns. "Any major incident that draws attention can be exploited by an opportunistic actor, whether a domestic fringe actor or a nation-state actor, to twist that event and use it as a lightning rod to drive synthetic manipulated narratives against any entity."

While the Iranian hack-and-leak operation did not rely on AI, artificial intelligence democratizes access to tools that can be used for illicit or deceptive ends, making content generation easier and faster. The proliferation of this content also raises the risk that AI-generated misinformation will be picked up and shared by legitimate news sites or public individuals.

"AI content can be amplified in partisan media, even if it's fake," Nisbet said.

In 2024, Alphabet Inc. removed more than 11,000 YouTube channels connected to "coordinated influence operations" with ties to Russia and 22,000 with ties to China, a company executive told the US Senate Select Committee on Intelligence in September testimony. The hearing also featured testimony from Meta Platforms Inc. and Microsoft Corp. executives. X did not participate.

Misinformation efforts are not just occurring at the federal level.

New Mexico Secretary of State Maggie Toulouse Oliver said her office is wary of "a huge ratcheting up" of misinformation and "trolling and attacks" involving AI-generated content such as pornographic deepfakes targeting women running for elected office.

Cybersecurity experts have factored these conditions into their training as more campaign activity, from advertising to outreach, has moved online.

"Right now, we have to slow down and not plow through links, emails and even attachments as we used to," said Jude Meche, chief information security officer at the Democratic Senatorial Campaign Committee. "All of these are vectors for targeting us."

Government and AI

Policymakers and campaign workers view AI as a double-edged sword. AI is very effective as a defensive measure to identify and take down misinformation, Microsoft Vice Chairman and President Brad Smith said at the Sept. 18 Senate committee hearing.

However, AI poses its own risks. With this in mind, Microsoft issued a set of recommendations for Congress in a new white paper, said Danyelle Solomon, Microsoft's senior director of US AI policy. The recommendations include enacting a deepfake fraud statute, protecting content provenance information via cryptographic markers, and ensuring that existing laws, such as civil rights statutes and protections for minors, are updated to account for AI usage.
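The white paper does not spell out a technical design, but the general idea behind cryptographic provenance markers can be sketched briefly. The Python example below is a minimal, hypothetical illustration: it signs a hash of a piece of content together with creator metadata, so any later tampering breaks verification. The `cryptography` package, the Ed25519 scheme and the field names are illustrative assumptions, not Microsoft's proposal or a published standard such as C2PA.

```python
# Illustrative sketch only: one possible form of a "cryptographic marker"
# binding creator metadata to content. Scheme and field names are assumed,
# not drawn from Microsoft's white paper or any standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_provenance_marker(content: bytes, creator: str,
                           key: Ed25519PrivateKey) -> dict:
    """Sign a manifest containing the content hash and creator identity."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}


def verify_provenance_marker(content: bytes, marker: dict, public_key) -> bool:
    """Check that the content matches the signed hash and the signature holds."""
    manifest = marker["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(marker["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    content = b"Official campaign statement ..."
    marker = make_provenance_marker(content, "campaign.example.org", key)
    print(verify_provenance_marker(content, marker, key.public_key()))      # True
    print(verify_provenance_marker(b"tampered", marker, key.public_key()))  # False
```

In a scheme like this, the signature ties the content hash to a known publisher's key, so a platform or fact-checker could flag any file whose marker fails verification, without the government or platform inspecting the content itself.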

Tech companies thus far seem eager to work with policymakers to prevent the spread of misinformation. But others would like to see the Federal Trade Commission have the power to compel Big Tech companies to share information about their algorithms, content moderation processes and how they are training their content moderators.

"I don't think the government ought to be regulating content, but I think there is an essential role for government to provide oversight," said Michael Posner, a professor at New York University and former assistant secretary of state during the Obama Administration. "It's unthinkable that we would allow airplanes to fly without a regulatory agency or drugs to circulate without [the Food and Drug Administration]. Through its consumer protection authority, the federal government needs to oversee these companies."

So far, the FTC has relied on its authority to protect consumers against deceptive or unfair conduct to regulate AI. As part of a law enforcement sweep called Operation AI Comply, the agency said Sept. 24 that it had filed complaints against multiple companies that allegedly used AI to defraud consumers or promote fake content.

"Using AI tools to trick, mislead or defraud people is illegal," said FTC Chair Lina Khan. "The FTC's enforcement actions make clear that there is no AI exemption from the laws on the books."

At the same time, policymakers, fact-checkers and company executives have a tightrope to walk: they must accurately convey the scope and severity of threats to election integrity, such as malicious propaganda, without overstating their effects.

"We need to ensure that we don't inadvertently undermine people's confidence in elections and democratic governance by exaggerating the impact of misinformation and disinformation," Nisbet said. "Multiple social studies, including those from our lab, show that such impacts exist. People become less confident and less satisfied with democracy, which taints their confidence in electoral results and the democratic process. Therefore, we need to be very careful about how we discuss it."