15 Aug, 2024

US broadcasters mull impact on political ads from FCC's AI effort


The Federal Communications Commission voted last month to advance a notice of proposed rulemaking that would require broadcasters to disclose if political ads airing on their stations contain AI-generated content.
Source: FCC.

A proposed federal rule targeting political ads containing content generated by artificial intelligence highlights both the importance of political ad revenue to broadcasters and the inability of federal agencies to regulate the primary channels through which AI-generated ads are disseminated — the internet and social media.

This summer, the Federal Communications Commission voted 3-2 along party lines to advance a notice of proposed rulemaking requiring disclosures for AI-generated content in political ads aired on television or radio. The move came after a political action committee affiliated with the campaign of Florida Gov. Ron DeSantis used AI images of GOP presidential nominee Donald Trump during the Republican presidential primary. There have also been AI-generated facsimiles, or deepfakes, of Trump and President Joe Biden created by individuals on social media as well as more coordinated groups, some of which have had ties to foreign governments.

While there is consensus regarding the need to prevent the spread of misinformation ahead of this November's elections, critics of the proposed rulemaking say it unfairly targets broadcasters while leaving social media and ad-supported streaming platforms wholly unregulated. Broadcasters have said they need more time to understand the full implications of the proposed rulemaking and how it might impact their advertising revenues. FCC Chairwoman Jessica Rosenworcel has said the risk of inaction is too great this year.

"There's too much potential for AI to manipulate voices and images in political advertising to do nothing," Rosenworcel said. "At the very least, the public deserves the chance to weigh in and offer solutions about the best way this agency can utilize its existing authority to increase transparency and build consumer trust."

The proposal

The proposed rulemaking builds on the commission's existing requirements around political programming and recordkeeping. Broadcasters, cable operators and satellite video providers already must record information about who bought a campaign ad, how much they paid for it and when it ran.

The proposal would go further, requiring all radio and TV broadcast stations to ask whether any political ads scheduled to be aired contain AI-generated content. If the answer is yes, broadcasters must disclose to viewers or listeners that the ads contain AI-generated content.

"Broadly, I don't think this is a major concern for broadcasters since the majority of local ads are not AI-generated," said Justin Nielson, principal analyst at S&P Global Market Intelligence Kagan. "Political ad disclosures on broadcast radio and TV are already highly regulated and this [notice of proposed rulemaking] to me is just an extension of those limits when it comes to the use of AI."

However, broadcasters are not so sure, and trade groups have asked the FCC to extend the comment and reply comment deadlines, set for Sept. 4 and Sept. 19, respectively.

In their motion for extension, the National Association of Broadcasters (NAB) and the Motion Picture Association (MPA) said this month that the FCC proposal "raises significant, novel factual and legal issues that will entail extensive fact-finding and research."


Defining AI

Among the concerns raised by the trade groups is what is covered by AI-generated content. Specifically, they ask if there are AI uses "not relevant to the FCC's concerns with false, misleading, or deceptive advertisements, such as quality touch ups" that would not require disclosure.

The FCC's proposal seeks comment on how it should define AI-generated content while offering its own definition. The agency's initial definition includes any computer-generated images, audio or video depicting an individual's appearance, speech, or conduct, or an event.

Some say that definition is too expansive.

"The proposed definition is arguably broad enough to cover not only content that is generated with AI technologies, but also content that was created using more traditional editing tools, such as CGI and VFX software," wrote Marc S. Martin, a partner at the international law firm Perkins Coie LLP, along with a panel of his colleagues.

The FCC has invited commenters to propose alternative definitions that are more tailored.

Asymmetrical approach

Another concern raised by the NAB and the MPA is whether the proposal might inadvertently incentivize advertisers to place ads on different platforms — platforms that do not carry disclosure requirements.

This concern is shared by several industry observers and even Republican FCC Commissioner Brendan Carr.

"The FCC is legally powerless to adopt uniform rules in a technologically neutral fashion," the commissioner said. Carr noted that the commission's legal authority extends only to legacy media — broadcast, cable and satellite television. "As a result, AI-generated political ads that run on traditional TV and radio will come with a government-mandated disclaimer, but the exact same ad that runs on a streaming service or social media site would not."

Martin at Perkins Coie told Market Intelligence the proposal could "discourage advertisers or cause them to choose social media or streaming services instead," giving broadcasters cause for objection.

Political advertising is a critical revenue source for broadcasters such as Sinclair Inc., Nexstar Media Group Inc., Gray Television Inc. and The E.W. Scripps Co. Kagan analysts expect a double-digit increase in local broadcast TV political ad revenue this year compared to the 2022 election cycle, putting the projected total for the 2024 cycle at about $4 billion.

Even as digital advertising overall continues to grow, TV remains the most favored form of political advertising, according to Kagan analyst Peter Leitzinger.

Regulators at odds

Rosenworcel has said the asymmetry in regulation could be solved if the Federal Election Commission moves forward on a similar rulemaking targeting AI. "With our complementary authorities, the FEC can regulate AI use in online advertisements for federal candidates while the FCC can focus on the areas where the FEC is powerless to act," she said.

The three Republican commissioners of the FEC, led by Chairman Sean Cooksey, said in a recent memo that the elections commission is "ill-positioned to take on the issue of AI regulation and does not have the technical expertise required to design appropriately tailored rules for AI-generated advertising."

Given that position and what the three commissioners described as "the relatively limited use of AI-generated content in federal campaigns to date," they said an AI-focused rulemaking would be a poor use of FEC resources.

While a solution could come from Capitol Hill, the timelines for potential legislative action on the subject remain unclear.

Congressional lawmakers have introduced a variety of bills related to AI during the current legislative session. The Content Origin Protection and Integrity from Edited and Deepfaked Media Act, drafted by Sen. Maria Cantwell (D-Wash.), would create guidelines for the detection and labeling of synthetic content, including watermarking and content provenance information. The bill's text does not mention the FCC, but it does address a lack of guidance around AI-generated content and warns that the absence of cohesive standards could harm broadcasters and other content creators.

On Aug. 1, Sen. Peter Welch (D-Vt.) introduced a bill that would hold operators of social media platforms like Alphabet Inc.'s YouTube, Meta Platforms Inc.'s Facebook, TikTok and presumably X, formerly known as Twitter, accountable if they intentionally or knowingly host false election administration information.

In the meantime, the FCC is working to better understand the applications of AI in communications technologies more broadly. The agency's commissioners voted 5-0 to approve a proposed rule requiring callers to disclose their use of AI-generated voices in robocalls and AI-generated content in robotexts.

"When you're talking about a technology that is going to show up in so many aspects of our economy, communications technology included, it's smart to wrap our arms around these issues and try to understand them," Rosenworcel said.