FCC Proposal: AI Transparency in Political Ads
The Federal Communications Commission (FCC) is stepping in to address the growing influence of artificial intelligence (AI) in political advertising. On Wednesday, FCC Chair Jessica Rosenworcel introduced a proposal aiming to bring more transparency to the use of AI-generated content in TV and radio ads. This move comes as many experts and lawmakers have raised concerns about how AI can create highly realistic but deceptive images, videos, and audio clips.
The proposal, if adopted, would require political advertisers to disclose when AI tools have been used. The initiative focuses on traditional media like TV and radio, leaving digital and streaming platforms outside its jurisdiction. This marks a significant first step, yet it also highlights ongoing gaps in AI regulation. As AI becomes increasingly accessible, ensuring voters know when the technology is used has never been more crucial. The FCC hopes to have these rules in place before the upcoming election.
FCC Targets AI in Political Ads
The Federal Communications Commission (FCC) has proposed a new rule requiring political ads on TV and radio to disclose the use of artificial intelligence (AI). The move aims to provide transparency because AI tools can create lifelike images and voices capable of misleading voters. The proposal would apply to broadcast TV, radio, and cable providers but does not cover digital and streaming platforms.
FCC Chair Jessica Rosenworcel emphasized the importance of this move. “Consumers should know when AI is used in the political ads they see,” she stated. The initiative comes as lawmakers and AI experts have expressed concerns about the rapid advancement of AI technology and its impact on elections. The need for AI disclosure is critical, given the technology’s potential to distort reality and mislead voters.
Scope and Limitations
While the proposal marks significant progress, it has its limitations. The FCC’s authority extends only to broadcast channels and does not include digital and streaming platforms, which have seen tremendous growth in political advertising. This gap leaves a considerable portion of political ads unregulated regarding AI disclosure.
The proposal would also require broadcasters to ask political advertisers whether their content was generated using AI tools, such as text-to-image creators or voice-cloning software. However, the FCC cannot enforce such a rule on online platforms, leaving a critical gap in AI transparency.
Steps Towards Regulation
This is not the FCC’s first step towards regulating AI in political communications. Earlier this year, the commission banned the use of AI voice-cloning tools in robocalls. This decision followed an incident where voice-cloning software was used to mimic President Joe Biden in automated calls during New Hampshire’s primary election.
The FCC plans to finalize the details of the proposal, including how broadcasters should disclose AI-generated content. This could be through an on-air message or in the station’s political files, which are public. The commission also faces the challenge of defining AI-generated content, as retouching tools and other AI advancements become more embedded in creative software.
The proposal aims to have these regulations in place before the upcoming election to ensure voters are fully informed.
Political and Public Reactions
The proposal has garnered mixed reactions. Rob Weissman, president of the advocacy group Public Citizen, applauded the FCC for proactively addressing threats from AI and deepfakes. He emphasized the importance of on-air disclosure for the public’s benefit.
Meanwhile, Rep. Yvette Clarke of New York has introduced legislation targeting online political ads, which fall outside the FCC's reach. Clarke's bill would require disclosure of AI-generated content in online ads, filling the gap left by the FCC's proposal.
The proposal also reflects broader concerns about AI’s role in politics. Lawmakers from both parties have called for legislation to regulate AI technology in political contexts. However, with the election approaching, no bills have yet been passed.
Global Context and Examples
The issue of AI in political ads is not unique to the United States. In India, for instance, AI-generated videos misrepresenting Bollywood stars as criticizing the prime minister have surfaced. These incidents highlight the global challenge of regulating AI in democratic elections.
In the U.S., political campaigns have already begun experimenting with AI tools. Last year, the Republican National Committee released an entirely AI-generated ad depicting a dystopian future under another Biden administration. This ad included fake but realistic images of boarded-up storefronts and armored military patrols.
These examples underscore the urgency of implementing AI regulations in political advertising. Without proper oversight, AI-generated content can easily manipulate public perception and disrupt democratic processes.
Challenges and Future Steps
Defining AI-generated content is one of the significant challenges the FCC faces. AI technology is becoming increasingly integrated into creative software, making it difficult to distinguish between human-created and AI-generated content.
The FCC aims to define AI-generated content as anything produced using computational technology or machine-based systems. This includes AI-generated voices and actors that appear human. However, this definition will likely evolve as the regulatory process unfolds.
FCC spokesperson Jonathan Uriarte acknowledged the commission’s limited capacity to address AI threats but stressed the importance of taking initial steps. “This proposal offers the maximum transparency standards that the commission can enforce under its jurisdiction,” he said.
The proposal also serves as a call to action for other government agencies and lawmakers to build on this foundation. With the election approaching, the urgency for comprehensive AI regulations in political advertising cannot be overstated.
Conclusion and Next Steps
The FCC’s proposal represents a significant step towards transparency in political advertising, mandating AI disclosure in TV and radio ads. At the same time, it highlights the limitations of current regulations, particularly the gaps in the digital and streaming spheres, and the need for broader legislative action.
As AI technology continues to evolve, so too must the rules governing its use in politics. The upcoming election will be a critical test of these new regulations, and of whether comprehensive measures can protect the integrity of the democratic process.