With election season arriving amid the rapid development of artificial intelligence, AI manipulation in political advertising is becoming a growing concern for markets and the economy. A new Moody's report released Wednesday warned that generative artificial intelligence and deepfakes pose election integrity issues that could threaten the credibility of U.S. institutions.
“This election is likely to be hotly contested, with growing concerns about artificial intelligence,” Moody's Assistant Vice President and Analyst Gregory Sobel and Senior Vice President William Foster wrote. Deepfakes could be used to mislead voters, fuel division and stoke discord. “If successful, agents of disinformation could sway voters, influence election outcomes, and ultimately policymaking, which would undermine the credibility of U.S. institutions.”
The government has been stepping up its crackdown on deepfakes. On May 22, FCC Chairwoman Jessica Rosenworcel proposed new rules that would require political television, video and radio advertisements to disclose whether their content was generated using artificial intelligence. The FCC has been concerned about the use of AI in advertising this election cycle, with Rosenworcel pointing to potential issues with deepfakes and other manipulated content.
Social media has always been outside the scope of FCC regulation, but the Federal Election Commission is also weighing broad AI disclosure rules that would extend to all platforms. In a letter to Rosenworcel, the FEC encouraged the FCC to delay a decision until after the election, because the FCC's changes could not be enforced in digital political advertising. The letter added that online ads carrying no disclosures, even those made with AI, could confuse voters.
While the FCC proposal may not fully cover social media, it opens the door for other agencies to regulate digital advertising as the U.S. government becomes a more assertive regulator of AI content, and such rules could eventually be extended to more types of ads.
“This would be a groundbreaking ruling that could change the narrative surrounding political campaigns in traditional media for years to come,” said Dan Ives, managing director and senior equity analyst at Wedbush Securities. “The worrying thing is that you can’t put the genie back in the bottle, and this ruling will have many unintended consequences.”
Some social media platforms have already adopted some form of AI disclosure on their own, ahead of regulation. Meta, for example, requires AI disclosure for all of its ads and will ban all new political ads in the week before the November election. Google requires disclosure for political ads whose content has been altered to “inauthentically depict real or realistic-looking people or events,” but does not require AI disclosure for all political ads.
Social media companies have good reason to be seen as proactive on this issue, as brands worry about being associated with the spread of misinformation at a critical time for the country. U.S. digital advertising spending is expected to reach $306.94 billion in 2024, with Google and Facebook together accounting for 47% of that total, making brand safety a pressing concern for major advertisers.
Despite self-regulation, AI-manipulated content still appears unlabeled on these platforms because of the sheer volume of material posted every day. Whether it is AI-generated spam or floods of AI-generated images, it is hard to catch everything.
“The lack of industry standards and the rapid evolution of technology make this challenging,” said Tony Adams, senior threat researcher at Secureworks Counter Threat Unit. “Fortunately, these platforms have successfully implemented technical controls to police the most harmful content on their sites, controls that are, ironically, powered by artificial intelligence.”
Creating manipulated content is easier than ever. In May, Moody’s warned that deepfakes have been “weaponized” by government and non-government entities as a means of propaganda, creating social unrest and, in the worst cases, terrorism.
“Until recently, creating convincing deepfakes required significant technical knowledge in terms of specialized algorithms, computing resources and time,” wrote Abhi Srivastava, an associate vice president at Moody's Ratings. “With the advent of easily accessible, affordable Gen AI tools, generating sophisticated deepfakes can be accomplished in minutes. This ease of access, coupled with the limitations of social media's safeguards against the spread of manipulated content, creates fertile ground for the widespread abuse of deepfakes.”
Deepfake audio delivered via robocalls was already used in New Hampshire's presidential primary this election cycle.
Moody's sees potential silver linings in the decentralized nature of the U.S. electoral system, along with existing cybersecurity policies and a general awareness of looming cyber threats, all of which it says offer some protection. State and local governments are enacting measures to further curb deepfakes and unlabeled AI content, but free speech laws and concerns about hindering technological progress have slowed momentum in some state legislatures.
According to Moody's, as of February, state legislatures were introducing AI-related bills at a rate of 50 per week, including legislation targeting deepfakes. Thirteen states have laws addressing election interference and deepfakes, eight of them enacted since January.
Moody's notes that while the United States remains exposed to cyber risks, it ranks 10th out of 192 countries in the United Nations E-Government Development Index.
Moody's said that even without specific examples, the mere perception that deepfakes can influence political outcomes is enough to “undermine public confidence in the electoral process and the credibility of government institutions, which is a credit risk.” The harder people find it to distinguish fact from fiction, the greater the risk that citizens will disengage and grow distrustful of their government. “This trend would have negative credit implications, potentially increasing political and social risks and harming the effectiveness of government institutions,” Moody's wrote.
“The response from law enforcement and the FCC may deter other domestic actors from using artificial intelligence to deceive voters,” Secureworks' Adams said. “But there is no doubt that foreign actors will continue, as they have for years, to use generative AI tools and systems to meddle in American politics. The message to voters is to stay calm, stay alert and vote.”