New York State Attorney General Letitia James speaks during a press conference at the New York Attorney General’s Office on February 16, 2024.
Timothy A. Clary | AFP | Getty Images
With four days until the presidential election, U.S. government officials are warning against relying on AI chatbots for voting-related information.
In a consumer alert on Friday, New York Attorney General Letitia James' office said it had tested "multiple artificial intelligence chatbots by asking sample questions about voting and found that they often provided inaccurate information in response."
Election Day in the U.S. is Tuesday, and Republican candidate Donald Trump and Democratic Vice President Kamala Harris are in a virtual dead heat.
“New Yorkers who rely on chatbots instead of official government sources to answer questions about voting may be misled or even lose their opportunity to vote due to inaccurate information,” James’ office said.
This is a big year for elections around the world, with votes affecting more than 4 billion people in more than 40 countries. The rise of AI-generated content has fueled serious concerns about election-related misinformation.
The number of deepfakes has increased 900% year over year, according to data from Clarity, a machine learning firm. Some of those videos were made or paid for by Russians seeking to disrupt the U.S. election, U.S. intelligence officials have said.
Lawmakers are particularly concerned about misinformation in the era of generative AI, which took off in late 2022 with the launch of OpenAI's ChatGPT. Large language models are still new and often output inaccurate and unreliable information.
"Voters categorically should not look to AI chatbots for information about voting or the election; there are far too many concerns about accuracy and completeness," Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, told CNBC. "Study after study has shown AI chatbots hallucinating information about polling locations, voting accessibility, and allowable ways to cast a ballot."
In a July study, the Center for Democracy & Technology found that in response to 77 different election-related queries, more than one-third of the answers generated by AI chatbots contained incorrect information. The study tested chatbots from Mistral, Google, OpenAI, Anthropic and Meta.
"We agree with the New York Attorney General that voters should consult official sources to learn where, when and how to vote," an Anthropic spokesperson told CNBC. "For specific election and voting information, we direct users to authoritative sources, because Claude is not trained frequently enough to provide real-time information about specific elections."
OpenAI said in a recent blog post that "starting on November 5, people who ask ChatGPT about election results will see a message encouraging them to check news sources like The Associated Press and Reuters, or their state or local election board, for the most complete and up-to-date information."
In a 54-page report published last month, OpenAI said it has disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." The threats ranged from AI-generated website articles to social media posts from fake accounts, the company wrote, though none of the election-related operations managed to attract "viral engagement."
As of November 1, the Voting Rights Lab had tracked 129 bills in 43 state legislatures containing provisions intended to regulate the potential for artificial intelligence to produce election disinformation.