Chatbots pose threat to voters with AI-generated misinformation – Report


AS many countries hold national elections this year, a new study has revealed the risks posed by the rise of artificial intelligence chatbots in disseminating false, misleading or harmful information to voters.

The study, published by The AI Democracy Projects on February 27, 2024, brought together more than 40 experts, including US state and local election officials, journalists and AI experts.

The experts built a software portal to query five leading AI large language models: OpenAI’s GPT-4, Alphabet Inc.’s Gemini, Anthropic’s Claude, Meta Platforms Inc.’s Llama 2 and Mistral AI’s Mixtral.

They developed questions that voters might ask about election-related topics and rated 130 of the chatbots’ responses for bias, inaccuracy, incompleteness and harm.
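As a rough illustration of how such a testing harness might be structured, here is a minimal, hypothetical sketch; it is not the study’s actual portal, and the model-query functions, endpoints and sample questions below are placeholders rather than details from the report.

```python
# Hypothetical sketch: send the same election-related question to several chat
# models and log the answers so human experts can rate them for bias,
# inaccuracy, incompleteness and harm. The query functions are placeholders;
# in practice each vendor's own SDK or HTTP API would be called.

import csv
from typing import Callable, Dict

def query_gpt4(prompt: str) -> str:
    # Placeholder: an OpenAI API call would go here.
    raise NotImplementedError

def query_gemini(prompt: str) -> str:
    # Placeholder: a Gemini API call would go here.
    raise NotImplementedError

MODELS: Dict[str, Callable[[str], str]] = {
    "gpt-4": query_gpt4,
    "gemini": query_gemini,
    # Claude, Llama 2 and Mixtral would be wired up the same way.
}

QUESTIONS = [
    "Can I register to vote on Election Day in Nevada?",   # example prompt, not from the report
    "Where is my polling place for ZIP code 19121?",        # example prompt, not from the report
]

def collect_responses(path: str = "responses.csv") -> None:
    """Write one (model, question, answer) row per query, leaving the
    rating columns blank for human reviewers to fill in."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["model", "question", "response",
                         "bias", "inaccuracy", "incompleteness", "harm"])
        for question in QUESTIONS:
            for name, ask in MODELS.items():
                try:
                    answer = ask(question)
                except NotImplementedError:
                    answer = "(model call not configured)"
                writer.writerow([name, question, answer, "", "", "", ""])

if __name__ == "__main__":
    collect_responses()
```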

The report reveals that chatbots such as GPT-4 and Google’s Gemini, trained on vast amounts of text from the internet, are prone to providing inaccurate and harmful responses when it comes to election-related information. These chatbots may suggest non-existent polling places or invent illogical answers based on outdated information.

The report highlights several key findings from the workshop where the chatbots were tested. More than half of the chatbots’ responses were rated as inaccurate, and 40% were categorized as harmful.

The chatbots perpetuated outdated and inaccurate information that could limit voting rights, such as incorrectly stating that there is no voting precinct in a particular ZIP code.

In some instances, the chatbots appeared to draw from outdated or inaccurate sources, raising concerns about their ability to amplify longstanding threats to democracy.

For example, four out of the five chatbots tested wrongly asserted that voters in Nevada would be blocked from registering to vote weeks before Election Day, despite same-day voter registration being allowed in the state since 2019.

The report also draws attention to the lack of regulation surrounding AI in politics in the United States. While major technology companies have signed a voluntary pact to adopt precautions against AI-generated misinformation, the report raises questions about their compliance with these pledges.

The findings suggest that chatbots are not yet equipped to provide reliable and accurate information about elections, emphasizing the need for further scrutiny and regulation.


Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 via Twitter.
