IMAGE creation tools powered by artificial intelligence from companies including OpenAI and Microsoft can be used to generate images that could promote election or voting-related disinformation, despite their policies against creating misleading content, a report by the Centre for Countering Digital Hate (CCDH) has shown.
In the report published on March 6, 2024, the non-profit organization used generative AI tools to create images of U.S. President Joe Biden lying in a hospital bed and of election workers smashing voting machines, raising concerns about falsehoods ahead of the U.S. presidential election in November.
“The potential for such AI-generated images to serve as ‘photo evidence’ could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections,” CCDH researchers said in the report.
CCDH tested OpenAI’s ChatGPT Plus, Microsoft’s Image Creator, Midjourney and Stability AI’s DreamStudio, which can each generate images from text prompts.
The report follows an announcement last month that OpenAI, Microsoft and Stability AI were among a group of 20 tech companies that signed an agreement to work together to prevent deceptive AI content from interfering with elections taking place globally this year. Midjourney was not among the initial group of signatories.
CCDH said the AI tools generated the requested misleading images in 41% of the researchers’ tests and were most susceptible to prompts that asked for photos depicting election fraud, such as ballots in the trash, rather than images of Biden or former U.S. President Donald Trump.
CCDH researchers tested 160 prompts across ChatGPT Plus, Midjourney, DreamStudio, and Image Creator, and found that Midjourney was the most likely to produce misleading election-related images, doing so in about 65% of tests. Researchers were able to prompt ChatGPT Plus to do so in only 28% of tests.
“It shows that there can be significant differences between the safety measures these tools put in place,” said Callum Hood, head of research at the CCDH. “If one so effectively seals these weaknesses, it means that the others haven’t really bothered.”
The report shows that Midjourney, Microsoft’s Image Creator, and ChatGPT Plus all have specific policies on election disinformation, yet each failed to prevent the creation of misleading images of voters and ballots.
The researchers recommended that the tech companies put responsible safeguards in place to prevent users from generating images, audio, or video that are deceptive, false, or misleading about geopolitical events, candidates for office, elections, or public figures.
They also urged the companies to invest in and collaborate with researchers to test for and prevent ‘jailbreaking’ before product launch, and to have response mechanisms in place to correct jailbreaking of their products once discovered.
The report also called for clear and actionable pathways to report those who abuse AI tools to generate deceptive and fraudulent content.
To curb election disinformation more broadly, the researchers recommended gatekeeping measures to prevent users from generating, posting, or sharing images that are deceptive, false, or misleading about geopolitical events and that may impact elections, candidates for public office, or public figures.
They also recommended investment in trust and safety staff dedicated to safeguarding against the use of generative AI to produce disinformation and attacks on election integrity.
Finally, they urged policymakers to leverage existing laws to prevent voter intimidation and disenfranchisement, and to pursue legislation to make AI products safe by design, transparent, and accountable for the creation of deceptive images that may impact elections.
Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks that have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 on Twitter.