OpenAI has identified and shut down an influence campaign linked to the Iranian government.
The operation, aimed at disseminating false information about the US elections, used ChatGPT to generate and spread misinformation.
In a blog post on August 16, 2024, the AI company said it had banned several accounts linked to the campaign from its online services. The Iranian effort, OpenAI added, did not seem to reach a sizable audience.
“This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035,” OpenAI said.
“The operation used ChatGPT to generate content focused on a number of topics — including commentary on candidates on both sides in the U.S. presidential election — which it then shared via social media accounts and websites.”
The popularity of generative AI tools like OpenAI’s chatbot, ChatGPT, has raised questions about how such technologies might contribute to online disinformation, especially in a year of major elections across the globe.
Earlier in August, a Microsoft threat-intelligence report said the Iranian network Storm-2035, comprising four websites masquerading as news outlets, was actively engaging U.S. voter groups on opposing ends of the political spectrum.
The engagement was being built with “polarizing messaging on issues such as the U.S. presidential candidates, LGBTQ rights, and the Israel–Hamas conflict,” the report stated.
Recall that in May 2024, OpenAI released a first-of-its-kind report showing that it had identified and disrupted five other online campaigns that used its technologies to deceptively manipulate public opinion and influence geopolitics. Those efforts were run by state actors and private companies in Russia, China, Israel and Iran.
These covert operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.
In some cases, the commentary seemed progressive; in others, it seemed conservative. It also dealt with hot-button topics ranging from the war in Gaza to Scottish independence.
According to OpenAI, the campaign used its technologies to generate articles and shorter comments posted on websites and on social media. In some cases, the campaign used ChatGPT to rewrite comments posted by other social media users.
OpenAI added that a majority of the campaign’s social media posts had received few or no likes, shares or comments, and that it had found little evidence that web articles produced by the campaign were shared across social media.
Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact-checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 on Twitter.