OpenAI disrupts Israeli, Chinese, Russian, Iranian influence campaigns

OpenAI has announced that it disrupted covert influence campaigns originating from Russia, China, Israel and Iran.

The artificial intelligence (AI) company made the disclosure in its recently published report titled “AI and Covert Influence Operations: Latest Trends.”

The report suggests that recent influence operation (IO) campaigns leveraging generative AI lack sophistication and have had minimal public influence.

OpenAI uses the information discovered in its investigations of offending accounts to share threat intelligence with others in the industry and improve its safety systems to combat threat actor tactics. The AI company has also terminated the accounts involved in the malicious campaigns.

The five case studies presented in the report involved threat actors from Russia, China, Iran and Israel. The report uses the Breakout Scale to gauge the impact of each campaign, with none of the described AI-facilitated campaigns receiving a score higher than 2 out of 6.

Two Russian campaigns, dubbed “Bad Grammar” and “Doppelganger,” were observed attempting to sway public opinion in favor of Russia and against Ukraine using fabricated personas.

“Bad Grammar” and “Doppelganger” largely generated content about the war in Ukraine, including narratives portraying Ukraine, the United States, NATO and the European Union in a negative light, OpenAI stated.

The Chinese campaign known as “Spamouflage” generated text in Chinese, English, Japanese and Korean that was critical of prominent critics of Beijing, including actor and Tibet activist Richard Gere and dissident Cai Xia, as well as content highlighting abuses against Native Americans, it added.

The Iranian operation International Union of Virtual Media generated and translated articles that criticised the US and Israel, while Zero Zeno took aim at the United Nations agency for Palestinian refugees and “radical Islamists” in Canada, OpenAI said.

The final case study focused on Zero Zeno, which OpenAI identified as being run by an Israeli political campaign management firm called STOIC. The campaign involved AI-generated social media posts across multiple platforms attempting to sway opinion on a range of topics, including the Israel-Hamas war, U.S. involvement in Middle East conflicts and Indian politics. It relied on numerous fabricated identities, including profile pictures that appeared to have been created using generative adversarial networks (GANs) and were reused across multiple accounts.

The report showed how OpenAI uses a variety of methods to combat covert IO campaigns such as those outlined in the case studies.

The company uses its own AI-powered models to improve detection of potential adversarial uses of its services, better enabling it to investigate harmful campaigns and terminate offending accounts.

OpenAI emphasized the “importance of sharing” what it learns from real-world misuse with industry peers and the public.

OpenAI’s investigations also built on information shared by other companies and researchers, such as information about the Doppelganger threat actor by Meta, Microsoft and Disinfolab, and articles about Iranian IOs from Mandiant and Reuters.

“Overall, these trends reveal a threat landscape marked by evolution, not revolution. Threat actors are using our platform to improve their content and work more efficiently. But so far, they are still struggling to reach and engage authentic audiences,” the report noted.

Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 via Twitter.
