OpenAI set to introduce tools to counter election disinformation globally


OpenAI, the creator of ChatGPT, has announced plans to launch tools to fight disinformation and to ban the use of its technology for political campaigns ahead of a series of elections worldwide in 2024.

The ChatGPT maker, in a blog post, stated that the company is “working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information”.

The World Economic Forum (WEF) warned in a report released last week that AI-driven disinformation and misinformation are the biggest short-term global risks and could undermine newly elected governments in major economies.

When ChatGPT was introduced in late 2022, it quickly went viral and spurred a global AI boom. However, experts have warned that these tools could also fill the internet with disinformation and sway voters.

In response, the firm vowed to prevent harmful use of its technologies, such as ChatGPT and DALL·E, in order to protect the integrity of elections.

“We want to make sure our technology is not used in a way that could undermine this process. We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used,” the company said.

The company added that before releasing new systems, it red-teams them, engages users and external partners for feedback, and builds safety mitigations to reduce the potential for harm.

“DALL-E has guardrails to decline requests that ask for image generation of real people, including candidates,” it said.

OpenAI said it is still working to understand how effective its tools might be for personalised persuasion; until that is clear, it will not allow people to build applications for political campaigning and lobbying.

The company has also built new GPTs through which users can report potential violations to OpenAI.

The company also said a mechanism has been put in place that will not let builders create chatbots that pretend to be real people or institutions, adding that applications that could deter people from participating in democratic processes will not be allowed.

Noting the importance of labelling AI-generated content, OpenAI said it is working on several provenance efforts that would attach reliable attribution to text generated by ChatGPT and give users the ability to detect whether an image was created using DALL·E 3.

ChatGPT, the statement added, is increasingly integrating with existing sources of information, and users will soon begin to get access to real-time news reporting globally, including attribution and links.

Similarly, to improve access to authoritative voting information, OpenAI has partnered with the National Association of Secretaries of State (NASS), and ChatGPT will direct users to an authoritative website on US voting information.


Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 on Twitter.
