OpenAI’s GPT-3 capable of spreading disinformation faster than humans – Study


OpenAI’s GPT-3, the machine-learning model behind ChatGPT, can spread disinformation online faster than humans, a recent study published in Science Advances has revealed.

The research focused on the significant threats posed by advanced text generators in a digital world, particularly in relation to disinformation and misinformation on social media platforms. 


It examined how artificial intelligence (AI) models like GPT-3 influence the information landscape and how people perceive and interact with information and misinformation.

GPT-3, short for Generative Pre-trained Transformer 3, is the third version of the language model developed by OpenAI. Among its various language-processing capabilities, the model can mimic the writing styles commonly found in online conversations.

To assess the impact of GPT-3 on spreading disinformation, the researchers selected 11 topics that were deemed susceptible to such manipulation, including climate change, COVID-19, vaccine safety, and 5G technology.

The study collected AI-generated tweets containing both true and false information, as well as real tweets related to the same topics.

After identifying disinformation through expert analysis, the researchers conducted a survey using subsets of tweets categorized as AI-generated or human-written. Respondents were asked to assess the accuracy of each tweet and to judge whether it had been written by a human or generated by AI.

The study found that respondents were better able to identify disinformation in false tweets written by humans than in false tweets generated by GPT-3; in other words, they were less effective at detecting false information when it had been produced by AI.

The study further revealed that text generated by GPT-3 not only informs and disinforms readers more effectively than human-written text but also does so in less time. Accurate statements were found to be harder to assess than disinformation, whether the text was written by humans or generated by AI.

“Participants recognized organic false tweets with the highest efficiency, better than synthetic false tweets,” the study explained.

“Similarly, they recognized synthetic true tweets correctly more often than organic true tweets.”
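As a purely illustrative aside (this is not the study’s analysis code, and the data, field names, and figures below are invented), the short Python sketch shows how recognition accuracy could be tabulated for the four categories the study compares: organic (human-written) versus synthetic (GPT-3-generated) tweets, each either true or false.

```python
from collections import defaultdict

# Hypothetical survey responses: each record notes the tweet's origin
# ("organic" = human-written, "synthetic" = GPT-3-generated), whether it
# was actually true or false, and whether the respondent judged it correctly.
responses = [
    {"origin": "organic",   "veracity": "false", "judged_correctly": True},
    {"origin": "organic",   "veracity": "false", "judged_correctly": True},
    {"origin": "synthetic", "veracity": "false", "judged_correctly": False},
    {"origin": "synthetic", "veracity": "true",  "judged_correctly": True},
    {"origin": "organic",   "veracity": "true",  "judged_correctly": False},
]

# Count correct judgements and totals for each (origin, veracity) category.
totals = defaultdict(int)
correct = defaultdict(int)
for r in responses:
    key = (r["origin"], r["veracity"])
    totals[key] += 1
    if r["judged_correctly"]:
        correct[key] += 1

# Report recognition accuracy per category, mirroring the four-way
# comparison of organic vs. synthetic and true vs. false tweets.
for origin, veracity in sorted(totals):
    accuracy = correct[(origin, veracity)] / totals[(origin, veracity)]
    print(f"{origin} {veracity} tweets: {accuracy:.0%} judged correctly")
```

Per-category accuracies of this kind are what allow the researchers to conclude, as quoted above, that organic false tweets were recognized more often than synthetic false ones, while synthetic true tweets were recognized more often than organic true ones.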

This study underscores the potential dangers and implications of AI-powered text generators like GPT-3, particularly in spreading disinformation online, and highlights the need for robust strategies to combat the rapid dissemination of false information in the digital age.


Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 via Twitter.


