A new report has warned that the proliferation of large-scale Artificial Intelligence (AI) models may lead to a surge in falsified information concerning climate change online.
The report, released on March 7, 2024, by members of a global coalition, detailed “significant and immediate dangers” that artificial intelligence poses to the climate emergency.
According to the report, AI models have enabled climate deniers to create and disseminate persuasive false content more easily and rapidly across online platforms, including social media, targeted advertising, and search engines.
“Fossil fuel companies and their paid networks have spread climate denial for decades through politicians, paid influencers and radical extremists who amplify these messages online.
“In 2022, this climate disinformation tripled on platforms like X. In 2023, amidst a number of whale deaths on the east coast of the US, right-wing media began spreading the false claim that offshore wind projects were impacting the endangered populations.
“It was included in 84% of all posts about wind energy over the relevant three-month period, and was advanced by right-wing politicians on social media. In 2023, the Danish company Ørsted, while claiming the disinformation campaign was irrelevant, pulled out of a major project to build two wind farms off the coast of New Jersey.
“Generative AI will make such campaigns vastly easier, quicker and cheaper to produce, while also enabling them to spread further and faster. Adding to this threat, social media companies have shown declining interest in stopping disinformation, reducing trust and safety team staffing. There is little incentive for tech companies to stop disinformation, as reports show companies like Google/YouTube make an estimated $13.4 million per year from climate denier accounts.”
“Disinformation campaigns about climate change have a number of new AI tools to help them be more effective. Chair of the Federal Trade Commission, Lina Khan, warns that “generative AI risks turbocharging fraud” in its ability to churn out content. Instead of having to draft content one piece at a time, AI can churn out endless content for articles, photos and even websites with just brief prompts,” part of the report stated.
The report raised concerns over advancements in generative AI technology, such as deepfake videos and persuasive text generation, which have further enhanced the effectiveness of disinformation campaigns. It reflected on how these tools have been used to create convincing fake content, including politically charged material and even pornography, thereby undermining trust in information and exacerbating political divides.
According to the report, although some AI companies have pledged to address the problem ahead of the 2024 elections around the world by developing policies to prevent bad actors from producing disinformation, past efforts have proved largely ineffective.
“OpenAI claimed its ChatGPT-4 was “82 per cent less likely to respond to requests for disallowed content and 40 per cent more likely to produce factual responses,” but testers in a March 2023 NewsGuard report were still able to consistently bypass safeguards.
“They found the new chatbot was in fact “more susceptible to generating misinformation” and “more convincing in its ability to do so” than the previous version. They were able to get the bot to write an article claiming global temperatures are actually decreasing—just one of 100 false narratives they prompted ChatGPT to draft,” it noted.
The report urged governments to urgently study the problem in order to fully understand the threats it poses to climate action, and to implement comprehensive AI regulations that protect against them, taking a systems-wide approach to the health, integrity and resilience of the information ecosystem.
The coalition also called on governments, companies, academia and civil society to work together to determine how to create “green AI” systems that reduce overall emissions and climate disinformation.
“Tech companies implementing AI must commit to strong labour policies including: fair pay, clear contracts, sensible management, sustainable working conditions and union representation,” the report suggested.
Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 via Twitter.