Meta, in collaboration with the Misinformation Combat Alliance (MCA), has launched a WhatsApp helpline to address AI-generated misinformation, particularly deepfakes, in India.
Meta announced the launch in an official statement. The project, scheduled to begin in March 2024, aims to give the public a way to report and verify questionable media, helping them distinguish between real and fake content.
The statement added that users can send messages to the helpline, which digital labs, industry partners, and fact-checkers will then review and verify, assessing the content’s legitimacy and flagging anything misleading or manipulated.
“The initiative will see IFCN signatory fact-checkers, journalists, civic tech professionals, research labs and forensic experts come together, with Meta’s support,” said Bharat Gupta, the president of the Misinformation Combat Alliance.
The statement further highlighted that the programme will take a four-pillar approach – detection, prevention, reporting and driving awareness – to the escalating spread of deepfakes, while building a critical tool that gives citizens access to reliable information to counter such misinformation.
The Misinformation Combat Alliance is setting up a central deepfake analysis unit, while Meta is developing the WhatsApp chatbot. All messages sent to the helpline will be handled by the MCA.
“The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users in India. Its formation highlights the collaboration and whole-of-society approach to foster a healthy information ecosystem that the MCA was set up for,” Gupta added.
Meta recently announced that it will label AI-generated images across its platforms, including Facebook, Instagram, and Threads, so that users can distinguish between natural photographs and artificially generated ones. The labels will be applied to any content carrying industry-standard indicators of AI generation.
Seasoned fact-checker and researcher Fatimah Quadri has written numerous fact-checks, explainers, and media literacy pieces for The FactCheckHub in an effort to combat information disorder. She can be reached at sunmibola_q on X or [email protected].