Meta has announced that it is expanding its efforts to detect images created or altered by artificial intelligence.
This is part of an effort to curb misinformation and deepfakes ahead of elections taking place in multiple countries this year and in 2025.
Nick Clegg, Meta's president of global affairs, said the company will start labeling AI-generated photos coming from other sources in the coming months.
According to the company, it is developing technology to automatically recognize content created by AI when it shows up on Facebook, Instagram, and Threads.
“The added time is needed to work with other AI companies to align on common technical standards that signal when a piece of content has been created using AI,” Clegg wrote.
The statement added that Meta aims to reduce uncertainty by working primarily with other AI firms that embed invisible watermarks and specific kinds of metadata in the images generated on their platforms. Meta also intends to address the problem of watermarks being stripped out.
“We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks,” Clegg stated.
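The statement does not describe how such detection works in practice, but one common building block in the industry is checking an image file for provenance metadata. The sketch below is a simplified illustration, not Meta's actual pipeline: it naively scans a file's raw bytes for markers such as the IPTC `trainedAlgorithmicMedia` digital-source-type value or a C2PA manifest label, standards some AI image generators use to tag their output. The marker list and function are illustrative assumptions.

```python
# Illustrative sketch only: real detectors parse structured metadata
# (XMP, EXIF, C2PA/JUMBF boxes) rather than scanning raw bytes.
AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
    b"c2pa",                     # label used by C2PA provenance manifests
]

def has_ai_provenance_metadata(image_bytes: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the file bytes."""
    return any(marker in image_bytes for marker in AI_PROVENANCE_MARKERS)
```

Note that this kind of check only catches content whose metadata is intact; as Clegg's statement acknowledges, metadata and watermarks can be removed, which is why Meta is also pursuing classifiers that work without such markers.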
The statement further noted that audio and video are even harder to monitor than images, because there is not yet an industry standard for AI companies to add invisible identifiers.
The statement added that Meta will add a way for users to voluntarily disclose when they upload AI-generated video or audio. If they share a deepfake or other form of AI-generated content without disclosing it, the company may apply penalties.
“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate,” Clegg said.