An investigation by Channel 4 News has found that about 4,000 famous people have been victims of deepfake pornography online.
The analysis of the five most visited deepfake websites, published on March 21, 2024, revealed that 255 of the almost 4,000 famous individuals identified were British.
They include female actors, TV stars, musicians and YouTubers, none of whom have been named, whose faces were superimposed onto pornographic material using artificial intelligence.
The investigation found that the five sites received 100m views in the space of three months.
The Channel 4 News presenter, Cathy Newman, who was found to be among the victims, said: “It feels like a violation. It just feels really sinister that someone out there who’s put this together, I can’t see them, and they can see this kind of imaginary version of me, this fake version of me.”
In 2016, researchers identified a single deepfake pornography video online. In the first three quarters of 2023, 143,733 new deepfake porn videos were uploaded to the 40 most used deepfake pornography sites – more than in all previous years combined, according to the report.
The UK’s Online Safety Act makes it a criminal offence to share, or threaten to share, a manufactured or deepfake intimate image or video of another person without their consent, but it was not intended to criminalise the creation of such deepfake content.
In its investigation, Channel 4 News said that most of those targeted by deepfake pornography are women who are not in the public eye.
Earlier this year, explicit deepfake images of Taylor Swift were circulated on X, formerly Twitter. They were viewed around 45 million times before the platform took them down.
Sophie Parrish, 31, from Merseyside, discovered that fabricated nude images of her had been posted online before the legislation was introduced.
She told Channel 4 News: “It’s just very violent, very degrading. It’s like women don’t mean anything, we’re just worthless, we’re just a piece of meat. Men can do what they like. I trusted everybody before this.”
Responding to the investigation’s findings, a Google spokesperson said: “We understand how distressing this content can be, and we’re committed to building on our existing protections to help people who are affected.
“Under our policies, people can have pages that feature this content and include their likeness removed from search. And while this is a technical challenge for search engines, we’re actively developing additional safeguards on Google search – including tools to help people protect themselves at scale, along with ranking improvements to address this content broadly.”
Ryan Daniels of Meta, which owns Facebook and Instagram, said: “Meta strictly prohibits child nudity, content that sexualises children, and services offering AI-generated non-consensual nude images. While this app [that creates deepfakes] remains widely available on various app stores, we’ve removed these ads and the accounts behind them.”
The research also found that more than 70 per cent of visitors arrived at deepfake websites using search engines like Google.
Advances in AI have made it easier to create digitally altered and fake images.
Experts have warned of the danger posed by AI-generated deepfakes and their potential to spread misinformation, particularly in a year that will see major elections in many countries, including the UK and the US.
Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact-checks that have contributed to the fight against information disorder. You can reach him at [email protected] and @NurudeenAkewus1 on Twitter.