YOUTUBE channels are using artificial intelligence to create misleading “scientific” videos that are recommended to children as educational content, an investigation by the BBC’s Global Disinformation Team has revealed.
The report uncovered more than 50 channels in over 20 languages spreading disinformation disguised as STEM (Science, Technology, Engineering, Maths) content, including pseudo-science, false information, and conspiracy theories, such as claims of electricity-producing pyramids, climate change denial, and the existence of aliens.
According to the report, these channels manipulate legitimate content to create sensationalist videos with catchy titles and dramatic imagery, attracting viewers and generating more ad revenue. YouTube also benefits from these high-performing videos by taking a portion of the advertising revenue.
The report revealed that creators label their misleading videos as “educational content,” increasing the chances of them being recommended to children.
The BBC’s Global Disinformation Team found these channels publishing content rapidly, suspected to be generated with AI programmes such as ChatGPT, which can produce new content rather than relying on existing examples.
AI detection tools and expert analysis conducted by the researchers showed that most of these videos used AI to generate text and images, scrape information from websites, and manipulate real science videos, resulting in content that appears factual but is largely untrue.
Part of the report read, “To test this theory, they took videos from each channel and used AI detection tools and expert analysis to assess the probability or likelihood that the footage, narration and script were made using AI.
“The BBC analysis showed that most of those videos had used AI to generate text and images, and to scrape – that is to extract information from a website, and manipulate material from real science videos. The result is content that looks factual, but is mostly untrue.
“To test whether the “bad science” videos would be recommended to children, the journalists created children’s accounts on the main YouTube site. (All the children they spoke to said they used children’s accounts rather than YouTube Kids.)
“After four days of watching legitimate science education videos, the BBC journalists were recommended the AI-made “bad science” videos too. If they clicked on them, more of the false-science channels were recommended.”
According to the report, when the journalists shared this content with groups of 10- to 12-year-olds in the UK and Thailand, many children believed the false information until they were told the videos were fake.
The report stated that teachers have expressed concern that these videos exploit children’s curiosity and may confuse them about what is true. Researchers, meanwhile, worry about children’s susceptibility to conspiratorial content, and YouTube’s role in profiting from misleading videos raises ethical questions.
The report questioned why misleading videos continue to thrive on the main YouTube platform, even as YouTube recommends its YouTube Kids platform, which claims to offer higher-quality content, for those under 13.
The report warned that as AI tools continue to improve, identifying misleading content will become more challenging, and it urged teachers and parents to prepare for the growing impact of AI-generated content on children’s understanding.