Videos of sexually suggestive, AI-generated children are racking up millions of likes on TikTok, study finds

New York —

AI-generated videos depicting what appear to be underage girls in sexualized clothing or poses have collectively racked up millions of likes on TikTok, even though the platform’s guidelines prohibit such content, according to new research from an online safety nonprofit.

Researchers found more than a dozen accounts posting videos featuring AI-generated girls wearing tight clothing, lingerie or school uniforms, sometimes in suggestive poses. The accounts have hundreds of thousands of followers combined. Comments on many of the videos included links to chats on the messaging platform Telegram that offered child pornography for purchase, according to the report.

Thirteen of the accounts remained active as of Wednesday night, after 15 were flagged through TikTok’s reporting tool last week, according to Carlos Hernández-Echevarría, who led the research as assistant director and head of public policy at Maldita.es. The Spain-based nonprofit, which studies online disinformation and promotes media transparency, released the report Thursday.

The report raises questions about TikTok’s ability to enforce its own policies regarding AI content, even when that content appears to show sexualized images of computer-generated children. Tech platforms including TikTok face increased pressure to protect young users as more jurisdictions pass online safety legislation, including Australia’s under-16 social media ban, which went into effect this week.

“This is not nuanced at all,” Hernández-Echevarría told NCS. “Nobody that is, you know, a real person doesn’t find this gross and want it removed.”

TikTok says it has a zero-tolerance policy for content that “shows, promotes or engages in youth sexual abuse or exploitation.” Its community guidelines specifically prohibit “accounts focused on AI images of youth in clothing suited for adults, or sexualized poses or facial expressions.” Another section of its policies states that TikTok does not allow “sexual content involving a young person, including anything that shows or suggests abuse or sexual activity,” which includes “AI-generated images” and “anything that sexualizes or fetishizes a young person’s body.”

The company says it uses a combination of vision, audio and text-based tools, along with human moderation teams, to review content. Between April and June 2025, TikTok removed more than 189 million videos and banned more than 108 million accounts, according to the company. It says 99% of content violating its policies on nudity and body exposure, including of young people, was removed proactively, and 97% of content violating its policies on AI-generated content was removed proactively.

A TikTok spokesperson did not provide a comment specific to the report.

Maldita.es found the TikTok videos through test accounts it uses to monitor for potential disinformation and other harmful content as part of its work.

“One of our team members started to see that there was this trend of these (AI-generated videos of) really, really young kids dressed as adults and, particularly when you went into the comments, you could see that there was some money incentive there,” Hernández-Echevarría said.

Some of the accounts described their videos in their bio sections as “delicious-looking high school girls” or “junior models,” according to the report. “Even more subtle videos like those of young girls licking ice cream are full of crude sexual comments,” it states.

In some cases, the account holders used TikTok’s “AI Alive” feature, which animates still images, to turn AI-generated photos into videos, Hernández-Echevarría said. Other videos appeared to have been created using external AI tools, he said.

Comments on many of the videos included links to private Telegram chats that advertised child pornography, according to the report.

“Some of the accounts responded to our direct messages on TikTok with links to external websites that sold AI-generated videos and images that sexualized minors, with prices ranging from 50 to 150 euros,” the report states.

Researchers did not follow through with any transactions, and the group reported the websites and Telegram accounts to police in Spain.

“Telegram is fully committed to preventing child sexual abuse material (CSAM) from appearing on its platform and enforces a strict zero-tolerance policy,” Telegram spokesperson Remi Vaughn said in a statement to NCS. “Telegram scans all media uploaded to its public platform against a database of CSAM removed by moderators to prevent it from being spread. While no encrypted platform can proactively monitor content in private groups, Telegram accepts reports from NGOs around the world in order to enforce its terms of service.”

Vaughn said Telegram removed more than 909,000 public groups and channels related to child sexual abuse material in 2025.

The group says it flagged 15 accounts and 60 videos to TikTok through the app’s reporting tools on Tuesday, December 2, classifying them as “sexually suggestive behavior by youth.” The accounts had a total of nearly 300,000 followers, and their 3,900 videos had more than 2 million likes combined, according to the report.

By Friday, TikTok had responded that 14 of the accounts did not violate its rules and that one account had been “restricted,” Maldita.es said. The group appealed each decision, but “exactly 30 minutes after the appeal for every single case,” TikTok reiterated its initial decision, the report states.

Of the 60 videos the group reported, TikTok responded on Friday that 46 did not violate its policies; it removed or restricted the other 14. After researchers appealed, TikTok removed three more videos and restricted another. It was not immediately clear how video restrictions differed from removals.

Among the videos that were not removed were one featuring an AI-generated young girl, scantily clad in a shower, and other AI-generated images that appeared to show young girls posing suggestively in lingerie or bikinis, according to the group.

“There is absolutely no way a human being sees this and doesn’t understand what’s happening,” Hernández-Echevarría said. “The comments are super crude, are full of the most disgusting people on earth making comments.”

By Wednesday, at least one account and one video that TikTok’s content review process had previously cleared were no longer available. Hernández-Echevarría said it was not clear why they were not taken down when first reported.

Thursday’s report comes after a separate study, published in October by the UK nonprofit Global Witness, found that TikTok had directed young users toward sexually explicit content through its suggested search terms. That report found TikTok’s search feature suggested “highly sexualized” terms to users who reported being 13 and were searching in “restricted mode.” TikTok said in response that it had removed content that violated its policies and made improvements to its search suggestion feature.