YouTube is giving with one hand and taking with the other: recent strategy statements from the company send contradictory messages about the use of AI on the platform.
In a communication to the YouTube community on Wednesday, YouTube’s chief executive, Neal Mohan, stated that a key part of the company’s strategy for 2026 is an initiative to lower the amount of low-quality, AI-derived content that appears in users’ feeds. “As an open platform, we allow for a broad range of free expression while ensuring YouTube remains a place where people feel good spending their time,” he said.
The company’s hands-off approach to content moderation is to change with regard to AI-generated content in order to preserve the standard of videos posted, he said, thus safeguarding viewers’ experiences. Despite what YouTube describes as its ‘openness’, Mohan said the company will be “actively building on our established systems that have been very successful in combating spam and clickbait, and reducing the spread of low-quality, repetitive content.”
Yet simultaneously, YouTube is invested in AI-assisted content creation, with creator-focused features planned for 2026 including tools allowing creators to generate Shorts using AI models of themselves.
Mohan added: “AI will act as a bridge between curiosity and understanding. Ultimately, we’re focused on ensuring AI serves the people who make YouTube great: the creators, artists, partners, and billions of viewers looking to capture, experience, and share a deeper connection to the world around them.”
Creators will still be required to declare when they publish artificially generated or modified media, and YouTube will mark content produced using its own in-house AI tools.
More than a million channel owners used on-platform AI creation tools in December 2025, and viewers used YouTube Ask more than 20 million times in the same month. In 2026, in addition to the AI-generated creator likenesses for Shorts, YouTube will roll out two new features: text-prompt game creation and experimental music production tools.
The company is also doubling down on defending itself against criticism over copyright-infringing content on its platform. “We remain committed to protecting creative integrity by supporting critical legislation like the NO FAKES Act,” Mohan said.
The company’s strategy, then, appears to be to provide AI content creation tools to creators and to flag the ensuing ‘creations’ as AI-generated so viewers are aware of their provenance. Creators using external software are expected to declare when their work involves AI.
The company’s strategy, as embodied in the algorithms that influence viewers’ choices, will become apparent over time. It’s not clear whether those algorithms will promote creators who use or avoid AI, or favour on-platform AI tools over alternatives. YouTube seems to be hoping that viewers reject what Mohan termed “AI slop” but accept media created with YouTube-endorsed AI tools. The latter videos, presumably, won’t be regarded as “AI slop,” despite being automatically flagged onscreen as AI-generated.
(Image source: “Rand Paul, Ai Generated Images, Public Domain, CC0” by MrScott+Ai Art is marked with CC0 1.0.)