Real-time video AI developer Decart says it’s primed to transform video marketing with the release of Lucy 2, an innovative new model that can seamlessly edit long-form live streams via natural language prompts.
The open-source model not only generates high-quality video, but also provides the tools needed to edit that content dynamically, on the fly, without compromising its realism. It eliminates the need for AI video to be edited in post-production, opening the door to new marketing uses in live streaming, virtual try-ons, personalised product placement and more, the company claims in a press release.
More coherent edits and stronger prompt adherence
What sets Lucy 2 apart from competing video generation models like Google’s Veo 3 and OpenAI’s Sora is its ability to process incoming video as it arrives, with sub-second latency. Users will be able to apply stylistic changes, manipulate objects and characters, modify the scenery and more – while the video is being generated.
Decart explained in a blog post that Lucy 2 integrates diffusion models optimised for temporal consistency and low latency to improve the overall coherence of its videos. It can ingest live video from multiple sources, such as a smartphone camera, webcam or RTMP stream, and apply AI processing to that footage in real time to transform it in countless ways. The model does this while running at 1080p and 30 fps, producing immediate results without any interruption to the live video stream or loss of quality.
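To picture the flow Decart describes – pull frames from a camera or RTMP feed, transform each one against a prompt, and push the result straight back out – here is a minimal sketch in Python. It uses OpenCV for capture, and `transform_frame` is a hypothetical stand-in for the model call, since the article doesn’t document Lucy 2’s actual API.

```python
import time
import cv2  # OpenCV opens local webcams and RTMP URLs (via its FFmpeg backend)

# Source can be a local webcam index (0) or an RTMP URL such as
# "rtmp://live.example.com/stream" -- a made-up address, for illustration only.
SOURCE = 0

def transform_frame(frame, prompt):
    """Placeholder for the real-time model call described in the article.
    Lucy 2's actual interface isn't documented here, so this stub simply
    returns the frame unchanged."""
    return frame

cap = cv2.VideoCapture(SOURCE)
prompt = "replace the background with New York City"

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    edited = transform_frame(frame, prompt)
    latency_ms = (time.perf_counter() - start) * 1000
    print(f"frame latency: {latency_ms:.1f} ms")
    cv2.imshow("Lucy 2 sketch", edited)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The timing line makes the constraint concrete: at 30 fps a new frame arrives roughly every 33 milliseconds, so any per-frame processing that can’t keep pace will stall the live stream.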
Creators will be able to experiment with Lucy 2 in various ways, using simple prompts such as “replace the background with New York City” to make it look as if they’re livestreaming from an entirely different location. Or they might ask Lucy 2 to turn the subject into a cartoon character, completely changing the appearance of the person on camera in an instant.
According to Decart, Lucy 2’s real-time performance benefits from several tweaks the company has made, such as optimising the model to run on Nvidia’s most advanced GPUs and applying model distillation techniques.
Decart also described the architectural changes behind Lucy 2, which make it work very differently from traditional video models. Notably, it integrates a diffusion pipeline that helps it achieve an unprecedented level of coherence across frames. It outputs video continuously, frame by frame, with no fixed duration limit, eliminating the artefacts and flickering that plagued earlier video models and keeping scenes consistent.
The model also benefits from enhanced prompt adherence, allowing it to interpret the user’s instructions more accurately. This means creators will be able to feed it more complex descriptions. For instance, someone might say “add red, blue and green fire-breathing dragons without affecting the original lighting,” and Lucy 2 will carry out the instruction exactly as described.
Working magic on marketing
Benchmark tests show that Lucy 2’s latency comes in under 100 milliseconds per frame, making it viable for livestreaming applications – something that will be of major interest to marketers already using AI-generated video.
For instance, it has implications for livestreaming collaborations with social media influencers and other content creators. Using Decart’s Delulu Stream tools, which are now powered by Lucy 2, livestreamers on platforms like Twitch and TikTok can enhance their video streams in various ways, such as dynamically adjusting their appearance to look like any character in any world.
Alternatively, streamers can apply thematic filters that respond to real-time chat commands, and they’ll be able to insert brands’ products into their videos at any moment to showcase the different ways and situations in which those products can be used.
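A chat-responsive filter of the kind described here boils down to mapping chat commands to edit prompts and swapping the active prompt as messages arrive. The sketch below illustrates the idea in Python; the command names, prompts and simulated chat feed are all invented for illustration rather than taken from Decart’s tooling.

```python
# Hypothetical mapping of chat commands to style prompts.
STYLE_COMMANDS = {
    "!cyberpunk": "restyle the scene as a neon-lit cyberpunk city at night",
    "!anime": "render the streamer as a hand-drawn anime character",
    "!western": "make it look like a dusty 1880s western town",
}

def prompt_for_chat_message(message, current_prompt):
    """Return the prompt the stream should use after this chat message.
    Unrecognised messages leave the current prompt unchanged."""
    command = message.strip().split()[0].lower() if message.strip() else ""
    return STYLE_COMMANDS.get(command, current_prompt)

# Simulated chat feed standing in for a real Twitch or TikTok chat connection.
chat = ["hello!", "!anime", "lol", "!cyberpunk please"]
prompt = "no style change"
for msg in chat:
    prompt = prompt_for_chat_message(msg, prompt)
    print(f"{msg!r:25} -> active prompt: {prompt}")
```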
Instead of relying on recognised influencers, brands can use Lucy 2 to create a virtual ambassador that represents them exclusively. They’d be able to build entire “lives” for these digital ambassadors, showing them in various parts of the world and doing different activities, to forge stronger emotional connections with consumers.
One advantage of virtual brand ambassadors is that they don’t need to sleep: brands can livestream continuously, 24 hours a day, answering queries and offering tips on how to use products without any fatigue.
Decart’s leadership also sees a lot of potential for Lucy 2 in enabling more realistic virtual try-ons for ecommerce. Some digital stores already employ augmented reality technology to help users visualise how they might look wearing a new jacket, or whether a new sofa will match the decor of their home, but Lucy 2 promises to take the realism of these experiences to another level.
Thanks to Lucy 2’s superior coherence, virtual try-ons will be far harder to distinguish from actually trying on a new shirt or pair of jeans in a physical store, and they’ll be more dynamic too. A shopper could not only try on a new pair of Levi’s, but also visualise themselves walking around in them in a park or through a shopping mall. Lucy 2 will model the person’s motion accurately while synthesising how the fabric should react to movement, making it look as if they’re really wearing it.
Another option is live product placement. Because Lucy 2 can edit livestreams on the fly and personalise them for each viewer, brands will be able to insert different products into the same video for millions of different viewers, based on each audience member’s past engagement with product pages.
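Conceptually, per-viewer product placement comes down to choosing a product from each viewer’s engagement history and turning it into an edit prompt for that viewer’s copy of the stream. The sketch below shows that selection logic in Python; the data, product names and prompt wording are hypothetical, not part of Decart’s platform.

```python
# Hypothetical per-viewer personalisation: pick the product each viewer has
# engaged with most and turn it into a placement prompt for that viewer.
viewer_engagement = {
    "viewer_a": {"running shoes": 7, "espresso machine": 1},
    "viewer_b": {"espresso machine": 4},
    "viewer_c": {},  # no history -> fall back to a default product
}

DEFAULT_PRODUCT = "branded water bottle"

def placement_prompt(viewer_id):
    """Build the edit prompt for one viewer's personalised stream."""
    history = viewer_engagement.get(viewer_id, {})
    product = max(history, key=history.get) if history else DEFAULT_PRODUCT
    return f"place a {product} on the desk beside the presenter, keeping the original lighting"

for viewer in viewer_engagement:
    print(viewer, "->", placement_prompt(viewer))
```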
A new era for world models
Decart’s CEO Dean Leitersdorf has described the launch of Lucy 2 as the “GPT-3 moment for world models.” He said it’s the first time a world model can run live, in real time, without any compromise in video quality. “The shift doesn’t just improve video, it creates entirely new markets, from live media and entertainment to virtual try-on, gaming and robotics,” he said. “We’re confident these markets will be measured in billions.”
As an added benefit, Lucy 2 is also extremely flexible and affordable. Decart said the model has been optimised to run on Nvidia GPUs and on Amazon’s and Google’s cloud infrastructure, giving customers multiple deployment options.
Energy-efficiency optimisations promise to reduce the cost of AI-generated video dramatically, with Decart saying sustained real-time video generation will cost around $3 per hour.



