Swiss scientists want to make long AI-generated videos even better by preventing them from 'degrading into randomness' - is that a good idea? I am not so sure
EPFL researchers teach AI to correct its own video mistakes
- AI-generated videos often lose coherence over time due to a problem called drift
- Models trained on perfect data struggle when handling imperfect real-world input
- EPFL researchers developed retraining by error recycling to limit progressive degradation
AI-generated videos often lose coherence as sequences grow longer, a problem known as drift.
This issue occurs because each new frame is generated based on the previous one, so any small error, such as a distorted object or slightly blurred face, is amplified over time.
Generative video models trained exclusively on ideal datasets struggle to handle imperfect input, which is why generated videos usually become unrealistic after a few seconds.
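The compounding effect described above can be illustrated with a deliberately simplified sketch. This is not EPFL's model; it is a toy scalar recurrence in which each "frame" inherits the previous frame's error, slightly amplified, plus a fresh per-step error:

```python
# Toy illustration of drift in autoregressive video generation (not the
# EPFL model): each new frame is conditioned on the previous one, so a
# small per-step error `epsilon` is carried forward and amplified by
# `gain` at every step. Both parameters are illustrative assumptions.

def frame_error(num_frames, epsilon=0.001, gain=1.05):
    """Return the accumulated error after each generated frame."""
    errors = []
    err = 0.0
    for _ in range(num_frames):
        # the generator inherits the previous frame's error (amplified)
        # and introduces a fresh small error of its own
        err = err * gain + epsilon
        errors.append(err)
    return errors

errs = frame_error(240)  # roughly 10 seconds of video at 24 fps
# the error grows geometrically, so late frames are far more degraded
# than early ones, even though each individual step's error is tiny
print(errs[23], errs[239])
```

Because the error compounds geometrically rather than linearly, a model that looks fine for the first second can still collapse a few seconds later, which matches the behavior the article attributes to drift.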
Recycling errors to improve AI performance
Generating videos that maintain logical continuity for extended periods remains a major challenge in the field.
Now, researchers at EPFL’s Visual Intelligence for Transportation (VITA) laboratory have introduced a method called retraining by error recycling.
Unlike conventional approaches that try to avoid errors, this method deliberately feeds the AI’s own mistakes back into the training process.
By doing so, the model learns to correct errors in future frames, limiting the progressive degradation of images.
The process involves generating a video, identifying discrepancies between produced frames and intended frames, and retraining the AI on these discrepancies to refine future output.
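That generate-compare-retrain loop can be sketched in miniature. The snippet below is a hedged illustration, not EPFL's architecture: the "model" is a single scalar gain, the "video" is a sequence of numbers, and `error_recycling_step` is a hypothetical name. The key idea it captures is that the training inputs are the model's own drifted frames rather than ideal ones, so the model learns to correct from flawed input:

```python
# A hedged sketch of "retraining by error recycling" as the article
# describes it: generate frames, compare them with the intended frames,
# and fold the model's own flawed frames back into training. The model
# here is a trivial scalar predictor; all names are illustrative.

def generate(model_gain, start, num_frames):
    """Autoregressively roll out frames from `start` using the toy model."""
    frames, frame = [], start
    for _ in range(num_frames):
        frame = frame * model_gain  # toy "predict next frame" step
        frames.append(frame)
    return frames

def error_recycling_step(model_gain, intended, lr=0.01):
    """One retraining pass: roll out a video, then update the model
    using its own (imperfect) frames as training inputs."""
    produced = generate(model_gain, intended[0], len(intended) - 1)
    for prev, target in zip([intended[0]] + produced[:-1], intended[1:]):
        # key idea: `prev` is the model's own drifted frame, so the
        # update teaches correction from flawed inputs, not ideal ones
        pred = prev * model_gain
        model_gain -= lr * (pred - target) * prev  # squared-error gradient
    return model_gain

# intended "video": a steady sequence where every frame equals the last
intended = [1.0] * 16
gain = 1.05          # a drifting model: its frames grow without bound
for _ in range(200):
    gain = error_recycling_step(gain, intended)
print(gain)          # moves toward 1.0, the drift-free generator
```

Training on self-generated rollouts in this way is the same broad idea behind exposure-bias mitigation in sequence models; what is new in the EPFL work, per the article, is applying it to keep long generated videos coherent.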
Current AI video systems typically produce sequences that remain realistic for less than 30 seconds before shapes, colors, and motion logic deteriorate.
By integrating error recycling, the EPFL team has produced videos that resist drift over longer durations, potentially removing strict time constraints on generative video.
This advancement allows AI systems to create more stable sequences in applications such as simulations, animation, or automated visual storytelling.
Although this approach addresses drift, it does not eliminate all technical limitations.
Retraining by recycling errors increases computational demand and may require continuous monitoring to prevent overfitting to specific mistakes.
Large-scale deployment may face resource and efficiency constraints, as well as the need to maintain consistency across diverse video content.
Whether feeding AI its own errors is truly a good idea remains uncertain, as the method could introduce unforeseen biases or reduce generalization in complex scenarios.
The development at VITA Lab shows that AI can learn from its own errors, potentially extending the time limits of video generation.
However, how this method will perform outside controlled testing or in creative applications remains unclear, which suggests caution before assuming it can fully solve the drift problem.
Via TechXplore

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.