Self-Correcting Self-Consuming Loops
for Generative Model Training

Brown University, Google DeepMind
ICML 2024


What happens after iteratively training a text-conditioned generative model for human motion synthesis for 50 generations? We simulate a self-consuming loop by creating synthetic data with the latest generative model and mixing it with the original data to train the next generative model. We observe that by self-correcting the synthetic data with a physics simulator, the model successfully avoids collapse and generates high-quality human motion. Our paper provides theoretical and empirical justification for the self-correcting self-consuming loop.
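
Below is a minimal Python sketch of this procedure. The callables train, sample, and physics_correct are hypothetical stand-ins for the actual training, sampling, and simulator-based correction routines, and the data-mixing scheme is simplified for illustration.

# Illustrative sketch of a self-correcting self-consuming training loop.
# `train`, `sample`, and `physics_correct` are hypothetical placeholders.
import random

def self_correcting_self_consuming_loop(
    real_data,            # the original human-authored training set
    train,                # trains a generative model on a dataset
    sample,               # draws one synthetic sample from a trained model
    physics_correct,      # maps a synthetic sample to a physically valid one
    generations=50,       # number of self-consuming iterations
    synth_ratio=1.0,      # synthetic-to-real data ratio (1.0 = 100%)
):
    model = train(real_data)
    for _ in range(generations):
        n_synth = int(synth_ratio * len(real_data))
        synthetic = [sample(model) for _ in range(n_synth)]
        corrected = [physics_correct(x) for x in synthetic]
        # Mix the corrected synthetic data with the original data and retrain.
        mixed = real_data + corrected
        random.shuffle(mixed)
        model = train(mixed)
    return model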

Abstract

As synthetic data becomes higher quality and proliferates on the internet, machine learning models are increasingly trained on a mix of human- and machine-generated data. Despite the success stories of using synthetic data for representation learning, using synthetic data for generative model training creates “self-consuming loops” which may lead to training instability or even collapse, unless certain conditions are met. Our paper aims to stabilize self-consuming generative model training. Our theoretical results demonstrate that by introducing an idealized correction function, which maps a data point to be more likely under the true data distribution, self-consuming loops can be made exponentially more stable. We then propose self-correction functions, which rely on expert knowledge (e.g., the laws of physics programmed in a simulator) and aim to approximate the idealized corrector automatically and at scale. We empirically validate the effectiveness of self-correcting self-consuming loops on the challenging human motion synthesis task, and observe that they successfully avoid model collapse, even when the ratio of synthetic data to real data is as high as 100%.



Overview of Procedure: Self-Consuming Loop with Self-Correction





Effect of Correction Strength

Our empirical results demonstrate that increasing our proposed correction strength hyperparameter improves performance and stability across self-consuming iterations.
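
As a hypothetical illustration (not our actual code, and the exact definition used in the paper may differ), one can read correction strength as the fraction of synthetic samples routed through the correction function before being mixed back into the training data; the placeholder helper apply_correction below sketches that reading.

# Hypothetical illustration: "correction strength" taken as the fraction of
# synthetic samples passed through the corrector before retraining.
def apply_correction(synthetic, physics_correct, correction_strength=0.25):
    n_corrected = int(correction_strength * len(synthetic))
    corrected = [physics_correct(x) for x in synthetic[:n_corrected]]
    return corrected + synthetic[n_corrected:]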




Examples: Synthesized Human Motion

How does our proposed self-correction operation affect the self-consuming loop for the human motion generation task? We can see that the self-consuming model produces a motion that does not reflect the prompt description. Additionally, this motion contains frames depicting physically impossible human poses: notice how the human suddenly snaps to a position where the leg penetrates the ground plane. These artifacts do not appear in the motions synthesized by the baseline model or the self-consuming with self-correction model.




Examples: Synthesized Human Motion

We can see that the self-consuming model outputs random motions that slide the figure to the right and have no relation to the text prompt. Additionally, the human rotates their forearm unnaturally and forcefully. In contrast, the baseline and self-consuming with self-correction models both generate motions that accurately embody the prompt.



Example: Human Motion Physics Correction

Generated motions sometimes disobey the laws of physics. For our self-correction operation, we use a frozen, pretrained policy whose goal is to imitate the given synthetic motion sequence. Because the transition dynamics are provided by a physics simulator, this has the effect of imitating the synthetic motion as closely as possible while correcting the parts of the motion that disobey the laws of physics. In this example, the output is a corrected version of the input motion with physically valid contacts and collisions.
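
A schematic of this correction step is sketched below, assuming a pretrained imitation policy and a physics simulator with a generic reset/step interface; policy and sim and their methods are placeholders rather than an actual API.

# Schematic of simulator-based self-correction: a frozen imitation policy
# tracks the synthetic motion inside a physics simulator, and the simulated
# rollout (which necessarily obeys the simulator's physics) is returned as
# the corrected motion. `policy` and `sim` are placeholder objects.
def correct_motion(synthetic_motion, policy, sim):
    sim.reset(initial_pose=synthetic_motion[0])
    corrected_motion = []
    for target_frame in synthetic_motion:
        # The policy picks an action that tries to match the target frame.
        action = policy.act(sim.current_state(), target_frame)
        # The simulator enforces gravity, contacts, and joint limits.
        state = sim.step(action)
        corrected_motion.append(state.pose)
    return corrected_motion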


Quantitative Analysis


Our quantitative analysis demonstrates that the self-consuming with self-correction model outperforms the self-consuming model: it converges more quickly and more stably to a better FID score. When the dataset size is smaller (top), the self-consuming model has a flat Matching score as well as diverging FID and Diversity scores, indicating model collapse. When the dataset size is larger (bottom), the self-consuming model collapses less, although the variance of its FID score across generations is higher, indicating training instability. In this setting, the self-consuming with self-correction model remains competitive with the baseline even after 50 generations.
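
For reference, the sketch below shows how an FID-style score between real and generated motion features could be computed; it follows the standard Fréchet distance formula rather than our exact evaluation code, and assumes features have already been extracted with a pretrained motion encoder.

# Frechet distance between Gaussian fits of real and generated feature sets:
# ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^(1/2)).
import numpy as np
from scipy.linalg import sqrtm

def fid(real_features, gen_features):
    mu_r, mu_g = real_features.mean(axis=0), gen_features.mean(axis=0)
    cov_r = np.cov(real_features, rowvar=False)
    cov_g = np.cov(gen_features, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))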




Example: Synthesized Human Motion, Failure Case


We highlight a failure case of the self-consuming loop with self-correction. When the dataset size is small (n=64), the self-correcting model tends to suffer from a lack of diversity. The video demonstrates how different samples from the same prompt result in nearly identical motions.
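
One simple way to quantify this failure mode (a hedged sketch, not our Diversity metric) is the average pairwise distance between motions sampled from the same prompt; near-identical samples drive this value toward zero.

# Average pairwise L2 distance between motions generated from one prompt.
# Low values indicate the near-identical samples seen in this failure case.
# Assumes each motion is a NumPy array of shape (frames, features).
import numpy as np

def same_prompt_diversity(motions):
    dists = [
        np.linalg.norm(motions[i] - motions[j])
        for i in range(len(motions))
        for j in range(i + 1, len(motions))
    ]
    return float(np.mean(dists)) if dists else 0.0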




Example: Retaining Diversity With Larger Dataset


However, we find that this issue is resolved when using a larger dataset (n=2794). With the same prompt, the self-consuming with self-correction model generates diverse and correct motions.


BibTeX

@InProceedings{gillman2024selfcorrecting,
  title={Self-Correcting Self-Consuming Loops for Generative Model Training},
  author={Gillman, Nate and Freeman, Michael and Aggarwal, Daksh and Hsu, Chia-Hong and Luo, Calvin and Tian, Yonglong and Sun, Chen},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  pages={15646--15677},
  year={2024},
  volume={235},
  publisher={PMLR}
}