Forward Diffusion Process

What is Forward Diffusion Process?

The Forward Diffusion Process is a key component of a diffusion model. It iteratively adds small amounts of Gaussian noise to an input data sample (e.g., an image) over a series of time steps, with the amount of noise at each step set by a fixed variance schedule. This continues until the data is transformed into pure, unstructured noise, effectively destroying the original information. The process is a fixed, non-learned Markov chain: each noisy sample depends only on the sample from the previous step.
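A convenient property of this Markov chain is that the noisy sample at any step t can be drawn in closed form directly from the original data, without simulating every intermediate step. The sketch below illustrates this with NumPy; the linear beta schedule and the step count of 1000 are illustrative assumptions (they follow common DDPM conventions, but any schedule could be substituted).

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t from q(x_t | x_0) in closed form.

    Uses the standard identity:
        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta_i) up to step t.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, noise

# Illustrative setup: a linear beta schedule over 1000 steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
rng = np.random.default_rng(0)

x0 = rng.standard_normal((8, 8))          # stand-in for an image
x_mid, _ = forward_diffusion(x0, T // 2, betas, rng)   # partially noised
x_end, _ = forward_diffusion(x0, T - 1, betas, rng)    # nearly pure noise
```

By the final step, the cumulative product alpha_bar is close to zero, so almost none of the original signal survives and x_t is statistically indistinguishable from standard Gaussian noise.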

Where did the term "Forward Diffusion Process" come from?

The concept of diffusion processes originates from thermodynamics and physics, describing how particles spread out over time. In machine learning, the idea was first proposed in the 2015 paper 'Deep Unsupervised Learning using Nonequilibrium Thermodynamics.' However, it gained widespread attention with the 2020 paper 'Denoising Diffusion Probabilistic Models,' which demonstrated its effectiveness for high-quality image generation.

How is "Forward Diffusion Process" used today?

The Forward Diffusion Process is the foundational first step in training all diffusion-based generative models, such as Stable Diffusion, DALL-E 2, and Midjourney. While the forward process itself isn't used during inference (the reverse process is), it is essential for creating the training data that teaches the model how to denoise and generate new data from scratch.
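The way the forward process "creates training data" can be sketched concretely: each training example pairs a noised sample with the exact noise that was added, and the denoising network is trained to recover that noise. The function name and setup below are illustrative, not from any particular library.

```python
import numpy as np

def make_training_pair(x0, betas, rng):
    """Create one denoising training example.

    Noise x0 to a randomly chosen timestep t via the forward process;
    the added noise eps becomes the model's regression target.
    """
    T = len(betas)
    t = int(rng.integers(T))
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, t, eps  # model input, conditioning step, target

rng = np.random.default_rng(1)
betas = np.linspace(1e-4, 0.02, 1000)   # illustrative schedule
x0 = rng.standard_normal((8, 8))        # stand-in for a training image
x_t, t, eps = make_training_pair(x0, betas, rng)

# A denoiser eps_theta(x_t, t) would then be trained to minimize
# mean((eps_theta(x_t, t) - eps) ** 2), the standard DDPM objective.
```

During inference only the learned reverse process runs, but every training step relies on pairs like (x_t, t, eps) produced this way.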

Related Terms