In the previous post, we introduced the fundamental mechanism behind diffusion models: the forward process and the reverse process. In this post, we will go into more detail about how to train and sample from a diffusion-based denoising network.
Training diffusion models
Interestingly, the forward and reverse processes of diffusion closely resemble those of the well-known Variational Autoencoder (VAE). Recall that in a VAE, we leverage an encoder to map the data into a latent (Gaussian) distribution, and then use a decoder to reconstruct the original data from the latent. In a diffusion model, however, we only learn the reverse process (the decoder), while the forward process has no learnable parameters. In particular, we can reuse the derivation of the VAE's variational lower bound (VLB) to optimize the likelihood:

$$\log p_\theta(\mathbf{x}_0) \geq \mathbb{E}_{q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)}\left[\log \frac{p_\theta(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)}\right]$$
Since the KL divergence between $q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)$ and $p_\theta(\mathbf{x}_{1:T} \mid \mathbf{x}_0)$ is always non-negative, to maximize the log-likelihood of the data $\mathbf{x}_0$, we aim to maximize the VLB (the RHS) of the above equation. This is equivalent to minimizing the negation of the RHS, which will later lead to our loss:

$$L_\text{VLB} = \mathbb{E}_{q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)}\left[\log \frac{q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)}{p_\theta(\mathbf{x}_{0:T})}\right] \geq -\log p_\theta(\mathbf{x}_0)$$
Since the forward process $q$ does not contain any learnable parameters, we can define our training loss as the negative VLB:

$$L = L_\text{VLB} = \mathbb{E}_{q}\left[\log \frac{q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)}{p_\theta(\mathbf{x}_{0:T})}\right]$$
The training objective can be further decomposed into a combination of several KL terms with respect to the timestep $t$:

$$L_\text{VLB} = \underbrace{D_{\mathrm{KL}}\big(q(\mathbf{x}_T \mid \mathbf{x}_0) \,\|\, p(\mathbf{x}_T)\big)}_{L_T} + \sum_{t=2}^{T} \underbrace{D_{\mathrm{KL}}\big(q(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{x}_0) \,\|\, p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)\big)}_{L_{t-1}} \underbrace{{}-{} \log p_\theta(\mathbf{x}_0 \mid \mathbf{x}_1)}_{L_0}$$
In short, $L_\text{VLB}$ can be written as:

$$L_\text{VLB} = L_T + L_{T-1} + \cdots + L_0$$
where $L_T = D_{\mathrm{KL}}\big(q(\mathbf{x}_T \mid \mathbf{x}_0) \,\|\, p(\mathbf{x}_T)\big)$, $L_t = D_{\mathrm{KL}}\big(q(\mathbf{x}_t \mid \mathbf{x}_{t+1}, \mathbf{x}_0) \,\|\, p_\theta(\mathbf{x}_t \mid \mathbf{x}_{t+1})\big)$ for $1 \leq t \leq T-1$, and $L_0 = -\log p_\theta(\mathbf{x}_0 \mid \mathbf{x}_1)$. We can see that these KL terms (except $L_0$) are between two Gaussians and can be computed in closed form, whereas the $L_T$ term can be ignored during training since it has no learnable parameters.
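To make the "closed form" remark concrete, here is the univariate case in code; the helper name `gaussian_kl` is mine, and this is only a minimal NumPy sketch of the standard formula $D_{\mathrm{KL}} = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$:

```python
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2):
    """Closed-form KL divergence D_KL(N(mu1, var1) || N(mu2, var2))
    between two univariate Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# KL of a distribution with itself is zero, as expected:
zero = gaussian_kl(0.0, 1.0, 0.0, 1.0)   # -> 0.0
```

No Monte Carlo estimation is needed for these terms, which is one reason the DDPM objective is cheap to evaluate.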
Recall that we need to train a neural network to approximate the conditional distributions in the reverse process, $p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = \mathcal{N}\big(\mathbf{x}_{t-1};\, \boldsymbol{\mu}_\theta(\mathbf{x}_t, t),\, \boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t)\big)$. Moreover, the posterior mean (see Equations 5 and 6 in this post) of the reverse distribution $q(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{x}_0)$ is:

$$\tilde{\boldsymbol{\mu}}_t(\mathbf{x}_t, \mathbf{x}_0) = \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\,\mathbf{x}_t + \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t}\,\mathbf{x}_0$$
Considering the property of Equation 2, i.e., $\mathbf{x}_t = \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1 - \bar{\alpha}_t}\,\boldsymbol{\epsilon}$, we can re-express $\tilde{\boldsymbol{\mu}}_t$ by replacing $\mathbf{x}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\big(\mathbf{x}_t - \sqrt{1 - \bar{\alpha}_t}\,\boldsymbol{\epsilon}\big)$ as:

$$\tilde{\boldsymbol{\mu}}_t = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\,\boldsymbol{\epsilon}\right)$$
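This property also means we can jump from $\mathbf{x}_0$ to any $\mathbf{x}_t$ in a single step, without simulating the chain. A minimal NumPy sketch, assuming the linear $\beta$ schedule used in DDPM (the names `q_sample` and `alphas_bar` are mine; timesteps are zero-indexed here):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear beta schedule as in DDPM
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)      # \bar{\alpha}_t = prod of alphas up to t

def q_sample(x0, t, eps):
    """Sample x_t directly from x_0:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
```

For small $t$, `q_sample` returns something close to the data; for $t$ near $T$, the output is almost pure noise, which matches the intuition of the forward process.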
Our ultimate goal is to train the network $\boldsymbol{\mu}_\theta(\mathbf{x}_t, t)$ to approximate the mean $\tilde{\boldsymbol{\mu}}_t$, and the loss term $L_t$ measures the KL divergence between two Gaussians. Furthermore, Ho et al. [1] suggested predicting the noise $\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)$ using the re-parameterization as in Equation 9, and keeping the variance fixed to the schedule, i.e., $\boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t) = \sigma_t^2 \mathbf{I}$ with $\sigma_t^2 = \beta_t$. Therefore, we can calculate the loss as follows:

$$L_t = \mathbb{E}_{\mathbf{x}_0, \boldsymbol{\epsilon}}\left[\frac{(1 - \alpha_t)^2}{2 \alpha_t (1 - \bar{\alpha}_t)\, \sigma_t^2} \left\| \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t) \right\|^2\right]$$
Ho et al. [1] also suggested a simplification of this loss (i.e., removing the weighting term), as they found it gives better training results, which leads to the "simple" objective:

$$L_\text{simple} = \mathbb{E}_{t, \mathbf{x}_0, \boldsymbol{\epsilon}}\left[\left\| \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta\big(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1 - \bar{\alpha}_t}\,\boldsymbol{\epsilon},\, t\big) \right\|^2\right]$$
In summary, each training iteration proceeds as follows:
- Sample data $\mathbf{x}_0 \sim q(\mathbf{x}_0)$ and a timestep $t \sim \mathcal{U}(\{1, \dots, T\})$
- Sample noise $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
- Compute the loss $\left\| \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta\big(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1 - \bar{\alpha}_t}\,\boldsymbol{\epsilon},\, t\big) \right\|^2$
- Backprop and update network parameters
- Repeat until convergence
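The steps above can be sketched in NumPy as follows. This is only an illustration of one iteration: `eps_theta` is a placeholder standing in for the denoising network (in practice a U-Net conditioned on the timestep), and no actual gradient step is taken here:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def eps_theta(x_t, t):
    # Placeholder for the denoising network eps_theta(x_t, t);
    # a real implementation would be a trained neural network.
    return np.zeros_like(x_t)

def training_step(x0):
    """One iteration of the simplified DDPM objective L_simple."""
    t = rng.integers(1, T)                      # sample a timestep uniformly
    eps = rng.standard_normal(x0.shape)         # sample noise ~ N(0, I)
    x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    loss = np.mean((eps - eps_theta(x_t, t)) ** 2)   # ||eps - eps_theta||^2
    return loss                                 # backprop through this in practice

loss = training_step(rng.standard_normal((3, 32, 32)))
```

Note that each iteration trains the network at a single random timestep, which is what makes the objective a Monte Carlo estimate of the expectation over $t$.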
Sampling diffusion models
To obtain a sample from the original data distribution, we start by sampling $\mathbf{x}_T$ from the noise distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$ and then gradually remove the noise until we reach $\mathbf{x}_0$, following the reverse process. At each step, we sample $\mathbf{x}_{t-1}$ from the approximated reverse distribution:

$$p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = \mathcal{N}\big(\mathbf{x}_{t-1};\, \boldsymbol{\mu}_\theta(\mathbf{x}_t, t),\, \sigma_t^2 \mathbf{I}\big)$$
The sampling process can be summarized as follows:
- Firstly, sample $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
- For $t = T, \dots, 1$, sample $\mathbf{x}_{t-1}$ from the posterior distribution:
- $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ if $t > 1$, else $\mathbf{z} = \mathbf{0}$
- $\mathbf{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)\right) + \sigma_t \mathbf{z}$ (reparameterization trick)
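The loop above can be sketched in NumPy as follows. As before, `eps_theta` is a placeholder for the trained network, the linear schedule is assumed, timesteps are zero-indexed, and the variance is fixed to $\sigma_t^2 = \beta_t$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

def eps_theta(x_t, t):
    # Placeholder for the trained denoising network.
    return np.zeros_like(x_t)

def sample(shape):
    """Ancestral sampling following the DDPM reverse process."""
    x = rng.standard_normal(shape)                    # x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):                    # t = T-1, ..., 0
        z = rng.standard_normal(shape) if t > 0 else np.zeros(shape)
        mean = (x - (1.0 - alphas[t]) / np.sqrt(1.0 - alphas_bar[t])
                * eps_theta(x, t)) / np.sqrt(alphas[t])
        x = mean + np.sqrt(betas[t]) * z              # reparameterization trick
    return x

img = sample((4, 8))   # toy shape; a real model would use e.g. (3, 32, 32)
```

Notice that the noise term is dropped at the final step ($t = 0$), so the last update is deterministic.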
In the original Denoising Diffusion Probabilistic Model (DDPM), obtaining a sample is usually quite slow. This is because we need to follow the whole chain of the reverse diffusion process from $t = T$ to $t = 0$, where the number of steps $T$ is typically up to $1000$. For example, it takes around 20 hours to sample 50k images of size 32 × 32 from a DDPM, but less than a minute to do so from a GAN model [2]. Many recent works have proposed strategies to overcome this limitation and speed up the sampling process [2], [3], [4]. They can generate samples within only a few steps (e.g., 50 steps) while keeping the sample quality as high as that of the full sampling process.
[1] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. In NeurIPS 2020.
[2] Song J, Meng C, Ermon S. Denoising diffusion implicit models. In ICLR 2021.
[3] Liu L, Ren Y, Lin Z, Zhao Z. Pseudo numerical methods for diffusion models on manifolds. In ICLR 2022.
[4] Salimans T, Ho J. Progressive distillation for fast sampling of diffusion models. In ICLR 2022.