In the previous posts (Part 1 and Part 2), we introduced the fundamental principles of diffusion models, as well as how to train them and how to sample from them through the learned reverse process. However, those posts only covered the standard diffusion mechanism in the unconditional setting, i.e., we generated data samples in a purely random manner without any control over the outputs. In this blog post, we will demonstrate how to generate data according to an input condition.
The literature has established a connection between diffusion models and denoising score matching [1], in which samples are produced via an iterative process using gradients of the data distribution estimated with score matching. The score of a sample's probability density is defined as the gradient of its log-likelihood (i.e., the direction of steepest increase of the log-likelihood). Specifically, it has been shown that training the backward process of denoising diffusion (i.e., predicting the posterior mean) is equivalent to estimating the score function:

$$ \nabla_{x_t} \log p(x_t) \approx -\frac{1}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t). \tag{1} $$
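As a small illustration, Equation 1 lets us recover a score estimate directly from the output of a trained noise-prediction network. The sketch below uses a hypothetical `score_from_noise_pred` helper and a dummy array standing in for a real model output:

```python
import numpy as np

def score_from_noise_pred(eps_pred, alpha_bar_t):
    """Turn a noise prediction into a score estimate via Equation 1:
    score(x_t) ~ -eps_theta(x_t, t) / sqrt(1 - alpha_bar_t)."""
    return -eps_pred / np.sqrt(1.0 - alpha_bar_t)

# Dummy noise prediction standing in for epsilon_theta(x_t, t)
eps_pred = np.array([0.5, -0.2])
score = score_from_noise_pred(eps_pred, alpha_bar_t=0.9)
```

Note that as $t$ grows and $\bar{\alpha}_t \to 0$, the denominator approaches 1, so the score estimate is dominated by the noise prediction itself.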
For conditional generation, a naive and straightforward approach is to append the condition $y$ to the network's inputs as a separate channel, i.e., $\epsilon_\theta(x_t, t, y)$. However, this strategy may not work well: the network may ignore the new input channel, resulting in arbitrarily generated outputs that do not follow the condition. Instead, we can use Bayes' rule to decompose the conditional probability $p(x \mid y)$ into a combination of the unconditional model $p(x)$ and a discriminative model $p(y \mid x)$ that predicts the probability of the condition $y$ (e.g., a class label) given the data sample $x$ (e.g., an image):

$$ p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}. $$
By taking the logarithm and differentiating both sides of the above equation with respect to $x$, we obtain the following formula (note that $\nabla_x \log p(y) = 0$ since $p(y)$ does not depend on $x$):

$$ \nabla_x \log p(x \mid y) = \nabla_x \log p(x) + \nabla_x \log p(y \mid x). \tag{2} $$
In order to incorporate the class label into the diffusion process, Dhariwal and Nichol [2] proposed a classifier guidance strategy that uses gradients from a classifier to guide the diffusion model during sampling. However, the classifier must be trained on noised images (at the different diffusion timesteps) to provide correct gradients that guide the sampling process toward the conditioning class. Otherwise, it can adversely affect sample quality, because the noised intermediate images encountered during sampling are out-of-distribution for the classifier.
Recalling that the score function is proportional to the output of the denoising network (Equation 1), classifier guidance yields a formula very similar to Equation 2:

$$ \hat{\epsilon}_\theta(x_t, t, y) = \epsilon_\theta(x_t, t) - \sqrt{1-\bar{\alpha}_t}\; \nabla_{x_t} \log p_\phi(y \mid x_t), $$
where $\hat{\epsilon}_\theta(x_t, t, y)$ is the shifted output of the model, guided by the classifier toward the target conditioning class $y$. The authors also found that multiplying the classifier gradient by a scaling coefficient $s$ (called the guidance scale, typically $s > 1$) amplifies the influence of the conditioning signal. Thus, they can trade off sample diversity for sample quality that adheres to the target class. The figure below summarizes the sampling algorithm of a diffusion model with classifier guidance.
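The classifier-guided shift can be sketched as a single helper. Everything here is a placeholder sketch: `eps_uncond` stands in for the diffusion model's noise prediction and `classifier_grad` for $\nabla_{x_t} \log p_\phi(y \mid x_t)$ from a noise-aware classifier.

```python
import numpy as np

def classifier_guided_noise(eps_uncond, classifier_grad, alpha_bar_t, scale=1.0):
    """Shift the unconditional noise prediction with classifier gradients.
    scale is the guidance scale s; s > 1 trades diversity for class adherence."""
    return eps_uncond - np.sqrt(1.0 - alpha_bar_t) * scale * classifier_grad

# With a zero classifier gradient, guidance leaves the prediction unchanged
eps_uncond = np.array([1.0, -0.5, 0.25])
guided = classifier_guided_noise(eps_uncond, np.zeros(3), alpha_bar_t=0.5, scale=10.0)
```

In a real sampler, this shifted prediction simply replaces the original network output inside the standard reverse-process update.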
Figure 2 shows samples generated by an unconditional diffusion model with classifier guidance trained on the ImageNet dataset, with the conditioning class "Welsh corgi". With a guidance scale of 1.0 (left), the samples do not match the target class very well upon visual inspection, whereas a guidance scale of 10.0 (right) produces much more convincing images that are well aligned with the desired class. Additionally, the FID (a metric measuring the fidelity of generated samples against the ground-truth data distribution; lower is better) of the right images (FID: 12.0) is much lower than that of the left images (FID: 33.0). This indicates that the classifier-guided model with a high guidance scale better captures the distribution of images belonging to the "corgi" class. The authors also showed that diffusion models outperform other generative models such as GANs.
Although classifier guidance offers several benefits when we want to control generation with conditioning information, a few challenges unfortunately limit its practical utility. Due to the nature of the diffusion process, in which the data are gradually denoised in an iterative manner, the classifier used for guidance must be able to deal with high noise levels. Consequently, training this external classifier can be quite problematic and adds complexity to the overall system.
Moreover, even with a classifier robust to noise, another problem may arise: not all of the information in the input $x$ is useful for predicting the condition $y$. As a result, the gradient of the classifier with respect to the whole input can produce arbitrary, and even adversarial, guiding signals in the data space, which may lead to undesired model behavior.
To alleviate the need for training a cumbersome classifier, Ho & Salimans [3] proposed the classifier-free guidance technique, which performs guidance by combining a conditional and an unconditional diffusion model. First, let us re-express the class probability in Equation 2 using Bayes' rule again:

$$ \nabla_x \log p(y \mid x) = \nabla_x \log p(x \mid y) - \nabla_x \log p(x). $$
We have thereby broken down the conditioning probability term into a combination of the conditional and unconditional models. Now, let's multiply this term by a scale factor $s$ and plug it into Equation 2:

$$ \nabla_x \log p_s(x \mid y) = \nabla_x \log p(x) + s\left(\nabla_x \log p(x \mid y) - \nabla_x \log p(x)\right). $$
Or equivalently, in terms of the noise-prediction network, we obtain the classifier-free guidance formulation:

$$ \hat{\epsilon}_\theta(x_t, t, y) = (1-s)\,\epsilon_\theta(x_t, t) + s\,\epsilon_\theta(x_t, t, y). $$
Interestingly, we can unify the unconditional and conditional diffusion models into a single model through a special training strategy. Specifically, when training a conditional diffusion model $\epsilon_\theta(x_t, t, y)$, the condition $y$ is randomly masked out (dropped out), i.e., replaced by a null condition $\varnothing$, with some probability. The resulting model can represent both the conditional model $\epsilon_\theta(x_t, t, y)$ and the unconditional model $\epsilon_\theta(x_t, t) = \epsilon_\theta(x_t, t, \varnothing)$, since it is trained both with and without the conditioning information.
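The condition-dropout trick might look like the following sketch (the `NULL_TOKEN` id and the `p_uncond` rate are illustrative choices, not values from the papers):

```python
import numpy as np

NULL_TOKEN = -1  # illustrative id reserved for the null condition

def drop_condition(labels, p_uncond=0.1, rng=None):
    """Randomly replace class labels with the null condition so that a single
    network learns both the conditional and the unconditional model."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(labels.shape) < p_uncond
    return np.where(mask, NULL_TOKEN, labels)

labels = np.array([3, 1, 4, 1, 5])
all_dropped = drop_condition(labels, p_uncond=1.0)   # every label masked
none_dropped = drop_condition(labels, p_uncond=0.0)  # labels untouched
```

At training time this function would be applied to each minibatch of labels before they are fed to the denoising network.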
We can see that the setting $s = 0$ is equivalent to the unconditional model (i.e., no guidance), while $s = 1$ recovers the standard conditional model $\epsilon_\theta(x_t, t, y)$. Intuitively, when $s > 1$, the classifier-free guidance formula extrapolates the output of the model by taking a bigger step along the direction from the unconditional output $\epsilon_\theta(x_t, t, \varnothing)$ towards the conditional output $\epsilon_\theta(x_t, t, y)$. This yields useful and robust guiding signals from the network's own knowledge, without relying on a separate classifier (which is usually tricky to train). Moreover, this technique is very effective when we want to perform guidance with complicated conditioning information, such as text or audio, for which estimating the probability with a classification model is very difficult.
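The guided prediction itself is just a linear extrapolation between the two outputs of the shared network; a minimal sketch (with dummy arrays standing in for real model outputs):

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: extrapolate from the unconditional output
    towards the conditional one. scale = 0 -> unconditional, 1 -> conditional,
    > 1 -> guided extrapolation past the conditional prediction."""
    return (1.0 - scale) * eps_uncond + scale * eps_cond

eps_uncond = np.array([0.0, 1.0])
eps_cond = np.array([1.0, 0.0])
guided = cfg_noise(eps_uncond, eps_cond, scale=2.0)  # extrapolated prediction
```

In practice the two predictions come from the same network, evaluated once with the condition $y$ and once with the null condition, so guidance doubles the per-step inference cost.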
The authors of the GLIDE paper [4] experimented with both classifier guidance and classifier-free guidance for text-conditioned image generation. For classifier guidance, they used the CLIP model [5], which is trained on paired text/image data and scores how close an image is to a text caption, to guide the generated image towards the input text (CLIP guidance). They found that classifier-free guidance performs more favorably than CLIP guidance. Figure 3 shows a visual comparison between CLIP guidance (first row) and classifier-free guidance (second row): classifier-free guidance often produces more realistic samples, and the classifier-free model is also capable of generalizing to a wide variety of prompts.
As we have mentioned, guidance introduces a trade-off: it significantly enhances adherence to the conditioning signal and overall sample quality, but at the cost of diversity. GLIDE's experiments showed that classifier-free guidance provides a good balance between the FID score (measuring the realism of generated samples against the ground-truth images) and the IS score (indicating the quality and diversity of the samples); see Figure 4. This is usually an acceptable trade-off in conditional generative modeling: the conditioning signal captures the information we are interested in, and if we favor diversity instead, we can adjust the provided conditioning signal accordingly.
[1] Song Y, Ermon S. Generative modeling by estimating gradients of the data distribution. In NeurIPS 2019.
[2] Dhariwal P, Nichol A. Diffusion models beat GANs on image synthesis. In NeurIPS 2021.
[3] Ho J, Salimans T. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
[4] Nichol A, Dhariwal P, Ramesh A, Shyam P, Mishkin P, McGrew B, Sutskever I, Chen M. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. In ICML 2022.
[5] Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G. Learning transferable visual models from natural language supervision. In ICML 2021.