This repository trains a Denoising Diffusion Probabilistic Model with a U-Net backbone and samples newly generated images from it. Generation is unconditional: images are sampled randomly, without any reliance on dataset labels. The dataset used is CelebA, which contains 202,599 face images. To train on a different dataset, adjust the model_params.yaml file accordingly.
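Since all settings live in model_params.yaml, here is a rough sketch of what such a config might contain. Every key name and value below is an illustrative assumption, not the repo's actual schema; match it against the real file before editing:

```yaml
# Hypothetical config sketch -- key names are assumptions, not the repo's actual schema
dataset:
  name: CelebA
  image_size: 64          # images resized to 64x64
  channels: 3
diffusion:
  timesteps: 1000         # number of noising steps T
  beta_start: 0.0001      # linear beta schedule endpoints
  beta_end: 0.02
model:
  base_channels: 64       # width of the first U-Net stage
  channel_multipliers: [1, 2, 4, 8]
training:
  batch_size: 64
  lr: 0.0002
  epochs: 50
```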
The sample.py script saves the generated images at each timestep into a folder. To train and then sample:

! python3 train.py --config /path/to/model_params.yaml
! python3 sample.py --config /path/to/model_params.yaml
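For reference, the loop that sample.py runs at each timestep follows the standard DDPM reverse update (Ho et al., 2020). The sketch below is a minimal NumPy version of one reverse step; the function name, arguments, and the injected `predict_noise` callable are illustrative assumptions, not this repo's actual API:

```python
import numpy as np

def ddpm_sample_step(predict_noise, x_t, t, betas, rng):
    """One DDPM reverse step: x_t -> x_{t-1}.

    predict_noise: callable (x, t) -> predicted noise eps (stands in for the U-Net)
    betas: 1-D array of the beta schedule, length T
    """
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)          # cumulative product alpha_bar_t
    eps = predict_noise(x_t, t)              # model's noise estimate
    # Posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        # Add fresh Gaussian noise scaled by sigma_t = sqrt(beta_t)
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean  # final step t=0 is deterministic
```

In the repo the noise predictor is the trained U-Net and the step runs on GPU tensors; this sketch only shows the arithmetic each saved intermediate image corresponds to.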
Use this notebook to replicate the results; for direct sampling, download the pretrained weights from here.