
Diffusion-Model

Here I trained a Denoising Diffusion Model with a U-Net architecture and sampled newly generated images from it. The images are generated unconditionally and at random, without any reliance on dataset labels. The dataset used was CelebA, which contains 202,599 face images. For any other dataset, adjust the model_params.yaml file accordingly.
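For context, the following is a minimal sketch of the epsilon-prediction training step a denoising diffusion model of this kind typically uses. The `unet` call signature, the linear beta schedule, and all hyperparameter values are illustrative assumptions, not this repository's actual code or model_params.yaml settings.

```python
# Illustrative DDPM training step (generic sketch, not this repo's code).
import torch
import torch.nn.functional as F

T = 1000                                          # number of diffusion timesteps (assumed)
betas = torch.linspace(1e-4, 0.02, T)             # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative alpha products

def training_step(unet, x0):
    """Noise clean images x0 to a random timestep and train the U-Net
    to predict the added noise (the standard epsilon objective)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_bar.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # q(x_t | x_0)
    return F.mse_loss(unet(x_t, t), noise)
```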

Result

[x0_0 — generated sample image]

The sample.py script saves the generated images at each timestep to a folder. For training and sampling:

python3 train.py --config /path/to/model_params.yaml
python3 sample.py --config /path/to/model_params.yaml
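
As a rough picture of what a sampling script like sample.py does when it writes an image per timestep, here is a minimal ancestral-sampling loop for a DDPM. The schedule, the `unet` interface, and the batch/image shape are assumptions for illustration only.

```python
# Illustrative DDPM ancestral sampling loop (generic sketch, not this repo's code).
import torch

T = 1000                                          # must match the training schedule (assumed)
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def sample(unet, shape=(16, 3, 64, 64), device="cuda"):
    """Run the reverse process from pure noise x_T down to x_0."""
    x = torch.randn(shape, device=device)
    for t in reversed(range(T)):
        beta_t = betas[t].item()
        a_bar_t = alphas_bar[t].item()
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = unet(x, t_batch)                    # U-Net predicts the noise
        mean = (x - beta_t / (1.0 - a_bar_t) ** 0.5 * eps) / (1.0 - beta_t) ** 0.5
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + beta_t ** 0.5 * z              # x_{t-1}
        # e.g. torchvision.utils.save_image(x, ...) could write one image per timestep here
    return x
```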

Use this notebook to replicate the results; for direct sampling, download the pretrained weights from here.

About

PyTorch implementation of a Diffusion Model with training on a custom dataset
