I want to use improved-diffusion (which guided-diffusion is based on) to train a DDPM on the CelebA dataset, which consists of 202,599 aligned-and-cropped human face images (each 218 (height) x 178 (width) pixels), for a human face image inpainting task.
Can anyone give me suggestions on how to adjust the hyperparameters used in improved-diffusion or guided-diffusion to train a strong DDPM for this task?
Also, regarding the dataset: do I need to resize my images to 256x256 first?
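For context, this is the preprocessing I would run if resizing is needed: center-crop the 178x218 aligned images to a 178x178 square around the face, then upsample to 256x256. A minimal sketch, assuming Pillow; the directory paths are placeholders:

```python
# Hypothetical preprocessing sketch: center-crop CelebA's 178x218 aligned
# images to a square, then resize to 256x256 before training.
import os
from PIL import Image

SRC_DIR = "celeba/img_align_celeba"   # placeholder paths
DST_DIR = "celeba_256"
os.makedirs(DST_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    img = Image.open(os.path.join(SRC_DIR, name)).convert("RGB")
    w, h = img.size                    # 178 x 218 for aligned CelebA
    # Center-crop to the shorter side (178x178), keeping the face centered.
    top = (h - w) // 2
    img = img.crop((0, top, w, top + w))
    img = img.resize((256, 256), Image.BICUBIC)
    img.save(os.path.join(DST_DIR, name))
```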
I hope my trained denoising model can be as strong as the celeba256_250000.pt checkpoint provided in this repository's download.sh.
I'm also curious why celeba256_250000.pt is so large (about 2.1 GB) and how it was trained.
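A back-of-the-envelope check suggests the size is simply float32 weights: size ≈ parameter count x 4 bytes, so 2.1 GB corresponds to roughly 550M parameters, and the 250000 in the file name presumably refers to training steps. This is how I would verify the parameter count, assuming the .pt file is a plain PyTorch state dict (which is how these repos save checkpoints):

```python
import torch

# Load the checkpoint on CPU; the repos save model weights as a plain
# state dict mapping parameter names to tensors.
state = torch.load("celeba256_250000.pt", map_location="cpu")
n_params = sum(t.numel() for t in state.values())
# At float32, every parameter takes 4 bytes, which should land near 2.1 GB.
print(f"{n_params / 1e6:.1f}M parameters, "
      f"~{n_params * 4 / 1024**3:.2f} GiB at float32")
```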
My goal is for the trained denoising model (the UNet) to learn human face features well enough to be used both for face image synthesis and for face image inpainting with RePaint (i.e., recovering the masked parts of a masked face image).
Concretely, I want to know how to adjust the hyperparameter values in diffusion_defaults() and model_and_diffusion_defaults() in script_util.py, and in create_argparser() in image_train.py, of improved-diffusion (or in the same functions of the corresponding .py files in guided-diffusion) to train a strong DDPM on my CelebA dataset.
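For reference, the guided-diffusion README publishes the flags used for its 256x256 unconditional model. Since I set values in code rather than through flags, I would apply them as overrides on the defaults dict like this (a sketch only; I have not verified that this configuration fits in my GPU memory):

```python
# Sketch: the 256x256 unconditional settings from the guided-diffusion README,
# applied as in-code overrides instead of command-line flags.
from guided_diffusion.script_util import (
    model_and_diffusion_defaults,
    create_model_and_diffusion,
)

opts = model_and_diffusion_defaults()
opts.update(dict(
    image_size=256,
    num_channels=256,               # the published model is much wider than 64
    num_res_blocks=2,
    num_head_channels=64,
    attention_resolutions="32,16,8",
    resblock_updown=True,
    use_scale_shift_norm=True,
    use_fp16=True,
    learn_sigma=True,
    diffusion_steps=1000,
    noise_schedule="linear",
    class_cond=False,
))
model, diffusion = create_model_and_diffusion(**opts)
```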
I have tried several hyperparameter combinations in guided-diffusion, but the face inpainting results from both kinds of saved checkpoints, ema_0.9999_XXX.pt and modelXXX.pt, are bad.
In addition, because of my limited GPU memory, I set num_channels to only 64. Does this hyperparameter affect the performance of the trained DDPM? Should I try a larger value?
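From reading the code, there seem to be two levers that trade compute for memory and might allow a larger num_channels on the same GPU: use_checkpoint (gradient checkpointing inside the UNet) and microbatch (gradient accumulation, set in create_argparser() of image_train.py). A sketch of how I would combine them; the values are guesses, not tested:

```python
# Sketch: two memory levers in improved-diffusion / guided-diffusion that may
# allow a wider UNet on the same GPU. `use_checkpoint` enables gradient
# checkpointing inside the UNet; `microbatch` splits each batch into smaller
# chunks and accumulates gradients.
from guided_diffusion.script_util import model_and_diffusion_defaults

opts = model_and_diffusion_defaults()
opts.update(dict(
    num_channels=128,       # widen gradually: 64 -> 128 -> 192 -> 256
    use_checkpoint=True,    # recompute activations instead of storing them
))

# Training-loop side (these names mirror create_argparser() in image_train.py):
train_opts = dict(
    batch_size=8,           # effective batch size after accumulation
    microbatch=2,           # forward/backward 2 images at a time
    lr=1e-4,
    ema_rate="0.9999",      # matches the ema_0.9999_XXX.pt checkpoints
)
```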
In conclusion, I hope somebody can give me suggestions on how to adjust the guided-diffusion hyperparameters to train my own DDPM, one as strong as the 256x256 diffusion (not class conditional) model or celeba256_250000.pt, for human face image synthesis and RePaint-based inpainting.

Thanks a lot for anyone's help!
P.S. I set the hyperparameter values directly in the code of improved-diffusion and guided-diffusion rather than through command-line flags.