Hi @ZENGXH,

Thank you for sharing the code.

I am training the VAE (stage 1) on the ShapeNet15k dataset by following the instructions given in the README.md file. I am using the default config, except that the batch size is 16 (batch size 32 was giving a CUDA out-of-memory error). The loss started increasing and eventually became NaN.

So I trained with a lower learning rate of 1e-4 (the default is 1e-3). This time the loss again decreased, then increased, and became NaN. Please see the contents of the log file below:

I looked at previous issues #9, #17, #18, #22, #35, but did not find any solution. Could you please tell me how to resolve this issue?

Also, could you please share the checkpoint you mentioned in this section?

Thank you,
Supriya

Hi Supriya, I didn't see the NaN loss in the log; did it happen after epoch 10?
I can think of several hyper-parameters that may help stabilize the training:
Reducing the lr is definitely helpful, and it's great that you are already trying this. Another option is trainer.opt.vae_lr_warmup_epochs: the default value is 0, and you could try setting it to 50 or even larger. Warmup starts the lr from a small value and slowly increases it to the target lr over N epochs, as sketched below.
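For intuition, here is a minimal sketch of what a linear lr warmup does (illustrative only; the option name trainer.opt.vae_lr_warmup_epochs comes from the config, but the function and the toy optimizer below are not the repo's actual implementation):

```python
import torch

def warmup_lr(epoch: int, target_lr: float = 1e-3, warmup_epochs: int = 50) -> float:
    """Linearly ramp the learning rate from near zero up to target_lr over warmup_epochs."""
    if warmup_epochs > 0 and epoch < warmup_epochs:
        return target_lr * (epoch + 1) / warmup_epochs
    return target_lr

# Toy usage with a dummy parameter, just to show where the schedule is applied.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=1e-3)
for epoch in range(100):
    for group in optimizer.param_groups:
        group["lr"] = warmup_lr(epoch, target_lr=1e-3, warmup_epochs=50)
    # ... run one training epoch here ...
```

With warmup_epochs=50, epoch 0 trains at 2e-5 and the lr only reaches 1e-3 at epoch 49, which avoids large updates while the VAE is still poorly initialized.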
Set sde.kl_anneal_portion_vada to a larger value. The default is 0.5; you can try increasing it to 1 (the maximum). This controls how fast the KL weight increases (the larger the portion, the slower the increase), and increasing the KL weight slowly can lead to smoother training dynamics; see the sketch below.
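As a rough sketch of what such a linear KL annealing schedule looks like (the function and argument names below are made up for illustration; only kl_anneal_portion_vada is an actual config key):

```python
def kl_weight(step: int, total_steps: int, anneal_portion: float = 0.5,
              min_weight: float = 1e-4) -> float:
    """KL coefficient that ramps linearly from min_weight to 1.0 over
    anneal_portion * total_steps, then stays at 1.0 afterwards."""
    anneal_steps = max(1, int(anneal_portion * total_steps))
    return min(1.0, max(min_weight, step / anneal_steps))

# With anneal_portion=0.5 the weight reaches 1.0 halfway through training;
# with anneal_portion=1.0 it only reaches 1.0 at the very last step,
# so the KL term is introduced more gently.
print(kl_weight(5_000, 10_000, anneal_portion=0.5))  # 1.0
print(kl_weight(5_000, 10_000, anneal_portion=1.0))  # 0.5
```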
Reduce shapelatent.log_sigma_offset from 6.0 to, say, 5.0. This is a constant offset that pushes the sigma towards 0; reducing it makes the latent points noisier at initialization and, as a result, lowers the KL loss. I am not sure whether reducing the offset will help, but it may be worth a try (a quick sanity check is sketched below).
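To see why a smaller offset lowers the KL term, here is a back-of-the-envelope check, assuming the offset is subtracted from the predicted log-sigma and the KL is taken against a standard normal prior (a simplification; the actual model may differ):

```python
import math

def kl_per_dim(mu: float, sigma: float) -> float:
    """KL( N(mu, sigma^2) || N(0, 1) ) for a single latent dimension."""
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - math.log(sigma)

# If the predicted log-sigma starts near 0, the effective initial std is roughly exp(-offset).
for offset in (6.0, 5.0):
    sigma0 = math.exp(-offset)
    print(f"offset={offset}: initial sigma ~ {sigma0:.4f}, KL per dim ~ {kl_per_dim(0.0, sigma0):.2f}")
# offset=6.0: initial sigma ~ 0.0025, KL per dim ~ 5.50
# offset=5.0: initial sigma ~ 0.0067, KL per dim ~ 4.50
```

So dropping the offset from 6.0 to 5.0 makes the initial latents a few times noisier and removes roughly one nat per dimension from the initial KL.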
For the checkpoints, sorry, we are still going through the company approval process to release them (a release this week is unlikely; I will follow up on the process next week).