Hi! I'm interested in your awesome work and I want to reproduce the results in the paper. However, it seems that the hyperparameters (e.g., BASE_LR=0.0000125) in Base-DiffusionInst.yaml have been changed and no longer match the values mentioned in the paper (a learning rate of 2.5e-5 and a weight decay of 1e-4). Since I only have one RTX 3090 with 24 GB of memory, could you give me some advice on choosing the learning rate and weight decay coefficient? Also, does the 5x schedule in the paper refer to increasing the number of training iterations by five times?
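For reference, here is a minimal single-GPU override sketch, assuming the repository follows detectron2's linear scaling rule (learning rate proportional to total batch size) and that the paper's 2.5e-5 corresponds to the default IMS_PER_BATCH of 16. The batch size of 2 is only a guess at what fits in 24 GB, not a value confirmed by the authors:

```yaml
# Hypothetical single-GPU config, not from the authors.
# Assumes the linear scaling rule: BASE_LR is scaled by (new batch size / 16).
_BASE_: "Base-DiffusionInst.yaml"
SOLVER:
  IMS_PER_BATCH: 2            # assumed to fit on one 24 GB RTX 3090
  BASE_LR: 0.000003125        # 2.5e-5 * (2 / 16)
  WEIGHT_DECAY: 0.0001        # weight decay from the paper, kept unchanged
  # MAX_ITER and STEPS would also need to grow roughly in proportion to
  # 16 / IMS_PER_BATCH to keep the same number of training epochs.
```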