Question about automatic hyper-parameter tuning toolkit #3
Comments
Hi @ZhangYuanhan-AI, thank you for your interest in our work (and sorry for the late response). We encountered a similar issue in our early development stage. One cause we found is the disparity in sample order between the hyper-parameter search runs and the final run: a searched hyper-parameter combination may not be stable, and the gradient explodes when it encounters a specific combination of samples in a batch. We fixed the sample order in the latest released toolkit, and the issue has been largely alleviated, but the behavior can differ for each checkpoint and model architecture. You can also select a gradient clipping value for your model. Thank you! It would be great if anyone who has a different solution for a similar issue could share it; we are happy to incorporate it into our docs or toolkit :)
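For reference, a minimal sketch of pinning the sample order across runs in a plain PyTorch setup (the seed value, helper names, and dummy dataset are placeholders, not the toolkit's actual code):

```python
# Minimal sketch: keep the shuffle order identical between the
# hyper-parameter search runs and the final run by seeding everything
# that affects shuffling, and by giving the DataLoader its own generator.
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

SEED = 0  # placeholder: any fixed seed shared by all runs


def seed_everything(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


def make_loader(dataset, batch_size: int) -> DataLoader:
    # A dedicated generator makes the shuffle order depend only on SEED,
    # not on how many random numbers were drawn earlier in the run.
    g = torch.Generator()
    g.manual_seed(SEED)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True, generator=g)


if __name__ == "__main__":
    seed_everything(SEED)
    dummy = TensorDataset(torch.randn(32, 3), torch.randint(0, 2, (32,)))
    loader = make_loader(dummy, batch_size=8)
    for x, y in loader:
        pass  # identical batch composition in every run that uses the same SEED
```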
Hi Haotian, thanks for your suggestion. I'll try gradient_clip first and see whether it helps.
Hi, we are trying to run our models on Elevator. Do you have a recommended default value for gradient clipping? Thanks!
Hi @Luodian, different models have different gradient-norm statistics depending on the model architecture, pretraining approach, etc., so there is no single default value.
To choose one, we'd recommend first looking at the gradient norms of your model during the first one or two epochs on several datasets, and then picking a gradient clipping value similar to or slightly larger than those (so that most of the parameter updates are unaffected by the gradient clipping). Thanks.
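For reference, a minimal sketch of probing gradient norms and then clipping, assuming a plain PyTorch training loop; `model`, `loader`, `loss_fn`, and `optimizer` are placeholders rather than toolkit APIs:

```python
# Minimal sketch: record the total gradient norm for the first one or two
# epochs, then clip at a value similar to or slightly above the typical norm
# so that most updates are unaffected.
import torch


def grad_total_norm(model: torch.nn.Module) -> float:
    # L2 norm over all parameter gradients, the same quantity clip_grad_norm_ uses.
    norms = [p.grad.detach().norm(2) for p in model.parameters() if p.grad is not None]
    return torch.norm(torch.stack(norms), 2).item()


def probe_grad_norms(model, loader, loss_fn, optimizer, epochs=2):
    # Run the usual training steps, but log the gradient norm of every update.
    history = []
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            history.append(grad_total_norm(model))
            optimizer.step()
    return history


def clipped_step(model, loss, optimizer, max_norm):
    # In the real training loop, clip right before optimizer.step().
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
```

A simple choice is to set `max_norm` to a high percentile of the probed norms, e.g. `float(torch.tensor(history).quantile(0.95))`, so that only the occasional exploding batch is clipped.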
Hi, thanks for this great benchmark.
I have a question about the hyper-parameter tuning.
[screenshot: hyper-parameter sweeping results]
The training accuracy and validation accuracy are good at the hyper-parameter sweeping stage, and the toolkit chooses "Learning rate 0.01, L2 lambda 0.0001" as the best combination for the final 50 epochs.
However, the performance of the model with the selected hyper-parameters is extremely bad.
[screenshot: final 50-epoch training results]
Have you ever faced this problem? It mainly shows up on the dtd, fer2013, and resisc45 datasets, and usually occurs when a relatively large learning rate (like 0.01) is selected in the sweeping stage.
I don't think this problem comes from a gap between the validation set and the test set, because the training accuracy is also bad during the final 50 epochs of training.