training readout does not work #19
Improved stability of readout training, worked on issue #19, smaller doc fixes
Hi @jackklpan! You are right, there were some issues I hope to have resolved with the latest pull request #21. We will add a test for the readout IBA so this doesn't happen again. Regarding the stability of the training, good spot! We also added a minimal std in the normalization step to prevent division by very small numbers. Please check if this solves your issues; we will do the same. We are happy to hear about your results. Thanks for your contribution!
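To illustrate the fix mentioned above, here is a minimal stdlib-only sketch of clamping the std in a normalization step. The function name `normalize` and the bound `MIN_STD` are hypothetical, not taken from the IBA source:

```python
import statistics

MIN_STD = 0.01  # assumed lower bound, not the value used in the IBA code

def normalize(values, min_std=MIN_STD):
    """Standardize values, with the std clamped to at least min_std."""
    mean = statistics.fmean(values)
    # Clamping avoids dividing by a (near-)zero std when the
    # activations are almost constant.
    std = max(statistics.pstdev(values), min_std)
    return [(v - mean) / std for v in values]

# Constant input -> std == 0; naive (v - mean) / std would raise
# ZeroDivisionError, but the clamped std keeps the result finite.
out = normalize([1.0, 1.0, 1.0])
```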
Thanks! I will try to train again.
Hello! Actually, there is the same issue with "per sample": no alpha is given.
Hi,
I ran into two problems when training the readout network.
IBA/IBA/pytorch_readout.py
Line 128 in 34baed6
no longer works, since
IBA/IBA/pytorch.py
Line 412 in 34baed6
now accepts one argument.
(For now, I have checked out the previous commit.)
IBA/IBA/pytorch_readout.py
Line 170 in 34baed6
should be
alpha = alpha.clamp(...
(Otherwise, alpha becomes -infinite.)
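The point of the suggested fix is that a tensor-style `clamp(...)` returns a new value rather than modifying its input, so the result must be reassigned. A hypothetical stdlib-only illustration (the `clamp` helper and the bounds are assumptions, not the IBA API):

```python
import math

def clamp(value, lo, hi):
    """Return value limited to the range [lo, hi]."""
    return max(lo, min(hi, value))

alpha = -math.inf           # alpha has drifted to -infinity during training
clamp(alpha, -10.0, 10.0)   # result discarded: alpha is still -inf
alpha = clamp(alpha, -10.0, 10.0)  # reassignment keeps alpha finite
```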
Could you check whether the current code can train the readout network?
Thanks!