Hey,
I'm trying out concrete dropout with bigger nets (namely DenseNet121 and ResNet18), and to that end I ported the Keras implementation of spatial concrete dropout to PyTorch.
It works for DenseNet121 (the model converges) but strangely not for ResNet18, so I was wondering whether the initialization I used was wrong.
For both weight_regularizer and dropout_regularizer I used the initialization given in the MNIST example of the spatial concrete dropout Keras implementation (both are divided by the length of the training dataset). Looking at the paper, however, you seem to have used 0.01 x N x H x W for the dropout regularizer with bigger models, but that multiplication would yield a much larger factor than the 2. / N specified in the example.
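For concreteness, here is a minimal sketch of the initialization I mean, following the Keras MNIST example. The values of `l` (prior length scale) and `N` (training set size) below are illustrative assumptions, not the paper's settings:

```python
# Regularizer initialization as in the Keras MNIST example of
# (spatial) concrete dropout. Both scale inversely with N.
l = 1e-4   # prior length scale -- hypothetical choice, problem-dependent
N = 50000  # number of training examples -- e.g. a CIFAR-sized dataset

weight_regularizer = l ** 2 / N    # multiplies the L2 term on the weights
dropout_regularizer = 2.0 / N      # multiplies the dropout entropy term

print(weight_regularizer, dropout_regularizer)
```

By contrast, the paper's 0.01 x N x H x W grows with the dataset and feature-map size, which is what confuses me.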
Which initialization is correct?
I would greatly appreciate if you could clear up my confusion!
Cheers!
Hi!
I agree, and I am confused for the same reasons. I read the paper and did not understand how the weight regularizer and dropout regularizer are initialized. Could you please tell us what the prior length scale means, and which value to assign to this variable?