VGG classifier setting different from Original paper #92
Comments
Thanks for catching this. I'll fix this and retrain the VGG models.
…On Fri, Mar 10, 2017 at 1:44 AM, Licheng Yu wrote: [quotes the issue text below]
One of the dropout layers was "wrongly" inserted. The final classifier layers in the original Caffe version (https://gist.github.com/ksimonyan/211839e770f7b538e2d8) are:
self.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 1000),
)
This won't make a difference when we use model.eval(), but it will cause a discrepancy if we want to finetune VGGNet by loading Caffe's parameters.
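To see why the placement only matters in training mode, here is a toy, framework-free sketch (plain Python; the 2-unit weights, input, and fixed dropout mask are all hypothetical values chosen for illustration): dropout is the identity in eval mode, so both orderings agree there, but in train mode a misplaced dropout zeroes different activations.

```python
def dropout(x, mask, p=0.5, training=True):
    # In eval mode dropout is the identity -- this is why the two
    # orderings agree under model.eval().
    if not training:
        return list(x)
    # In train mode, dropped units are zeroed and kept units are
    # rescaled by 1/(1-p) (inverted dropout).
    return [v / (1 - p) if keep else 0.0 for v, keep in zip(x, mask)]

def relu(x):
    return [max(0.0, v) for v in x]

def linear(x, w):
    # Toy fully connected layer: w is a list of weight rows, no bias.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

w = [[1.0, -1.0], [0.5, 0.5]]   # hypothetical weights
x = [2.0, 3.0]                  # hypothetical input
mask = [False, True]            # fixed "random" dropout mask

# Caffe ordering:     Linear -> ReLU -> Dropout
# Misplaced ordering: Dropout -> Linear -> ReLU
caffe_eval = dropout(relu(linear(x, w)), mask, training=False)
wrong_eval = relu(linear(dropout(x, mask, training=False), w))
assert caffe_eval == wrong_eval          # identical in eval mode

caffe_train = dropout(relu(linear(x, w)), mask)   # -> [0.0, 5.0]
wrong_train = relu(linear(dropout(x, mask), w))   # -> [0.0, 3.0]
assert caffe_train != wrong_train        # differ in train mode
```

This is also why finetuning from Caffe-initialized weights is affected: the misplaced dropout drops different activations during training, even though the inference-time output is unchanged.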