Added bvlc_googlenet prototxt and weights #1598
Conversation
@shelhamer @longjon @jeffdonahue do you know why Travis keeps failing because it cannot download CUDA?
The publication link points to AlexNet research. Should it be http://arxiv.org/abs/1409.4842?
@emasa thanks for catching the wrong link
@sguada please push the GoogLeNet paper link and merge. Thanks. (The Travis failure is just an intermittent bandwidth issue that doesn't matter. Feel free to ignore it.)
Force-pushed from 6270223 to f15210c
Added bvlc_googlenet prototxt and weights
@sguada I ran your implementation on the newest caffe-dev and an error occurred. Using the AlexNet prototxt in the caffe model directory also causes this bug. How can it be fixed? layers {
@AnshanTJU, to double check I recompiled and tried again and got no errors. So try
@sguada @shelhamer The latest caffe-dev code cannot pass the "make runtest" tests; the log information is attached below. The master branch code passes all the tests, but it doesn't support the "poly" learning rate policy. [ RUN ] NetTest/2.TestBottomNeedBackward
Great, thanks!
@sguada I trained GoogLeNet with quick_solver.prototxt. After 730000 iterations, the top-1 accuracy is just 25.57, 33.59, 39.39. Is there a problem? What were your results during training?
@yulingzhou it seems a bit low but it's OK; mine was around 41 top-1 accuracy. With quick_solver.prototxt it gains most of its accuracy near the end. If you want to get a reasonably good model faster, say 60 top-1, you can lower
@sguada @shelhamer The Dec. 20 version of caffe-dev had an unknown bug which caused the registry count issue. I downloaded the latest caffe-dev an hour ago and it runs well. Thanks to @sguada for the great work.
@sguada It's now at 1840000 iterations, and the accuracy is just 35.73, 45.3, 51.64, which is much lower than yours. The current lr is 0.0048. I trained GoogLeNet using the code from the bvlc_googlenet branch. What could the problem be?
@yulingzhou I think it is going OK. As I said, until you get close to max_iter the accuracy should grow slowly but steadily. Near the end you should expect a rapid increase in accuracy.
@sguada Was this trained by first resizing/warping all training images to 256x256 (and then taking random 224x224 crops)? The dataset preparation details aren't mentioned above or on the ModelZoo page. Also, thanks for releasing the model! |
@seanbell Yes, I used the same pre-processed data as for the caffe_reference model. Using more elaborate data pre-processing, such as different scales and different aspect ratios, should lead to better results.
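For anyone replicating that preparation, a minimal sketch of the pipeline described above (warp to 256x256, then a random 224x224 crop). The nearest-neighbor resize and the helper names are illustrative assumptions, not the actual Caffe tooling, which uses interpolated resizing:

```python
import numpy as np

def warp_to_256(img):
    """Warp (ignoring aspect ratio) to 256x256 via nearest-neighbor sampling."""
    h, w = img.shape[:2]
    rows = np.arange(256) * h // 256
    cols = np.arange(256) * w // 256
    return img[rows[:, None], cols[None, :]]

def random_crop_224(img, rng):
    """Take a random 224x224 crop from a 256x256 image."""
    y = rng.integers(0, 256 - 224 + 1)
    x = rng.integers(0, 256 - 224 + 1)
    return img[y:y + 224, x:x + 224]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
crop = random_crop_224(warp_to_256(img), rng)
print(crop.shape)  # (224, 224, 3)
```

During training a random mirror is usually applied as well; at test time the center crop replaces the random one.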
Thanks for this! top: "loss1/loss1" top: "loss2/loss1" top: "loss3/loss3" Typo?
Thank you very much for your sharing! |
You should use the latest version of caffe-dev. @RobotiAi |
@AnshanTJU Thank you so much! |
@sguada Iteration 140000, Testing net (#0)
I finished training GoogLeNet exactly like @sguada. I0329 02:24:37.016726 21069 solver.cpp:248] Iteration 2400000, loss = 9.71667 And the accuracy vs. iterations graph: To further illustrate the weirdness, when I used @sguada's provided .caffemodel file for feature extraction with the C++ tool, everything went fine. Specifically, the outputs of the components below start producing identical features per image. The outputs of these layers are passed on to the next inception module, and along to the loss2 and loss3 classifiers. I am attaching an image of the GoogLeNet structure to show the inception module and the layers where this occurs (2.6 MB image): Any idea why this is happening? I'm guessing I should have stopped at the first occurrence of this behaviour. Thanks in advance.
@npit I'm not sure what went wrong with your training, but the loss2 and loss3 values definitely indicate that the upper layers are not learning anything. A loss around 6.9 means the network is guessing randomly (ln(1000) ≈ 6.9 for 1000 classes). Probably it got a bad initialization and couldn't recover.
I will, thanks for your response. |
@drdan14 Thanks! Is there by any chance a bash version of the parser?
@npit yes, it's sitting right next to the python version: https://github.com/BVLC/caffe/blob/master/tools/extra/parse_log.sh Please move this sort of question to the Google Group: https://groups.google.com/forum/#!forum/caffe-users
@drdan14 I meant of your updated log parser, not the standard one. |
Nope, WYSIWYG. But you can run the python version from the command line (type
Alright, thanks.
@sguada Hi Sguada, why is there a std field in the "xavier" filler? Isn't the magnitude determined by the number of fan-in and fan-out units? Thank you.
It is not used by the "xavier" filler, but left there just in case someone wants to use it. Sergio
@sguada Got it. Thank you. |
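For reference, a minimal sketch of the fan-in "xavier" scheme discussed above, where the bound is derived from the fan-in so a std field would simply be ignored. The fan-in-only variant and the helper name are assumptions for illustration, not Caffe's exact filler code:

```python
import numpy as np

def xavier_fill(shape, rng):
    """Uniform 'xavier' init: bound derived from fan-in, not from a std field."""
    fan_in = int(np.prod(shape[1:]))  # inputs feeding each output unit
    bound = np.sqrt(3.0 / fan_in)
    return rng.uniform(-bound, bound, size=shape)

# e.g. a 7x7 conv over 3 channels with 64 outputs: fan_in = 3 * 7 * 7 = 147
w = xavier_fill((64, 3, 7, 7), np.random.default_rng(0))
print(np.abs(w).max() <= np.sqrt(3.0 / 147))  # True
```

A uniform distribution on [-b, b] has variance b^2/3, so this choice of bound gives weight variance 1/fan_in.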
@sguada May I ask how you chose the poly learning rate policy and the 0.5 power parameter?
I tried different options and that one seemed to be more consistent and Sergio
Thanks for the swift reply. |
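For later readers, Caffe's "poly" policy decays the learning rate as base_lr * (1 - iter/max_iter)^power; with power = 0.5 the rate stays relatively high for most of training and drops sharply near max_iter, which matches the "rapid increase near the end" behaviour described above. A minimal sketch, assuming base_lr = 0.01 (consistent with the lr = 0.0048 reported in this thread at iteration 1840000) and max_iter = 2400000:

```python
def poly_lr(base_lr, iteration, max_iter, power=0.5):
    """Caffe 'poly' learning rate policy."""
    return base_lr * (1.0 - float(iteration) / max_iter) ** power

print(poly_lr(0.01, 0, 2400000))                   # 0.01
print(round(poly_lr(0.01, 1800000, 2400000), 5))   # 0.005 (75% through training)
```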
This PR adds GoogLeNet to the set of models provided by BVLC; it includes the prototxts needed for training and deployment.
This model is a replication of the model described in the GoogLeNet publication. We would like to thank Christian Szegedy for all his help in the replication of the GoogLeNet model.
Differences:
The bundled model is the iteration 2,400,000 snapshot (60 epochs) using quick_solver.prototxt
The bundled model obtains a top-1 accuracy of 68.7% (31.3% error) and a top-5 accuracy of 88.9% (11.1% error) on the validation set, using just the center crop.
(Using the average of 10 crops, (4 corners + 1 center) * 2 mirrors, should obtain slightly higher accuracy.)
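That 10-crop scheme can be sketched as follows (the helper name and crop layout are illustrative; class predictions over the 10 views are then averaged):

```python
import numpy as np

def ten_crops(img, size=224):
    """4 corner crops + 1 center crop of a 256x256 image, each plus its mirror."""
    h, w = img.shape[:2]
    offsets = [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size),
               ((h - size) // 2, (w - size) // 2)]
    crops = [img[y:y + size, x:x + size] for y, x in offsets]
    crops += [c[:, ::-1] for c in crops]  # horizontal mirrors
    return crops

crops = ten_crops(np.zeros((256, 256, 3), dtype=np.uint8))
print(len(crops))  # 10
```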
Timings for bvlc_googlenet with cuDNN using batch_size:128 on a K40c:
P.S.: For timings, look at #1317.