Optim-wip: Add the pre-trained InceptionV1 Places365 model #655
Conversation
@NarineK This PR is ready to be reviewed! The failed tests are caused by the update to …
Thank you, @ProGamerGov! I'll take a look at it next week - this week is busy.
* The change in serialization format in torch 1.6 is backwards compatible, but not forward compatible, and thus I'm raising the minimum torch version for tests involving the model.
* Related issue where the new serialization format in 1.6 caused the error: pytorch/pytorch#42239
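For anyone hitting this with their own checkpoints, here is a minimal sketch of keeping a saved file loadable by pre-1.6 releases using torch's standard `_use_new_zipfile_serialization` flag; the module and file name below are placeholders, not part of this PR:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder module standing in for the Places365 model

# torch >= 1.6 saves in a new zipfile-based format by default, which torch <= 1.5
# cannot read. Passing _use_new_zipfile_serialization=False writes the legacy
# format instead, keeping the checkpoint loadable on older releases.
torch.save(
    model.state_dict(),
    "places365_legacy.pth",
    _use_new_zipfile_serialization=False,
)

# Loading is unchanged either way.
model.load_state_dict(torch.load("places365_legacy.pth"))
```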
All the tests should pass once #656 is merged!
Sometimes input tensors can have values higher than 1 or lower than 0, for example when using some of the features from Captum's attr module. Rather than disabling these checks, I've changed them into UserWarnings instead.
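A minimal sketch of what that kind of change looks like; the function name and range check below are illustrative, not the exact Captum code:

```python
import warnings

import torch


def _check_input_range(x: torch.Tensor) -> torch.Tensor:
    # Instead of rejecting values outside [0, 1] outright, emit a UserWarning,
    # since attribution outputs can legitimately fall outside that range.
    if x.min() < 0 or x.max() > 1:
        warnings.warn(
            "Input values are expected to be in the range [0, 1]; "
            "got min={:.4f}, max={:.4f}".format(x.min().item(), x.max().item()),
            UserWarning,
        )
    return x
```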
As of PyTorch 1.10, `nn.Conv2d` supports `padding="same"` directly. We should keep the existing Conv2dSame layer for legacy support, but going forward we can probably use the built-in option.
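For illustration, the built-in option looks roughly like this (note that the string padding mode only works for stride-1 convolutions, which is one reason a custom SAME-padding layer is still useful for strided convs):

```python
import torch
import torch.nn as nn

# Built-in alternative to a custom Conv2dSame layer on recent PyTorch versions.
# padding="same" is only allowed when stride is 1, so a custom SAME-padding
# layer is still needed wherever the model uses strided convolutions.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=1, padding="same")

x = torch.randn(1, 3, 224, 224)
print(conv(x).shape)  # torch.Size([1, 64, 224, 224]) -- spatial size preserved
```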
(Branch force-pushed from 010bd7f to d35f047.)
@ProGamerGov, it looks like there is a merge conflict related to some files. Do you mind checking that? Do we still want to merge this PR?
@NarineK I'll remake this PR to resolve the conflicts.
@ProGamerGov, if we remade this PR here (#935), can we then close this one?
This PR adds the InceptionV1 / GoogLeNet Places365 model from MIT!
Used by Lucid here: https://github.com/tensorflow/lucid/blob/master/lucid/modelzoo/caffe_models/InceptionV1.py#L89
Originally from: https://github.com/CSAILVision/places365
Associated research paper: https://arxiv.org/abs/1610.02055
Website for the Places image datasets: http://places2.csail.mit.edu/
Pictured below: An Activation Atlas for the Mixed5b ReLU layer
The InceptionV1 Places365 model is the second most popular model from Lucid's model zoo, and appears to have some unique neuron types like 'perspective neurons'. Examples of perspective detectors can be found on OpenAI's Microscope here, and in the model at the `model.mixed4a.conv_5x5` layer.

The changes in this PR are as follows:
* The mean values for the `transform_input` function were calculated by me using the version of the Places365 Standard dataset where every image was resized to `256x256`. I then reduced the size down to `224x224` via a transform. This resized version of the dataset had a smaller file size than the normal version, but I think the mean values should be the same. (A rough sketch of this calculation is included right after this list.)
* Added `__constants__` to both models in anticipation of upcoming JIT support for `typing.Union` (Enable Union typing: pytorch/pytorch#53180). The PyTorch docs suggest using `typing.Final` instead of `__constants__`, but earlier versions of Python don't have `typing.Final`.
* Removed the `padding` option from `Conv2dSame`, as `same` does not involve setting a padding value.
* Changed `super()` in the model class files to `super().__init__()`.
* Replaced the value checks in `_transform_input` with user warnings.
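Roughly how the per-channel mean values mentioned in the first item above can be computed; this is a hedged sketch rather than the exact script, and the dataset path is illustrative:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed local copy of the resized 256x256 Places365 Standard images (path is illustrative).
dataset = datasets.ImageFolder(
    "places365_standard_256/train",
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = DataLoader(dataset, batch_size=256, num_workers=4)

# Accumulate per-channel sums over every pixel to get the dataset-wide mean.
channel_sum = torch.zeros(3)
pixel_count = 0
for images, _ in loader:
    channel_sum += images.sum(dim=[0, 2, 3])
    pixel_count += images.numel() // 3

mean = channel_sum / pixel_count
print(mean)  # per-channel means in [0, 1]; multiply by 255 for the 0-255 scale
```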
I also generated a KLT matrix for the Places365 Standard dataset, and it can be used as an input to the `ToRGB` transform in `NaturalImage` (see the sketch below). There's not really a large difference compared to just using the default ImageNet KLT matrix, though I found that I had to increase the learning rate to `0.06` to get comparable results.
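A self-contained sketch of what applying a custom KLT matrix amounts to; the file name, normalization step, and einsum mapping are illustrative assumptions rather than the exact `ToRGB` implementation from the optim-wip branch:

```python
import torch

# Hypothetical file holding the 3x3 KLT (color correlation) matrix for Places365.
places365_klt = torch.load("places365_klt.pth")  # assumed shape: (3, 3)


def apply_klt(x: torch.Tensor, klt: torch.Tensor) -> torch.Tensor:
    """Map a decorrelated-color image batch (N, 3, H, W) to RGB using a KLT matrix,
    which is conceptually what the ToRGB transform does with this input."""
    # Scale so the largest column norm is 1 (a common normalization in
    # feature-visualization code; an assumption here, not taken from the PR).
    klt = klt / klt.norm(dim=0).max()
    return torch.einsum("cd,ndhw->nchw", klt, x)


decorrelated = torch.randn(1, 3, 224, 224)
rgb = apply_klt(decorrelated, places365_klt)
```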