Optim-wip: Add the pre-trained InceptionV1 Places365 model #655

Closed

Conversation

ProGamerGov
Contributor

@ProGamerGov ProGamerGov commented Apr 26, 2021

This PR adds the InceptionV1 / GoogLeNet Places365 model from MIT!

inceptionv1_places365_mixed5b_relu_atlas

atlas_21_attempts_g20x20_mixed5b

The InceptionV1 Places365 model is the second most popular model from Lucid's model zoo, and appears to have some unique neuron types like 'perspective neurons'. Examples of perspective detectors can be found on OpenAI's Microscope, and in the model at layer model.mixed4a.conv_5x5.

The changes in this PR are as follows:

  • Added the MIT InceptionV1 model trained on the Places365 Standard dataset. The mean normalization values in the model's transform_input function were calculated by me using the version of the Places365 Standard dataset where every image was resized to 256x256. I then reduced the size down to 224x224 via a transform. This resized version of the dataset had a smaller file size than the normal version, but I think the mean values should be the same.
  • Added the list of class names for the InceptionV1 Places365 model.
  • Added __constants__ to both models in anticipation of upcoming JIT support for typing.Union (Enable Union typing: pytorch#53180). The PyTorch docs suggest using typing.Final instead of __constants__, but earlier versions of Python don't have typing.Final.
  • Auxiliary branch layer names were renamed for both models, based on Torchvision's GoogLeNet model.
  • Removed the padding option from Conv2dSame, as 'same' padding does not involve setting a fixed padding value.
  • Fixed the download link for the InceptionV1 model.
  • Corrected all instances of super() in the model class files to super().__init__().
  • Improved existing model tests, added new CUDA tests, and added tests for the Places365 model.
  • Replaced input range assertions in _transform_input with user warnings.
  • Added JIT support for both InceptionV1 models.
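The per-channel mean calculation mentioned above can be sketched as follows. This is a minimal sketch, not the script actually used for the PR: the `images` iterable, its value range, and the resizing pipeline are all assumptions.

```python
import torch

def channel_means(images):
    # images: iterable of (3, H, W) tensors with values in [0, 1],
    # e.g. the resized 224x224 Places365 images.
    # Accumulate per-channel sums and pixel counts to get the dataset mean.
    total = torch.zeros(3)
    count = 0
    for img in images:
        total += img.sum(dim=(1, 2))
        count += img.shape[1] * img.shape[2]
    return total / count

# Toy check with two solid-color "images"
imgs = [torch.full((3, 4, 4), 0.25), torch.full((3, 4, 4), 0.75)]
print(channel_means(imgs))  # tensor([0.5000, 0.5000, 0.5000])
```

The resulting per-channel means would then be baked into the model's transform_input function as normalization constants.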

I also generated a KLT matrix for the Places365 Standard dataset, and it can be used as an input to the ToRGB transform in NaturalImage:

# RGB format, use torch.flip([0, 1]) to convert to BGR
klt_mtx = torch.tensor(
    [[0.0175, -0.0713, -0.2234], [-0.0340, -0.0019, -0.2299], [0.0168, 0.0702, -0.2332]]
)

There's not really a large difference compared to just using the default ImageNet KLT matrix, though I found that I had to increase the learning rate to 0.06 to get comparable results.
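For context, a KLT matrix like the one above is typically derived from the channel covariance of RGB pixel values sampled from the dataset. The sketch below shows one common derivation (SVD of the covariance, scaled by the square root of the singular values); the exact sampling and normalization used for the PR's matrix are assumptions, not confirmed by the source.

```python
import numpy as np

def klt_matrix(pixels):
    # pixels: (N, 3) array of RGB values sampled from the dataset
    cov = np.cov(pixels, rowvar=False)   # 3x3 channel covariance
    U, S, _ = np.linalg.svd(cov)         # eigendecomposition of the covariance
    klt = U @ np.diag(np.sqrt(S))        # color "correlation" matrix
    return klt / np.abs(klt).max()       # one possible normalization

rng = np.random.default_rng(0)
pixels = rng.random((10000, 3))          # stand-in for sampled dataset pixels
mtx = klt_matrix(pixels)
print(mtx.shape)  # (3, 3)
```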

@ProGamerGov
Contributor Author

ProGamerGov commented Apr 26, 2021

@NarineK This PR is ready to be reviewed! The failed tests are caused by the update to black which is unrelated to the code introduced in this PR.

@NarineK
Contributor

NarineK commented Apr 29, 2021

> @NarineK This PR is ready to be reviewed! The failed tests are caused by the update to black which is unrelated to the code introduced in this PR.

Thank you, @ProGamerGov! I'll take a look into it next week - This week is busy.

  • The change in serialization format in torch 1.6 is backwards compatible, but not forwards compatible, so I'm raising the minimum torch version for tests involving the model.

  • Related issue where the new serialization format in 1.6 caused the error: pytorch/pytorch#42239
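The forward-incompatibility mentioned above stems from torch 1.6 switching to a zip-based serialization format. A minimal sketch of the workaround on the saving side (the `_use_new_zipfile_serialization` flag is a real torch.save parameter; the file path here is illustrative):

```python
import os
import tempfile
import torch

# Saving with the legacy (pre-1.6) format so older torch versions can load it.
state = {"w": torch.ones(2, 2)}
path = os.path.join(tempfile.mkdtemp(), "model_legacy.pt")
torch.save(state, path, _use_new_zipfile_serialization=False)

# Round-trip check: the tensor survives the legacy format unchanged.
loaded = torch.load(path)
print(torch.equal(loaded["w"], state["w"]))  # True
```

Since the published checkpoint uses the new format, the other option (taken in this PR) is to raise the minimum torch version for the affected tests instead.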
@ProGamerGov
Contributor Author

All the tests should pass once #656 is merged!

Sometimes input tensors can have values higher than 1 or lower than 0, for example when using some of the features from Captum's attr module. Rather than disabling these checks, I've changed them into UserWarnings instead.
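The assertion-to-warning change described above can be sketched like this. The function name and message are illustrative, not the actual code from the PR:

```python
import warnings

def check_input_range(x_min, x_max):
    # Warn instead of asserting, so out-of-range inputs (e.g. from
    # attribution methods) still pass through the model untouched.
    if x_min < 0.0 or x_max > 1.0:
        warnings.warn(
            "Input values are expected to be in the range [0, 1], got "
            f"[{x_min}, {x_max}].",
            UserWarning,
        )

# An out-of-range input now emits a warning rather than raising.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_input_range(-0.2, 1.5)
print(len(caught))  # 1
```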
@ProGamerGov
Contributor Author

As of PyTorch 1.10, the nn.Conv2d layer supports 'same' padding. Every Inception / GoogLeNet Conv2dSame layer can be replaced except for the first one, as the new same padding doesn't support strides larger than 1 yet.

We should keep the existing Conv2dSame layer for legacy support, but going forward we can probably use nn.Conv2d where possible.
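The stride limitation described above can be demonstrated directly with nn.Conv2d (the channel and kernel sizes below are illustrative, loosely modeled on GoogLeNet-style layers):

```python
import torch
import torch.nn as nn

# padding="same" works for stride-1 convolutions (available since PyTorch 1.10):
conv = nn.Conv2d(192, 64, kernel_size=3, stride=1, padding="same")
x = torch.randn(1, 192, 28, 28)
print(conv(x).shape)  # torch.Size([1, 64, 28, 28]) -- spatial size preserved

# ...but raises for strides larger than 1, which is why the first
# stride-2 layer would still need the custom Conv2dSame:
try:
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding="same")
    strided_same_ok = True
except ValueError:
    strided_same_ok = False
print(strided_same_ok)  # False
```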

@NarineK
Contributor

NarineK commented May 8, 2022

@ProGamerGov, it looks like there is a merge conflict related to some files. Do you mind checking that? Do we still want to merge this PR?

@ProGamerGov
Contributor Author

ProGamerGov commented May 8, 2022

@NarineK I'll remake this PR to resolve the conflicts

@ProGamerGov
Contributor Author

@NarineK I remade this PR here: #935

@NarineK
Contributor

NarineK commented May 8, 2022

@ProGamerGov, if we remade this PR as #935, can we then close this one?

@ProGamerGov ProGamerGov closed this May 8, 2022