
Features map #2613

Closed
GiuliaCiaramella opened this issue Mar 26, 2021 · 7 comments · Fixed by #3804
Labels
question Further information is requested

Comments

@GiuliaCiaramella

❔Question

Is there a feature map in YOLOv5 whose output is independent of the image size?

Additional context

Hello :)
I am interested in extracting the feature map from the last layer in the head of the network, in particular one of the three that feed the Detect() layer. I want this feature map to produce a feature vector representing the input image. The problem I am facing is that this vector depends on the input image size, and since my goal is to compare images starting from their feature vectors, I can't accomplish this if the two images have different sizes.

For now, in order to make things work, I am using a VGG16 network to process the images and take the feature vector from the last layer before classification, which always has a length of 4096, since that depends on the network architecture and not on the input image size.
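As a rough illustration of this workaround (a hypothetical sketch, assuming torchvision's pretrained VGG16; thanks to its adaptive average pooling the output length stays 4096 regardless of input resolution):

import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True).eval()
# Keep everything up to (but not including) the final 1000-class layer,
# so the output is the 4096-dim vector from the last fully connected block.
feature_extractor = torch.nn.Sequential(
    vgg.features, vgg.avgpool, torch.nn.Flatten(), *vgg.classifier[:-1]
)
with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed image tensor
    vec = feature_extractor(img)       # shape (1, 4096)
print(vec.shape)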

However, since I am already using YOLOv5 for detection, it would be much more efficient to extract the feature vector from YOLOv5 instead of reprocessing the image in another NN.

So my question is: is there a feature map in YOLOv5 whose output is independent of the image size?
I hope that this is clear.

GiuliaCiaramella added the question (Further information is requested) label on Mar 26, 2021
@github-actions
Contributor

github-actions bot commented Mar 26, 2021

👋 Hello @GiuliaCiaramella, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@zonguu

zonguu commented Mar 26, 2021

As far as I understand, the true length of each prediction vector is (number of classes + 5). The input size influences the number of vectors, for example 20^2 + 40^2 + 80^2. You can compare them using the given positions, layers and anchors.
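For concreteness, a worked example of those numbers (an assumed 640x640 input with YOLOv5's default strides 8, 16, 32 and 3 anchors per grid cell):

# Each prediction vector has length (number of classes + 5): x, y, w, h, objectness, class scores
nc = 4                                          # e.g. 4 classes, as in this issue
vec_len = nc + 5                                # -> 9
# The number of such vectors depends on the input size through the grid sizes
grids = [(640 // s) ** 2 for s in (8, 16, 32)]  # 80^2, 40^2, 20^2 cells per output level
n_preds = 3 * sum(grids)                        # 3 anchors per cell -> 25200
print(vec_len, n_preds)                         # 9 25200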

@GiuliaCiaramella
Author

@zonguu hey, thanks for the answer.
I don't think I completely understand it. What are those numbers you reported? How should I interpret them?

What I am sure of by now is that the smallest feature map before the Detect layer has size [1, 3, 8, 13, 9] for an image size of 416×256. The 1 and 3 are always there (I think they represent 1 = batch and 3 = something like color channels, but I'm not sure), and 9 = 4 classes in my model + 5.
But the lengths in the third and fourth positions (the 8 and 13) change with the image size.
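For reference, here is how that reported shape can be derived, assuming it is the stride-32 Detect() input reshaped to (batch, anchors, grid_h, grid_w, classes + 5); under that reading the 3 is the number of anchors per level rather than color channels:

img_w, img_h, stride = 416, 256, 32
grid_w, grid_h = img_w // stride, img_h // stride   # 13, 8 -- these change with image size
n_anchors, n_classes = 3, 4
shape = (1, n_anchors, grid_h, grid_w, n_classes + 5)
print(shape)                                        # (1, 3, 8, 13, 9)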

@glenn-jocher
Member

glenn-jocher commented Mar 26, 2021

@GiuliaCiaramella you can always use a nn.AdaptiveAvgPool2d() layer to average the spatial dimensions away, no matter what size they are. Most classification models like VGG do this, which is why their output vector is unaffected by image size. We have a Classify() module that does this as well here, which you can use as an example of how to get started (i.e., if you want, you can also apply it to each of the 3 Detect() layer inputs to get equal-size feature vectors).

yolov5/models/common.py

Lines 305 to 315 in 005d7a8

class Classify(nn.Module):
    # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Classify, self).__init__()
        self.aap = nn.AdaptiveAvgPool2d(1)  # to x(b,c1,1,1)
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g)  # to x(b,c2,1,1)
        self.flat = nn.Flatten()

    def forward(self, x):
        z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1)  # cat if list
        return self.flat(self.conv(z))  # flatten to x(b,c2)
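As a rough sketch of this suggestion (the channel counts 128/256/512 and the tensors p3, p4, p5 below are placeholders standing in for the three Detect() inputs of a YOLOv5s model at a 416x256 input):

import torch
import torch.nn as nn

aap = nn.AdaptiveAvgPool2d(1)      # (b, c, h, w) -> (b, c, 1, 1), for any h and w
flat = nn.Flatten()                # (b, c, 1, 1) -> (b, c)

def feature_vector(feature_maps):
    # Pool each Detect() input and concatenate into one fixed-length vector per image
    return torch.cat([flat(aap(m)) for m in feature_maps], dim=1)

p3 = torch.randn(1, 128, 32, 52)   # stride-8 feature map
p4 = torch.randn(1, 256, 16, 26)   # stride-16 feature map
p5 = torch.randn(1, 512, 8, 13)    # stride-32 feature map
v = feature_vector([p3, p4, p5])   # shape (1, 896), independent of image size
print(v.shape)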

@GiuliaCiaramella
Author

@glenn-jocher Amazing! Thank you, I will focus on it :)

@kimngoc99

@GiuliaCiaramella
Hello, you have solved the above problem. Could you help me with how to convert a feature map into a feature vector? I can't manage to do that.

glenn-jocher linked a pull request on Jun 28, 2021 that will close this issue
@glenn-jocher
Member

glenn-jocher commented Jun 28, 2021

@GiuliaCiaramella good news 😃! Feature map visualization was added ✅ in PR #3804 by @Zigars today. This allows for visualizing feature maps from any part of the model from any function (i.e. detect.py, train.py, test.py). Feature maps are saved as *.png files in runs/features/exp directory. To turn on feature visualization set feature_vis=True in the model forward method and define the layer you want to visualize (default is SPP layer).

yolov5/models/yolo.py

Lines 158 to 160 in 20d45aa

if feature_vis and m.type == 'models.common.SPP':
    feature_visualization(x, m.type, m.i)
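A standalone sketch of the same idea for older commits (this is not the repo's feature_visualization() API, just a hypothetical helper that saves the first channels of a (1, c, h, w) feature map as a grayscale grid):

import math
import matplotlib.pyplot as plt
import torch

def save_feature_grid(x, save_path='feature_map.png', max_channels=32):
    # x: feature map tensor of shape (1, c, h, w)
    x = x[0].detach().cpu().numpy()
    n = min(max_channels, x.shape[0])
    cols = 8
    rows = math.ceil(n / cols)
    fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
    for i, ax in enumerate(axes.flat):
        ax.axis('off')
        if i < n:
            ax.imshow(x[i], cmap='gray')
    fig.savefig(save_path, dpi=200)
    plt.close(fig)

save_feature_grid(torch.randn(1, 256, 20, 20))  # dummy SPP-sized feature map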

To receive this update:

  • Git – git pull from within your yolov5/ directory or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – Force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – View updated notebooks Open In Colab Open In Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

(Attached image: layer_8_SPP_features, an example feature map visualization of the SPP layer.)
