How to extract features responsible for particular object? #385
Hello @bingiflash, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients. For more information please visit https://www.ultralytics.com.
@bingiflash you could try a Grad-CAM style approach: https://keras.io/examples/vision/grad_cam/ Might be an interesting feature to incorporate here. Feel free to submit a PR if your work proves fruitful!
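For reference, the Grad-CAM recipe linked above boils down to: weight each channel of a target layer's activations by the mean gradient of the class score with respect to that channel, then take a ReLU of the weighted sum. A minimal PyTorch sketch on a tiny stand-in classifier (the network, layer index, and class index are illustrative placeholders, not YOLOv5's actual architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in classifier; in practice you would hook a real backbone stage
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),      # target layer to explain
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5),
)

acts, grads = {}, {}
target = model[2]
target.register_forward_hook(lambda m, i, o: acts.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)
score = model(x)[0, 3]          # score of an arbitrary class (index 3)
score.backward()

weights = grads['g'].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
cam = F.relu((weights * acts['a']).sum(dim=1))        # class activation map
print(cam.shape)  # one heat-map per image in the batch
```

Upsampling `cam` to the input resolution then gives the familiar heat-map overlay.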
@glenn-jocher Thanks for the fast response. I am hoping to attach a mask head to do instance segmentation, but for that I need an access point to a feature map in the network, something like this
@bingiflash when you load a model it shows you exactly which stages are being passed to Detect():
@glenn-jocher Thank you. I'll try that.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@bingiflash Sorry to revive this thread, but I'm also interested in this feature. Did you have any success incorporating Grad-CAM in YOLOv5?
@AndreaBrg Sorry, but Grad-CAM wasn't exactly the reason I wanted to extract features, so I didn't work on it.
@bingiflash Did you have any success with integrating a mask head for instance segmentation? If yes, could you share some insights? Thanks. Related: #1123
Hi, @bingiflash did you manage to integrate the Mask R-CNN head? Please give us some insights if it worked or not.
Same question as above, any advances on this matter?
Same question as above, any advances on this matter? @glenn-jocher How to get the features corresponding to objects?
@Edwardmark 'features' corresponding to objects? Every weight and bias in the entire model is responsible for every output to varying degrees; this is the nature of AI and the reason for its performance, so I don't understand your question.
@glenn-jocher I mean, for each object, can we get the RoI of the object as in Faster R-CNN, before feeding the features to the detection head to get the bbox and classification scores? For example, we get a car with bbox [10, 20, 100, 120] which corresponds to the 1170th anchor among all anchors; each anchor can then be mapped to a certain RoI of the feature maps.
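The anchor-to-feature-map correspondence described here can be computed directly. A sketch assuming a 640x640 input, 3 anchors per cell, and strides 8/16/32 (the standard YOLOv5 head layout; the flattening order below is an assumption based on the (na, ny, nx) prediction shape, so verify it against your model version):

```python
def anchor_to_cell(flat_idx, img_size=640, strides=(8, 16, 32), na=3):
    """Map a flat anchor index to (level, anchor, row, col) on its grid."""
    offset = 0
    for level, s in enumerate(strides):
        g = img_size // s               # grid is g x g cells at this stride
        count = na * g * g              # anchors produced at this level
        if flat_idx < offset + count:
            local = flat_idx - offset
            a, rem = divmod(local, g * g)
            row, col = divmod(rem, g)
            return level, a, row, col
        offset += count
    raise IndexError("index beyond all anchors")

# The cell (row, col) at a given level indexes the feature-map location
# whose feature vector produced that anchor's prediction.
print(anchor_to_cell(1170))
```

With these assumptions, anchor 1170 falls on the stride-8 grid, so its features live at one spatial position of the P3 feature map.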
@Edwardmark in YOLO, classification and detection occur simultaneously, unlike in older two-stage detectors like Faster R-CNN.
@glenn-jocher I understand that, but before the detection branch (classification and regression), we can get features for each anchor (that is totally feasible). I just want to extract the features corresponding to each anchor (according to their spatial coordinate correspondence), so that I can use them for further work, such as another classification task.
@Edwardmark you can extract any intermediate values from the model by updating the module you're interested in, or simply placing code in the model forward function. See yolo.py for the model forward function.
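Intermediate values can also be grabbed without editing yolo.py at all, via a PyTorch forward hook. A sketch on a tiny stand-in network; the same pattern applies to any YOLOv5 stage once you pick the module with `print(model)` (the layer choice here is purely illustrative):

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),   # stand-in "backbone" stage
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),  # stand-in stage we want to tap
    nn.AdaptiveAvgPool2d(1),
)

features = {}
def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()  # keep a copy of the stage output
    return hook

handle = net[2].register_forward_hook(save_output('neck'))
_ = net(torch.zeros(1, 3, 32, 32))
handle.remove()  # detach the hook once the features are captured

print(features['neck'].shape)
```

The hook fires on every forward pass, so the captured tensor always corresponds to the most recent input.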
Thanks.
@bingiflash @AndreaBrg @Edwardmark good news 😃! Feature map visualization was added ✅ in PR #3804 by @Zigars today. This allows visualizing feature maps from any part of the model, from any function (i.e. detect.py, train.py, test.py). Feature maps are saved as *.png files in the runs/features/exp directory. To turn on feature visualization, see Lines 158 to 160 in 20d45aa.
To receive this update, update your local copy of the repository (e.g. git pull, or a fresh git clone).
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
Hi, this is great news, thanks @Zigars and @glenn-jocher for setting this up.
@glenn-jocher Thanks for your great work!
I searched for
@besbesmany to visualize features:

```
python detect.py --weights yolov5s.pt --source data/images/bus.jpg --visualize
```
Hi @glenn-jocher. For example, if my model detects a person in an image, I would like to segment only the person features, excluding the background pixels. Would this be possible with feature map visualisation? Thanks in advance!
@caraevangeline 👋 Hello! Thanks for asking about feature visualization. YOLOv5 🚀 features can be visualized through all stages of the model, from input to output. To visualize features from a given source, run:

```
python detect.py --weights yolov5s.pt --source data/images/bus.jpg --visualize
```

An example notebook visualizing bus.jpg features with YOLOv5s is shown below. All stages are visualized by default, each with its own PNG showing the first 32 feature maps output from that stage. You can open any PNG for a closer look; for example, the first 32 feature maps of the Focus() layer output are shown in the corresponding PNG. Feature maps may be customized by updating the visualization function (Lines 403 to 427 in bb5ebc2).
Good luck 🍀 and let us know if you have any other questions!
@glenn-jocher Thanks for this, I have looked into this before. I understand I can customise the feature map, but would this give me just the object of interest (heat-map is sufficient), excluding background pixels? Thanks for your prompt reply!
@caraevangeline it's really up to you to handle the feature maps however you want; we simply provide a tool for exposing them.
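One simple way to handle them for the person-only case: scale the detection box down to the feature-map grid and zero everything outside it. The tensor, box coordinates, and stride below are illustrative stand-ins, not real YOLOv5 outputs:

```python
import torch

fmap = torch.randn(1, 16, 80, 80)       # e.g. a stride-8 feature map (640/8)
stride = 8
x1, y1, x2, y2 = 100, 80, 300, 560      # detection box in image pixels

# Build a binary mask over the grid cells the box covers, then apply it
mask = torch.zeros_like(fmap)
mask[..., y1 // stride : y2 // stride, x1 // stride : x2 // stride] = 1
person_only = fmap * mask               # background cells are zeroed out
```

For a tighter cutout than the box, you would need an actual segmentation mask, which YOLOv5 detection models do not produce on their own.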
Thank you @glenn-jocher
Hi @caraevangeline Thanks in advance
Hello everyone, |
Hello, did you find a solution for how to extract the feature vector of the detected object?
@tibbar_upp You can extract the object feature vector using the following steps:
I hope this helps! Let me know if you have any further questions.
```python
import torch
from PIL import Image
from torchvision import transforms

# Load the classification model from PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt')

image_path = '/content/yolov5/data/images/bus.jpg'
image = Image.open(image_path)

# Define the target image size for YOLOv5-small (640x640 pixels)
target_size = (640, 640)

# Preprocess the image (Resize/ToTensor reconstructed from context)
preprocess = transforms.Compose([
    transforms.Resize(target_size),
    transforms.ToTensor(),
])
input_data = preprocess(image).unsqueeze(0)

x = model(input_data)
```

Now I am getting a vector [1, 1000], but I want to extract features before the classifier layer.
@prince0310 the feature vector you obtained from the YOLOv5 model represents the output of the classifier layer, which consists of 1000 dimensions. If you want to extract features before the classifier layer, you can modify your code as follows:

```python
# Remove the classifier layer from the model
model_without_classifier = torch.nn.Sequential(*(list(model.children())[:-1]))

# Pass the input data through the modified model
features = model_without_classifier(input_data)

print(features.shape)  # Shape of the extracted features
```

By removing the last layer of the model, you will obtain the feature tensor before the classifier layer, which will have a different shape depending on the specific architecture of the YOLOv5 model you are using. Let us know if you have any further questions or need additional assistance!
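The `children()[:-1]` pattern above, demonstrated end-to-end on a small stand-in classifier (the real model comes from torch.hub; this mock just keeps the example self-contained):

```python
import torch
import torch.nn as nn

# Stand-in for a hub classification model: backbone + 1000-way classifier
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1000),                      # "classifier" layer to drop
)

backbone = nn.Sequential(*list(model.children())[:-1])  # drop classifier
feats = backbone(torch.zeros(2, 3, 64, 64))
print(feats.shape)  # pre-classifier features, one row per image
```

Note that this slicing only works cleanly when the model's top-level children form a linear pipeline; models with branching forward logic need hooks instead.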
Thanks @glenn-jocher. I have implemented the same for feature extraction by removing the classifier layers. I am attaching the link to the repo where I have described the implementation for community use. Once again, thank you for your reply. You are amazing.
@prince0310 thank you for your kind words and glad to hear that you were able to implement feature extraction by removing the classifier layers. Keeping the community informed by sharing your implementation through a repository is a great initiative! If you have any further questions or need any assistance, feel free to ask. We are always here to help.
Hi @glenn-jocher, why does model_without_classifier.eval() return empty?
@prince0310 When calling eval() on a model, PyTorch simply switches the module (and its children) to evaluation mode; it does not run a forward pass or produce any features. Therefore, calling model_without_classifier.eval() returns the module itself rather than output data; to obtain features, pass an input tensor through the model after calling eval(). Let me know if you have any other questions or if there's anything else I can assist you with!
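A quick demonstration on a tiny module that eval() returns the module itself rather than any output:

```python
import torch.nn as nn

m = nn.Sequential(nn.Linear(4, 2), nn.Dropout(0.5))
ret = m.eval()          # switches Dropout/BatchNorm to inference behavior
print(ret is m)         # True: same module object, nothing "empty" about it
print(m.training)       # False after eval()
```

The return value exists only so calls can be chained, e.g. `model.eval().to(device)`.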
❔Question
I want to see if we can extract the features that are contributing to the prediction of an object. Is there a way to do it? If so at which layer should I be taking these intermediate features?
Additional context