
yolov5x.pt and yolov5x6.pt do not work with yolov5 v6.0.6 #5974

Closed
1 of 2 tasks
NarenZen opened this issue Dec 14, 2021 · 6 comments
Labels
bug (Something isn't working) · Stale (stale and scheduled for closing soon)

Comments

@NarenZen

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Detection

Bug

While loading the model with load_state_dict(model.float().state_dict()), I get the error below.

I'm using yolov5 v6.0.6.

These are the models I tried to load: yolov5x6.pt and yolov5x.pt

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model:
	Unexpected key(s) in state_dict: "model.33.anchor_grid". 
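A common workaround for this class of error (not an official fix; `anchor_grid` handling changed between YOLOv5 releases, so checkpoints can carry keys a freshly built model no longer has) is to either pass `strict=False` to `load_state_dict`, or to drop the unexpected keys before loading. The key-filtering idea can be sketched with plain dicts standing in for the real state_dicts:

```python
def filter_state_dict(checkpoint_sd, model_sd):
    """Keep only checkpoint entries whose keys the target model expects.

    checkpoint_sd / model_sd stand in for the dicts returned by
    `.state_dict()`; plain dicts are used here for illustration.
    """
    return {k: v for k, v in checkpoint_sd.items() if k in model_sd}

# Toy example: the checkpoint carries an extra "model.33.anchor_grid" entry
# that the freshly constructed model does not declare.
checkpoint = {"model.0.conv.weight": 1, "model.33.anchor_grid": 2}
model_keys = {"model.0.conv.weight": 0}
filtered = filter_state_dict(checkpoint, model_keys)
print(sorted(filtered))  # ['model.0.conv.weight']
```

In the real code this would be `hub_model.load_state_dict(filter_state_dict(model.float().state_dict(), hub_model.state_dict()))`, which silently discards the stale key instead of raising.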

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
NarenZen added the bug label Dec 14, 2021
@glenn-jocher
Member

glenn-jocher commented Dec 14, 2021

@NarenZen 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. Your issue is not reproducible. YOLOv5x and YOLOv5x6 both work correctly:

[Screenshot, 2021-12-14: terminal output showing both models loading successfully]

We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible to produce the problem
  • Complete – Provide all parts someone else needs to reproduce the problem
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

For Ultralytics to provide assistance your code should also be:

  • Current – Verify that your code is up-to-date with GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been solved in master.
  • Unmodified – Your problem must be reproducible using official YOLOv5 code without changes. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

@NarenZen
Author

NarenZen commented Dec 16, 2021

@glenn-jocher
Here is the reproducible code. I'm using the latest yolov5 version, 6.0.6.

This is the snippet I'm using to infer using the model yolov5x6.pt.

from yolov5.utils.torch_utils import torch
from yolov5.models.yolo import Model
from yolov5.utils.general import yolov5_in_syspath

model_path = "./yolov5x6.pt"
img = "https://raw.githubusercontent.com/ultralytics/yolov5/master/data/images/bus.jpg"

device = ""
if not device:
    device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the checkpoint and extract the nn.Module
with yolov5_in_syspath():
    model = torch.load(model_path, map_location=torch.device(device))
if isinstance(model, dict):
    model = model["model"]  # checkpoints store the model under the "model" key

# Rebuild a fresh model from the yaml and copy the weights into it
hub_model = Model(model.yaml).to(next(model.parameters()).device)  # create
hub_model.load_state_dict(model.float().state_dict())  # load state_dict <-- fails here
hub_model.names = model.names  # class names
hub_model = hub_model.autoshape()  # was `self.model = ...`, but `self` is undefined in a script

results = hub_model(img)
print(results)

Here is the error message:

hub_model.load_state_dict(model.float().state_dict())  # load state_dict
File "/home/naren/Videos/venv/venv_superhawk/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model:
Unexpected key(s) in state_dict: "model.33.anchor_grid". 

@glenn-jocher
Member

glenn-jocher commented Dec 16, 2021

@NarenZen your workflow is incorrect. Both YOLOv5x and YOLOv5x6 models work correctly:

[Screenshot, 2021-12-16: both models loading successfully via torch.hub]

See YOLOv5 PyTorch Hub tutorial for details:

YOLOv5 Tutorials

Good luck 🍀 and let us know if you have any other questions!
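The workflow recommended in the tutorial avoids manual state_dict copying entirely by letting torch.hub construct the model. A minimal sketch (it downloads the ultralytics/yolov5 repo and weights on first call, so network access is assumed):

```python
import torch

def load_yolov5(weights="yolov5x6"):
    # torch.hub fetches the ultralytics/yolov5 repo and the requested
    # weights on first call; subsequent calls use the local cache.
    return torch.hub.load("ultralytics/yolov5", weights)

if __name__ == "__main__":
    model = load_yolov5("yolov5x6")  # "yolov5x" works the same way
    img = "https://raw.githubusercontent.com/ultralytics/yolov5/master/data/images/bus.jpg"
    results = model(img)
    results.print()
```

This replaces the manual `Model(model.yaml)` + `load_state_dict` + `autoshape()` sequence from the snippet above, which is what the hub loader does internally.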

@github-actions
Contributor

github-actions bot commented Jan 16, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label Jan 16, 2022
@Mehrab-Hossain

When I run YOLOv6's infer.py, this error is displayed. What is the solution for it?

Fusing model...
Switch model to deploy modality.
Traceback (most recent call last):
  File "tools/infer.py", line 116, in <module>
    main(args)
  File "tools/infer.py", line 111, in main
    run(**vars(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "tools/infer.py", line 103, in run
    inferer = Inferer(source, weights, device, yaml, img_size, half)
  File "/content/YOLOv6/yolov6/core/inferer.py", line 50, in __init__
    self.model(torch.zeros(1, 3, *self.img_size).to(self.device).type_as(next(self.model.model.parameters())))  # warmup
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/YOLOv6/yolov6/layers/common.py", line 360, in forward
    y, _ = self.model(im)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/YOLOv6/yolov6/models/yolo.py", line 39, in forward
    x = self.backbone(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/YOLOv6/yolov6/models/efficientrep.py", line 98, in forward
    x = self.stem(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/YOLOv6/yolov6/layers/common.py", line 209, in forward
    return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 1, 3, 416] to have 3 channels, but got 1 channels instead

@glenn-jocher
Member

@Mehrab-Hossain It seems like you are encountering a runtime error during model inference. The error message indicates an inconsistency in expected input channels, likely caused by a mismatch between the model's input configuration and the actual input data received.

To troubleshoot this, I recommend verifying the input image data and the model's input configuration:

  1. Input Data: Ensure that the input image data has the correct number of channels (e.g., 3 for RGB images) and the expected dimensions specified by the model.

  2. Model Configuration: Double-check the model's input configuration, including the expected input dimensions and channels. You may need to inspect the model's architecture or configuration files to ensure that the input data matches the model's requirements.

If the input data and model configuration seem to be in order, you might want to inspect the code snippet where the input is prepared for inference to identify any potential issues in data preprocessing.
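To make point 1 concrete: the traceback reports an input of shape [1, 1, 3, 416], while the first convolution's weight [32, 3, 3, 3] expects 3 channels in the second position, i.e. an (N, 3, H, W) batch. The value 1 in the channel axis suggests a grayscale image or swapped dimensions. A minimal, framework-independent sketch of such a shape check:

```python
def is_valid_rgb_batch(shape, channels=3):
    """Return True if `shape` looks like an (N, C, H, W) batch with C channels."""
    return len(shape) == 4 and shape[1] == channels

# The shape the model expects for its warmup tensor:
print(is_valid_rgb_batch((1, 3, 416, 416)))  # True

# The shape from the traceback -- the channel axis holds 1, not 3:
print(is_valid_rgb_batch((1, 1, 3, 416)))  # False
```

Running a check like this on the tensor right before inference narrows the problem down to the preprocessing step that produced the wrong layout.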

Additionally, it's a good practice to check for compatibility between the YOLOv6 version and the specific model you are using to ensure they are designed to work together seamlessly.

Let me know if this helps or if you need further assistance!
