Cannot use yolov8-lite-s #3

Open

aifanboylearner opened this issue May 14, 2023 · 6 comments

@aifanboylearner

Hi,

I can use yolov8n, but I can't seem to use the other checkpoints: yolov8-lite-s and yolov8-lite-t.

I tried installing the latest version of ultralytics, but then the weights cannot be loaded.

I also tried using the custom ultralytics folder from this repo. In that case the weights can be loaded, but inference then does not work.
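
For reference, this is roughly what I'm running (the weight file name and image path below are just placeholders for whatever you downloaded, not the exact files from the repo):

from ultralytics import YOLO

# Placeholder paths: swap in the actual lite checkpoint and any test image.
model = YOLO('yolov8-lite-s-face.pt')
results = model.predict('test.jpg', conf=0.25)
print(results[0].boxes)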

@Vincent-Stragier

@aifanboylearner,

Same issue here. I'm using Python 3.10 on Windows 11.

I get the following error:

    results = face_detector.predict(img, verbose=False, show=True, conf=0.25)[0]
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\yolo\engine\model.py", line 252, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\yolo\engine\predictor.py", line 157, in __call__
    return list(self.stream_inference(source, model))  # merge list of Result into one
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\yolo\engine\predictor.py", line 221, in stream_inference
    preds = self.model(im, augment=self.args.augment, visualize=visualize)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\nn\autobackend.py", line 313, in forward
    y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\nn\tasks.py", line 203, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\nn\tasks.py", line 58, in _forward_once
    x = m(x)  # run
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\nn\modules.py", line 479, in forward
    stem_1_out  = self.stem_1(x)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\nn\modules.py", line 66, in forward_fuse
    return self.act(self.conv(x))
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\Vincent\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (Tensor, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (Tensor, Parameter, Parameter, tuple of (int, int), tuple of (int, int), tuple of (bool, bool), int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (Tensor, Parameter, Parameter, tuple of (int, int), tuple of (int, int), tuple of (bool, bool), int)

@YSGFF

YSGFF commented Dec 7, 2023

I also have this problem. Have you solved it?

@Vincent-Stragier

@YSGFF,

Only yolov8n is working; nobody has managed to get the other two running. So there is no solution for now, just use yolov8n.

Best,
Vincent

@JYW-SZ

JYW-SZ commented Aug 7, 2024

"ultralytics\nn\modules\block.py",line42

class StemBlock(nn.Module):
    def __init__(self, c1, c2, k=3, s=2, p=None, g=1, d=1, act=True):
        super(StemBlock, self).__init__()
        self.stem_1 = Conv(c1, c2, k, s, p, g, d, act)
        self.stem_2a = Conv(c2, c2 // 2, 1, 1, 0)
        self.stem_2b = Conv(c2 // 2, c2, 3, 2, 1)
        self.stem_2p = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)
        self.stem_3 = Conv(c2 * 2, c2, 1, 1, 0)
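
For anyone wondering why this fixes the TypeError above: as far as I can tell (not checked against every ultralytics release, signatures paraphrased), newer ultralytics versions give Conv an extra dilation parameter d before act, while the repo's original StemBlock was written against the older signature without d. A positional call therefore pushes act=True into the dilation slot, which is exactly the "tuple of (bool, bool)" dilation in the traceback. A toy sketch of the mismatch:

# Toy sketch, not the actual ultralytics source.
def conv_old(c1, c2, k=1, s=1, p=None, g=1, act=True):        # legacy-style signature
    return {"groups": g, "act": act}

def conv_new(c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):   # current-style signature (extra d)
    return {"groups": g, "dilation": d, "act": act}

legacy_args = (3, 32, 3, 2, None, 1, True)  # c1, c2, k, s, p, g, act
print(conv_old(*legacy_args))  # {'groups': 1, 'act': True}
print(conv_new(*legacy_args))  # {'groups': 1, 'dilation': True, 'act': True} <- dilation becomes a bool

Adding d to StemBlock.__init__ and forwarding it to Conv, as in the snippet above, keeps act in the right slot.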

@ganquan0513

"ultralytics\nn\modules\block.py",line42

class StemBlock(nn.Module):
    def __init__(self, c1, c2, k=3, s=2, p=None, g=1,d=1, act=True):
        super(StemBlock, self).__init__()
        self.stem_1 = Conv(c1, c2, k, s, p, g, d, act)
        self.stem_2a = Conv(c2, c2 // 2, 1, 1, 0)
        self.stem_2b = Conv(c2 // 2, c2, 3, 2, 1)
        self.stem_2p = nn.MaxPool2d(kernel_size=2,stride=2,ceil_mode=True)
        self.stem_3 = Conv(c2 * 2, c2, 1, 1, 0)

This is helpful, but you also need to modify the yolov8-lite-s-pose.yaml and yolov8-lite-t-pose.yaml config files.

Here is the yaml for yolov8-lite-s-pose.yaml:

backbone:
  # [from, number, module, args]
  [[-1, 1, StemBlock, [32, 3, 2, None, 1]],   # 0-P2/4
   [-1, 1, Shuffle_Block, [96, 2]],           # 1-P3/8
   [-1, 3, Shuffle_Block, [96, 1]],           # 2
   [-1, 1, Shuffle_Block, [192, 2]],          # 3-P4/16
   [-1, 7, Shuffle_Block, [192, 1]],          # 4
   [-1, 1, Shuffle_Block, [384, 2]],          # 5-P5/32
   [-1, 3, Shuffle_Block, [384, 1]],          # 6
   [-1, 1, SPPF, [384, 5]],
  ]

# v5lite-e head
head:
  [[-1, 1, Conv, [96, 1, 1, None, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],                 # cat backbone P4
   [-1, 1, DWConvblock, [96, 3, 1]],          # 11

   [-1, 1, Conv, [96, 1, 1, None, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 2], 1, Concat, [1]],                 # cat backbone P3
   [-1, 1, DWConvblock, [96, 3, 1]],          # 15 (P3/8-small)

   [-1, 1, DWConvblock, [96, 3, 2]],
   [[-1, 12], 1, ADD, [1]],                   # cat head P4
   [-1, 1, DWConvblock, [96, 3, 1]],          # 18 (P4/16-medium)

   [-1, 1, DWConvblock, [96, 3, 2]],
   [[-1, 8], 1, ADD, [1]],                    # cat head P5
   [-1, 1, DWConvblock, [96, 3, 1]],          # 21 (P5/32-large)

   [[15, 18, 21], 1, Pose, [nc, kpt_shape]],  # Detect(P3, P4, P5)
  ]

And for yolov8-lite-t-pose.yaml:

backbone:
  # [from, number, module, args]
  [[-1, 1, StemBlock, [16, 3, 2, None, 1]],   # 0-P2/4
   [-1, 1, Shuffle_Block, [48, 2]],           # 1-P3/8
   [-1, 2, Shuffle_Block, [48, 1]],           # 2
   [-1, 1, Shuffle_Block, [96, 2]],           # 3-P4/16
   [-1, 5, Shuffle_Block, [96, 1]],           # 4
   [-1, 1, Shuffle_Block, [192, 2]],          # 5-P5/32
   [-1, 2, Shuffle_Block, [192, 1]],          # 6
   [-1, 1, SPPF, [192, 5]],
  ]

# v5lite-e head
head:
  [[-1, 1, Conv, [48, 1, 1, None, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],                 # cat backbone P4
   [-1, 1, DWConvblock, [48, 3, 1]],          # 11

   [-1, 1, Conv, [48, 1, 1, None, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 2], 1, Concat, [1]],                 # cat backbone P3
   [-1, 1, DWConvblock, [48, 3, 1]],          # 15 (P3/8-small)

   [-1, 1, DWConvblock, [48, 3, 2]],
   [[-1, 12], 1, ADD, [1]],                   # cat head P4
   [-1, 1, DWConvblock, [48, 3, 1]],          # 18 (P4/16-medium)

   [-1, 1, DWConvblock, [48, 3, 2]],
   [[-1, 8], 1, ADD, [1]],                    # cat head P5
   [-1, 1, DWConvblock, [48, 3, 1]],          # 21 (P5/32-large)

   [[15, 18, 21], 1, Pose, [nc, kpt_shape]],  # Detect(P3, P4, P5)
  ]
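
With the StemBlock change and these yaml edits in place, the lite models should load the same way as yolov8n. A minimal sketch (file names are illustrative, point them at the weights and yaml you actually have):

from ultralytics import YOLO

# Illustrative file name; use the patched lite weights from this repo.
model = YOLO('yolov8-lite-s-pose.pt')
results = model.predict('face.jpg', conf=0.25, verbose=False)
print(results[0].keypoints)  # pose models expose keypoints alongside boxes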

@clibdev

clibdev commented Nov 23, 2024

I have created a fork repository, clibdev/yolov8-face, which contains compatibility fixes for the yolov8-lite-t and yolov8-lite-s models.
