
Issue in torchscript model inference #2129

Closed
sourabhyadav opened this issue Feb 4, 2021 · 10 comments
Labels
bug Something isn't working

Comments

@sourabhyadav

🐛 Bug

I am facing the issue below when I try to use a saved TorchScript model:

To Reproduce (REQUIRED)

Model saving was done using export.py file:

    # Update model
    for k, m in model.named_modules():
        m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
        if isinstance(m, models.common.Conv) and isinstance(m.act, nn.Hardswish):
            m.act = Hardswish()  # assign activation
        # if isinstance(m, models.yolo.Detect):
        #     m.forward = m.forward_export  # assign forward (optional)
    model.model[-1].export = True  # set Detect() layer export=True
    y = model(img)  # dry run

    # TorchScript export
    try:
        print('\nStarting TorchScript export with torch %s...' % torch.__version__)
        f = opt.weights.replace('.pt', '.torchscript.pt')  # filename
        ts = torch.jit.trace(model, img)
        ts.save(f)
        print('TorchScript export success, saved as %s' % f)
    except Exception as e:
        print('TorchScript export failure: %s' % e)

Run:

python3 models/export.py --weights yolov5m.pt --img-size 640

Model loading is done like this:

self.model = torch.jit.load(weights, map_location=self.device)

Model loading seems fine, but the issue appears when I try to run inference with the model:
Output:

Traceback (most recent call last):
  File "face_recognition.py", line 84, in <module>
    embed_for_analysis()
  File "face_recognition.py", line 80, in embed_for_analysis
    inference.main(args, logger)
  File "/data/sourabh/Releases/smart_vision/facenet_master/tycoai_vision/inference.py", line 81, in main
    camera_object_list, args.face_detector, logger=logger, args=args)
  File "/data/sourabh/Releases/smart_vision/facenet_master/tycoai_vision/analyze_frames.py", line 37, in detect_and_track_faces
    logger=logger, removed_tracks=removed_tracks, args=args)
  File "/data/sourabh/Releases/smart_vision/facenet_master/tycoai_vision/analyze_frames.py", line 120, in detect_normalize_by_face_detector
    bounding_boxes = detection_network.detect(np.stack(frames_downsampled))
  File "/data/sourabh/Releases/smart_vision/facenet_master/tycoai_vision/face_detector/yolov5/yolov5_model.py", line 176, in detect
    inf_out, _ = self.model(imgs, augment=False)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
RuntimeError: forward() expected at most 2 argument(s) but received 3 argument(s). Declaration: forward(__torch__.models.yolo.Model self, Tensor x) -> (Tensor[])
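For context, the mismatch in the traceback follows from how `torch.jit.trace` works: tracing freezes `forward()` to the example inputs given at trace time (here a single image tensor), so the extra `augment=False` argument passed in `yolov5_model.py` is rejected. A minimal sketch with a toy module (not the actual YOLOv5 model) reproducing the same error:

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x * 2

model = TinyModel().eval()
example = torch.zeros(1, 3)

# Tracing records forward(self, x) with exactly one tensor input.
ts = torch.jit.trace(model, example)

out = ts(example)  # OK: matches the traced signature
try:
    # Mirrors self.model(imgs, augment=False): one argument too many.
    ts(example, False)
except RuntimeError as e:
    print('extra argument rejected:', e)
```

On the loading side, calling the traced model with only the image tensor, e.g. `self.model(imgs)`, avoids the error, since the `augment` option is not part of the traced graph.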

Environment

  • OS: Ubuntu 18.04
  • GPU: Nvidia RTX 2080 Ti
  • Cuda: 10.1

Am I missing something here? Please guide me.

@sourabhyadav sourabhyadav added the bug Something isn't working label Feb 4, 2021
@glenn-jocher
Member

@sourabhyadav we don't provide support for torchscript loading or inference, only export.

@sourabhyadav
Author

@glenn-jocher OK, I will raise it as a question to the community.

@zhiqwang
Contributor

zhiqwang commented Feb 5, 2021

Hi @sourabhyadav I have a custom implementation of the loading and inference with torchscript, maybe you could check it in here.

@nobody-cheng

Did you fix this?

@pugovka91

Hi @sourabhyadav I have a custom implementation of the loading and inference with torchscript, maybe you could check it in here.

@zhiqwang If possible, could you please give a link to your implementation of loading and inference with TorchScript (the current link is no longer available)? I need to speed up inference of my custom yolov5 model, but I'm new to CV and don't know how to implement it myself.

@zhiqwang
Contributor

zhiqwang commented Jul 23, 2021

Hi @pugovka91, the notebook has a Python interface for loading and inference with TorchScript, and you can check this if you want the C++ interface.

@pugovka91

Hi @pugovka91 , the notebook has a python interface of loading and inference with torchscript, and you can check this if you wanna the C++ interface.
@zhiqwang Thanks a lot! I’ll try it.

@pugovka91

pugovka91 commented Aug 6, 2021

Hello @zhiqwang, it's a bit off topic, but I wanted to ask: is it possible to run detection with the augment flag using the yolort model? Thanks a lot!

@zhiqwang
Contributor

zhiqwang commented Aug 6, 2021

Hi @pugovka91 ,

I'm not sure I understand correctly. Did you mean Test-Time Augmentation (TTA)? If that's the feature you're concerned about, we don't have it implemented in yolort yet.

@pugovka91

@zhiqwang Yes, exactly. I'll be waiting for this feature implementation, thank you!
