
Bugfix of model.export() to work correct with bs>1 #1551

Merged
18 commits merged into master on Oct 23, 2023

Conversation


@BloodAxe BloodAxe commented Oct 18, 2023

This PR is fairly large, but it is impractical to split: it represents a single, atomic change that includes a number of bugfixes here and there, followed by testing and documentation of what is supported where.

In short, the changes are:

Bugfixes

  • Wrong indexes were picked when exporting models with bs>1 in the decoding modules (YoloNAS, YoloNAS-Pose, YoloX, PPYolo-E)

  • When using ONNX NonMaxSuppression, the max_predictions_per_image parameter was applied incorrectly (Flat/Batch format predictions could contain more than the requested number of detections)
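For context, the two fixes above can be sketched in plain NumPy. This is an illustrative reconstruction, not the actual super-gradients code; the function names `topk_flat_indices` and `cap_flat_predictions` are hypothetical:

```python
import numpy as np

def topk_flat_indices(scores, k):
    """Top-k per image with correct flat indices when batch size > 1.

    scores: (batch_size, num_anchors) array of confidence scores.
    Returns indices into the flattened (batch_size * num_anchors,) tensor.
    The bs>1 bug was equivalent to omitting the per-image offset, so
    indices computed for image i could point into another image's anchors.
    """
    batch_size, num_anchors = scores.shape
    per_image = np.argsort(-scores, axis=1)[:, :k]           # (bs, k), per-image top-k
    offsets = (np.arange(batch_size) * num_anchors)[:, None]  # shift into each image's slab
    return per_image + offsets

def cap_flat_predictions(detections, max_predictions_per_image):
    """Enforce max_predictions_per_image for flat-format output.

    detections: rows whose first element is the image index, assumed
    sorted by descending confidence. ONNX NonMaxSuppression only caps
    boxes per class (max_output_boxes_per_class), so an explicit
    per-image cap is still needed afterwards.
    """
    counts, kept = {}, []
    for row in detections:
        image_index = row[0]
        if counts.get(image_index, 0) < max_predictions_per_image:
            kept.append(row)
            counts[image_index] = counts.get(image_index, 0) + 1
    return kept
```

The same two ideas (offsetting gathered indices by image, and capping the flat output per image) are what the decoding modules need once the batch dimension is larger than one.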

Improvement / Support for TensorRT

  • A support matrix has been added to indicate which export options are supported (currently these changes live in the YoloNAS & YoloNAS-Pose docstrings; feel free to suggest an alternative location where they should be placed)

Support matrix

Detection models (YoloNAS, PPYolo-E, YoloX)

| Batch Size | Export Engine | Format | OnnxRuntime 1.13.1 | TensorRT 8.4.2 | TensorRT 8.5.3 | TensorRT 8.6.1 |
|------------|---------------|--------|--------------------|----------------|----------------|----------------|
| 1          | ONNX          | Flat   | Yes                | Yes            | Yes            | Yes            |
| >1         | ONNX          | Flat   | Yes                | No             | No             | No             |
| 1          | ONNX          | Batch  | Yes                | No             | Yes            | Yes            |
| >1         | ONNX          | Batch  | Yes                | No             | No             | Yes            |
| 1          | TensorRT      | Flat   | No                 | No             | Yes            | Yes            |
| >1         | TensorRT      | Flat   | No                 | No             | Yes            | Yes            |
| 1          | TensorRT      | Batch  | No                 | Yes            | Yes            | Yes            |
| >1         | TensorRT      | Batch  | No                 | Yes            | Yes            | Yes            |

Pose Estimation

| Batch Size | Format | OnnxRuntime 1.13.1 | TensorRT 8.4.2 | TensorRT 8.5.3 | TensorRT 8.6.1 |
|------------|--------|--------------------|----------------|----------------|----------------|
| 1          | Flat   | Yes                | Yes            | Yes            | Yes            |
| >1         | Flat   | Yes                | Yes            | Yes            | Yes            |
| 1          | Batch  | Yes                | No             | No             | Yes            |
| >1         | Batch  | Yes                | No             | No             | Yes            |
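One way to make the detection matrix above machine-checkable is a small lookup table. This is illustrative only; the entries transcribe the detection support matrix, and the name `is_export_supported` is hypothetical, not a super-gradients API:

```python
# (batch, export engine, output format) -> runtimes listed as working.
# "bs1"/"bsN" stand for batch size 1 and batch size > 1; runtimes are
# OnnxRuntime 1.13.1 and TensorRT 8.4.2 / 8.5.3 / 8.6.1.
DETECTION_SUPPORT = {
    ("bs1", "ONNX", "Flat"):      {"ort-1.13.1", "trt-8.4.2", "trt-8.5.3", "trt-8.6.1"},
    ("bsN", "ONNX", "Flat"):      {"ort-1.13.1"},
    ("bs1", "ONNX", "Batch"):     {"ort-1.13.1", "trt-8.5.3", "trt-8.6.1"},
    ("bsN", "ONNX", "Batch"):     {"ort-1.13.1", "trt-8.6.1"},
    ("bs1", "TensorRT", "Flat"):  {"trt-8.5.3", "trt-8.6.1"},
    ("bsN", "TensorRT", "Flat"):  {"trt-8.5.3", "trt-8.6.1"},
    ("bs1", "TensorRT", "Batch"): {"trt-8.4.2", "trt-8.5.3", "trt-8.6.1"},
    ("bsN", "TensorRT", "Batch"): {"trt-8.4.2", "trt-8.5.3", "trt-8.6.1"},
}

def is_export_supported(batch, engine, fmt, runtime):
    """True if the detection export combination is listed as working above."""
    return runtime in DETECTION_SUPPORT.get((batch, engine, fmt), set())
```

Such a table could back an early validation check in `model.export()` so unsupported combinations fail loudly at export time rather than at inference time.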


@Louis-Dupont Louis-Dupont left a comment


LGTM overall, I just didn't dig super deep into each line of code, I'll trust your tests and @shaydeci this time 😇
Let me know if there is something specific you want me to review in detail


@shaydeci shaydeci left a comment


LGTM

@BloodAxe BloodAxe merged commit 0515496 into master Oct 23, 2023
2 checks passed
@BloodAxe BloodAxe deleted the feature/SG-1195-fix-nms branch October 23, 2023 13:35
BloodAxe added a commit that referenced this pull request Oct 26, 2023
* [Improvement] max_batches support to training log and tqdm progress bar. (#1554)

* Added max_batches support to training log and tqdm progress bar.

* Adjusted the logged string according to which parameter is in effect (len(loader) or max_batches)

* Replaced stopping condition for the epoch with a smaller one

(cherry picked from commit 749a9c7)

* fix (#1558)

Co-authored-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
(cherry picked from commit 8a1d255)

* fix (#1564)

(cherry picked from commit 24798b0)

* Bugfix of model.export() to work correct with bs>1 (#1551)

(cherry picked from commit 0515496)

* Fixed incorrect automatic variable used (#1565)

$@ is the name of the target being generated, and $^ are the dependencies

Co-authored-by: Louis-Dupont <35190946+Louis-Dupont@users.noreply.github.com>
(cherry picked from commit 43f8bea)

* fix typo in class documentation (#1548)

Co-authored-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Louis-Dupont <35190946+Louis-Dupont@users.noreply.github.com>
(cherry picked from commit ec21383)

* Feature/sg 1198 mixed precision automatically changed with warning (#1567)

* fix

* work with tmpdir

* minor change of comment

* improve device_config

(cherry picked from commit 34fda6c)

* Fixed issue with torch 1.12 where _scale_fn_ref is missing in CyclicLR (#1575)

(cherry picked from commit 23b4f7a)

* Fixed issue with torch 1.12 issue with arange not supporting fp16 for CPU device. (#1574)

(cherry picked from commit 1f15c76)

---------

Co-authored-by: hakuryuu96 <marchenkophilip@gmail.com>
Co-authored-by: Louis-Dupont <35190946+Louis-Dupont@users.noreply.github.com>
Co-authored-by: Alessandro Ros <aler9.dev@gmail.com>