English | 中文
zetton-inference-tensorrt is an open-source extension of the zetton-inference package that enables deep learning inference with the TensorRT framework. It is part of Project Zetton.
Please refer to changelog.md for details and release history.
For compatibility changes between different versions of zetton-inference-tensorrt, please refer to compatibility.md.
Please refer to Installation for installation instructions.
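Because the extension builds on TensorRT, a downstream CMake project typically also needs to link against the TensorRT and CUDA runtime libraries. The fragment below is a minimal sketch, assuming TensorRT (`libnvinfer`) is installed in the default system paths; the project and target names are hypothetical and not part of this package:

```cmake
# Minimal sketch: link an executable against TensorRT and the CUDA runtime.
# Assumes libnvinfer/libnvonnxparser are in default system paths; adjust
# include and link directories for a custom (e.g. tarball) TensorRT install.
cmake_minimum_required(VERSION 3.17)
project(trt_inference_demo LANGUAGES CXX)

find_package(CUDAToolkit REQUIRED)  # provides the CUDA::cudart target

add_executable(trt_demo main.cpp)
target_link_libraries(trt_demo PRIVATE nvinfer nvonnxparser CUDA::cudart)
```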
Please see get_started.md for the basic usage of zetton-inference-tensorrt.
NVIDIA Jetson Xavier NX
| Task      | Model     | FP32 | FP16 | INT8 |
| --------- | --------- | ---- | ---- | ---- |
| Detection | YOLOv5    |      |      |      |
| Detection | YOLOX     |      |      |      |
| Detection | YOLOv7    |      |      |      |
| Tracking  | DeepSORT  |      |      |      |
| Tracking  | ByteTrack |      |      |      |
x86 with NVIDIA A6000 GPU
| Task      | Model     | FP32 | FP16 | INT8 |
| --------- | --------- | ---- | ---- | ---- |
| Detection | YOLOv5    |      |      |      |
| Detection | YOLOX     |      |      |      |
| Detection | YOLOv7    |      |      |      |
| Tracking  | DeepSORT  |      |      |      |
| Tracking  | ByteTrack |      |      |      |
Please refer to FAQ for frequently asked questions.
We appreciate all contributions to improve zetton-inference-tensorrt. Please refer to CONTRIBUTING.md for the contribution guidelines.
We appreciate all the contributors who implement their methods or add new features, as well as users who provide valuable feedback. We hope that the package and benchmarks can serve the growing research and production communities by providing a flexible toolkit for deploying models.
- For academic use, this project is licensed under the 2-clause BSD License; please see the LICENSE file for details.
- For commercial use, please contact Yusu Pan.
- zetton-inference: the main package for deep learning inference.
- zetton-ros-vendor: ROS-related examples.