calibration failed #33
Comments
@andyluo7 Our v0.5 submission code only supports TRT 6. We don't support TRT 7 or above.
@nvpohanh Is there any way to downgrade from TRT 7 to TRT 6 with CUDA 10.2 on Ubuntu 18.04?
@andyluo7 We don't use Docker on Xavier. TRT 6 should be part of JetPack 4.3 DP. If you need to install TRT 6 separately, I will ask internally whether that's possible.
@nvpohanh Please let me know if I can install TRT 6 separately.
FWIW, we once successfully did the opposite: copied the TRT 6 libraries from a board with JetPack 4.3 to a board with JetPack 4.2 (with TRT 5), and easily switched between the alternative TRT versions with the Collective Knowledge technology.
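For anyone hitting this, a quick way to confirm which TensorRT version the Python bindings actually report (a minimal check, assuming the `tensorrt` Python package that ships with JetPack is installed):

```python
# Print the TensorRT version visible to Python.
# JetPack 4.3 DP ships TRT 6; later JetPacks ship TRT 7+,
# which the v0.5 submission code does not support.
import tensorrt as trt
print(trt.__version__)
```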
The command I use:
make calibrate RUN_ARGS="--benchmarks=resnet"
The output:
[2020-05-20 01:03:19,521 main.py:291 INFO] Using config files: measurements/Xavier/resnet/SingleStream/config.json,measurements/Xavier/resnet/MultiStream/config.json,measurements/Xavier/resnet/Offline/config.json
[2020-05-20 01:03:19,522 __init__.py:142 INFO] Parsing config file measurements/Xavier/resnet/SingleStream/config.json ...
[2020-05-20 01:03:19,527 __init__.py:142 INFO] Parsing config file measurements/Xavier/resnet/MultiStream/config.json ...
[2020-05-20 01:03:19,528 __init__.py:142 INFO] Parsing config file measurements/Xavier/resnet/Offline/config.json ...
[2020-05-20 01:03:19,529 main.py:295 INFO] Processing config "Xavier_resnet_SingleStream"
[2020-05-20 01:03:19,529 main.py:295 INFO] Processing config "Xavier_resnet_MultiStream"
[2020-05-20 01:03:19,529 main.py:295 INFO] Processing config "Xavier_resnet_Offline"
[2020-05-20 01:03:19,529 main.py:244 INFO] Generating calibration cache for Benchmark "resnet"
Traceback (most recent call last):
File "code/main.py", line 327, in
main()
File "code/main.py", line 323, in main
handle_calibrate(benchmark_name, benchmark_conf)
File "code/main.py", line 248, in handle_calibrate
b.calibrate()
File "/home/aluo/inference_results_v0.5/closed/NVIDIA/code/common/builder.py", line 134, in calibrate
self.initialize()
File "/home/aluo/inference_results_v0.5/closed/NVIDIA/code/resnet/tensorrt/ResNet50.py", line 64, in initialize
raise RuntimeError("ResNet50 onnx model parsing failed! Error: {:}".format(parser.get_error(0).desc()))
RuntimeError: ResNet50 onnx model parsing failed! Error: Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
Makefile:309: recipe for target 'calibrate' failed
make: *** [calibrate] Error 1
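For context, the assertion above is TRT 7 behavior: its ONNX parser only accepts networks created with the EXPLICIT_BATCH flag, while the v0.5 builder code constructs an implicit-batch network. Below is a minimal sketch of the parser setup TRT 7 expects (the path "resnet50.onnx" is a placeholder; this is not a supported fix for the v0.5 submission code, which targets TRT 6):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
# TRT 7's ONNX parser requires a network created with the EXPLICIT_BATCH flag.
explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(explicit_batch)
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("resnet50.onnx", "rb") as f:  # placeholder model path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```

Under TRT 6, the implicit-batch network that the existing ResNet50.py builder creates parses fine, which is why downgrading to TRT 6 (or running JetPack 4.3 DP) avoids this assertion.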