
calibration failed #33

Open
andyluo7 opened this issue May 20, 2020 · 5 comments

Comments

@andyluo7
The command I use:
make calibrate RUN_ARGS="--benchmarks=resnet"

The output:

[2020-05-20 01:03:19,521 main.py:291 INFO] Using config files: measurements/Xavier/resnet/SingleStream/config.json,measurements/Xavier/resnet/MultiStream/config.json,measurements/Xavier/resnet/Offline/config.json
[2020-05-20 01:03:19,522 init.py:142 INFO] Parsing config file measurements/Xavier/resnet/SingleStream/config.json ...
[2020-05-20 01:03:19,527 init.py:142 INFO] Parsing config file measurements/Xavier/resnet/MultiStream/config.json ...
[2020-05-20 01:03:19,528 init.py:142 INFO] Parsing config file measurements/Xavier/resnet/Offline/config.json ...
[2020-05-20 01:03:19,529 main.py:295 INFO] Processing config "Xavier_resnet_SingleStream"
[2020-05-20 01:03:19,529 main.py:295 INFO] Processing config "Xavier_resnet_MultiStream"
[2020-05-20 01:03:19,529 main.py:295 INFO] Processing config "Xavier_resnet_Offline"
[2020-05-20 01:03:19,529 main.py:244 INFO] Generating calibration cache for Benchmark "resnet"
Traceback (most recent call last):
  File "code/main.py", line 327, in <module>
    main()
  File "code/main.py", line 323, in main
    handle_calibrate(benchmark_name, benchmark_conf)
  File "code/main.py", line 248, in handle_calibrate
    b.calibrate()
  File "/home/aluo/inference_results_v0.5/closed/NVIDIA/code/common/builder.py", line 134, in calibrate
    self.initialize()
  File "/home/aluo/inference_results_v0.5/closed/NVIDIA/code/resnet/tensorrt/ResNet50.py", line 64, in initialize
    raise RuntimeError("ResNet50 onnx model parsing failed! Error: {:}".format(parser.get_error(0).desc()))
RuntimeError: ResNet50 onnx model parsing failed! Error: Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
Makefile:309: recipe for target 'calibrate' failed
make: *** [calibrate] Error 1

@nvpohanh

@andyluo7 Our submission code for v0.5 only supports TRT 6. We don't support TRT 7 or above.

@andyluo7
Author

@nvpohanh Is there any way to downgrade from TRT7 to TRT6 with CUDA 10.2 on Ubuntu 18.04?
Is there any inference_results_v0.5 docker container which can run on Xavier?

@nvpohanh

@andyluo7 We don't use docker on Xavier. TRT6 should be part of JetPack 4.3 DP. If you need to install TRT6 separately, I will ask internally if that's possible.

@andyluo7
Author

@nvpohanh Please let me know if I can install TRT6 separately.

@psyhtest

FWIW, we once successfully did the opposite: we copied the TRT6 libraries from a board running JetPack 4.3 to a board running JetPack 4.2 (which ships TRT5), and then easily switched between the alternative TRT versions using the Collective Knowledge technology.
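The version switching described above can be sketched with the dynamic loader's search path; the library locations below are hypothetical, and this is a generic LD_LIBRARY_PATH technique, not the actual Collective Knowledge workflow:

```shell
# Hypothetical locations of the two copied TRT library trees
TRT5_LIBS=/opt/trt5/lib
TRT6_LIBS=/opt/trt6/lib

# Point the dynamic loader at the TRT6 copy for this shell session;
# export $TRT5_LIBS instead to switch back to the JetPack 4.2 stack
export LD_LIBRARY_PATH="${TRT6_LIBS}:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Because the path is prepended per shell session, two terminals can run against different TRT versions side by side without touching the system-wide install.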
