🐛 [Bug] Regression: Torch-TensorRT now fails to convert due to unsupported negative pad for torch.nn.ConstantPad2d #2079
Comments
So I benchmarked the same net compiled in the following environments:
environment 1:
environment 2:
Environment 1
Environment 2
Please note that in environment 1 I can't use ir="fx_ts_compat" (not yet available), and I set dtype=torch.float so that the expected input for the compiled trt_ts_module is float32 (i.e. But in environment 2 we are forced to use dtype=torch.half (to prevent
In environment 1 I get many of the following warnings (but inference results and inference time are good):
In environment 2 I also get a warning:
and torch.jit.save(trt_ts_module, path) fails to save my compiled trt_ts_module:
Do you have any suggestions to improve inference time when upgrading to torch_tensorrt 1.4.0? Full stack trace for environment 2 compilation:
Hi @fabricecarles - thanks for the detailed information. The error reported in TorchScript does seem to derive from these lines: TensorRT/core/conversion/converters/impl/constant_pad.cpp, lines 33 to 41, at commit 6ceaed8
Regarding the slow-down with
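For readers following along, a rough pure-Python sketch of the guard that the error message describes may help. The real check lives in the C++ converter linked above; the function name and structure here are illustrative, not the actual implementation:

```python
def check_pad(pad):
    """Illustrative stand-in for the converter guard described in the
    error message: every pad entry must be non-negative, otherwise
    conversion aborts with 'Unsupported negative pad at index i'."""
    for i, value in enumerate(pad):
        if value < 0:
            raise RuntimeError(f"Unsupported negative pad at index {i}")
    return True
```

With this model of the guard, ConstantPad2d(-6, 0.0) expands to the pad spec (-6, -6, -6, -6), so the check fails immediately at index 0, matching the reported error.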
In the tail of my network definition, self.upscore refers to a ConvTranspose2d layer which is followed by ConstantPad2d to crop the result from (8, 16, 256, 256) to (8, 16, 192, 192), which is my target output size.
For debugging purposes I can test other options if you want. Full stack trace
I can confirm that there is no issue with and inference time is good
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
Bug Description
This is a regression due to:
Now we fail to convert any network with torch_tensorrt.compile() if a negative pad is used inside a ConstantPad layer, while conversion worked fine in earlier versions.
example:
torch.nn.ConstantPad2d(-6, float(0.0))
will raise:
RuntimeError: [Error thrown at core/conversion/converters/impl/constant_pad.cpp:35] Expected left >= 0 to be true but got false Unsupported negative pad at index 0
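For context, negative padding in ConstantPad2d crops rather than pads: each spatial dimension shrinks by the absolute pad amounts. A small sketch of the shape arithmetic (the helper is hypothetical, not part of PyTorch or Torch-TensorRT):

```python
def padded_hw(height, width, pad):
    """Output spatial size under ConstantPad2d semantics, where pad is
    (left, right, top, bottom) and negative entries crop the input."""
    left, right, top, bottom = pad
    return height + top + bottom, width + left + right

# ConstantPad2d(-6, 0.0) expands to the pad spec (-6, -6, -6, -6),
# so a 256x256 input comes out 244x244 -- a crop, not a pad.
```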
To Reproduce
Steps to reproduce the behavior:
trt_ts_module = torch_tensorrt.compile(
    model.to(device),
    inputs=[torch_tensorrt.Input((1, 1, args.image_size, args.image_size))],
    enabled_precisions={torch.half},  # Run with FP16
    workspace_size=1024,
)
RuntimeError: [Error thrown at core/conversion/converters/impl/constant_pad.cpp:35] Expected left >= 0 to be true but got false Unsupported negative pad at index 0
full stack trace
Expected behavior
Conversion worked fine with earlier versions:
- torch 1.12.1
- torch-tensorrt 1.2.0
- torchvision 0.13.1
- tensorrt 8.0.3.4
Environment
Additional context
Pure TorchScript conversion with torch.jit.trace works fine for all versions!
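One possible workaround while the converter rejects negative pads is to express the crop with tensor indexing instead of a ConstantPad layer, which sidesteps the pad converter entirely. The helper below is a sketch under the assumption that every pad entry is negative (pure cropping); the name is mine, not from either library:

```python
def neg_pad_to_slices(pad, height, width):
    """Turn an all-negative ConstantPad2d spec (left, right, top, bottom)
    into (row_slice, col_slice) so that x[..., rows, cols] yields the
    same result as applying the padding layer."""
    left, right, top, bottom = pad
    assert all(p <= 0 for p in pad), "sketch only covers cropping"
    return slice(-top, height + bottom), slice(-left, width + right)

# Hypothetical use inside a model's forward(), replacing
# ConstantPad2d(-6, 0.0) on a 256x256 feature map:
#   rows, cols = neg_pad_to_slices((-6, -6, -6, -6), 256, 256)
#   x = x[..., rows, cols]   # 256x256 -> 244x244
```

Since plain slicing traces to tensor indexing ops, this form should convert without hitting the negative-pad guard, though I have not verified it against every Torch-TensorRT version mentioned in the thread.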