A bug? I am using the `InstanceNorm` plugin, and when I use `trtexec` to build the engine, an error appears:
```
[07/27/2023-06:18:54] [I] [TRT] No importer registered for op: InstanceNormalization_TRT. Attempting to import as plugin.
[07/27/2023-06:18:54] [I] [TRT] Searching for plugin: InstanceNormalization_TRT, plugin_version: 1, plugin_namespace:
[07/27/2023-06:18:54] [V] [TRT] Local registry did not find InstanceNormalization_TRT creator. Will try parent registry if enabled.
[07/27/2023-06:18:54] [V] [TRT] Global registry found InstanceNormalization_TRT creator.
[07/27/2023-06:18:54] [W] [TRT] builtin_op_importers.cpp:5221: Attribute scales not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[07/27/2023-06:18:54] [F] [TRT] Validation failed: scale.count == bias.count
plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu:96
[07/27/2023-06:18:54] [E] [TRT] std::exception
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:771: While parsing node number 3415 [InstanceNormalization_TRT -> "InstanceNormV-27"]:
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:773:
input: "/unet/input_blocks.1/input_blocks.1.0/in_layers/in_layers.0/Reshape_output_0"
output: "InstanceNormV-27"
name: "InstanceNormN-27"
op_type: "InstanceNormalization_TRT"
attribute { name: "epsilon" f: 1e-05 type: FLOAT }
attribute { name: "scale" floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 floats: 1 type: FLOATS }
attribute { name: "bias" floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 floats: 0 type: FLOATS }
attribute { name: "relu" i: 0 type: INT }
attribute { name: "alpha" f: 0 type: FLOAT }
attribute { name: "plugin_version" s: "1" type: STRING }
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:5412 In function importFallbackPluginImporter:
[8] Assertion failed: plugin && "Could not create plugin"
[07/27/2023-06:18:54] [E] Failed to parse onnx file
[07/27/2023-06:18:54] [I] Finished parsing network model. Parse time: 7.78809
[07/27/2023-06:18:54] [E] Parsing model failed
[07/27/2023-06:18:54] [E] Failed to create engine from model or file.
[07/27/2023-06:18:54] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=./combine_0.onnx --saveEngine=combine_1.plan --verbose --workspace=3000 --fp16
```
I gave this plugin the five attributes `epsilon`, `scale`, `bias`, `relu`, and `alpha`, following the README: https://github.com/NVIDIA/TensorRT/blob/release/8.6/plugin/instanceNormalizationPlugin/README.md#parameters
Therefore, I think there is a problem with the README's parameter list. Should the `scale` parameter be replaced with `scales`?
@samurdhikaru ^ ^
@DataXujing I see this in the attached log; could you check your model? Thanks!

`Validation failed: scale.count == bias.count plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu:96`
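For what it's worth, the count mismatch can be illustrated without TensorRT. The warning above says the creator could not find an attribute named `scales`, while the node (built per the README) carries `scale` with 32 values, so the lookup comes back empty and the `scale.count == bias.count` check trips (0 vs 32). The sketch below is my own reconstruction of that logic, not the plugin's actual code; the field names and the count of 32 are taken from the node dump in the log.

```python
# Assumption-based reconstruction of the failing validation: the plugin
# creator searches the node's fields for "scales", but the node carries
# "scale", so the lookup finds nothing and returns a count of 0.

def field_count(fields, name):
    """Return how many values the field `name` carries (0 if absent)."""
    for field_name, values in fields:
        if field_name == name:
            return len(values)
    return 0

# Attributes as they appear in the failing node dump (32 channels).
node_fields = [
    ("scale", [1.0] * 32),  # attribute name as documented in the README
    ("bias", [0.0] * 32),
]

scales_count = field_count(node_fields, "scales")  # creator's expected name -> 0
bias_count = field_count(node_fields, "bias")      # -> 32
print(scales_count == bias_count)  # prints False: the failed count check
```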
I have reconfirmed that my model is correct, and I believe there is a bug in this line of the plugin code: https://github.com/NVIDIA/TensorRT/blob/35477bdb94eab72862ffbdf66d4419e408bef45f/plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu#L621C1-L621C98

Line 621 reads:

`mPluginAttributes.emplace_back(PluginField("scales", nullptr, PluginFieldType::kFLOAT32, 1));`

Should the attribute `"scales"` be written as `"scale"`?