fix: Properly cast intermediate Int8 tensors to TensorRT Engines in Fallback #1549
Conversation
- Fix compilation error for GPT-2 model arising from Byte-type inputs fed into TensorRT Engine
- Update translation dictionary between Torch and TensorRT types to include `at::kByte`
- Add field to `PartitioningInfo` specifying whether to cast Int8 inputs to TensorRT Engines to Int, to avoid errors arising from Int8 inputs being fed into non-quantized engines
- Add automatic detection of quantized/calibrated models and disable Int8 => Int32 casting in those cases
- Fix bug where `LoweringInfo` target device was not being updated for the Python API
- Allow `castNode` to force creation of a new node and avoid searching for an existing one to convert
- Add test to ensure the cast is inserted in the Torch engine preceding a TensorRT engine when the Byte tensor is an output of the Torch engine
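As a rough illustration of the casting policy described above, the dtype decision can be modeled as follows. This is a hypothetical sketch with illustrative names, not code from the torch_tensorrt implementation (which operates on TorchScript graph nodes):

```python
# Hypothetical sketch of the dtype policy described above; function and
# parameter names are illustrative, not taken from torch_tensorrt.
def trt_input_dtype(dtype: str, engine_is_quantized: bool) -> str:
    """Return the dtype a tensor should be cast to before entering a TRT engine."""
    if dtype == "int8" and not engine_is_quantized:
        # Non-quantized engines reject Int8 (Byte) inputs, so widen to Int32.
        return "int32"
    if dtype == "int64":
        # TensorRT lacks Int64; truncate to Int32.
        return "int32"
    if dtype == "float64":
        # TensorRT lacks Float64; truncate to Float32.
        return "float32"
    return dtype
```

Note how quantized/calibrated models keep their Int8 inputs intact, matching the automatic detection added in this PR.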
`ee69829` to `a4c2d60`
core/partitioning/shape_analysis.cpp
Outdated
```cpp
if (partitioning_info.truncate_long_and_double) {
  for (size_t i = 0; i < seg_block.inputs().size(); ++i) {
    if (ivalues_maps[seg_block.raw_inputs()[i]].isTensor()) {
      auto cur_ivalue = ivalues_maps[seg_block.raw_inputs()[i]];
      at::ScalarType t = cur_ivalue.toTensor().scalar_type();
      if (t == at::kLong) {
        // we add a cast operation to cast the type to Int64
        auto cast_node = createCastNode(seg_block, i, true, target_device);
        seg_block.g()->prependNode(cast_node);
        seg_block.inputs()[i]->replaceAllUsesAfterNodeWith(cast_node, cast_node->outputs()[0]);
      }
    }
  }
}
```
Is this just linter formatting changes?
I manually made the formatting changes to reduce the redundancy of the `if` statements, but they should be functionally equivalent to the previous version.
- Address review comments
- Improve documentation and logging messages
- Restructure casting function to allow for casting of variable data types
- Add casting for `at::kByte` segment block inputs as well as segment block outputs
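A minimal model of the restructured behavior described in these commits (hypothetical names, for illustration only): the same cast mapping is applied to a segment block's inputs and to its outputs. The real implementation rewrites TorchScript graph nodes rather than dtype strings:

```python
# Hypothetical model of casting both boundaries of a segment block;
# the actual torch_tensorrt code inserts cast nodes into the graph.
CAST_MAP = {"int64": "int32", "int8": "int32"}

def cast_segment_boundaries(input_dtypes, output_dtypes):
    """Widen/truncate unsupported dtypes on both boundaries of a segment."""
    recast = lambda dtypes: [CAST_MAP.get(t, t) for t in dtypes]
    return recast(input_dtypes), recast(output_dtypes)
```

Casting outputs as well as inputs is what allows a Byte tensor produced by a Torch segment to be handed safely to a following TensorRT segment.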
LGTM
Description
- Update the translation dictionary between Torch and TensorRT types to include `at::kByte`
- Allow `castNode` to force creation of a new node and avoid searching for an existing one to convert

Error displayed when passing `Int8` inputs to a non-quantized TRT Engine: (screenshot not captured)

With this PR, GPT-2 now compiles and runs inference successfully.
Fixes #1455
Type of change
Checklist: