Add check for float32 in IsBinaryTensor #642
Merged
What do these changes do?
When we try to convert a float16 model, the MLIR converter crashes in `IsBinaryTensor`, which assumes the element type is float32. With this fix, float16 models still fail to convert, but that is now due to TensorFlow not supporting them: the conversion hits an assertion failure somewhere in a pattern in `prepare_tf.cc`. A sketch of the new check is shown below.
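For reference, the added guard might look roughly like the following minimal sketch. The signature of `IsBinaryTensor` and the ±1 check are assumptions based on the PR title and the meaning of "binary" in this codebase, not copied from the actual code:

```cpp
#include "mlir/IR/BuiltinAttributes.h"

// Sketch of the guard only; the real IsBinaryTensor has more context
// around it, and its exact signature may differ.
bool IsBinaryTensor(mlir::DenseElementsAttr attr) {
  // New check: reject anything that is not float32. Previously the
  // code below assumed float32 and crashed on float16 constants.
  if (!attr.getType().getElementType().isF32()) return false;
  // A binary tensor contains only the values +1 and -1.
  for (float v : attr.getValues<float>()) {
    if (v != 1.0f && v != -1.0f) return false;
  }
  return true;
}
```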
Alternatively, we could try to support bitpacking float16. I did not see a simple way of automatically getting float32 values from a float16 `DenseElementsAttr`, so supporting that would require explicit `if (float16) { ... }` blocks both here and in several other functions that access the filter data (the bitpacking functions); a hypothetical version of such a branch is sketched after this paragraph. In any case, that would not help us yet, since the non-bconv parts of the network would still crash in the TensorFlow part of the code.
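Below is an illustration of what such a float16 branch could look like, going through `APFloat` to widen half-precision values to float32. This is not part of the PR; `GetFilterValues` is an invented name, and every bitpacking helper that reads filter data would need a branch along these lines, which is the duplication described above:

```cpp
#include <vector>

#include "llvm/ADT/APFloat.h"
#include "mlir/IR/BuiltinAttributes.h"

// Hypothetical helper (not in the PR): recover float32 values from a
// float16 or float32 DenseElementsAttr.
std::vector<float> GetFilterValues(mlir::DenseElementsAttr attr) {
  std::vector<float> values;
  mlir::Type elem_type = attr.getType().getElementType();
  if (elem_type.isF16()) {
    for (llvm::APFloat v : attr.getValues<llvm::APFloat>()) {
      bool loses_info = false;
      // Widen half -> single; widening is exact, so loses_info stays false.
      v.convert(llvm::APFloat::IEEEsingle(),
                llvm::APFloat::rmNearestTiesToEven, &loses_info);
      values.push_back(v.convertToFloat());
    }
  } else if (elem_type.isF32()) {
    for (float v : attr.getValues<float>()) values.push_back(v);
  }
  return values;
}
```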
How Has This Been Tested?
Tested on a float16 model: it still crashes, but in TensorFlow code rather than in our code.
Related issue number
@lgeiger already commented on a TF issue about this long ago: tensorflow/tensorflow#46380