Support TfLite schema buffer and custom options offsets #3197
Conversation
@tensorflow/micro

Allow for models >2Gb (and less than 4Gb) in size, as generated by the TfLite converter:

- Parse TfLite schema `Buffer` tables where the `offset` and `size` fields are active.
- Parse TfLite schema `Operator` tables where the `large_custom_options_offset` and `large_custom_options_size` fields are active.
- Correctly process the Offline Memory Planner metadata buffer.
- Correctly process the compression metadata buffer.
- Add unit tests for all of the above.

bug=fixes tensorflow#3196
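For illustration, the two ways an operator's custom options can be stored look roughly like this. This is a sketch, not the PR's code: it assumes `op` is a `const tflite::Operator*`, `flatbuffer_start` comes from `flatbuffers::GetBufferStartFromRootPointer()`, the accessors follow the flatbuffers-generated schema API, and sentinel handling is simplified.

```c++
// Sketch only: resolving custom options for both the classic and the
// large-model (>2Gb) storage forms. Names are illustrative.
const uint8_t* custom_data = nullptr;
size_t custom_data_size = 0;
if (op->large_custom_options_size() != 0) {
  // Large-model path: options live outside the flatbuffer tables,
  // addressed relative to the start of the serialized model.
  custom_data = flatbuffer_start + op->large_custom_options_offset();
  custom_data_size = static_cast<size_t>(op->large_custom_options_size());
} else if (op->custom_options() != nullptr) {
  // Classic path: options are an inline flatbuffer vector.
  custom_data = op->custom_options()->data();
  custom_data_size = op->custom_options()->size();
}
```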
Force-pushed from f8a8b80 to 8cfa65a: "…rse the model correctly for cortex-m3-qemu builds. Update TestAllocatePersistentTfLiteTensor test to match updated test_conv_model.tflite model." "…d test_conv_model.cc"
Thanks for the PR! I've left a couple of minor comments regarding `__func__` and 2Gb.
Beyond those, I have two broader suggestions for improvement:
Optimize GetBufferStartFromRootPointer Calls
I'm concerned that this function's internal loop could increase initialization time. Since this is for a niche use case, could we avoid the call when it isn't needed? Skipping it entirely for models that don't use offset-based buffers would be great.
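One possible shape for this, as a sketch: cache the result in a hypothetical nullptr-initialized member `flatbuffer_start_` and gate the lookup on a hypothetical `model_uses_offset_buffers_` flag set during parsing. Neither member exists in the PR; they are assumptions for illustration.

```c++
// Hypothetical caching: compute the flatbuffer start at most once, and
// only for models that actually contain offset-based buffers.
const uint8_t* MicroInterpreter::FlatbufferStart() {
  if (flatbuffer_start_ == nullptr && model_uses_offset_buffers_) {
    flatbuffer_start_ = flatbuffers::GetBufferStartFromRootPointer(model_);
  }
  return flatbuffer_start_;
}
```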
Simplify Buffer Access Logic
The current approach to accessing the buffer requires handling two separate cases, which leads to some repetitive and lengthy code. To improve readability and maintainability, would it be possible to encapsulate this logic in a helper function or a macro?
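Something along these lines might work, as a sketch: it assumes the generated `Buffer` accessors `offset()`, `size()`, and `data()`, and treats offset values 0 and 1 as schema sentinels, which is an assumption about the converter's output.

```c++
// One possible helper: collapse the inline-vector and offset/size cases
// behind a single accessor. Names are hypothetical.
struct BufferView {
  const uint8_t* data;
  size_t size;
};

inline BufferView GetBufferView(const tflite::Buffer* buffer,
                                const uint8_t* flatbuffer_start) {
  if (buffer->offset() > 1) {
    // Large-model buffer: contents sit outside the flatbuffer tables,
    // at an offset relative to the start of the serialized model.
    return {flatbuffer_start + buffer->offset(),
            static_cast<size_t>(buffer->size())};
  }
  if (buffer->data() != nullptr) {
    // Classic buffer: contents are an inline flatbuffer vector.
    return {buffer->data()->data(), buffer->data()->size()};
  }
  return {nullptr, 0};
}
```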
```c++
}

TfLiteStatus MicroInterpreter::PrepareNodeAndRegistrationDataFromFlatbuffer() {
  // needed for custom options when model is larger than 2Gb
```
Is it 2GiB?
```c++
  const uint8_t* flatbuffer_start =
      flatbuffers::GetBufferStartFromRootPointer(model_);
  if (flatbuffer_start == nullptr) {
    MicroPrintf("%s: Unable to locate flatbuffer start", __func__);
```
Isn't it okay not to have `__func__` here?