
Conversation

ddavis-2015
Member

@tensorflow/micro

Allow for models >2 GiB (and less than 4 GiB) in size, as generated by the TfLite converter.
Parse TfLite schema Buffer tables where the offset and size fields are active.
Parse TfLite schema Operator tables where the large_custom_options_offset and large_custom_options_size fields are active.
Correctly process the Offline Memory Planner metadata buffer.
Correctly process the compression metadata buffer.
Add unit tests for all of the above.

bug=fixes #3196
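The large_custom_options handling described above could be sketched roughly as follows. The struct and function names here are hypothetical stand-ins for the generated flatbuffer accessors, not the actual TFLM implementation; the point is only the two-representation dispatch, where 64-bit offsets are taken from the start of the model file:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for the relevant TfLite schema Operator fields. When
// a model exceeds 2 GiB, the converter stores custom options past the
// flatbuffer proper and records a 64-bit file-relative offset instead of an
// inline vector.
struct OperatorFields {
  const uint8_t* custom_options;         // inline vector; used by small models
  size_t custom_options_size;
  uint64_t large_custom_options_offset;  // non-zero when the large form is active
  uint64_t large_custom_options_size;
};

// Resolve whichever representation is active to a (pointer, length) pair.
// model_start is the first byte of the serialized model file.
const uint8_t* ResolveCustomOptions(const OperatorFields& op,
                                    const uint8_t* model_start,
                                    size_t* out_size) {
  if (op.large_custom_options_offset != 0) {
    // Large form: data is addressed from the start of the model file.
    *out_size = static_cast<size_t>(op.large_custom_options_size);
    return model_start + op.large_custom_options_offset;
  }
  // Small form: data is the inline flatbuffer vector.
  *out_size = op.custom_options_size;
  return op.custom_options;
}
```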

@ddavis-2015 ddavis-2015 requested a review from a team as a code owner September 21, 2025 02:55
@ddavis-2015 ddavis-2015 self-assigned this Sep 21, 2025
@ddavis-2015 ddavis-2015 added bug Something isn't working ci:run_full labels Sep 21, 2025
…rse the model correctly for cortex-m3-qemu builds.

Update TestAllocatePersistentTfLiteTensor test to match updated test_conv_model.tflite model.
Collaborator

@veblush veblush left a comment

Thanks for the PR! I've left a couple of minor comments regarding __func__ and 2Gb.

Beyond those, I have two broader suggestions for improvement:

Optimize GetBufferStartFromRootPointer Calls

I'm concerned that this function's internal loop could increase initialization time. Since models larger than 2 GiB are a niche use case, could we avoid this call whenever it isn't needed?
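One way to realize this suggestion would be to compute the flatbuffer start lazily and cache it, so the scan in GetBufferStartFromRootPointer runs at most once, and not at all for models that never use the offset/size form. This is a minimal sketch under that assumption, not the actual TFLM change; the resolver callback stands in for the real flatbuffers call:

```cpp
#include <cstdint>
#include <functional>
#include <utility>

// Hypothetical lazy cache: the (potentially expensive) resolver runs only on
// first use, so the common <2 GiB path never pays for it.
class FlatbufferStartCache {
 public:
  explicit FlatbufferStartCache(std::function<const uint8_t*()> resolver)
      : resolver_(std::move(resolver)) {}

  // Called only by code paths that actually need file-relative addressing.
  const uint8_t* Get() {
    if (start_ == nullptr) {
      start_ = resolver_();  // expensive scan happens at most once
    }
    return start_;
  }

 private:
  std::function<const uint8_t*()> resolver_;
  const uint8_t* start_ = nullptr;
};
```

A model with no offset/size buffers simply never calls Get(), so the resolver is never invoked.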

Simplify Buffer Access Logic

The current approach to accessing the buffer requires handling two separate cases, which leads to some repetitive and lengthy code. To improve readability and maintainability, would it be possible to encapsulate this logic in a helper function or a macro?
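The helper the review asks for might look something like this. The Buffer struct here is a hypothetical stand-in for the generated schema accessor, chosen only to show how one function can hide the inline-data vs. offset/size split from call sites:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical shape of the two buffer representations in the schema.
struct Buffer {
  const uint8_t* data;  // inline vector; null when offset/size are active
  size_t data_size;
  uint64_t offset;      // file-relative; non-zero for the >2 GiB form
  uint64_t size;
};

// Single helper so call sites need not repeat the two-case logic.
inline const uint8_t* BufferData(const Buffer& b, const uint8_t* file_start,
                                 size_t* out_size) {
  if (b.offset != 0) {
    *out_size = static_cast<size_t>(b.size);
    return file_start + b.offset;
  }
  *out_size = b.data_size;
  return b.data;
}
```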

}

TfLiteStatus MicroInterpreter::PrepareNodeAndRegistrationDataFromFlatbuffer() {
  // needed for custom options when model is larger than 2Gb
Collaborator

Is it 2GiB?

  const uint8_t* flatbuffer_start =
      flatbuffers::GetBufferStartFromRootPointer(model_);
  if (flatbuffer_start == nullptr) {
    MicroPrintf("%s: Unable to locate flatbuffer start", __func__);
Collaborator

Isn't it okay not to have __func__ here?

@veblush veblush added the ci:run label Oct 2, 2025
@TFLM-bot TFLM-bot removed the ci:run label Oct 2, 2025
@ddavis-2015 ddavis-2015 marked this pull request as draft October 2, 2025 23:17
Development

Successfully merging this pull request may close these issues.

Support TfLite schema buffer and custom options offsets