
Select indirect BGEMM kernels - Benchmarking grouped binary convolutions #711

Closed · simonmaurer opened this issue Feb 9, 2022 · 3 comments


simonmaurer commented Feb 9, 2022

Given the commits #549, #550, and #551, LCE supports grouped binary convolutions. This is great work, as standard TFLite still does not support the groups argument for inference: tensorflow/tensorflow#40044
I've successfully created models with appropriate channel dimensions, in which the grouped binary convolutions are correctly identified by the LCE Converter.

How can I benchmark this with the lce_benchmark_model binary? In other words, how can we select the indirect_bgemm kernels, given that the regular bgemm kernels don't support grouped convolutions?

Additionally, there is a use_reference_bconv flag in the LCE Interpreter, but I do not know what it actually does.
My assumption was that if it is set to True, the bgemm kernels from https://github.com/larq/compute-engine/tree/main/larq_compute_engine/core/bgemm are selected, and otherwise the indirect_bgemm kernels from https://github.com/larq/compute-engine/tree/main/larq_compute_engine/core/indirect_bgemm.

Update: that assumption is not correct, as use_reference_bconv is False by default, so use_reference_bconv must mean something else.


Tombana commented Feb 9, 2022

We currently don't have a CLI flag in lce_benchmark_model to choose between these. For internal benchmarks we simply replaced the registration on the following line:

    resolver->AddCustom("LceBconv2d",
                        compute_engine::tflite::Register_BCONV_2D());

with Register_BCONV_2D_OPT_INDIRECT_BGEMM.
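For reference, a minimal sketch of the swapped registration (both function names are taken from the lines above):

    resolver->AddCustom("LceBconv2d",
                        compute_engine::tflite::Register_BCONV_2D_OPT_INDIRECT_BGEMM());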

I'd welcome a PR to turn this into a command-line flag. My suggestion would be:

  • Add a bool use_indirect_bgemm (default false) argument to RegisterLCECustomOps, with another if-branch next to use_reference_bconv in lce_ops_register.h (a sketch follows below).

  • To add it as a command-line flag, I'd say the simplest approach (without modifying the TFLite BenchmarkTfLiteModel code) is to parse the command-line flags in lce_benchmark_main.cc and store the result as a global bool in that file, which can then be passed to RegisterLCECustomOps on line 26.

Note that use_reference_bconv uses core/bconv2d/reference.h, which supports 'everything': zero-padding, one-padding, and groups. The optimized implementations, however, don't support all of those.
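For the first point, a rough sketch of what the extra branch could look like, assuming the current registration shown above; Register_BCONV_2D_REF is a placeholder name for the reference-kernel registration (only Register_BCONV_2D and Register_BCONV_2D_OPT_INDIRECT_BGEMM appear in this thread):

    // Sketch for lce_ops_register.h, not the merged implementation.
    #include "tensorflow/lite/mutable_op_resolver.h"

    inline void RegisterLCECustomOps(::tflite::MutableOpResolver* resolver,
                                     const bool use_reference_bconv = false,
                                     const bool use_indirect_bgemm = false) {
      if (use_reference_bconv) {
        // Placeholder name for the reference-kernel registration.
        resolver->AddCustom("LceBconv2d",
                            compute_engine::tflite::Register_BCONV_2D_REF());
      } else if (use_indirect_bgemm) {
        resolver->AddCustom("LceBconv2d",
                            compute_engine::tflite::Register_BCONV_2D_OPT_INDIRECT_BGEMM());
      } else {
        resolver->AddCustom("LceBconv2d",
                            compute_engine::tflite::Register_BCONV_2D());
      }
      // ... any other LCE custom ops remain registered as before ...
    }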

simonmaurer commented

@Tombana thanks a lot for pointing me in the right direction.
I can do a PR that includes filtering of the arguments, so we can parse the flag (as you suggested) and remove it from argv before passing the remaining arguments to BenchmarkTfLiteModel; I assume (though I still need to verify this) that an unrecognized argument would otherwise throw an error. A sketch of that filtering follows.
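Something along these lines might work; the flag spelling, helper name, and global are placeholders, not the merged implementation:

    // Sketch for lce_benchmark_main.cc, not the merged implementation.
    #include <cstring>
    #include <string>

    // Global that would later be passed to RegisterLCECustomOps.
    bool use_indirect_bgemm = false;

    // Consume our custom flag and compact argv so BenchmarkTfLiteModel
    // never sees an argument it does not recognize. Returns the new argc.
    int FilterLceFlags(int argc, char** argv) {
      int out = 1;
      for (int i = 1; i < argc; ++i) {
        if (std::strncmp(argv[i], "--use_indirect_bgemm", 20) == 0) {
          // Treat bare "--use_indirect_bgemm" and "...=true" as enabling it.
          use_indirect_bgemm =
              std::string(argv[i]).find("=false") == std::string::npos;
        } else {
          argv[out++] = argv[i];
        }
      }
      return out;
    }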

simonmaurer commented

Closing the issue, as it has been solved by #717.
