Select indirect BGEMM kernels - Benchmarking grouped binary convolutions #711
We currently don't have a CLI flag for this. The kernel is selected in compute-engine/larq_compute_engine/tflite/kernels/lce_ops_register.h, lines 31 to 32 (at commit a2611f8). I'd welcome a PR to make this into a command-line flag.
@Tombana thanks a lot for pointing me in the right direction.
Closing this issue as it has been solved by #717.
Given the commits #549, #550, and #551, LCE supports grouped binary convolutions. This is great work, as standard TFLite still does not support the `groups` argument for inference: tensorflow/tensorflow#40044. I've successfully created models with appropriate channel dimensions, in which the grouped binary convolutions are correctly identified by the LCE converter.
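For readers unfamiliar with the `groups` argument mentioned above: it splits the input channels into independent slices, each convolved only with its own slice of filters. A minimal numpy sketch of a grouped 1x1 (pointwise) convolution illustrates the semantics; this is purely illustrative and not LCE code:

```python
import numpy as np

def grouped_pointwise_conv(x, w, groups):
    """Grouped 1x1 convolution.

    x: input of shape (H, W, C_in).
    w: weights of shape (C_in // groups, C_out); column block g holds the
       filters for group g, so each group of input channels only feeds its
       own group of output channels.
    """
    h, width, c_in = x.shape
    c_out = w.shape[1]
    c_in_g = c_in // groups    # input channels per group
    c_out_g = c_out // groups  # output channels per group
    out = np.zeros((h, width, c_out))
    for g in range(groups):
        xs = x[..., g * c_in_g:(g + 1) * c_in_g]   # this group's inputs
        ws = w[:, g * c_out_g:(g + 1) * c_out_g]   # this group's filters
        out[..., g * c_out_g:(g + 1) * c_out_g] = xs @ ws
    return out

# With groups=2 and 6 input channels, output channels 0-3 depend only on
# input channels 0-2, and output channels 4-7 only on input channels 3-5.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 6))
w = rng.standard_normal((3, 8))
out = grouped_pointwise_conv(x, w, groups=2)
assert out.shape == (4, 4, 8)
```

With `groups` equal to the number of channels this reduces to a depthwise convolution; with `groups=1` it is a regular convolution.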
How can I benchmark this with the `lce_benchmark_model` binary? In other words, how can we select the `indirect_bgemm` kernels, given that the regular `bgemm` kernels don't support grouped convolutions?

Additionally, there is a flag `use_reference_bconv` in the LCE Interpreter, but I do not know what it actually means. My assumption was that if it is set to `True`, the binary `bgemm` kernels from https://github.com/larq/compute-engine/tree/main/larq_compute_engine/core/bgemm are selected, and otherwise the `indirect_bgemm` kernels from https://github.com/larq/compute-engine/tree/main/larq_compute_engine/core/indirect_bgemm.

Update: that assumption is not correct, as `use_reference_bconv` is `False` by default, so `use_reference_bconv` must mean something else.
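As background on the `bgemm` kernels discussed above: binary GEMM computes dot products of ±1-valued vectors on bitpacked data, replacing multiply-accumulate with XNOR and popcount. The following numpy sketch demonstrates just that identity (`dot = 2 * popcount(xnor) - n`); it is an illustration of the idea, not LCE's optimized kernel code:

```python
import numpy as np

def binary_dot_xnor_popcount(a_bits, b_bits, n):
    """Dot product of two ±1 vectors encoded as bitmasks.

    a_bits, b_bits: integers whose lowest n bits encode +1 (bit=1) / -1 (bit=0).
    XNOR yields 1 where the signs match; with m matches and n - m mismatches,
    the dot product is m - (n - m) = 2*m - n.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask
    return 2 * bin(xnor).count("1") - n

# Check the identity against an ordinary dot product of ±1 vectors.
rng = np.random.default_rng(0)
n = 32
a = rng.choice([-1, 1], size=n)
b = rng.choice([-1, 1], size=n)
a_bits = sum(1 << i for i, v in enumerate(a) if v == 1)
b_bits = sum(1 << i for i, v in enumerate(b) if v == 1)
assert binary_dot_xnor_popcount(a_bits, b_bits, n) == int(a @ b)
```

Both the `bgemm` and `indirect_bgemm` kernel families build on this bitpacked arithmetic; they differ in how the convolution is lowered to GEMM, not in the binary dot product itself.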