
Extend quantiser support so as to accelerate more binary models. #668

Merged — 1 commit merged into main from quantize-bool-input on Jul 2, 2021

Conversation

AdamHillier
Contributor

What do these changes do?

This PR extends the LCE converter with patterns that support converting tf.where-style binary quantisers. It also adds support for binary input/output of LceQuantize/LceDequantize.

Note that in larq/larq#677 we are moving towards implementing the Larq quantisers with tf.where instead of the current tf.sign implementation; the main change in this PR is adding support for converting these tf.where-style quantisers.
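For readers less familiar with the distinction, here is a minimal NumPy sketch of the two quantiser styles (function names are illustrative, not the Larq API). Both binarise to ±1, but only the tf.where form exposes an explicit boolean condition that the converter can match:

```python
import numpy as np

def sign_style_quantise(x):
    # tf.sign-style: take the sign of the input, mapping 0 to +1.
    return np.where(np.sign(x) >= 0, 1.0, -1.0)

def where_style_quantise(x):
    # tf.where-style: an explicit boolean condition selects +1 or -1.
    return np.where(x >= 0, 1.0, -1.0)
```

For the standard sign quantiser the two are equivalent; the tf.where form simply makes the selection condition explicit in the graph.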

The second change adds support for boolean input, meaning that a wider variety of binary quantisers will be accelerated by LCE. For example, the following wacky quantiser will now convert successfully:

lambda x: tf.where(tf.logical_or(tf.abs(x) < 0.5, tf.abs(x) > 1.0), 1.0, -1.0)


With these changes, any binary quantiser that can be implemented as tf.where(boolean_condition, 1, -1) can be converted into an LceQuantize op (and consequently a subsequent convolution can be converted to a BConv2D and thus accelerated). The boolean_condition itself will be implemented with TFL ops, as in the example above, but since quantisation is in general so quick compared to the binary convolution, this doesn't present much of a performance issue.
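The split described above can be sketched in plain NumPy (hypothetical helper names, for illustration only): the boolean condition is evaluated by ordinary ops, and a stand-in for LceQuantize then maps the boolean result to ±1:

```python
import numpy as np

def wacky_condition(x):
    # The boolean condition from the wacky quantiser above; in the
    # converted model this part remains ordinary TFL ops.
    return (np.abs(x) < 0.5) | (np.abs(x) > 1.0)

def lce_quantize_bool(cond):
    # Hypothetical stand-in for LceQuantize with boolean input:
    # True -> +1.0, False -> -1.0 (the real op bit-packs the result).
    return np.where(cond, 1.0, -1.0)

x = np.array([0.2, 0.7, 1.5, -0.3])
y = lce_quantize_bool(wacky_condition(x))
# y is [1.0, -1.0, 1.0, 1.0]
```

Only the second step needs to be a binary-specific LCE op; everything feeding it can stay generic.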

How Has This Been Tested?

MLIR test cases have been added; the end2end tests have been extended.

Benchmark Results

N/A.

Related issue number

larq/larq#677

@AdamHillier AdamHillier requested a review from a team June 21, 2021 15:17
@AdamHillier AdamHillier force-pushed the quantize-bool-input branch from edfd15a to 6f85162 Compare June 21, 2021 15:21
Collaborator

@Tombana Tombana left a comment


Looks great!

@AdamHillier AdamHillier requested a review from a team June 23, 2021 14:33
Collaborator

@Tombana Tombana left a comment


LGTM.
Just a note to ourselves: when merging this into our private fork it will probably not cause any automatic merge conflicts, but we'll need to update the micro quantization kernels (should be fairly trivial, just by copying the changes from the public quantization kernel).

Member

@lgeiger lgeiger left a comment


Nicely done!

It took me a while to wrap my head around boolean input for lq.quantize, but I think it makes a lot of sense and is much easier to maintain than passing an additional threshold attribute to the op or doing something similar.

I just have some additional comments regarding broadcasting and the matching of tf.where.

@AdamHillier AdamHillier force-pushed the quantize-bool-input branch 2 times, most recently from 2e18538 to 962a202 Compare June 24, 2021 21:30
@AdamHillier AdamHillier requested a review from lgeiger June 24, 2021 21:35
Member

@lgeiger lgeiger left a comment


This is great, thank you so much for figuring this out!

I just have a few questions to understand the code, other than that this looks great to me 🚀

@AdamHillier AdamHillier force-pushed the quantize-bool-input branch from 962a202 to bd04370 Compare July 2, 2021 13:50
Add the ability to convert `tf.where`-style binary quantisers, and
add support for boolean input to `LceQuantize` and `LceDequantize`.
@AdamHillier AdamHillier force-pushed the quantize-bool-input branch from bd04370 to 0f0db5f Compare July 2, 2021 13:51
@AdamHillier AdamHillier enabled auto-merge (squash) July 2, 2021 14:21
@AdamHillier AdamHillier merged commit 95199a7 into main Jul 2, 2021
@AdamHillier AdamHillier deleted the quantize-bool-input branch July 2, 2021 18:21
@lgeiger lgeiger added the feature New feature or request label Jul 6, 2021