I see you have a `qconv2d_batchnorm` layer which folds the weights of the two layers and then quantizes. We're bringing support for that to hls4ml, and it should help us save some resources & latency.

I'm wondering, do you plan to add the equivalent combined `QDense` + `BatchNormalization` layer to QKeras?
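For reference, here is roughly the folding I have in mind, sketched in plain NumPy. The function name is hypothetical and the `eps` default simply mirrors Keras's `BatchNormalization`, so treat this as an illustration of the affine algebra rather than QKeras's actual implementation:

```python
import numpy as np

def fold_dense_batchnorm(W, b, gamma, beta, mean, var, eps=1e-3):
    """Fold BatchNormalization into a preceding Dense layer.

    Dense:     y = x @ W + b        (W: [in, out], b: [out])
    BatchNorm: z = gamma * (y - mean) / sqrt(var + eps) + beta

    Both maps are affine, so they collapse into a single Dense
    layer with folded kernel W' and bias b'.
    (Illustrative sketch only, not QKeras's implementation.)
    """
    scale = gamma / np.sqrt(var + eps)   # per-output-unit scale, shape [out]
    W_folded = W * scale                 # broadcasts over the input axis
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded
```

In a fused layer, the folded kernel and bias would then be passed through the weight quantizer, analogous to what `qconv2d_batchnorm` does for the convolutional case.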
@thesps Great, glad to see this helps! Yes, `QDenseBatchnorm` is one of our TODO items, but we have other higher-priority tasks at the moment.