Significant Accuracy Decrease After FHE Execution #875
Comments
Hello, could we have a GitHub repo to reproduce the problem, please? We need some code to reproduce. Thanks
Hi @Sarahfbb, as @bcm-at-zama says, it's much easier for us to help with some simple code to reproduce. I will try to answer what I can from what you wrote:
That is not ideal, since it won't represent the actual distribution of your input; this can lead to a significant quantization error. Ideally we prefer a small representative dataset here: basically a few random points taken from the training set.
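As an illustration, here is a minimal sketch of compiling with a representative calibration set instead of a single random dummy input, using Concrete ML's `compile_torch_model`. All names (`model`, `X_train`) are placeholders, not code from the reporter's repo:

```python
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# Placeholders: in practice, use your trained model and real training data.
model = torch.nn.Sequential(
    torch.nn.Linear(100, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
X_train = numpy.random.randn(1000, 100).astype(numpy.float32)

# Calibrate on ~100 points sampled from the training set, not on a single
# random input, so the quantization ranges match the real data distribution.
rng = numpy.random.default_rng(0)
calibration_set = X_train[rng.choice(len(X_train), size=100, replace=False)]

quantized_module = compile_torch_model(model, calibration_set, n_bits=4)
```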
That is correct: the first drop comes from quantization. You can study the effect of quantization on your model by playing around with the quantization bit-width (the `n_bits` compilation parameter).
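A hedged sketch of such a study, continuing from the snippet above (`X_test` and `y_test` are again placeholders): compile at several bit-widths and measure clear, non-FHE accuracy with `fhe="disable"`:

```python
X_test = numpy.random.randn(200, 100).astype(numpy.float32)
y_test = numpy.random.randint(0, 10, size=200)

# Higher bit-widths may need rounding_threshold_bits to remain compilable.
for n_bits in (2, 3, 4):
    qm = compile_torch_model(model, calibration_set, n_bits=n_bits)
    preds = qm.forward(X_test, fhe="disable").argmax(axis=1)
    print(f"{n_bits} bits -> accuracy: {(preds == y_test).mean():.2%}")
```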
That drop isn't expected unless you have changed the values of the compilation parameters that control FHE correctness (such as `p_error`).
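One quick check, reusing the compiled module from the sketches above: compare `fhe="simulate"` against `fhe="execute"` on a few samples. With the default `p_error`, the two should agree closely; a large gap would point at a configuration problem rather than quantization:

```python
few_samples = X_test[:10]
sim = qm.forward(few_samples, fhe="simulate").argmax(axis=1)
enc = qm.forward(few_samples, fhe="execute").argmax(axis=1)  # real FHE, slow
print("simulate:", sim)
print("execute: ", enc)
```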
Thank you so much for your reply! I've changed the visibility of the repository, and here it is: https://github.com/Sarahfbb/FHE/tree/main/S. The workflow is: Extracted_features, S_training, S_qat_training, S_compilation, Batch_test. But I don't think it's simulation: I tried the FHE mode directly.
Thanks for the code. I can see that you use a polynomial approximation for the activation function. If you did that on purpose to make the FHE runtime faster, it's not going to work; just using a simple torch activation function like ReLU or sigmoid will run fine. I am not sure where you are evaluating the quantized model from concrete-ml. I see you are evaluating the quantized torch model built with Brevitas, so I assume the 50% is what you got from that evaluation? Once you compile, you should get a quantized module that you can evaluate directly.
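For illustration, a minimal sketch of the suggested change on a hypothetical model (not the reporter's actual architecture): replace the polynomial activation with a standard torch non-linearity, which Concrete ML implements with table lookups:

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(100, 64)
        self.act = nn.ReLU()  # instead of a polynomial approximation
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))
```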
Sure, I'll try modifying the activation function as you advise. Thanks a lot!
Summary
What happened/what you expected to happen?
Description
We've observed significant accuracy discrepancies when running our model with different FHE settings. The original PyTorch model achieves 63% accuracy. With FHE disabled, the accuracy drops to 50%, and with FHE execution enabled it decreases further to 32%. The compilation uses a dummy input of shape (1, 100) with random values (numpy.random.randn(1, 100).astype(numpy.float32)). Since the accuracy with FHE disabled matches the quantized model's accuracy, the loss from 63% to 50% is likely due to quantization. However, the substantial further drop to 32% when enabling FHE execution points to a potential issue with the FHE implementation or configuration that requires further investigation.
Step by step procedure someone should follow to trigger the bug:
minimal POC to trigger the bug
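The report does not include one; a hypothetical sketch of the comparison it describes might look like the following (the model and test data are placeholders; the dummy calibration input matches the description above):

```python
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# Placeholders standing in for the trained model and labelled test data.
model = torch.nn.Sequential(
    torch.nn.Linear(100, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
X_test = numpy.random.randn(20, 100).astype(numpy.float32)
y_test = numpy.random.randint(0, 10, size=20)

# Compile with the single random dummy input described in the report.
dummy_input = numpy.random.randn(1, 100).astype(numpy.float32)
qm = compile_torch_model(model, dummy_input, n_bits=4)

# Compare the three evaluation modes on the same batch.
for mode in ("disable", "simulate", "execute"):
    preds = qm.forward(X_test, fhe=mode).argmax(axis=1)
    print(mode, "accuracy:", (preds == y_test).mean())
```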