Can differential privacy's protective effect be verified? #108
Additionally, there is a puzzling issue in this tutorial. For the CIFAR-10 dataset, the training accuracy is relatively high (over 80%), but the test accuracy is quite poor (under 50%). This is overfitting, and such a model has little practical value. However, if we improve the test accuracy by changing the training architecture or hyperparameters (learning rate, batch size), the resulting MIA ROC curve becomes almost indistinguishable from random guessing, and the MIA attack seems to lose its meaning. How should we understand this situation?
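The connection between overfitting and MIA success can be made concrete with a toy experiment. The sketch below (my own illustration, not code from this repository) implements the simplest loss-threshold attack: its AUC is the probability that a randomly chosen training member has lower loss than a randomly chosen non-member. When the member and non-member loss distributions are well separated (an overfit model), the AUC is high; when they overlap (a well-generalized model), the AUC sits near 0.5, i.e. random guessing. The loss distributions here are synthetic Gaussians chosen only to illustrate the two regimes.

```python
import random

def mia_auc(member_losses, nonmember_losses):
    """AUC of a loss-threshold membership inference attack:
    the probability that a random member has strictly lower loss
    than a random non-member (ties count as one half)."""
    wins = ties = 0
    for m in member_losses:
        for n in nonmember_losses:
            if m < n:
                wins += 1
            elif m == n:
                ties += 1
    total = len(member_losses) * len(nonmember_losses)
    return (wins + 0.5 * ties) / total

random.seed(0)

# Overfit regime: members have noticeably lower loss than non-members.
members = [random.gauss(0.5, 0.3) for _ in range(500)]
nonmembers = [random.gauss(1.5, 0.3) for _ in range(500)]
print("overfit model AUC:", round(mia_auc(members, nonmembers), 2))

# Well-generalized regime: the two loss distributions overlap,
# so the attack degrades toward random guessing (AUC near 0.5).
members2 = [random.gauss(1.0, 0.3) for _ in range(500)]
nonmembers2 = [random.gauss(1.0, 0.3) for _ in range(500)]
print("generalized model AUC:", round(mia_auc(members2, nonmembers2), 2))
```

This is exactly the behavior described above: reducing the train/test gap shrinks the attacker's signal, so an MIA ROC near the diagonal on a well-generalized model is the expected outcome rather than a failure of the attack.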
Your work is excellent and provides a great verification tool for security and privacy researchers. I would like to ask whether your method can be combined with existing differential privacy defense frameworks, such as Opacus. Would it be possible to add a tutorial demonstrating how to verify the effectiveness of differential privacy in defending against your MIA attack? Thank you!
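For context on what such a combination would test: Opacus implements DP-SGD, which per optimization step clips each per-example gradient to a fixed norm and adds calibrated Gaussian noise before averaging, bounding any single example's influence on the model (and hence the MIA signal). The sketch below is a minimal pure-Python illustration of that one aggregation step, not Opacus itself; in Opacus this is automated by wrapping the model, optimizer, and data loader with `PrivacyEngine.make_private`. The function name and the plain-list gradient representation are my own illustrative choices.

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step, as a sketch of what Opacus automates:
    clip each per-example gradient to clip_norm, sum the clipped
    gradients, add Gaussian noise scaled by noise_multiplier * clip_norm,
    and average over the batch. Gradients are plain lists of floats."""
    batch_size = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        for i, x in enumerate(g):
            summed[i] += x * scale
    sigma = noise_multiplier * clip_norm
    # Noise is added once to the clipped sum, then averaged.
    return [(s + rng.gauss(0.0, sigma)) / batch_size for s in summed]

rng = random.Random(0)
grads = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(32)]
noisy_avg = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
print(len(noisy_avg))  # one noisy averaged gradient of dimension 4
```

A tutorial along the lines requested would train one model normally and one with DP-SGD (e.g. via Opacus), run the same MIA against both, and compare the ROC curves: the DP-trained model's curve should sit closer to the random-guessing diagonal, at some cost in accuracy depending on the privacy budget.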