Yeom Colluding adversary #18

Open
TTitcombe opened this issue Feb 11, 2020 · 2 comments

@TTitcombe

Have I understood correctly that you only implement Adversaries 1 and 2 of Yeom et al. (in yeom_membership_inference)? If so, was there a technical reason the colluding adversary (adversary 3) was not included in your analysis?
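
For context, the core of Yeom et al.'s membership inference is a threshold test on per-record loss: predict "member" when a record's loss falls below the model's average training loss. A minimal sketch, assuming per-record losses are already computed (illustrative only, not the repo's actual yeom_membership_inference implementation):

```python
import numpy as np

def loss_threshold_attack(per_record_loss, avg_train_loss):
    """Yeom-style threshold test: flag a record as a training-set member
    when its loss is at most the model's average training loss."""
    per_record_loss = np.asarray(per_record_loss)
    return (per_record_loss <= avg_train_loss).astype(int)

# Toy usage: five candidate records, average training loss of 0.1.
losses = [0.02, 0.5, 0.08, 1.3, 0.11]
print(loss_threshold_attack(losses, avg_train_loss=0.1))  # [1 0 1 0 0]
```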

@evansuva

Do you mean the colluding adversary? That requires a very different, and much stronger, threat model in which the adversary controls the data owner's training process. It is an interesting attack, but in practice an adversary who controls the training algorithm can, in most cases, do far worse than just enable inference attacks.
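
To make the stronger threat model concrete, here is one toy way a colluding trainer could leak membership (a sketch of my own, not Yeom et al.'s actual construction; train_step and per_record_loss are caller-supplied stand-ins): after an honest training pass, it keeps taking extra steps on the member records until every member's loss drops below a fixed threshold, so a loss-threshold test flags members near-perfectly even without natural overfitting.

```python
import numpy as np

def colluding_train(train_step, per_record_loss, members, tau=0.05, max_rounds=100):
    """Illustrative colluding trainer. After an honest training pass, it keeps
    fine-tuning on the member records until each member's loss is below tau,
    so a loss-threshold test with threshold tau identifies members
    near-perfectly, with no natural overfitting required."""
    model = train_step(None, members)           # honest training pass
    for _ in range(max_rounds):
        if np.all(per_record_loss(model, members) < tau):
            break                               # all members "memorized"
        model = train_step(model, members)      # extra steps on members only
    return model
```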

@TTitcombe
Author

Yes, I agree that it would be an unlikely attack in practice. It's interesting primarily because it demonstrates that overfitting isn't strictly required (the attack works on MNIST), so I imagine it would exhibit a different relationship between accuracy loss / membership advantage and epsilon.
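
For reference, Yeom et al. define membership advantage as the attack's true-positive rate minus its false-positive rate, and show that an ε-differentially-private learner bounds it by e^ε − 1. A quick sketch of the empirical quantity (variable names are mine):

```python
import numpy as np

def membership_advantage(preds, is_member):
    """Empirical membership advantage (Yeom et al.): TPR - FPR of the attack."""
    preds, is_member = np.asarray(preds), np.asarray(is_member, dtype=bool)
    tpr = preds[is_member].mean()    # fraction of members flagged as members
    fpr = preds[~is_member].mean()   # fraction of non-members flagged
    return tpr - fpr

# Yeom et al.'s bound for an eps-differentially-private learner:
# advantage <= exp(eps) - 1 (only meaningful for small eps).
eps = 0.5
print(np.exp(eps) - 1)   # 0.6487...
```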
