Have I understood correctly that you only implement Adversaries 1 and 2 of Yeom et al. (in yeom_membership_inference)? If so, was there a technical reason the colluding adversary (Adversary 3) was not included in your analysis?
Do you mean the colluding adversary? That requires a very different, and much stronger, threat model in which the adversary controls the data owner's training process. It is an interesting attack, but in practice an adversary who can control the training algorithm can usually do far worse than merely enabling inference attacks.
Yes, I agree that it would be an unlikely attack in practice. It is interesting primarily because it demonstrates that overfitting isn't strictly required (the attack works on MNIST), so I imagine it would exhibit a different relationship between accuracy loss/membership advantage and epsilon.
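For readers landing on this thread: the loss-threshold attack being discussed (Yeom et al.'s Adversary 1) can be sketched in a few lines. This is an illustrative sketch only, not the repository's actual implementation; the function and variable names here are made up for the example. The adversary predicts "member" whenever an example's loss falls below a threshold, typically the model's average training loss, and the attack's strength is measured as membership advantage (true-positive rate minus false-positive rate).

```python
import numpy as np

def loss_threshold_attack(per_example_losses, threshold):
    """Predict 'member' for any example whose loss is at or below
    the threshold (e.g. the model's average training loss).
    Returns a boolean array of membership predictions."""
    return np.asarray(per_example_losses) <= threshold

def membership_advantage(member_losses, nonmember_losses, threshold):
    """Membership advantage = TPR - FPR, following Yeom et al.'s
    definition: how much better than chance the adversary does."""
    tpr = np.mean(loss_threshold_attack(member_losses, threshold))
    fpr = np.mean(loss_threshold_attack(nonmember_losses, threshold))
    return tpr - fpr

# Toy example: members tend to have lower loss than non-members.
members = np.array([0.10, 0.20, 0.30])
nonmembers = np.array([0.80, 0.90, 1.00])
adv = membership_advantage(members, nonmembers, threshold=0.5)
print(adv)  # → 1.0 (attack separates the toy groups perfectly)
```

The colluding adversary (Adversary 3) is a different beast entirely: it requires the adversary to choose the training algorithm itself, encoding membership information into the model's parameters, which is why it can succeed even without overfitting.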