
[QUESTION] #183

Open
ShuchunXu opened this issue May 2, 2024 · 2 comments
Labels
enhancement New feature or request

Comments


ShuchunXu commented May 2, 2024

❔ Any questions

When I use pgd.py to attack my model, I find that the model predicts every adversarial example as the last class among all classes. That is unusual; can you tell me why? @nguyenvulong @Framartin @noppelmax @khalooei
[Image: robustness_conf_matrix — confusion matrix of predictions on adversarial examples]

ShuchunXu added the enhancement label on May 2, 2024
nguyenvulong (Contributor) commented:

  1. Can you describe the results before the attack?
  2. Can you show the code snippet where you implemented the PGD attack on your model? (For reference, a generic sketch of a PGD loop is below.)
  3. Make sure there is no imbalance in the number of samples across the classes.

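This is not the repository's pgd.py, just a minimal sketch of a generic untargeted L∞ PGD loop in PyTorch to compare your code against; `model`, `images`, `labels`, and the `eps`/`alpha`/`steps` values are illustrative placeholders. A collapse of all predictions into a single class often comes from accidentally running a targeted attack (descending the loss toward one label) or from attacking inputs that are not preprocessed the way the model expects.

```python
# Minimal untargeted L-inf PGD sketch (placeholders: model, images, labels).
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-inf PGD. Inputs are assumed to lie in [0, 1]."""
    model.eval()
    loss_fn = nn.CrossEntropyLoss()

    # Random start inside the eps-ball, clipped to the valid pixel range.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Ascend the loss on the true labels (untargeted). A targeted attack
        # would instead descend the loss toward target labels; using a
        # targeted sign or target labels here by mistake can push every
        # sample toward one class.
        adv = adv.detach() + alpha * grad.sign()

        # Project back into the eps-ball around the clean images.
        adv = torch.min(torch.max(adv, images - eps), images + eps)
        adv = adv.clamp(0, 1)

    return adv.detach()
```

If your call differs from this pattern, for example by passing a target label or using the opposite gradient sign, that would be the first thing to check.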
khalooei (Contributor) commented:

Dear @ShuchunXu, as @nguyenvulong mentioned, could you please provide more detailed information regarding your evaluation process, the data, and the problem statement?
