
the results of the coco dataset #2

Open
maming109 opened this issue Dec 12, 2024 · 1 comment

@maming109

Hello author, the VCD paper reports ACC 80.88 and F1 81.33 for LLaVA-1.5-7B on the COCO-Adversarial split of POPE, but your paper reports 75.6 and 78.14 for the same setting. Were you able to reproduce the VCD numbers? When I reproduced the VCD paper myself, I got 80.28 and 79.42, so I am currently confused and hope you can clarify this discrepancy.
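
(For reference, POPE is a binary yes/no benchmark, so ACC and F1 both follow directly from the confusion counts, which is why small setup differences shift the two metrics together. A minimal sketch of how they are typically computed, with "yes" treated as the positive class; the record format here is an assumption, not the benchmark's actual scoring script:)

```python
def pope_metrics(preds, labels):
    """Compute POPE-style accuracy and F1 from parallel lists of
    "yes"/"no" predictions and ground-truth labels."""
    tp = sum(p == "yes" and l == "yes" for p, l in zip(preds, labels))
    fp = sum(p == "yes" and l == "no" for p, l in zip(preds, labels))
    fn = sum(p == "no" and l == "yes" for p, l in zip(preds, labels))
    tn = sum(p == "no" and l == "no" for p, l in zip(preds, labels))
    acc = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return acc, f1
```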

@sangminwoo
Owner

Hi @maming109,

Thanks for your attention to our work!

The differences between the VCD results in our paper and those in the original VCD paper are primarily due to hyperparameter variations. For example, we set the diffusion noise steps to 500, whereas the VCD paper mentions using either 500 or 999. Additionally, our seed was fixed at 42 in the code, while the VCD paper used 55. Beyond these, various decoding parameters could also influence the results. You can find the default configuration we used in our experiments in the GitHub repository under eval_bench > scripts > pope_eval.sh.
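
For illustration, here is a minimal sketch of the two settings mentioned above: seed fixing and forward-diffusion noising of the input image. The function names and the linear beta schedule are assumptions for this sketch, not the repository's actual code; please refer to pope_eval.sh for the real defaults.

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix all RNG sources for reproducible decoding
    (we use seed=42; the VCD paper uses 55)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


def add_diffusion_noise(image: torch.Tensor, noise_step: int = 500,
                        total_steps: int = 1000) -> torch.Tensor:
    """Sample q(x_t | x_0) at t = noise_step under a linear beta schedule.

    VCD distorts the image via forward diffusion; we use noise_step=500,
    while the VCD paper mentions 500 or 999. The standard DDPM linear
    schedule here is an assumption, not necessarily what either
    codebase implements.
    """
    betas = torch.linspace(1e-4, 0.02, total_steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[noise_step - 1]
    noise = torch.randn_like(image)
    return alpha_bar.sqrt() * image + (1.0 - alpha_bar).sqrt() * noise
```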

We fixed these hyperparameters in both methods to ensure a fair comparison.
