
The result from open_llm_leaderboard is not as expected. #48

Open
chi2liu opened this issue Jun 13, 2023 · 7 comments
@chi2liu commented Jun 13, 2023

open_llm_leaderboard has updated the results for open-llama-3b and open-llama-7b.

[image: leaderboard scores for open-llama-3b and open-llama-7b]

These results are much worse than llama-7b's and do not match expectations. Is it because of the fast tokenizer issue mentioned in the documentation?

@gjmulder

Relative scores compared to llama-7b:

[image: relative scores of open-llama-3b and open-llama-7b vs. llama-7b]

There's a clear performance hit on the multi-shot tasks compared to llama-7b.

@young-geng (Contributor)

This is likely due to the auto-converted fast tokenizer. I've created an issue here.
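In the meantime, a minimal sketch of the usual workaround (assuming the openlm-research/open_llama_7b checkpoint on the Hub): load the slow SentencePiece tokenizer explicitly instead of the auto-converted fast one.

```python
from transformers import AutoTokenizer

# Force the original (slow) SentencePiece tokenizer; the auto-converted
# "fast" tokenizer is the suspected cause of the degraded scores.
tokenizer = AutoTokenizer.from_pretrained(
    "openlm-research/open_llama_7b",  # assumed Hub id for OpenLLaMA 7B
    use_fast=False,
)
```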

@c0bra commented Jun 22, 2023

@young-geng It looks like the issue in that repo was fixed last week. I assume this could be retried now? (@chi2liu)

@codesoap

@c0bra There has not been a new release of huggingface/transformers since the fix was merged: https://github.com/huggingface/transformers/releases. I assume we still need to wait for that.

The existing OpenLLaMA entries on the leaderboard disappeared around a week ago as well. Maybe there is a connection: the leaderboard maintainers may have removed the results because they learned of the bug and are now waiting for the next release of huggingface/transformers. That's just my guess, though.
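Until a fixed release ships, one way to check whether your installed transformers still exhibits the bug is to compare the slow and fast tokenizations of the same text (again assuming the openlm-research checkpoint); diverging ids would mean the converted tokenizer is still broken.

```python
from transformers import AutoTokenizer

repo = "openlm-research/open_llama_7b"  # assumed Hub id, as above
text = "The quick brown fox jumps over the lazy dog."

slow_ids = AutoTokenizer.from_pretrained(repo, use_fast=False).encode(text)
fast_ids = AutoTokenizer.from_pretrained(repo, use_fast=True).encode(text)

# Matching ids suggest the fix is in your installed version; a mismatch
# means the auto-converted fast tokenizer is still mis-tokenizing.
print("match" if slow_ids == fast_ids else "MISMATCH")
```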

@young-geng (Contributor)

@codesoap Yeah, I've contacted the leaderboard maintainers to request a re-evaluation, and the model should be in the queue right now.

@gjmulder

open-llama-7b-open-instruct is pending evaluation on the open_llm_leaderboard. They confirmed that they fine-tuned with `use_fast=False`.

@HeegyuKim

The OpenLLaMA 3B result is not in the pending queue. Is there any reason for that?
