Raw scores? #10
Hi @WesleyYue, here are the Gemini scores for each task category. For the other models, you can find the results in the appendix of our paper.
We have also tested beyond 128K, but we received a lot of API errors for lengths > 256K.
Thank you! I was referring to the individual task scores. The paper seems to show only averages across each category (for example, NIAH is an average of 8 scores, Aggregation is an average of 2, etc.). The API erroring out beyond 256K is actually a pretty interesting data point. I had assumed >128K was skipped due to cost constraints.
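For clarity, the category averages being discussed can be sketched as a simple mean over per-task scores. The task names and values below are made up for illustration, not actual benchmark results:

```python
# Hypothetical illustration of how per-category averages could be
# derived from individual task scores. All numbers here are invented
# placeholders, not results from the paper.
task_scores = {
    "niah": [96.0, 95.2, 94.8, 93.1, 92.5, 91.0, 90.4, 89.7],  # 8 NIAH variants
    "aggregation": [71.3, 68.9],                                # 2 aggregation tasks
}

# Each category score reported is the mean of its tasks' scores.
category_averages = {
    category: sum(scores) / len(scores)
    for category, scores in task_scores.items()
}

for category, avg in category_averages.items():
    print(f"{category}: {avg:.2f}")
```

Publishing the full `task_scores`-style breakdown per model would answer the question above directly.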
Hi @hsiehjackson,
Hey authors, really nice work!
The paper shows scores that are averaged across tasks for each test. Is the full set of task scores per model available anywhere? In particular, for Gemini, only the final averaged score is available on GitHub.
Also, are there any plans to test beyond 128K for Gemini? Given that the test doesn't saturate at 128K for Gemini, that seems important.