
Possibility of Re-running Only Failed Queries After Rate Limit Reached #419

Closed · hank0316 opened this issue Nov 5, 2024 · 2 comments


hank0316 commented Nov 5, 2024

Hi,

Thanks for the brilliant evaluation framework! Recently, I encountered an issue where around 40 queries failed after hitting rate limits and exhausting the maximum number of retries, while approximately 760 queries were processed successfully. It would be very cost-effective if there were an option to re-run only the failed queries instead of the entire batch.

Is there a way to achieve this?

Thanks!


hank0316 commented Nov 5, 2024

I found a potential solution! It seems we can manually remove entries from the cache file where raw_completion is null.

I'm using weighted_alpaca_eval_gpt4_turbo as my evaluator, so the cache file is located at evaluators_configs/weighted_alpaca_eval_gpt4_turbo/annotations_seed0_configs.json.

However, I'm not entirely sure if this approach is correct. Am I on the right track?
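For reference, here is a minimal sketch of that manual clean-up, assuming the cache file is a JSON list of annotation dicts and that the failed entries are exactly those whose raw_completion is null (the structure may differ between evaluator configs, so keep a backup):

```python
import json
from pathlib import Path

# Path taken from the comment above; adjust to your evaluator config.
cache_path = Path(
    "evaluators_configs/weighted_alpaca_eval_gpt4_turbo/annotations_seed0_configs.json"
)

# Back up the cache before modifying it.
cache_path.with_suffix(".json.bak").write_text(cache_path.read_text())

annotations = json.loads(cache_path.read_text())

# Drop entries whose raw_completion is null (None), i.e. the rate-limited failures,
# so that only those examples are re-annotated on the next run.
kept = [a for a in annotations if a.get("raw_completion") is not None]
print(f"Removed {len(annotations) - len(kept)} failed entries out of {len(annotations)}.")

cache_path.write_text(json.dumps(kept, indent=2))
```

On the next evaluation run, the removed examples should be treated as un-cached and queried again, while the successful annotations stay cached.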

@YannDubs
Collaborator

Yes, that works. Alternatively, you should be able to achieve this with is_store_missing_annotations=False; see here.
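A minimal sketch of that alternative, assuming alpaca_eval's evaluate() entry point forwards extra keyword arguments to the annotator constructor (the exact plumbing and import path may differ by version, so check the linked source for your install); the output path below is hypothetical:

```python
# Sketch only: verify that your alpaca_eval version forwards annotator kwargs this way.
from alpaca_eval import evaluate

evaluate(
    model_outputs="path/to/model_outputs.json",          # hypothetical path
    annotators_config="weighted_alpaca_eval_gpt4_turbo",
    is_store_missing_annotations=False,  # don't cache failed/missing annotations,
                                         # so they are re-queried on the next run
)
```

With is_store_missing_annotations=False, failed annotations are not written to the cache in the first place, which avoids the manual clean-up described above.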
