Thanks for the brilliant evaluation framework! Recently, I encountered an issue where around 40 queries failed due to rate limits after exhausting the maximum number of retries, while approximately 760 queries were processed successfully. It would be very cost-effective if there were an option to re-run only the failed queries instead of the entire batch.
Is there a way to achieve this?
Thanks!
I found a potential solution! It seems we can manually remove entries from the cache file where raw_completion is null.
I'm using weighted_alpaca_eval_gpt4_turbo as my evaluator, so the cache file is located at evaluators_configs/weighted_alpaca_eval_gpt4_turbo/annotations_seed0_configs.json.
However, I'm not entirely sure if this approach is correct. Am I on the right track?
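If that approach is right, the pruning step could be scripted rather than done by hand. Below is a minimal sketch; it assumes the cache file is a JSON list of annotation dicts, each carrying a `raw_completion` field that is null for queries that hit the rate limit (the exact file layout may differ, so please verify against your own cache before overwriting anything).

```python
import json
from pathlib import Path

# Path taken from this thread; adjust for your evaluator's config directory.
CACHE_PATH = Path(
    "evaluators_configs/weighted_alpaca_eval_gpt4_turbo/annotations_seed0_configs.json"
)


def drop_failed_annotations(entries):
    """Keep only cache entries whose raw_completion is present.

    Entries with a missing or null raw_completion are assumed to be the
    queries that failed and should be re-annotated on the next run.
    """
    return [e for e in entries if e.get("raw_completion") is not None]


if __name__ == "__main__":
    entries = json.loads(CACHE_PATH.read_text())
    kept = drop_failed_annotations(entries)
    print(f"Removing {len(entries) - len(kept)} failed entries, keeping {len(kept)}")
    # Back up the original cache before overwriting it.
    CACHE_PATH.with_suffix(".json.bak").write_text(json.dumps(entries, indent=2))
    CACHE_PATH.write_text(json.dumps(kept, indent=2))
```

Re-running the evaluation after this should (if the cache works the way it appears to) re-query only the entries that are no longer cached.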