More of an idea: since the script hits the endpoint so many separate times, and since it takes so long to run, it's frustrating when a brief network disconnection makes it fail and none of the results accumulated so far get written. Maybe think about how to catch failures, perhaps with exponential backoff on retries, and after a certain number of failures, save what the script has collected so far. Thoughts?
@yiblet Can you add this as part of your parallel code? You should be able to just add a try/except around this line in aggregate.py, where you retry on failure a certain number of times before aborting.