does not handle 502 errors (returned by GCS when server is overloaded) #31
We apparently skip 502 in https://github.com/dask/gcsfs/blob/master/gcsfs/utils.py#L116 - feel free to add it. Actually, any status >= 500 may be reasonable to add here, although I'm not sure what other errors are possible. As the message suggests, though, retrying immediately may not be the right thing to do.
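A minimal sketch of the kind of check being discussed, treating any 5xx status as retriable (illustrative only; `is_retriable` here is a hypothetical helper, not gcsfs's actual code in `utils.py`):

```python
# Hypothetical helper: decide whether an HTTP status code indicates a
# transient server-side failure worth retrying. Any 5xx (500, 502, 503,
# 504, ...) is treated as retriable; client errors (4xx) are not.
def is_retriable(status_code):
    """Return True if the HTTP status suggests a transient server error."""
    return 500 <= status_code < 600
```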
To solve the
@martindurant how many lines in the above block did you mean to move: this one or fewer?
https://github.com/dask/gcsfs/blob/master/gcsfs/core.py#L825:L826 - just the one block
The code move helped produce a more explanatory message for another error: #33
This issue is marked as closed, but the relevant change has been overwritten and now the retriable error codes are
@bachsh what makes you say it's removed? The code is
Whoops, what a n00b :) Yes, we are hitting 502 errors when saving using gcsfs. It makes sense to retry only a few times. Thanks for the answer.
I have faced the same 502 error several times when writing in chunks. The errors are sporadic and look like:

Google writes that this may happen when the system/network is under stress - our script does indeed read and write several GB of files from GCS in chunks, after on-the-fly transformations. See "Handling Errors" at https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload. They indicate that clients should implement exponential backoff: https://cloud.google.com/storage/docs/exponential-backoff. Is this something that can be implemented in GCSFS, i.e. resubmit a chunk when such a 5xx error occurs? If not, wrapping the write statement in a try/except, deleting the target file written so far, and reprocessing is not an option for me, because the records I write are based off data in a compressed GCSFS file I read. ZipFileExt is not seekable, so what is read is gone. I would have to delete the whole batch and restart.