Fail after lengthy upload: s3 plugin #448
Comments
Can you search the logs (`.syncany/logs/`) for the identifier of the multichunk on which it fails? There should be some mention of it being uploaded to the temp location.
That is a very unfortunate case. I have encountered similar situations in the past with S3. To the best of my knowledge, S3 does not guarantee that an object that is written is available right away; in this case, the temporary file may not yet be available to be moved. This is an S3 and Swift problem: neither guarantees it. Thinking out loud: what if we used a different, non-retriable TM for the rollbacks and enabled retries for moves of non-existing files? With Christian's new code, that'd be really easy. Related:
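The retry idea above could look roughly like the following sketch. Everything here is hypothetical (the class names `EventuallyConsistentStore`, `moveWithRetry`, and the simulated store are illustrative, not Syncany's actual `TransferManager` API): a move of a just-uploaded temp file is retried with a short backoff until the source object becomes visible, mimicking S3's delayed visibility.

```java
import java.util.HashMap;
import java.util.Map;

public class RetryingMove {

    // Simulates eventual consistency: exists() reports false for the
    // first `visibleAfter` checks after an upload, then true.
    static class EventuallyConsistentStore {
        private final Map<String, byte[]> objects = new HashMap<>();
        private final Map<String, Integer> checksLeft = new HashMap<>();

        void upload(String key, byte[] data, int visibleAfter) {
            objects.put(key, data);
            checksLeft.put(key, visibleAfter);
        }

        boolean exists(String key) {
            if (!objects.containsKey(key)) {
                return false;
            }
            int left = checksLeft.getOrDefault(key, 0);
            if (left > 0) {
                checksLeft.put(key, left - 1); // not visible yet
                return false;
            }
            return true;
        }

        void move(String from, String to) {
            if (!exists(from)) {
                throw new IllegalStateException("not visible yet: " + from);
            }
            objects.put(to, objects.remove(from));
        }
    }

    // Retries the move with a fixed backoff until the source is visible
    // or the attempt budget runs out.
    static boolean moveWithRetry(EventuallyConsistentStore store, String from,
                                 String to, int maxAttempts, long backoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (store.exists(from)) {
                store.move(from, to);
                return true;
            }
            try {
                Thread.sleep(backoffMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        EventuallyConsistentStore store = new EventuallyConsistentStore();
        // Object becomes visible only after two existence checks.
        store.upload("temp/multichunk-abc", new byte[] {1, 2, 3}, 2);
        boolean moved = moveWithRetry(store, "temp/multichunk-abc",
                "multichunks/abc", 5, 10);
        System.out.println("moved=" + moved);
    }
}
```

The key design point is that only the move is retried; a rollback path would use a non-retriable manager so that cleanup failures surface immediately instead of stalling.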
Also, @jgn Thanks for reporting this. Stuff like this is reeeally helpful in the long run!
The problem here is that `upload()` is not actually atomic. To ensure that it is, we could block there until we can detect that the file exists. That might mean a performance hit, but we can't continue anyway until the file is unambiguously uploaded.
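A blocking `upload()` along those lines could be sketched as below. This is an assumption-laden illustration, not Syncany's implementation: `DelayedStore` stands in for a backend whose objects only become listable after a delay, and `uploadBlocking` polls for existence until a timeout, returning only once the upload is unambiguously visible.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BlockingUpload {

    // Simulates a backend where an uploaded object only becomes
    // visible after a fixed delay (hypothetical stand-in for S3/Swift).
    static class DelayedStore {
        private final Map<String, Long> visibleAt = new ConcurrentHashMap<>();

        void put(String key, long visibilityDelayMillis) {
            visibleAt.put(key, System.currentTimeMillis() + visibilityDelayMillis);
        }

        boolean exists(String key) {
            Long t = visibleAt.get(key);
            return t != null && System.currentTimeMillis() >= t;
        }
    }

    // Uploads, then blocks (polling) until the object is visible or
    // the timeout expires. Returns whether the object became visible.
    static boolean uploadBlocking(DelayedStore store, String key,
                                  long visibilityDelayMillis,
                                  long timeoutMillis, long pollMillis) {
        store.put(key, visibilityDelayMillis);
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (store.exists(key)) {
                return true;
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return store.exists(key); // one last check at the deadline
    }

    public static void main(String[] args) {
        DelayedStore store = new DelayedStore();
        boolean ok = uploadBlocking(store, "multichunk-xyz", 50, 500, 10);
        System.out.println("visible=" + ok);
    }
}
```

The performance hit mentioned above is the polling loop: each multichunk upload pays the backend's visibility lag before the client proceeds, which is why limiting this to affected plugins matters.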
Good idea. If we only annotate some plugins (namely S3 and Swift), we won't slow down all the other plugins. BTW: this would also solve #276.
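One way the "annotate only some plugins" idea could be expressed is a marker interface, sketched below. All names here (`EventuallyConsistent`, `S3TransferManager`, `needsUploadWait`) are hypothetical, not Syncany's actual API: only backends without read-after-write guarantees implement the marker, so the generic upload path adds the wait step for those plugins alone.

```java
public class ConsistencyMarker {

    // Minimal stand-in for a transfer-plugin interface.
    interface TransferManager {
        void upload(String localFile, String remoteFile);
    }

    // Marker for backends without read-after-write guarantees
    // (e.g. S3 and Swift in this discussion).
    interface EventuallyConsistent {
        long maxWaitMillis(); // upper bound on post-upload polling
    }

    static class S3TransferManager implements TransferManager, EventuallyConsistent {
        public void upload(String localFile, String remoteFile) { /* ... */ }
        public long maxWaitMillis() { return 30_000; }
    }

    static class LocalTransferManager implements TransferManager {
        public void upload(String localFile, String remoteFile) { /* ... */ }
    }

    // The generic upload path checks the marker, so only the
    // eventually consistent plugins pay for the post-upload wait.
    static boolean needsUploadWait(TransferManager tm) {
        return tm instanceof EventuallyConsistent;
    }

    public static void main(String[] args) {
        System.out.println(needsUploadWait(new S3TransferManager()));    // true
        System.out.println(needsUploadWait(new LocalTransferManager())); // false
    }
}
```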
After a lot gets uploaded, I get (with `--debug`):

If I then attempt a fresh `sy up`, I get:

Note that I have removed everything from the bucket, as well as the entire local `.syncany`, and have tried again -- same issue every time.

Total data destined for the bucket: about 39 GB, over about 10K files.