
fix bug preventing uploading the same large file twice #33

Merged: 3 commits merged into master from fixTwiceUploading on Feb 9, 2017

Conversation

tobiasKaminsky
Member

The successful chunks were always stored, whereas they should only be stored if the upload is not successful.

Ref: nextcloud/android#581
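
(For illustration, a minimal sketch of the intended behavior. sharedPref, chunkId and successfulChunks are the names used in the diff fragments below; the helper class and method themselves are hypothetical, the real code does this inline inside uploadFile().)

    import android.content.SharedPreferences;
    import java.util.Set;

    // Hypothetical helper: persist the successfully uploaded chunks only when
    // the upload did not finish, and clear them once the whole file went through.
    class ChunkBookkeepingExample {

        static void rememberChunks(SharedPreferences sharedPref,
                                   String chunkId,
                                   Set<String> successfulChunks,
                                   boolean uploadSucceeded) {
            SharedPreferences.Editor editor = sharedPref.edit();
            if (uploadSucceeded) {
                // Whole file uploaded: drop the stored set so a later upload of the
                // same file starts from scratch instead of skipping "old" chunks.
                editor.remove(chunkId);
            } else {
                // Upload failed or was interrupted: remember which chunks made it
                // so a resume can skip them.
                editor.putStringSet(chunkId, successfulChunks);
            }
            editor.apply();
        }
    }

Creating the editor once and branching on the result also matches the duplicate-code remark further down.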

SharedPreferences.Editor editor = sharedPref.edit();
editor.remove(chunkId).apply();
} else {
SharedPreferences.Editor editor = sharedPref.edit();
Member

Duplicate code, can be done before the if block

@AndyScherzinger
Member

How are already saved uploads then handled? As in: Will this fix correct them too, does the user need to upload them again after the error and it works then or are they corrupted forever?

@@ -161,8 +161,13 @@ protected int uploadFile(OwnCloudClient client) throws IOException {
}

} finally {
SharedPreferences.Editor editor = sharedPref.edit();
editor.putStringSet(chunkId, successfulChunks).apply();
if (this.isSuccess(status)) {
@AndyScherzinger Jan 31, 2017

Wouldn't this part make line 158ff obsolete, since the finally block takes care of it anyway?

@@ -162,7 +162,11 @@ protected int uploadFile(OwnCloudClient client) throws IOException {

} finally {
SharedPreferences.Editor editor = sharedPref.edit();
editor.putStringSet(chunkId, successfulChunks).apply();
if (this.isSuccess(status)) {
Member

The success part seems to be exactly the same as the last lines of the try {} block, and the finally block will always be executed.

@tobiasKaminsky
Member Author

Will this fix correct them too, does the user need to upload them again after the error and it works then or are they corrupted forever?

No.
But while thinking about it... the server will not store the uploaded chunks forever. I think this gets cleaned up at least after 24h (@LukasReschke?)
So the best would be to add a date to the chunks. This would then also solve the problem for existing stored chunks.

@tobiasKaminsky
Member Author

I hope I did not mess it up, I just had 3 hours of sleep ;-)
I added a day string yyyy-mm-dd to identify the age of a chunk. If the chunk is not uploaded on the same day, it will be ignored (as we have to assume that the chunk is deleted on the server) and uploaded again.
Uploading the same chunk twice or even more often to the server should not be a problem.

This should also fix things for all older versions and allow re-uploading the files: their stored entries do not include the day string, and therefore those chunks will be uploaded again.

All was tested on my emulator.
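
(A rough sketch of the day-tagged chunk keys described above. getDateAsString is the helper quoted later in this thread; the class and everything else here is made up for illustration.)

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.HashSet;
    import java.util.Locale;
    import java.util.Set;

    // Hypothetical example of "index_yyyy-MM-dd" chunk keys.
    class ChunkKeyExample {

        // Note: SimpleDateFormat needs uppercase MM for the month; lowercase mm would be minutes.
        static String getDateAsString() {
            return new SimpleDateFormat("yyyy-MM-dd", Locale.US).format(new Date());
        }

        public static void main(String[] args) {
            Set<String> successfulChunks = new HashSet<>();
            int chunkIndex = 42;
            String chunkKey = chunkIndex + "_" + getDateAsString();

            if (!successfulChunks.contains(chunkKey)) {
                // Not recorded as uploaded today (either never uploaded, or uploaded
                // on an earlier day and assumed to be cleaned up on the server):
                // upload it again, then remember it for this day.
                successfulChunks.add(chunkKey);
            }
        }
    }

Entries written by older client versions lack the day suffix, so they never match and those chunks are simply uploaded again, which is the backwards-compatibility point above.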

@AndyScherzinger
Member

If the chunk is not uploaded on the same day, it will be ignored (as we have to assume that the chunk is deleted on the server) and uploaded again.

What happens for uploads which start at 23:58 and error out at 00:05? You will then have chunks from "yesterday" and chunks from "today" for the same file and the same upload.

@tobiasKaminsky
Member Author

Yes, in that case a resume at 00:06 or later will trigger a completely new upload.
The problem is that successfulChunks.contains(String.valueOf(chunkIndex + "_" + getDateAsString())) can only check for an exact key. I cannot find a chunk that was uploaded < 24 hours ago.
If this/your use case is something to consider, the best option would be to put the uploaded chunks in our SQLite database, where such a query is easily possible (or iterate over the array and test each chunk).
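
(Purely for illustration of the alternative hinted at in the parenthesis above: if a full timestamp were stored with each chunk index instead of just the day, which is not what this PR does, a scan over the stored set could find chunks uploaded less than 24 hours ago. All names and the key format below are assumptions.)

    import java.util.Set;

    // Hypothetical: keys of the form "<chunkIndex>_<uploadTimeMillis>".
    class ChunkAgeCheckExample {

        // Returns true if the given chunk was recorded as uploaded within the last 24 hours.
        static boolean isChunkFresh(Set<String> successfulChunks, int chunkIndex) {
            final long dayInMillis = 24L * 60 * 60 * 1000;
            String prefix = chunkIndex + "_";

            for (String key : successfulChunks) {
                if (!key.startsWith(prefix)) {
                    continue; // entry belongs to a different chunk
                }
                try {
                    long uploadedAt = Long.parseLong(key.substring(prefix.length()));
                    if (System.currentTimeMillis() - uploadedAt < dayInMillis) {
                        return true; // uploaded less than 24h ago, no need to re-upload
                    }
                } catch (NumberFormatException e) {
                    // old key format without a timestamp: treat as stale, re-upload
                }
            }
            return false;
        }
    }

With SharedPreferences this still means loading the whole set and looping over it, which is why the comment above suggests SQLite, where the same check would be a single query.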

@mario
Contributor

mario commented Feb 9, 2017

As long as we:

a) don't end up with duplicate, hanging chunks, I'm fine
b) and things still work even if I try to upload things 5 times, I'm fine

And we avoid complicating things further.

@AndyScherzinger
Member

I agree with you guys! 👍

So merge?

@AndyScherzinger
Member

AndyScherzinger commented Feb 9, 2017

👍

Approved with PullApprove

@mario
Contributor

mario commented Feb 9, 2017

LGTM - will merge, but will not make a new release. Will wait until you tell me I can merge the timestamp too.

Approved with PullApprove

@mario
Contributor

mario commented Feb 9, 2017

@AndyScherzinger this repo still has LGTM?

@mario mario merged commit c6b0582 into master Feb 9, 2017
@mario mario deleted the fixTwiceUploading branch February 9, 2017 22:16
@AndyScherzinger
Member

Nope, but all PRs that got opened before we removed it still show it. No worries :)
