
Lib: Fixes #9106 Enable uploading attachment to dropbox with size >150MB #10243

Closed
7adidaz wants to merge 18 commits

Conversation

@7adidaz 7adidaz commented Apr 1, 2024

What I did:

  1. Call files/upload_session/start with no payload to get a session_id.
  2. Start uploading 5 MB chunks* produced by a chunk generator built on top of FsDriver().readFileChunk().

Testing:

  • Tested with a ~130 MB file on an Android 13 device; no crashes.
  • Tested with the same file on desktop.
  • Both synced perfectly and the hash sums matched.

*This currently works on Android. Testing with larger chunks leads to crashes, since Joplin uses RNFS.read, which has an unresolved memory leak; my approach works despite the leak because Joplin doesn't allow attachments larger than 200 MB on mobile.
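A minimal sketch of the chunk-generator idea described above. This is not Joplin's actual code: it uses Node's fs/promises instead of FsDriver().readFileChunk(), and the function name and signature are assumptions for illustration.

```typescript
// Hypothetical sketch: an async generator yielding fixed-size chunks of a file.
// The PR reads via FsDriver().readFileChunk(); this stand-in uses Node's
// fs/promises so it is self-contained and runnable.
import { open } from 'fs/promises';

export async function* chunkGenerator(path: string, chunkSize = 5 * 1024 * 1024): AsyncGenerator<Buffer> {
	const handle = await open(path, 'r');
	try {
		for (;;) {
			// Allocate per iteration so previously yielded chunks are not overwritten.
			const buffer = Buffer.alloc(chunkSize);
			const { bytesRead } = await handle.read(buffer, 0, chunkSize, null);
			if (bytesRead === 0) break; // end of file
			yield buffer.subarray(0, bytesRead); // last chunk may be shorter
		}
	} finally {
		await handle.close();
	}
}
```

A caller can then consume chunks one at a time with `for await`, so only one chunk ever sits in memory, which matches the memory-leak workaround described above.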

@personalizedrefrigerator (Collaborator) left a comment

Thank you for working on this! I've left a few comments.

@@ -0,0 +1,51 @@
import chunkGenerator from '../chunkGenerator';
Collaborator:

Please use UpperCamelCase for classes.

packages/lib/utils/tests/chunkGenerator.test.ts (outdated; resolved)
@@ -206,11 +206,12 @@ function shimInit() {

const headers = options.headers ? options.headers : {};
const method = options.method ? options.method : 'POST';
const body = (options.skip) ? null : (!options.loaded) ? RNFetchBlob.wrap(options.path) : options.body;
Collaborator:

I'm confused by the addition of skip and loaded options. Consider renaming them or adding a brief description.

Author:

In the case of uploading a file to Dropbox, the first request has no payload; that's why I use skip here, to avoid loading any data into the request body.

In the case of a successful first request (a session_id was obtained), I load the chunks before executing the request; that's why I use loaded and skip here.
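The two flags can be restated as explicit branches. This is a hypothetical helper for illustration, not the actual shim code: only the flag names skip and loaded come from the diff, and `wrap` stands in for RNFetchBlob.wrap.

```typescript
// Hypothetical restatement of the body-selection ternary from shimInit(),
// spelled out as if/else. `wrap` is a stand-in for RNFetchBlob.wrap.
type BodyOptions = { skip?: boolean; loaded?: boolean; path?: string; body?: unknown };

export function selectBody(options: BodyOptions, wrap: (path: string) => unknown): unknown {
	if (options.skip) return null;                        // upload_session/start: request has no payload
	if (!options.loaded) return wrap(options.path ?? ''); // default case: stream the whole file from disk
	return options.body;                                  // chunk was loaded into memory beforehand
}
```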

@laurent22 (Owner) left a comment

Not sure why you closed your previous pull request and re-opened this one. Still the same comment here:


To validate this pull request, the test units will need to run against a real Dropbox sync target using the technique mentioned here: https://joplinapp.org/help/dev/spec/sync#testing

I suggest using a very low chunk size so that most content gets chunked. We'll need to see the test output, so please save it to a file or even take screenshots, whatever is easiest.

packages/lib/file-api-driver-dropbox.js (outdated; resolved)
},
options,
);
if (options && options.source === 'file') {
Owner (@laurent22):

I think with your changes you've made every Dropbox request 3 times slower? Instead of one request, you have three (start/append/complete), if I'm reading correctly.

Author (@7adidaz):

No, it didn't. Instead of uploading a 10 MB attachment in one go, I first request a session_id and then upload the file in chunks.

This happens only for attachments, not notes, folders, or locks.

Owner (@laurent22):

But 99.9% of attachments are below the limit, so that's still a lot of complexity for something that could be uploaded in one request.

Author (@7adidaz):

I can add an if statement so that only files above a certain size are split into chunks during upload, but note that the original issue is precisely that attachments above 150 MB are not uploadable.
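The size check being discussed could look roughly like the sketch below. Everything here is illustrative: `Api`, `putFile`, and the `exec` signature are hypothetical stand-ins, not Joplin's actual Dropbox driver; only the endpoint names (files/upload, files/upload_session/start, append_v2, finish) come from the Dropbox HTTP API, which also carries the session_id differently (in a header) than this sketch does.

```typescript
// Hypothetical sketch: single-request upload below the Dropbox limit,
// chunked upload session above it. `Api` is a stand-in, not Joplin's driver.
type Api = { exec: (endpoint: string, payload?: Uint8Array) => Promise<string> };

const UPLOAD_LIMIT = 150 * 1024 * 1024; // files/upload limit (150 MiB per the Dropbox docs)
const CHUNK_SIZE = 5 * 1024 * 1024;     // 5 MB chunks, as in the PR description

export async function putFile(data: Uint8Array, api: Api, limit = UPLOAD_LIMIT, chunkSize = CHUNK_SIZE): Promise<void> {
	if (data.length <= limit) {
		await api.exec('files/upload', data); // common case: one request, no extra round trips
		return;
	}
	const sessionId = await api.exec('files/upload_session/start'); // no payload
	for (let offset = 0; offset < data.length; offset += chunkSize) {
		await api.exec(`files/upload_session/append_v2:${sessionId}`, data.subarray(offset, offset + chunkSize));
	}
	await api.exec(`files/upload_session/finish:${sessionId}`);
}
```

This keeps the single-request path untouched for the 99.9% case while still handling oversized attachments.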


7adidaz commented Apr 7, 2024

I suggest using a very low chunk size so that most content get chunked. We'll need to see the test output so please save it to a file or even take screenshots, whatever is easiest.

sure!


7adidaz commented Apr 8, 2024

To validate this pull request, the test units will need to run against a real Dropbox sync target using the technique mentioned here: https://joplinapp.org/help/dev/spec/sync#testing

I have tested this approach against a real Dropbox target and it works fine. I've tested multiple chunk sizes, from both Android and desktop. I also tried the approach mentioned there, but there are failing tests; I don't think they're related, since the same tests fail on both the dev branch and my branch.

@laurent22

The CI failure is valid - you can't use "any" as a type.

Regarding the Dropbox test units, please provide the log output from these tests.

I can write an additional if statement to only split files above a certain size into chunks while uploading, but the original issue addresses that attachments above 150 MB are not uploadable.

Yes please, that logic should only apply to files larger than the limit. Are you sure it's 150 MB? Could you please provide a link to the Dropbox documentation for this?


7adidaz commented Apr 9, 2024

This is the documentation related to the previously used files/upload. I quote:

Do not use this to upload a file larger than 150 MiB. Instead, create an upload session with upload_session/start.


7adidaz commented Apr 9, 2024

This is the output.txt for npm test in packages/lib with

setSyncTargetName('dropbox');

and the access_token in joplin/packages/app-cli/tests/support/dropbox-auth.txt

@laurent22

You should probably use yarn test because that's what we support, and also add the runInBand flag to avoid timeouts: yarn test --runInBand

Could you try again with this please and post the log again? At the moment there are many errors, but I don't know if they're due to using npm test, the lack of runInBand, or an actual error.


7adidaz commented Apr 13, 2024

Sorry for the late reply, it's been Eid here :)

here is the output.txt of yarn test --runInBand
and the output.txt of yarn test

@laurent22

laurent22 commented Apr 13, 2024

Thanks, but as you can see there are many sync errors. Do you know what to do from here? You would need to run just one failing test using yarn test nameOfFile -t 'name of test', then debug from there. If you don't know how to proceed, please ask.


7adidaz commented Apr 15, 2024

I'll proceed from here, and if I face any difficulties I'll post on Discord :)

@laurent22

Closing for now but please let me know if you'd like to continue working on this. Happy to re-open in this case.

@laurent22 laurent22 closed this Apr 25, 2024