Lib: Fixes #9106 Enable uploading attachment to dropbox with size >150MB #10243
Conversation
Thank you for working on this! I've left a few comments.
@@ -0,0 +1,51 @@
import chunkGenerator from '../chunkGenerator';
Please use UpperCamelCase for classes.
@@ -206,11 +206,12 @@ function shimInit() {
const headers = options.headers ? options.headers : {};
const method = options.method ? options.method : 'POST';
const body = (options.skip) ? null : (!options.loaded) ? RNFetchBlob.wrap(options.path) : options.body;
I'm confused by the addition of the `skip` and `loaded` options. Consider renaming them or adding a brief description.
In the case of uploading a file to Dropbox, the first request has no payload; that's why I'm using `skip` here, to skip loading any data into the body of the request. In the case of a successful first request (which obtains a `session_id`), I load the chunks before executing the request; that's why I'm using `loaded` and `skip` here.
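For illustration, the body selection described here could be sketched as a small pure function. Only the option names `skip` and `loaded` come from this PR; the types and the string stand-in for `RNFetchBlob.wrap(options.path)` are assumptions made for the sketch.

```typescript
// Illustrative sketch of the body-selection logic, not the PR's actual code.
interface UploadOptions {
	skip?: boolean;    // true for the session-start request, which has no payload
	loaded?: boolean;  // true when the chunk has already been read into memory
	body?: Uint8Array;
	path?: string;
}

function selectBody(options: UploadOptions): Uint8Array | string | null {
	if (options.skip) return null;                    // start request: empty body
	if (!options.loaded) return options.path ?? null; // stands in for RNFetchBlob.wrap(path)
	return options.body ?? null;                      // pre-loaded chunk
}
```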
Not sure why you closed your previous pull request and re-opened this one. Still same comment here:
To validate this pull request, the test units will need to run against a real Dropbox sync target using the technique mentioned here: https://joplinapp.org/help/dev/spec/sync#testing
I suggest using a very low chunk size so that most content gets chunked. We'll need to see the test output, so please save it to a file or even take screenshots, whatever is easiest.
},
options,
);
if (options && options.source === 'file') {
I think with your changes you've made every Dropbox request three times slower? Instead of having one request, you have three (start/append/complete), if I'm reading correctly.
No, it doesn't. Instead of uploading a 10 MB attachment in one go, I first request a `session_id`, then upload the file in chunks.
This happens only for attachments, not notes, folders, or locks.
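The three request types mentioned in this exchange correspond to Dropbox's upload-session endpoints (`start`, `append_v2`, and `finish` in the Dropbox HTTP API). As a rough sketch, hypothetical and not taken from the PR, the call sequence for an attachment split into N chunks looks like:

```typescript
// Rough sketch: the sequence of Dropbox API calls for an attachment
// that has been split into `chunkCount` chunks.
function sessionCalls(chunkCount: number): string[] {
	const calls = ['files/upload_session/start']; // no payload, returns a session_id
	for (let i = 0; i < chunkCount; i++) {
		calls.push('files/upload_session/append_v2'); // one call per chunk
	}
	calls.push('files/upload_session/finish'); // commits the upload
	return calls;
}
```

So a file split into N chunks costs N + 2 requests, versus one request for the one-shot `files/upload` endpoint, which is the overhead being debated here.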
But 99.9% of attachments are below the limit, so that's still a lot of complexity for something that could be uploaded in one request.
I can write an additional `if` statement to only split files above a certain size into chunks while uploading, but the original issue is that attachments above 150 MB are not uploadable at all.
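The check being proposed could be as simple as the following sketch, assuming the 150 MB single-request limit discussed below (the constant and function names are hypothetical):

```typescript
// Illustrative only: use an upload session just for files above the
// single-request limit (assumed here to be 150 MB, per the discussion).
const SINGLE_REQUEST_LIMIT = 150 * 1024 * 1024; // bytes

function needsUploadSession(fileSize: number): boolean {
	return fileSize > SINGLE_REQUEST_LIMIT;
}
```

Files below the limit would then keep the existing one-shot upload path, addressing the overhead concern above.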
Sure!
I have tested this approach against a real Dropbox target, and it works fine. I have done testing on multiple chunk sizes, both from Android and desktop, and I have tried the approach mentioned here, but it seems that there are failing tests. I don't think they're related, though, since the same tests fail on the …
The CI failure is valid - you can't use "any" as a type. Regarding the Dropbox test units, please provide the log output from these tests.
Yes please, that logic should only apply to files larger than the limit. Are you sure it's 150 MB? Could you please provide a link to the Dropbox documentation for this?
This is the documentation related to the previously used …
This is the output.txt for …, and the …
You should probably use … Could you try again with this please and post the log again? At the moment there are many errors, but I don't know if they're due to using npm test, the lack of runInBand, or an actual error.
Sorry for the late reply, it's been Eid here :) Here is the output.txt of …
Thanks, but as you can see there are many sync errors. Do you know what to do from here? You would need to run just one failing test using …
I'll proceed from here, and if I face any difficulties I'll post on Discord :)
Closing for now, but please let me know if you'd like to continue working on this. Happy to re-open in that case.
What did I do?
- Call `files/upload_session/start` with no payload to get the `session_id`.
- Read the file in chunks using `FsDriver().readFileChunk()`.

Testing:
*This currently works with Android; testing with larger chunks leads to crashes, since Joplin uses `RNFS.read`, which has an unresolved memory leak. My approach works despite the memory leak because Joplin doesn't allow attachments over 200 MB on mobile.
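The chunked reading described above can be sketched as a generator over `[offset, length]` pairs. This is a hypothetical illustration of what a chunk generator like the one in this PR might compute, not the PR's actual `chunkGenerator`:

```typescript
// Hypothetical sketch: yield [offset, length] pairs covering a file of
// `size` bytes in chunks of at most `chunkSize` bytes. Each pair could
// then be passed to a chunked file read such as FsDriver().readFileChunk().
function* chunkRanges(size: number, chunkSize: number): Generator<[number, number]> {
	for (let offset = 0; offset < size; offset += chunkSize) {
		yield [offset, Math.min(chunkSize, size - offset)];
	}
}
```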