
How does ipfs support breakpoint resume? #1392

Closed
YoleYu opened this issue Jun 11, 2018 · 6 comments
Labels
kind/support A question or request for support

Comments

@YoleYu

YoleYu commented Jun 11, 2018

Hi,
When uploading a large file to the ipfs network, how does ipfs support resuming from a breakpoint? Which APIs can be used for this?

Thanks a lot.

@alanshaw
Member

Hi @YoleYu I'm not sure I understand the question - could you clarify what you're trying to do?

@alanshaw alanshaw added kind/support A question or request for support status/ready Ready to be worked labels Jun 18, 2018
@YoleYu
Author

YoleYu commented Jun 18, 2018

Hi @alanshaw, consider this scenario: while I am uploading a film to the ipfs network, my connection is cut off after 80% of the file has been uploaded. A few minutes later I fix my network, then I try to resume the upload.

  1. Where does the resume start from, 0% or 80%?
  2. If it is from 80%, how does ipfs support this feature? Is the previously uploaded content cached in the ipfs network, or is it permanently stored in ipfs?

@alanshaw
Member

@YoleYu with IPFS you don't typically upload; you add the file to your local IPFS node, and other peers can then download the chunks from you.
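
Roughly like this (a minimal sketch, assuming a local daemon on the default API port and the js-ipfs-api client of this era):

```js
// Add a file to the local node; peers then fetch its blocks from you.
const ipfsAPI = require('ipfs-api')
const ipfs = ipfsAPI('localhost', '5001', { protocol: 'http' })

ipfs.files.add(Buffer.from('hello ipfs'), (err, res) => {
  if (err) throw err
  console.log('added:', res[0].hash) // content is now served from your repo
})
```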

If you're using the HTTP API to upload a file to a remote IPFS node you have write access to, then yes, there's a chance that upload could fail midway. I don't know for sure, but I'd guess there is currently no resume in this case and you'd have to retry.
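
Until then, a client-side retry is the pragmatic fallback - a sketch (each attempt re-sends the whole file; `attempts` and `delayMs` are illustrative):

```js
// Naive retry-from-zero: no resume exists, so every attempt starts over.
async function addWithRetry (ipfs, content, attempts = 3, delayMs = 5000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await ipfs.files.add(content)
    } catch (err) {
      if (i === attempts) throw err
      console.warn(`add failed (attempt ${i}), retrying:`, err.message)
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
}
```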

@achingbrain @pgte can you speak to what happens if the stream is aborted while the content is being streamed to the importer? Are DAG nodes created and stored in the repo as the content is streamed? i.e. if the same content is then re-uploaded will some DAG nodes be reused?
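
One way to probe this empirically (a sketch assuming the client exposes `ipfs.repo.stat()`, which reports the repo's object count):

```js
// Compare the repo's object count before and after an interrupted add
// to see whether DAG nodes were persisted while the content streamed in.
async function probeInterruptedAdd (ipfs, content) {
  const before = await ipfs.repo.stat()
  try {
    await ipfs.files.add(content) // abort the connection mid-way to test
  } catch (err) {
    console.warn('add interrupted:', err.message)
  }
  const after = await ipfs.repo.stat()
  // If the count grew, partial DAG nodes survived the aborted stream.
  console.log(`objects before: ${before.numObjects}, after: ${after.numObjects}`)
}
```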

@alanshaw
Member

ping @achingbrain

@alanshaw
Member

alanshaw commented Aug 2, 2018

Closing as I believe the original question was answered - please shout if more info is needed!

@alanshaw alanshaw closed this as completed Aug 2, 2018
@ghost ghost removed the status/ready Ready to be worked label Aug 2, 2018
@OstlerDev

OstlerDev commented Aug 10, 2018

@alanshaw I am curious about the second part of your answer above, where you ask what happens if the stream is aborted. What would happen on a re-upload attempt?

Basically, the flow I am curious about is this (a rough sketch follows the list):

  1. Use the js-ipfs-api function files.add() in a browser to start adding a large 1GB file to a remote go-ipfs node.
  2. The files.add()/"upload" (over HTTP using js-ipfs-api) is disconnected/fails for whatever reason, so the remote go-ipfs node has received only 80% of the 1GB file.
  3. Using the js-ipfs-api function files.add(), the add/"upload" (over HTTP) to the remote go-ipfs node is re-attempted 24 hours later.
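
For reference, a rough browser-side reproduction of this flow (the remote host is a placeholder; older js-ipfs-api builds want a Buffer rather than a browser File, hence the conversion, and this assumes a bundler that provides `Buffer`):

```js
const ipfsAPI = require('ipfs-api')
const ipfs = ipfsAPI({ host: 'ipfs.example.com', port: 5001, protocol: 'https' })

// Read a browser File into a Buffer before handing it to files.add().
// Note: this buffers the whole file in memory.
function readAsBuffer (file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => resolve(Buffer.from(reader.result))
    reader.onerror = reject
    reader.readAsArrayBuffer(file)
  })
}

document.querySelector('input[type=file]').onchange = async (ev) => {
  const file = ev.target.files[0] // e.g. the 1GB file from step 1
  const res = await ipfs.files.add(await readAsBuffer(file), {
    progress: (bytes) => console.log(`sent ${bytes} of ${file.size} bytes`)
  })
  console.log('root hash:', res[0].hash)
}
```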

In that instance, would the remote go-ipfs node remember the blocks/pieces that it had received the day before, even though the add was interrupted?

If the remote go-ipfs node does still have the blocks from the previous interrupted add/upload, would the HTTP Range header (or similar) be used to tell the js-ipfs-api file upload to resume starting at 80%?

If it does not currently resume, what functionality would need to be added to go-ipfs/js-ipfs/js-ipfs-api to enable "resuming" of file uploads?
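
One hypothetical client-side workaround I can imagine is to add the file in independent slices and persist the returned hashes, skipping slices already confirmed on a retry. This is NOT a true resume - the per-slice hashes won't combine into the root hash of a single whole-file add without extra DAG assembly - and `sliceSize`, the storage key, and the `readAsBuffer` helper (same as the sketch above) are all illustrative:

```js
const sliceSize = 8 * 1024 * 1024 // 8 MiB per slice, arbitrary

// Add `file` slice by slice, remembering confirmed slice hashes in
// localStorage so a later attempt skips the slices that already landed.
async function addInSlices (ipfs, file, readAsBuffer) {
  const done = JSON.parse(localStorage.getItem('sliceHashes') || '{}')
  for (let i = 0; i * sliceSize < file.size; i++) {
    if (done[i]) continue // confirmed on a previous attempt
    const blob = file.slice(i * sliceSize, (i + 1) * sliceSize)
    const res = await ipfs.files.add(await readAsBuffer(blob))
    done[i] = res[0].hash
    localStorage.setItem('sliceHashes', JSON.stringify(done))
  }
  return Object.values(done) // slice hashes, in order
}
```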

If needed, I can explain my reasoning for wanting to upload to a remote go-ipfs server - just let me know.

Thank you!
Sky Young
