
Update download_checkmd5.py #560

Closed
wants to merge 7 commits into from

Conversation

bricerebsamen

Created a new urlretrieve function that handles retrying if the connection drops. Fixes #559.

    try:
        from urllib.request import urlretrieve  # Python 3
    except ImportError:
        from urllib import urlretrieve  # Python 2
        import urllib2
Member
The code must be Python 2 and 3 compatible. Please keep using the conditional import and make sure the rest of the code works with Python 3.

python3 compatibility (tested)

    def http_error_206(self, req, fp, code, msg, hdrs):
        # 206 Partial Content response
        r = urllib.addinfourl(fp, hdrs, req.get_full_url())
Member

urllib is not imported.
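For context on this review comment: in Python 3 the addinfourl wrapper lives in urllib.response, while in Python 2 it lives in the urllib module itself. A minimal sketch of a 2/3-compatible import that would address the missing import (an assumption about the intended fix, not the actual code from this pull request):

```python
# Python 2/3 compatible import of addinfourl, the wrapper used to build
# the 206 Partial Content response object in the handler above.
try:
    from urllib.response import addinfourl  # Python 3
except ImportError:
    from urllib import addinfourl           # Python 2

# With that import in place, the handler body above could read:
#     r = addinfourl(fp, hdrs, req.get_full_url())
#     r.code = code
#     r.msg = msg
#     return r
```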

fixed some more problems pointed out by dirk
@dirk-thomas
Member

I tried using the code from this pull request and ran into several issues. The first is that it currently fails the unit tests, since it no longer supports file URLs. The second is that it assumes the server provides the Content-Length header. Some servers might not, so the script needs to check for that, and it should also validate that the response contains the corresponding Content-Range header. I also think the retry and resume logic should not be tied to the same counter: we should only retry a low number of times (e.g. 3), but resuming should be possible an arbitrary number of times as long as there is progress.

I can look into this but am not sure when I will find time to implement all this cleanly.
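The requirements sketched in the comment above (validate Content-Range on ranged requests, cap retries that make no progress, but resume without limit while progress is being made) could look roughly like the following. This is a hypothetical sketch, not the code from this pull request or from #575; the function name and chunk size are invented for illustration:

```python
import os
import urllib.request


def download_with_resume(url, dest, max_retries=3):
    """Sketch: download url to dest, resuming partial files via HTTP Range.

    Only attempts that make no progress count against max_retries;
    attempts that advance the file are resumed without limit.
    """
    retries = 0
    while retries <= max_retries:
        offset = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url)
        if offset:
            req.add_header('Range', 'bytes=%d-' % offset)
        try:
            with urllib.request.urlopen(req) as resp:
                # A ranged request must be answered with 206 and a
                # Content-Range header; otherwise restart from scratch.
                if offset and (resp.getcode() != 206
                               or 'Content-Range' not in resp.headers):
                    offset = 0
                with open(dest, 'ab' if offset else 'wb') as f:
                    while True:
                        chunk = resp.read(64 * 1024)
                        if not chunk:
                            return dest
                        f.write(chunk)
        except OSError:  # URLError/HTTPError are OSError subclasses
            made_progress = (os.path.exists(dest)
                             and os.path.getsize(dest) > offset)
            if not made_progress:
                retries += 1  # no progress: this attempt counts as a retry
    raise IOError('giving up on %s after %d retries' % (url, max_retries))
```

The key design point from the comment is that the counter only advances on attempts without progress, so a flaky server that keeps dropping mid-transfer can still be drained as long as each attempt moves the file forward.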

@bricerebsamen
Author

Thanks for reviewing. I'll work on it tomorrow.

@bricerebsamen
Author

I think I addressed all your comments. Please let me know if there is something wrong.

@dirk-thomas
Member

The pull request is very lengthy, so I tried to come up with a much more concise version of it. Please take a look at #575. It works fine for me (but I don't think any of our download servers actually stops the download before it is complete). Can you please check it against the server you had the original problem with, to verify that it actually resumes the download correctly?

@dirk-thomas
Member

Resolved by #575.

@dirk-thomas dirk-thomas closed this Jan 8, 2014
cwecht pushed a commit to cwecht/catkin that referenced this pull request Mar 20, 2018