v0.15.0 Can't read csv.gz from url #8685
Comments
Is this documented behavior that is supposed to work? From a quick glance at the code, it doesn't look like it currently handles reading a compressed file from a URL.
We would have to add something similar to this.
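The linked code isn't reproduced here, but a minimal sketch of the general idea, assuming the fix amounts to decompressing the downloaded bytes before handing them to the parser, might look like the following. The helper name read_gzipped_csv_url and the use of urllib/gzip are illustrative assumptions, not the actual patch:

import gzip
import io
from urllib.request import urlopen

import pandas as pd

def read_gzipped_csv_url(url):
    # Fetch the compressed payload, decompress it in memory,
    # then let pandas parse the resulting plain-text CSV.
    raw_bytes = urlopen(url).read()
    return pd.read_csv(io.BytesIO(gzip.decompress(raw_bytes)))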
Yep, this doesn't look supported at the moment; a pull request to fix it would be welcome.
+1, this is an important feature for the modern workflow. Until then, I've been using the following workaround in Python 3.4 and Pandas 0.16.0:

import gzip
import io
import pandas
import requests

response = requests.get(url)
bytes_io = io.BytesIO(response.content)
with gzip.open(bytes_io, 'rt') as read_file:
    df = pandas.read_csv(read_file)
@dhimmel pull-requests are welcome to add this feature.
closed by #10649
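For anyone landing here later: after that change (pandas 0.17.0, per the cross-reference below), a gzipped CSV can be read straight from a URL. A small usage sketch with a hypothetical URL:

import pandas as pd

# Hypothetical URL; any .csv.gz served over HTTP should behave the same way.
url = 'https://example.com/data.csv.gz'
df = pd.read_csv(url, compression='gzip')  # newer versions can also infer this from the .gz suffix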
For `process.ipynb`:
+ Improve documentation with markdown cells.
+ Switch to commit-specific links for dhimmel/uniprot.
+ Adopt pandas 0.17.0 gzipped URL support. See pandas-dev/pandas#8685.
+ Exclude rows 192304-192473 (one-indexed) where `BindingDB Reactant_set_id` was missing.
+ Handle affinities that cannot be converted to floats (see the sketch below).

For `collapse.Rmd`:
+ Use readr for tsv io.
+ Retain pubmed_ids and sources when collapsing.
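On that last `process.ipynb` point: the notebook's actual code isn't shown here, but one common pandas pattern for values that cannot be converted to floats is to coerce them to NaN. The column name and example values below are assumptions for illustration, not data from BindingDB:

import pandas as pd

# 'affinity' and its values are illustrative, not the real dataset.
df = pd.DataFrame({'affinity': ['1.5', '>10000', '2.3', 'n/a']})
df['affinity'] = pd.to_numeric(df['affinity'], errors='coerce')  # unparseable strings become NaN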
But the local file works (I have v0.15.0).