Upload on SSL - intermittent hang #283

Open
c0llab0rat0r opened this issue May 29, 2021 · 5 comments

Comments

@c0llab0rat0r
Contributor

c0llab0rat0r commented May 29, 2021

When using docs/publish.py to publish to an IPFS server that sits behind SSL with Nginx as a reverse proxy, I've noticed numerous instances of the script getting stuck waiting on

File "/usr/lib/python3.8/ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)

It fails more often than it succeeds. It's not clear to me whether this is an issue with the underlying SSL library (seems possible but unlikely), with Nginx (possible but also unlikely), or whether the IPFS client makes some incorrect assumptions about how it interacts with the IPFS server when pushing numerous files over a slow connection.

Example traceback, captured by pressing Ctrl+C after it got stuck:

Traceback (most recent call last):
  File "publish.py", line 81, in <module>
    sys.exit(main(sys.argv[1:]))
  File "publish.py", line 34, in main
    return publish(
  File "publish.py", line 55, in publish
    hash_docs = client.add("build/html", recursive=True, raw_leaves=True, pin=False)[-1]["Hash"]
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/ipfshttpclient/client/files.py", line 373, in add
    resp = self._client.request('/add', decoder='json', data=body, headers=headers, **kwargs)
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/ipfshttpclient/http_common.py", line 583, in request
    closables, res = self._request(
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/ipfshttpclient/http_requests.py", line 165, in _request
    res = session.request(
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/ipfshttpclient/requests_wrapper.py", line 230, in request
    return super().request(method, url, *args, **kwargs)
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/venv/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/venv/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/home/c0llab0rat0r/code/github/self/py-ipfs-http-client/venv/lib/python3.8/site-packages/requests/adapters.py", line 482, in send
    r = low_conn.getresponse()
  File "/usr/lib/python3.8/http/client.py", line 1347, in getresponse
    response.begin()
  File "/usr/lib/python3.8/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.8/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)

@c0llab0rat0r
Contributor Author

May be related to #245

@ntninja
Contributor

ntninja commented May 30, 2021

Does this also happen when you run it with PY_IPFS_HTTP_CLIENT_PREFER_HTTPX=y?
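For reference, the toggle can be set for a single run like this (a usage sketch, assuming the script is invoked the same way as in the original report):

```shell
# Select the httpx backend instead of the default requests backend for this run
PY_IPFS_HTTP_CLIENT_PREFER_HTTPX=y python docs/publish.py
```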

@c0llab0rat0r
Contributor Author

Does this also happen when you run it with PY_IPFS_HTTP_CLIENT_PREFER_HTTPX=y?

It fails with 401 Unauthorized, which means there's a defect either in the httpx library or in the ipfshttpclient wrapper over it.

@c0llab0rat0r
Contributor Author

I did some more testing. It's not related to SSL.

If the local client connects through a remote Nginx reverse proxy to an IPFS daemon on the same machine as Nginx, the upload hangs. The behavior is the same over HTTP and over HTTPS.

If Nginx is removed from the picture, the upload succeeds each time. I tried setting proxy_buffering off; in Nginx and that did not help.

I traced the local connection with Wireshark; the failures all happen during POST /api/v0/add?stream-channels=true&trickle=False&only-hash=False&wrap-with-directory=False&pin=False&raw-leaves=True&nocopy=False HTTP/1.1, when all of the files in the folder are being sent in one giant multipart upload. The Wireshark trace shows an HTTP 200 OK coming back, and the response payload includes several JSON objects with the CIDs of uploaded files. The response is partial: only some of the CIDs are returned, not all of them. Additionally, because the HTTP response is partial, Wireshark does not recognize it as HTTP activity, and it must be viewed as a TCP stream instead of as an HTTP stream.
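For anyone reproducing this: the /api/v0/add response body is a stream of newline-separated JSON objects, one per added file, so a truncated response can be quantified by counting how many complete objects arrived. A minimal sketch (`count_complete_objects` is a hypothetical helper, not part of the client):

```python
import json

def count_complete_objects(payload: str) -> int:
    """Count complete JSON objects in a (possibly truncated) /api/v0/add
    response body. The daemon streams one JSON object per added file,
    separated by newlines; a truncated body ends mid-object."""
    decoder = json.JSONDecoder()
    payload = payload.strip()
    idx, count = 0, 0
    while idx < len(payload):
        try:
            _obj, idx = decoder.raw_decode(payload, idx)
        except json.JSONDecodeError:
            break  # remainder is a truncated object
        count += 1
        # skip the whitespace/newline separator before the next object
        while idx < len(payload) and payload[idx] in "\r\n\t ":
            idx += 1
    return count
```

Running this over the bytes recovered from the TCP stream would show fewer objects than files in the upload, confirming the truncation.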

It's not clear to me if this is an Nginx configuration issue, or if IPFS doesn't play nice with Nginx proxying.

I doubt switching Python HTTP clients or reconfiguring the Python HTTP client will make any difference here, as the data simply did not come back from the remote HTTP server (Nginx). I did try setting SO_RCVBUF to ten megabytes just for fun (no difference).

Recommendation

I think the client should implement a piecemeal approach to directory uploads, instead of sending it all in one giant POST request. It's likely single-file requests would succeed where this large request is failing; it also provides a way to measure and report upload progress (see #122).
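As an illustration only, a piecemeal directory upload along these lines might look like the sketch below. `add_directory_piecemeal` is a hypothetical helper, not proposed API; it assumes a client object whose `add(path, **opts)` returns a dict with a `"Hash"` key, as the single-file form of the client's add does:

```python
import os

def add_directory_piecemeal(client, root, progress=None):
    """Upload each file under `root` in its own request instead of one
    giant multipart POST, invoking `progress(done, total, path)` after
    each file. Returns a mapping of local path -> CID."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            paths.append(os.path.join(dirpath, name))
    results = {}
    total = len(paths)
    for done, path in enumerate(paths, start=1):
        # one small request per file; a failure here pinpoints the file
        results[path] = client.add(path, pin=False)["Hash"]
        if progress:
            progress(done, total, path)
    return results
```

Note this only uploads file contents; stitching the results back into a directory object server-side is the hard part the one-shot recursive add currently handles.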

@ntninja
Contributor

ntninja commented Jun 16, 2021

I think the client should implement a piecemeal approach to directory uploads, instead of sending it all in one giant POST request. It's likely single-file requests would succeed where this large request is failing; it also provides a way to measure and report upload progress (see #122).

Thank you for the detailed analysis!

I do sincerely hope that your conclusion isn't correct, however: implementing piecemeal uploading (using client-side IPLD with pipelined .block.put or the .object.patch.* APIs) is a very big project, and I'm not sure it is in scope for this project at all – the reason we have .add, after all, is so that clients don't need to know all these details… Additionally, it would be hard to do this without massively hurting performance when uploading many small files, since we'd have to be very careful not to introduce additional round-trips. (This second requirement also rules out the .object.patch.* APIs, since they require lots of round-trips, so the replacement would have to be raw IPLD and .object.put only.)

For upload progress we could already invoke a callback after every file or file chunk, but we don't currently know the total number of files or the total upload size before completing the upload, so that is of limited utility – and this wouldn't really change even if we had the above.

I do have one other idea, however: maybe we can get this to work by forcing the write end of the TCP stream to be closed after the upload has completed on our end. For this, could you try hacking in a call to socket.socket.shutdown(socket.SHUT_WR) after the last chunk has been sent (unencrypted HTTP connections only, for now) and see if that changes anything?
Also: can you check whether both py-ipfs-http-client (when uploading to nginx) and nginx (when uploading to go-IPFS) terminate the chunked upload with the required 0\r\n\r\n sequence? It's quite possible it's a bug in nginx, though, as chunked HTTP uploads are a pretty seldom-used feature…
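The two steps suggested above (send the 0\r\n\r\n terminator, then half-close the write side) could be hacked in with something like this sketch; `finish_chunked_upload` is a hypothetical name, not an existing function in the client:

```python
import socket

def finish_chunked_upload(sock: socket.socket) -> None:
    """After the last data chunk: send the terminating zero-length chunk,
    then half-close the write side so the server sees EOF on the request
    body while we can still read the response on this socket."""
    sock.sendall(b"0\r\n\r\n")      # required chunked-encoding terminator
    sock.shutdown(socket.SHUT_WR)   # half-close: no more writes, reads still work
```

A quick way to see the effect locally is a socketpair: after the call, the peer receives the terminator followed by EOF (recv returns b"") while the sender can still read any response the peer writes back.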
