Bug: Comments not loading #244
Comments
@daveajones you can assign this one to me.

I have a pretty good idea of what this is, but I need to do some research. In the meantime, here are some references:
So, the current implementation assumes that each chunk written by the server arrives as a single read on the client. From https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Using_readable_streams:

(quoted passage not preserved)

So, yeah, it should be one chunk, so I suspected something else was at play here, then did some verbose logging in both environments.

Production: (output not preserved)

Local: (output not preserved)

Notice anything? I'm assuming that Cloudflare is doing HTTP/2 by default and converting it at its reverse proxy. @daveajones, any details from the prod setup that you think may be influencing here, besides the observations above?
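For illustration, a minimal TypeScript sketch of the per-read parsing pattern described above (the function and its use of the endpoint are hypothetical, not the actual site code):

```ts
// Sketch of the fragile pattern: assumes each read() delivers exactly
// one complete JSON document, which HTTP/2 framing and proxy buffering
// (as observed in this issue) do not guarantee.
async function loadComments(url: string): Promise<unknown[]> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  const comments: unknown[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Breaks if a read contains a partial document or several documents.
    comments.push(JSON.parse(decoder.decode(value)));
  }
  return comments;
}
```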
I noticed this too while poking around on my dev copy. My workaround was to add newlines to each chunk (e.g. NDJSON) and then split out the newlines on the client side.
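A minimal sketch of the server side of that idea, assuming a plain Node http handler (the data and port are placeholders, not the project's actual route):

```ts
import { createServer } from "node:http";

// Hypothetical stand-in for the real comments data.
const comments = [{ id: 1, text: "first" }, { id: 2, text: "second" }];

createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "application/x-ndjson" });
  for (const comment of comments) {
    // One JSON document per line: the newline delimiter survives any
    // re-chunking done by proxies in between.
    res.write(JSON.stringify(comment) + "\n");
  }
  res.end();
}).listen(8000);
```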
@dellagustin I was able to recreate it by removing the replies part (Line 146 in e4ff8cf).
Here is my workaround for the issue: ericpp@eaa8c87
Hello @ericpp, thanks for jumping in. I'm curious to understand the reason you removed the replies part.

The solution I was considering is similar to yours, but we cannot assume that every data frame (using the HTTP/2 term that is roughly equivalent to a chunk, according to what I read) will contain a full JSON payload. Take a look at the following testing screenshot:

(screenshot: three complete chunks followed by an incomplete one)

You can see here 3 complete chunks and a last one that is incomplete. We need to store the last incomplete part and wait for the next data frame.

Nevertheless, I'm reasonably certain that this is impairing the intended performance improvement. We can work around it, but if we could somehow ensure a 1:1 chunk-to-data-frame ratio, that would likely give us the best performance. I'll try to fix it this weekend. Testing in prod-like conditions is one of the challenges.
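A minimal TypeScript sketch of that "store the incomplete tail" approach, assuming newline-delimited JSON as suggested above (names are illustrative):

```ts
// Reads an NDJSON stream, carrying any incomplete trailing fragment
// over to the next data frame before parsing.
async function readNdjson(
  body: ReadableStream<Uint8Array>,
  onComment: (comment: unknown) => void,
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let carry = ""; // incomplete tail of the previous data frame
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    carry += decoder.decode(value, { stream: true });
    const lines = carry.split("\n");
    carry = lines.pop()!; // may be a partial document; keep for next frame
    for (const line of lines) {
      if (line.trim() !== "") onComment(JSON.parse(line));
    }
  }
  if (carry.trim() !== "") onComment(JSON.parse(carry)); // flush the last document
}
```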
Production is client to CF (HTTP/2), then CF to origin (nginx reverse proxy, HTTP/2). I can disable HTTP/2 in nginx and see if it's the origin buffering that is the problem?
Hi @daveajones, when I mentioned Cloudflare I was not aware there was nginx in the mix as another "moving part". |
@dellagustin I think I removed the replies part just to see how the code was working and to figure out more ways to speed up the comments loading. |
Ok, here is the nginx config:

```nginx
server {
    if ($host = www.podcastindex.org) {
        return 302 https://podcastindex.org$request_uri;
    }

    root /var/www/html;
    index index.html;

    server_name www.podcastindex.org podcastindex.org;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8000;
    }

    listen 80 default_server;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```
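For context, nginx buffers proxied responses by default (`proxy_buffering` is `on`). A config-side alternative to the per-response header discussed below would be to disable it in the `location` block; a sketch, not tested against this exact setup:

```nginx
location / {
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:8000;
    proxy_buffering off;  # forward response chunks to the client as they arrive
}
```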
Thank you @daveajones. Seems quite straightforward. I guess the HTTP/2 issue that is going on is the default behavior then.
@dellagustin I'm able to recreate the problem through Nginx/Node in Docker: ericpp@314519a. It seems like Nginx is buffering and sending the response in roughly 4096-byte chunks: chunks.txt
@dellagustin Looks like passing the X-Accel-Buffering: no header from the backend fixes the buffering issue.
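A minimal sketch of sending that header from a Node handler (the handler itself is hypothetical, not the project's actual route):

```ts
import { createServer } from "node:http";

createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "application/x-ndjson",
    // Tells nginx not to buffer this particular response, so each
    // written chunk is forwarded to the browser immediately.
    "X-Accel-Buffering": "no",
  });
  res.write(JSON.stringify({ ok: true }) + "\n");
  res.end();
}).listen(8000);
```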
Comments for an episode are sent back to the front end using chunked Transfer-Encoding, but that is not supported on HTTP/2, which is used by the reverse proxy (nginx). At the moment nginx buffers the chunks before sending them to the browser, which breaks the way the chunks are processed. With this commit, we use the X-Accel-Buffering header to disable this buffering, which worked in local testing. We should still add a clearer chunk delimiter so that any other buffering or unknown edge case will not break the comments function (increase robustness).

Co-authored-by: ericpp
I tested @ericpp's solution and it works; I have created a PR for that.
…or-comments Fix #244: Disable buffer for comments
This commit is a refactoring of the Comments function to improve robustness when loading partial responses from the comments API. Once we introduced partial responses (i.e. chunked encoding), it worked in dev but failed in production due to buffering in the reverse proxy (nginx). This was already solved with Podcastindex-org#247, but only if the reverse proxy remains unchanged. With this refactoring, the implementation would continue to work even if buffering took place, making it more robust. For testing, the change introduced in the PR mentioned above was temporarily reverted. For additional testing instructions, see Podcastindex-org#247 (comment)
…ing-robustness Refactoring #244 - Comments robustness
This is likely a regression of #236.