
Server-sent-events are buffered. #199

Open · schmod opened this issue May 12, 2020 · 22 comments

Comments

@schmod

schmod commented May 12, 2020

I've been having difficulty streaming responses to clients using Server-Sent Events (SSE) while using Argo.

Argo (or Cloudflare) appears to be doing some sort of buffering that is preventing data from being streamed to clients incrementally.

Is there any way to opt out of this behavior?


For example, the following ExpressJS route (which sends an incrementing number to the client every 10ms) behaves very differently depending upon whether it's accessed directly or via an Argo tunnel.

app.get('/count', (req, res) => {
  res.setHeader('Cache-Control', 'no-cache, no-transform');
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  let count = 0;
  const loop = setInterval(() => {
    console.log(count);
    // send one SSE frame per tick
    res.write(`data: ${count++}\n\n`);

    if (count > 5000) {
      clearInterval(loop);
      res.end();
    }
  }, 10);
});

cURLing this route directly:

[screenshot: local30]

cURLing this route over an Argo tunnel:

[screenshot: argo]

(Client on left – server on right)

@analytically

@chungthuang I'm seeing this as well using 2020.11.9

@chungthuang
Contributor

I've tested that SSE works with our hello-world server: https://github.com/cloudflare/cloudflared/blob/master/hello/hello.go#L190. Can you share another simple server I can test with?

@AndreaCensi

I can reproduce this exact issue on my setup (Python + current version of Gunicorn, Waitress).

I see this with a "regular" webpage being served (Content-Type: text/html), not with text/event-stream.

These are the headers I receive connecting directly to the server process:

HTTP/1.1 200 OK
Server: gunicorn/20.0.4
Date: Mon, 30 Nov 2020 07:59:29 GMT
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff

Note the chunked transfer encoding.
The data loads incrementally in the browser.

These are the headers when going through Argo:

cf-cache-status: DYNAMIC
cf-ray: 5fa3088b5ff33756-MXP
cf-request-id: 06b9c3ab15000037569a0d2000000001
content-encoding: gzip
content-type: text/html; charset=UTF-8
date: Mon, 30 Nov 2020 08:00:12 GMT
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
nel: {"report_to":"cf-nel","max_age":604800}
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=%2Bcl2PMxbvgs16Wn5g6nwt3ULjAQ8jWG3h2vbdS3PdH3e%2FsZwv4%2B%2BQj1tvOniUJGxhM8dm2ou48Ntdnv0A5hocmGI2qHXRQUyVRtiO4o12XT2PwD%2BcjK5DHB7ZKe5cEVm"}],"group":"cf-nel","max_age":604800}
server: cloudflare
x-content-type-options: nosniff

In this case, the data still loads incrementally, but with a much larger buffer size. (I don't know how to measure it exactly.)

@robertoandrade

Having a similar issue attempting this with Apache/PHP-FPM sitting in front of the Tunnel. When I check the response via an ngrok tunnel or directly, it works as expected, but as soon as I put Cloudflare in front of it (either via the tunnel or traditionally via the CDN proxying the origin), it seems to buffer 3-4 KB before sending each "chunk", even though we're using the "no chunked encoding" option.

This issue has been unresolved for almost 2 years now; has anyone found any workarounds?

@robertoandrade

Something worth mentioning here is that we're sending a standard text/html content type, not text/event-stream, but it seems cloudflared treats responses with that content type differently. I wonder if we could have an option/env variable to control that behavior for other (or all) content types.

@robertoandrade

i.e. specify flushableContentTypes overrides via a CLI parameter or config?

@Nicarim

Nicarim commented Oct 24, 2022

I'm having trouble with SSE getting buffered: the Content-Type header is set to text/event-stream, but it still seems to take two or more minutes to get any response from the event-stream endpoint. Is that functionality working correctly?

@rh2048

rh2048 commented Dec 19, 2022

Having the same issue trying to stream logs when proxying Nomad with cloudflared. It works fine when proxied through Cloudflare normally.

Related: hashicorp/nomad#5404 (comment)

@thearchitect

Same issue with https://connect.build

We use gRPC server streams extensively, and this issue is a blocker.

@vcarus

vcarus commented Apr 3, 2023

I am having the same issue with the latest version of cloudflared.
Does anyone have a workaround for this?

@meino-meskenas

I had the same problem in a .NET Core implementation of SSE.
A quick fix was to remove the "Connection" and "Content-Length" response headers from the SSE result. I also added "X-Accel-Buffering": "no" for the nginx reverse proxy. After that, every request works like a charm.
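
For the Express setup from the original post, a rough equivalent of the same tweak would be something like this (an untested sketch; the /events route and port are just illustrative):

const express = require('express');
const app = express();

app.get('/events', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache, no-transform');
  // mirror the .NET fix: keep the response chunked (no Content-Length)
  // and tell an nginx reverse proxy not to buffer it
  res.removeHeader('Content-Length');
  res.setHeader('X-Accel-Buffering', 'no');
  res.flushHeaders();

  // one SSE frame per second until the client disconnects
  const timer = setInterval(() => res.write('data: ping\n\n'), 1000);
  req.on('close', () => clearInterval(timer));
});

app.listen(8080);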

@H01001000

I believe I'm encountering the same issue with React 18 streaming.

@arslancodes

Hey guys, did anyone manage to find a fix for this issue?

@deansundquist
Contributor

Came across this issue via a support ticket. I would ensure that we're using:

  res.setHeader('Content-Type', 'text/event-stream');

@ttrushin

ttrushin commented Aug 1, 2023

Came across this issue via a support ticket. I would ensure that we're using:

  res.setHeader('Content-Type', 'text/event-stream');

This is the answer. Add the header to your response and SSE will work as expected.

Just tested it on a project that was facing this exact issue.
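
A quick browser-side way to verify (a sketch; the hostname is a placeholder, and the route has to emit proper data: ...\n\n frames for EventSource to surface them):

// logs each message with a timestamp as it arrives; if buffering is still
// happening, messages show up in large bursts instead of every 10 ms
const source = new EventSource('https://your-tunnel-hostname.example/count');
source.onmessage = (event) => console.log(Date.now(), event.data);
source.onerror = () => source.close();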

@computator

I haven't tested this in a while, so maybe it works now, but I just wanted to point out that, per the original post, this was not working even when including the Content-Type: text/event-stream header. As such, that's not actually the fix for the original issue.

@SohumB

SohumB commented Nov 28, 2023

We just ran an experiment by forking cloudflared, and it's not just the flushableContentTypes special-casing inside this repository. There must be internal Cloudflare code that we can't modify that also does the same special-casing, because we're seeing the exact same behaviour before and after our fork: text/event-stream and application/grpc stream through correctly, but our actual content type breaks (in our case application/x-ndjson, failing with a 524).

Testing via a request that reports transfer-encoding: chunked but then closes the connection immediately, we note that for a non-special-cased request Cloudflare injects a content-length header, which it doesn't do for a special-cased request.
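
In Node terms, an equivalent probe would be something like this (a sketch; the /probe endpoint, content type, and port are made up for illustration):

const express = require('express');
const app = express();

app.get('/probe', (req, res) => {
  // no Content-Length is set and we write before ending, so Node sends
  // Transfer-Encoding: chunked and then closes immediately; comparing the
  // headers the client sees with and without the tunnel shows whether
  // Cloudflare injected a Content-Length for this content type
  res.setHeader('Content-Type', 'application/x-ndjson');
  res.write('{}\n');
  res.end();
});

app.listen(8080);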

@mattholy

What about application/octet-stream? It seems to be buffered as well.
Is there any way to avoid this, like proxy_buffering off; in nginx?

@mattholy

What about application/octet-stream? It seems to be buffered as well. Is there any way to avoid this, like proxy_buffering off; in nginx?

The same as #1018

@xorrvin

xorrvin commented Mar 2, 2024

I have found a workaround for this, since I was also having similar issues. I'm running a Git-like service (https://github.com/xorrvin/gitoboros) which uses chunked encoding and outputs some info in a stream. It's proxied by nginx. I'm using default settings for the tunnel.

The solution has two steps.

First, you need to mask the original content type header (in my case application/x-git-*) in nginx and add a "streamable" one (like application/grpc) instead:

location /your-endpoint/ {
    proxy_pass http://your-backend;

    # these two are needed so that nginx doesn't buffer the response internally
    proxy_cache off;
    proxy_buffering off;

    # cloudflare tunnel hack: replace original Content-Type
    proxy_hide_header Content-Type;
    add_header Content-Type application/grpc;
}
Then, you need to go to your Cloudflare Dashboard and set up a few rules for that specific endpoint:

  • Go to Rules -> Transform Rules -> Modify Response Header and restore the original content type there, like so:

    When incoming requests match...

    (http.request.method eq "GET" and starts_with(http.request.uri, "/your-endpoint"))

    Then...

    Modify response header -> Set static -> Content-Type = "your original content type"

  • Another rule is in Caching -> Cache Rules, to explicitly Bypass cache for /your-endpoint.

That's it!
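
If there is no proxy you can configure, the same masking could also be done in the app itself; for example, in Express (a sketch following the same idea; /your-endpoint is a placeholder):

const express = require('express');
const app = express();

// force a "streamable" content type on the way out; restore the real one
// with a Cloudflare Transform Rule as described above
app.use('/your-endpoint', (req, res, next) => {
  const setHeader = res.setHeader.bind(res);
  res.setHeader = (name, value) =>
    name.toLowerCase() === 'content-type'
      ? setHeader(name, 'text/event-stream')
      : setHeader(name, value);
  next();
});

app.listen(8080);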

@vanduc2514

@xorrvin It works as expected, thank you for the suggestion. I use a traefik ingress controller and did a similar thing (like the nginx approach):

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: ui-cloudflared-stream-header
spec:
  headers:
    customResponseHeaders:
      Content-Type: text/event-stream

Then the magic happens: the response is not buffered anymore.

@la3rence

In my case, setting the Content-Type HTTP header doesn't work. (I am running an ollama instance for LLM apps; its default response content type is application/x-ndjson.)
