Traefik is routing traffic to wrong backend. #1174
Comments
@klausenbusk I think it's due to the use of websockets. Traefik will not kill the current connections indeed. But this needs some discussion. In your opinion, what would be the perfect behavior?
@emilevauge shouldn't the graceTimeOut parameter take care of killing those connections on configuration reload?
@timoreimann indeed, but it seems there may be a regression on this... even with normal HTTP requests (not websocket).
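For reference, `graceTimeOut` is a global setting in `traefik.toml` that bounds how long in-flight requests get to finish during a hot reload; a minimal sketch (v1.x syntax assumed, value illustrative):

```toml
# Give in-flight requests up to this many seconds to finish when the
# configuration is reloaded, before their connections are closed.
# (Value illustrative; v1.x global setting.)
graceTimeOut = 10
```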
I have done a little more debugging since I opened the issue. I added a [...]. So what I think is going on here is that some traffic is forwarded over the connection created in [...].
On second thought, you probably don't want to shut down requests after `graceTimeOut`. Maybe a second graceful termination parameter might be useful to get rid of (too) long-running websocket connections.
Finally, after 2 hours of debugging, I was able to figure out the root cause. The issue is caused by the fact that we use Cloudflare, which uses keepalive.

To reproduce, start two whoami containers:

```
docker run -p 8081:80 --rm -t -i emilevauge/whoami
docker run -p 8082:80 --rm -t -i emilevauge/whoami
```

Then start Traefik with the following config:

```toml
logLevel = "DEBUG"
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":8080"

[file]

[backends]
  [backends.backend1]
    [backends.backend1.servers.server1]
    url = "http://127.0.0.1:8081"
    weight = 1
  [backends.backend2]
    [backends.backend2.servers.server1]
    url = "http://127.0.0.1:8082"
    weight = 1

[frontends]
  [frontends.frontend1]
  backend = "backend1"
    [frontends.frontend1.routes.test_1]
    rule = "Host:backend1.com"
  [frontends.frontend2]
  backend = "backend2"
    [frontends.frontend2.routes.test_1]
    rule = "Host:backend2.com"
```

Then install nginx and use the following config:
Nginx is configured with keepalive, so it reuses connections. Now we should be able to call both backends like this:

```
curl 127.0.0.1 -H "Host: backend1.com"
Hostname: c7994ef9a8db
[...]
curl 127.0.0.1 -H "Host: backend2.com"
Hostname: d4ff25df4e8b
```

Now let's try requesting a websocket "upgrade":

```
curl 127.0.0.1 -H "Host: backend1.com" -H "Upgrade: websocket"
Hostname: c7994ef9a8db
[...]
```

and finally call `backend2.com` again:

```
curl 127.0.0.1 -H "Host: backend2.com"
Hostname: c7994ef9a8db
[...]
```

nginx reuses the hijacked connection, which now points to `backend1`. I'm not sure what the proper way to fix this is, but I can't be the only one using Cloudflare, and this can also be abused. I hope this makes sense :)
What version of Traefik are you using (`traefik version`)?

v1.1.2

Edit: also present with v1.2.0-rc1.
What is your environment & configuration (arguments, toml...)?
Traefik is running in a Docker container on CoreOS, pulling its configuration from etcd, and all traffic is routed through Cloudflare first (DDoS protection).

etcd config: taken from the debug log line `Configuration received from provider etcd:`.
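For reference, the etcd provider reads this kind of configuration from keys under a prefix (by default `/traefik`); a rough sketch of the key layout (names and addresses are illustrative, not the actual configuration from the log):

```
# Illustrative key layout for Traefik v1's KV/etcd provider (etcd v2 API).
etcdctl set /traefik/backends/b5/servers/server1/url "http://10.0.0.5:8080"
etcdctl set /traefik/backends/b5/servers/server1/weight "1"
etcdctl set /traefik/frontends/f5/backend "b5"
etcdctl set /traefik/frontends/f5/routes/main/rule "Host:ws.example.com"
```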
What did you do?
Pointed backend b5 to another server (the old server also hosted the load balancer, but on a different port).
What did you expect to see?
That only traffic for frontend f5 gets forwarded to b5.
What did you see instead?
That some traffic from f1 (primarily traffic to admin.foobar.com) gets forwarded to b5.

Another thing I observed: none of the requests that get forwarded to the wrong server show up in the access log. Also, I haven't been able to reproduce the issue with `curl`, but I did look at the headers with the help of tcpdump and everything looked as it should (I have posted the log in the Slack channel).

Edit: Another thing: b5 is used for websockets, if that matters. Maybe that somehow screws something up? Also feel free to ping me on the Slack channel.
/cc @containous