
[Docker swarm mode] No round robin when using service #718

Closed
vincentlepot opened this issue Oct 6, 2016 · 8 comments

@vincentlepot commented Oct 6, 2016

What I tried

docker network create -d overlay frontend

docker service create --name traefik --network frontend -p 8081:8080 -p 81:80 --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock traefik:camembert --web --docker.swarmmode=true --docker.domain=swarm.test --logLevel=DEBUG

docker service create --name whoami --replicas 5 --label traefik.port=80 --network frontend emilevauge/whoami

But every time I run:

curl -H 'Host: whoami.swarm.test:81' http://x.y.z.t:81/

I get the same container responding (instead of seeing different container IDs).

That's strange, since the load balancing should be handled by Docker itself...
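
For reference, one way to repeat the check and see which task answers each time, assuming the whoami backend prints a "Hostname:" line in its response body:

# Hit the service a few times and keep only the responding container's hostname
for i in $(seq 1 5); do
  curl -s -H 'Host: whoami.swarm.test:81' http://x.y.z.t:81/ | grep Hostname
done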

@vincentlepot (Author)

EDIT: It seems to switch to another container when there is no activity for a while. Is there a keep-alive connection between Traefik and the backend, or something equivalent?

@vdemeester (Contributor)

@vincentlepot This is more or less expected. In swarm mode, Træfik points to the service's virtual IP and lets the Docker swarm mode internal load balancer do the work. So it really depends on how the swarm mode load balancer handles these connections.
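
To see what Træfik is actually pointing at, one rough check (assuming the service names from the commands above) is to compare the service's virtual IP with its individual tasks:

# The single virtual IP exposed for the service on each attached network:
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' whoami

# The individual tasks (replicas) hidden behind that VIP:
docker service ps whoami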

@mvdstam (Contributor) commented Oct 7, 2016

Possibly related to moby/moby#25325

@vincentlepot (Author)

The strange thing is that with another reverse proxy (Interlock, for instance) I don't have this issue with the same setup.

@vincentlepot (Author)

@mvdstam It doesn't really look like the same issue, since I don't get any error; requests just keep hitting the same container while the other ones are untouched.
It is as if a connection is established to one container and stays open. Is there a way to tell Traefik not to use persistent connections to the backends in this case?

@bgv commented Oct 9, 2016

In this particular example the emilevauge/whoami HTTP server uses keep-alive, due to the default settings of the Go HTTP server (30 seconds), so if you refresh after 31 seconds it switches to another instance.

To me it looks like Traefik honors that, which I wouldn't classify as a bug. More of a feature that might need an on/off configuration switch?

@vincentlepot try this (from Docker demos):

docker service create --name vote --replicas 5 --label traefik.port=80 --network frontend instavote/vote

and it does the balancing on every request.
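
A rough way to see the timing described above, reusing the endpoint from the original report and the 30-second figure from this comment:

# Two requests back to back reuse the kept-alive backend connection,
# so the same task should answer:
curl -s -H 'Host: whoami.swarm.test:81' http://x.y.z.t:81/ | grep Hostname
curl -s -H 'Host: whoami.swarm.test:81' http://x.y.z.t:81/ | grep Hostname

# After the idle keep-alive window has passed, a new backend connection is
# opened and the swarm VIP may pick a different task:
sleep 31
curl -s -H 'Host: whoami.swarm.test:81' http://x.y.z.t:81/ | grep Hostname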

@vincentlepot (Author)

I agree with you: this is not a bug, more a request for a configuration switch.

@vincentlepot (Author)

Moreover, that sounds like a client-side issue rather than a Traefik one, so I'm closing this.
