
Multicast in Overlay driver #552

Open
nicklaslof opened this issue Sep 21, 2015 · 60 comments

@nicklaslof

Multicast network packets don't seem to be transported to all containers in an overlay network configuration.

I have investigated this a bit, trying various manual configuration inside the network namespace, but haven't gotten anywhere.

@sanimej

sanimej commented Sep 22, 2015

@nicklaslof Yes, this is the current behavior. The overlay driver is implemented using vxlan unicast, so handling multicast needs some form of packet replication. We are looking into possible options to support multicast on top of overlay.

@nicklaslof
Author

Just to make it clear (especially since I wrote host instead of container in my original text): I mean using multicast between containers while still having vxlan unicast between the Docker hosts.

@dave-tucker
Contributor

@sanimej @mrjana @mavenugo hey guys, any update on whether there is a solution in sight for this one? Per #740, this impacts use of the official elasticsearch image for creating a cluster.

If we can outline a possible solution here, perhaps someone from the community can attempt a fix if we don't have bandwidth for 1.10.

@sanimej

sanimej commented Nov 6, 2015

@dave-tucker Multiple vxlan fdb entries can be created for the all-zero MAC, which is the default destination. This gives an option (with some drawbacks) to handle multicast without the complexities of snooping. I have to try this out manually to see if it works.
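
For context, this sounds like vxlan head-end replication: appending one default-destination fdb entry per remote VTEP so that BUM (broadcast, unknown-unicast, multicast) frames get replicated to every peer. A rough manual sketch, where the device name vxlan0 and the peer addresses are placeholders:

# Append one all-zero-MAC entry per remote VTEP; the kernel then
# replicates BUM traffic to each listed destination.
$ bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.1.11
$ bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.1.12

The drawback is inherent to head-end replication: every multicast packet is sent once per remote host, so the cost grows with cluster size.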

@dweomer

dweomer commented Nov 15, 2015

@sanimej, @dave-tucker: any news on this one? We are looking for this to support containerizing, with minimal refactoring, a multicast-based service-discovery integration in our stack. We can probably make unicast work, but we would prefer not to incur such a refactoring, to avoid unintended consequences and/or further refactoring in our stack.

@mavenugo
Contributor

@dweomer this is not planned for the upcoming release, but I have added the help-wanted label to request help from any interested dev. If someone is interested in contributing this feature, we can help with design review and get this moving forward for the upcoming release.

@alvinr

alvinr commented Dec 2, 2015

+1 - some infrastructure components (like Aerospike) rely on Multicast for cluster discovery.

@oobles

oobles commented Dec 2, 2015

+1 - This would be very useful. At the very least the documentation should note that this is not currently supported.

@bboreham
Contributor

Note there are other Docker Network plugins available which do support multicast. For instance the one I work on.

@jainvipin

> Note there are other Docker Network plugins available which do support multicast. For instance the one I work on.

Same with the Contiv plugin. More here.

@tomqwu

tomqwu commented Apr 14, 2016

+1. In order to adapt Guidewire Application multi-host clustering, multi-host is a must.

@DSTOLF

DSTOLF commented Jul 13, 2016

+1 Can't get Wildfly and mod_cluster to work in Swarm mode, because the overlay network doesn't support multicast.

One could fall back to unicast, but since one would also need to provide a proxy list with the IP addresses of all httpd load balancers, and it would be very difficult to figure them out beforehand, I would say that Wildfly and mod_cluster don't currently work in Swarm mode. Regards.

@medined

medined commented Jul 21, 2016

+1 to support http://crate.io

@DanyC97

DanyC97 commented Oct 17, 2016

Any update on getting multicast implemented?

@mavenugo
Contributor

@DanyC97 @medined @DSTOLF @tomqwu and others: this issue is currently labeled help-wanted. If someone is interested in contributing this feature, we will accept it.

@ghost

ghost commented Oct 17, 2016

@DanyC97 I used a macvlan underlay instead for my needs as a quick solution, and it worked fine.

@jocelynthode

@codergr: Were you able to use macvlan in a swarm with services? Can you easily change the scope of a network?

@mavenugo
Contributor

@jocelynthode that is not supported yet, unfortunately. PTAL at moby/moby#27266; it is one way of supporting such a need, but it needs some discussion to get it right.

@mjlodge

mjlodge commented Oct 19, 2016

Another option is to use the Weave Net networking plugin for Docker, which has multicast support. Full disclosure: I work for Weaveworks.

@jonlatorre

@mjlodge but there is no support for network plugins in swarm mode, no? So Weave can't be used with swarm mode and Docker services. It's a pity that in such an environment (Docker swarm and services) many of the applications that support clustering can't be used, due to the lack of multicast.

@bboreham
Contributor

@jonlatorre local scope plugins do work, so Weave Net can be used with ordinary containers when Docker is in swarm mode. But not services.

With Docker 1.12 it is not possible to use Docker services with plugins.

Docker 1.13 is intended to support network plugins, but as of today, using Docker 1.13-rc5, we have been unable to get a network plugin to do anything with Docker services.

(note I work for Weaveworks)

@jonlatorre

@bboreham thanks for the update. I hope that in the final release Weave will work with Docker swarm and services; I'm impatient to use it :)

@fssilva

fssilva commented Sep 5, 2017

+1

@blop

blop commented Sep 5, 2017

I use a macvlan network for now because of this (macvlan has been supported in swarm since version 17.06), but it's clearly less convenient.

@markwylde

markwylde commented Sep 13, 2017

@mavenugo are there any plans or tips on how this feature could/should be designed? What would be the starting point for getting it implemented?

I'm guessing the code goes somewhere in here:
https://github.com/docker/libnetwork/tree/master/drivers/overlay

Does the driver contain a list, or a method of fetching all the IPs within the network? Could it watch for a multicast packet and then replicate it to all the IPs individually? Would this work, or would it be a performance hit?
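
(Not an authoritative answer, but for context: the driver does know the remote endpoints. It programs one unicast vxlan fdb entry per remote container MAC into the network's namespace, which is why multicast and broadcast currently have no replication target. You can inspect this from a host; the sandbox ID below is a placeholder.)

# Sketch: list the fdb entries the overlay driver programs for a network.
# Overlay namespaces live under /var/run/docker/netns/ with a "1-" prefix.
$ nsenter --net=/var/run/docker/netns/1-xxxxxxxxxx bridge fdb show

Replicating each multicast packet to every peer would work in principle, but it is exactly the head-end replication tradeoff discussed above: per-packet fan-out whose cost scales with the number of hosts.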

@gokhansari

+1

@tunix

tunix commented Sep 26, 2017

I've been relying on Hazelcast's multicast discovery, until I found out that the overlay network doesn't support multicast. An external network definition with the macvlan driver (swarm scoped) seems to be working, although it cannot be defined inside the compose file (as part of the stack). There is an issue already filed for this as well: docker/cli#410

@intershopper

+1

@deratzmann

+1
@tunix right now I am trying to install a Hazelcast cluster (running on a Payara Full Profile server) on different nodes via docker service, and I run into the same issue. Could you please describe your macvlan workaround? This issue seems to be a long-lasting one...

@tunix

tunix commented Nov 9, 2017

There are 2 solutions to this problem afaik:

  • Using hazelcast-discovery-spi plugin for Docker
  • Using macvlan network driver

It's been a while since my last trial of this, but just creating a macvlan network and using it (as an external network) should be sufficient.

$ docker network create -d macvlan --attachable my-network
$ docker service create ... --network my-network ...
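
A macvlan network usually also needs a parent interface and a subnet to carry traffic off the host; a fuller sketch, with the interface name and addresses as examples:

$ docker network create -d macvlan --attachable \
    --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
    -o parent=eth0 my-network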

@deratzmann

deratzmann commented Nov 13, 2017

@tunix creating a macvlan network with swarm scope doesn't seem to work. The container starts, but it cannot reach any IP address. Running with overlay it works (but then multicast is not available).
Any ideas?

docker network create --driver macvlan --scope swarm sa_hazel

docker service create --network sa_hazel ...foo

@conker84

+1

@dhet

dhet commented Nov 30, 2017

@deratzmann Using a network like yours, I can ping containers on remote hosts but multicast still doesn't work.
+1

@blop

blop commented Jan 10, 2018

Found another tradeoff when using the macvlan driver in swarm.

The macvlan driver does not support port mappings, which prevents using "mode=host" published ports as described in https://docs.docker.com/engine/swarm/services/#publish-a-services-ports-directly-on-the-swarm-node

@torokati44

Just asking: Is there any progress on this?

@bbiallowons

bbiallowons commented Apr 18, 2018

Is there any chance that this will be implemented sometime?

@dhet

dhet commented Apr 27, 2018

For the time being I suggest everyone use Weave Net, which works flawlessly in my setup.

@KylePreuss

KylePreuss commented Oct 18, 2018

+1. From my own testing, multicast does not work with the bridge driver either. I'm not talking about routing multicast between the host's network and the containers' NATed network; I'm talking about two containers deployed side by side (same host) using the default bridge or a user-defined bridge network. Those containers cannot communicate via multicast. IMO getting multicast working within the bridge network would be a logical first step before moving on to the overlay network.

Also, the suggestion to use Weave Net will only work with Linux hosts.

I wish I knew earlier that containers cannot use multicast.

Edit: I know multicast should work with "net=host" but, aside from that not being an ideal solution in any sense, it does not work with Windows hosts.
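
For anyone who wants to reproduce this kind of test, a minimal sketch using two containers on one user-defined bridge (assumes the alpine image and installs socat; the container names, network name, and group address are examples):

$ docker network create mcast-test
$ docker run -d --name receiver --network mcast-test alpine sh -c \
    "apk add --no-cache socat && socat -u UDP4-RECVFROM:5000,ip-add-membership=239.1.1.1:eth0,fork -"
$ docker run --rm --network mcast-test alpine sh -c \
    "apk add --no-cache socat && echo hello | socat -u - UDP4-DATAGRAM:239.1.1.1:5000"
$ docker logs receiver   # prints 'hello' only if the multicast datagram arrived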

@davidzwa

Any update? We want to run discovery services in a Docker setup, because our images are automagically pulled by a provisioning service (IoT-Edge). I can't update any binaries outside the Docker system... or it would be great hackery.

@tymonx

tymonx commented Aug 25, 2022

@KylePreuss it is possible to set up side-by-side (same-host) multicast traffic between two or more containers using virtual Ethernet (veth) devices with the macvlan or ipvlan drivers.

Create the veth1 device:

sudo nmcli connection add type veth con-name veth1 ifname veth1 peer veth2 ipv4.method manual ipv4.addresses 192.168.128.1/23

Next, create the veth2 device:

sudo nmcli connection add type veth con-name veth2 ifname veth2 peer veth1 ipv4.method manual ipv4.addresses 192.168.129.1/23

Bring created veth1 connection up:

sudo nmcli connection up veth1

Bring created veth2 connection up:

sudo nmcli connection up veth2

Create a Docker network configuration for the veth1 device:

docker network create --config-only --gateway 192.168.128.1 --subnet 192.168.128.0/23 --ip-range 192.168.128.2/24 --opt parent=veth1 veth1-config

Create a Docker network configuration for the veth2 device:

docker network create --config-only --gateway 192.168.129.1 --subnet 192.168.128.0/23 --ip-range 192.168.129.2/24 --opt parent=veth2 veth2-config

Create Docker Swarm network veth1:

docker network create --scope swarm --driver macvlan --config-from veth1-config veth1

Create Docker Swarm network veth2:

docker network create --scope swarm --driver macvlan --config-from veth2-config veth2

Use them in Docker Compose:

services:
  multicast-sender:
    networks:
      - veth1
      
  multicast-receiver:
    networks:
      - veth2
      
networks:
  veth1:
    external: true
    
  veth2:
    external: true
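
(If I read this setup correctly, it works because both macvlan networks sit in the same 192.168.128.0/23: each veth end acts as the gateway for its /24 half, and the veth pair carries traffic between the two halves on the host, including multicast, which macvlan floods to its parent device.)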

@Alqio

Alqio commented Oct 11, 2022

The solution of using Weave Net seems to work on Linux hosts, but is there a way to achieve this on Windows hosts? I would like to have a Linux host as the manager and a Windows host as a worker. I can install the Weave plugin, but multicast communication between the hosts does not work.

@elvys-zhang

Weave has not been updated in over 4 years. Are there any implementations now?
