Add hostname to DNS service discovery #1855
base: master
Conversation
This closes moby#1854 which is related to moby/moby#34239 and moby/swarmkit#2325. Details of this patch are in the PR. Signed-off-by: Mario Kleinsasser <mario.kleinsasser@gmail.com>
@kleinsasserm I wrote the long answer on the moby/moby#34239 ticket. We are looking into the possibility of creating task aliases; from a libnetwork point of view we already have all the constructs in place: task aliases (applied uniquely to each task) and service aliases (applied to all the tasks of a specific service). The change to set them properly will be on the swarmkit side.
As I wrote in moby/moby#34239, duplicate hostnames would not cause any problems, because the DNS implementation used will create multiple A-records, just as it already does for tasks.servicename at https://github.com/docker/libnetwork/blob/master/service_common.go#L30. This PR does the same thing, just for the hostname entry; there is no difference. I agree with you that someone could pick tasks.servicename as the hostname for their containers. In that case tasks.servicename would be overwritten, but with exactly the same information it would get through service_common.go anyway, so the result is identical. I will try to provide the evidence tomorrow, because I currently do not have the build on the PC I am writing from.

I also agree with you that there might be a simpler solution, but this PR adds a more generic capability, which in turn opens up more possibilities for connecting services into clusters. I chose Etcd/Redis because they are well-known services everyone knows about. But if, for example, a piece of software cannot provide DNS on its own, this PR would offer a generic solution to get things containerized, and in the real world a lot of software would benefit from that. In many environments, for example Microsoft ADS, the hostname is always registered to DNS automatically; why not in the Docker case? Why build a construct with task aliases when there is a more convenient and, in my humble opinion, more logical way?
@kleinsasserm it won't be a problem if the containers resolved by the same name belong to the same application, but if they do not, things will start breaking. Today, services have unique name resolution guarantees across the cluster.
OK, but with the stack (and of course service) YAML you have to specify the hostname per service. If you choose the same name (the hostname parameter) for different services, that is a copy-and-paste error or a usage mistake. Normally there is no problem because container hostnames are generated, but there are practical use cases where I, and of course others, would like to resolve simple hostnames. As I mentioned before, it is common behavior. But what do you think about updating the documentation on this point? Maybe here: https://docs.docker.com/docker-cloud/apps/service-links/#dns-hostnames-vs-service-links. That documentation does not work in swarm: e.g., a ping to web-1 as shown in the docs is currently not possible. Once the merge is accepted, we can leave clear documentation about the caveats there: don't use the same hostname golang template for different services :)
@kleinsasserm yep that example is definitely too simple, the name is more complex.
That is the name that you can ping:
Yes! Exactly! That is the problem I faced at work and that is what I mean: pingable (resolvable) hostnames containing the dynamic task ID are pointless in a more sophisticated setup, because the ID changes every time the service is redeployed, so they are ultimately not usable by software.
@kleinsasserm still, the proposal using the tasks template is best effort. As I explained before, if you request 5 replicas you have no guarantee that you will get task slots 1 through 5. That is not an assumption you can build on top of if you use services.
Yes, that is also true. But that, for example, is what Raft is for. If I would like to have 5 replicas and only 3 are there, it is OK, because consensus is still reached. It does not matter which of them are there, e.g. 1,3,5 or 1,2,3; it matters that there are 3: a quorum. If the missing two come online, fine; if not, also fine; and if there are fewer than 3, so be it. But I really need the possibility to write --join web-1, web-2, web-3, and so on into the config files. We simply cannot put test_srv.11.ii7hoziq9n0x1no6f0kt3rnh5 into the config; the task ID is not, and never will be, deterministic.
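The quorum argument above can be made concrete with a tiny helper (an illustrative sketch, not part of this PR): a Raft-style cluster of N configured members stays healthy as long as a majority is reachable, regardless of which slots survive.

```go
package main

import "fmt"

// hasQuorum reports whether the reachable members form a majority of
// the configured cluster size, which is all Raft needs; it does not
// matter whether the survivors are slots {1,2,3} or {1,3,5}.
func hasQuorum(configured, reachable int) bool {
	return reachable >= configured/2+1
}

func main() {
	fmt.Println(hasQuorum(5, 3)) // true: 3 of 5 is a majority
	fmt.Println(hasQuorum(5, 2)) // false: quorum lost
}
```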
Just to understand: why does tasks.service_name not work here if you don't care about the specific instances? It will work all the time, even across restarts, because the list will always be fresh.
Because DNS round-robin (DNS-RR) does not work during a cluster setup where the software needs to connect to the individual instances. For example Redis: https://redis.io/topics/cluster-tutorial:
In a swarm, as in this example, you have 5 containers with 5 different (unpredictable) IP addresses but the same port on the overlay network. Therefore you would like to create the cluster like:
If the software is somewhat intelligent, it will write entries like these (pseudo) into the config files on these 5 containers. It will then never matter which IP address is currently behind web.1 or web.5; it will simply work, provided the hostname is registered via DNS during service configuration, e.g. via {{.Service.Name}}.{{.Task.Slot}}. This results in resolvable hostnames that each container can contact using the information stored in the configuration.
Ok, here is the output with the PR active. The result is as it should be; the hostnames are resolvable:
Now here is what happens if a user specifies the
Now, finally, I will start two different services with the same container hostnames, joining the same network. Even in this case the result is predictable: all containers get the same hostname, and with this PR all containers get registered in the service discovery DNS. Therefore, in my humble opinion, this is a major benefit: with this PR I can create, if needed, two services that share the same container hostnames, and I get a DNS record with all container IPs of both services for free. The behavior is the same as always when registering the same DNS entry multiple times; in fact, this is standard practice for any DNS service. If the user makes a configuration mistake, no existing DNS implementation will reject it, because it is not a DNS violation; it is a configuration mistake in the stack/service.
I would like to get this merged, because I currently see no reason why this is not common behavior for any DNS, or why it would not be useful from a general, generic point of view. Furthermore, it seems that this PR does not harm any existing functionality. Therefore I would like to ping @fcrisciani, @mavenugo and @sanimej, as you are merging PRs against this project. I don't want to be annoying, but if there is any disagreement, please let me know; I would like to discuss it. If any of my explanations is unclear, please let me know and I will explain it again. I would like to support this PR because it solves a situation that occurs in real-world Docker devops usage. Thank you!
It has been a week now since the last update on this PR, and I would like to ask if there is something more I can do, @fcrisciani, @mavenugo and @sanimej?
I'm not sure this should be merged before a general design is there. Implementing this could easily conflict with the result of that discussion. Also see moby/moby#24973 which was closed for that reason.
```
docker service create --hostname="hello🐳world" --name foo nginx:alpine
docker service create --hostname="this is just something" --name bar nginx:alpine
docker service create --hostname="192.168.1.1" --name baz nginx:alpine
```

Before going full-steam ahead, a proper design is needed, and alternatives explored, e.g.:

```
docker service create \
  --name my-service \
  --hostname="{{.Service.Name}}.{{.Task.Slot}}" \
  --network name=y-network,"task-alias={{.Task.Hostname}}.{{.Task.Slot}}" \
  nginx:alpine
```
Further discussion is going on in the Slack channel https://dockercommunity.slack.com/messages/C24JNLCHJ/. I will post updates as there are notable outcomes. Please feel free to join the discussion there.
Also linking moby/swarmkit#1242, which is where the global design for a DNS schema is being discussed.
@thaJeztah thx for referencing. As nearly two weeks have gone by and I've got no response on Slack about it, I would like to ask once again whether anyone sees any problem with this PR. I know about moby/swarmkit#1242, but from my point of view the discussion there is a little out of scope, because it adds a lot of functionality (external IPAM, external DNS, etc.). What this PR does is put the hostname of the container into the internal DNS of swarm; nothing more. It's OK that there should be a bigger solution, but using external IPAM or external DNS adds product-specific dependencies, which can only be optional. The issue moby/swarmkit#1242 has been open for more than a year, which is understandable because it needs a lot of changes (compose file, Docker client, etc.). So all I got as a response is that there might be a problem here and there, but I could not reproduce any real problems. If I register a non-conforming (RFC) hostname with the DNS, it will be rejected, period. If no one has concrete suggestions, ideas, or actual problems with this PR, why not merge?
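The "non-conforming hostnames get rejected" point can be sketched with an RFC 1123 label check. This is a simplified illustration under my own assumptions; the actual validation performed by the resolver may differ:

```go
package main

import (
	"fmt"
	"regexp"
)

// rfc1123Label matches a single RFC 1123 hostname label: alphanumerics
// and hyphens, no leading or trailing hyphen, at most 63 characters.
var rfc1123Label = regexp.MustCompile(`^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$`)

// validLabel reports whether s is a well-formed hostname label.
func validLabel(s string) bool {
	return rfc1123Label.MatchString(s)
}

func main() {
	fmt.Println(validLabel("web-1"))             // true
	fmt.Println(validLabel("hello🐳world"))      // false: non-ASCII
	fmt.Println(validLabel("this is something")) // false: spaces
}
```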
And the next 14 days have gone by without any response. Once again the question: why don't you merge this PR? It seems there are more customers who need this one, see moby/swarmkit#1242. But there @stevvooe tells us we should start a new discussion. Are you serious? That looks like ping-pong (or passport 38 from Asterix). But OK, tell me where I should open the next issue, or what I should do, and I will do it, because I need this functionality.
@thaJeztah moby/swarmkit#1242 now seems to be a dead end (see the last post by stevvooe there). Can anyone please help me get my PR merged? @fcrisciani, @mavenugo, I now ask you directly, as it seems not to be discussed anywhere else: what is the problem with my PR? Could you please merge it if there are no arguments left against it? This functionality would be very helpful, not only for me. Thank you!
We want this PR!
@kleinsasserm apologies for the delayed response. I was under the impression that the response from @fcrisciani (moby/moby#34239 (comment)) was adequate, and I failed to pay attention to this thread as well. I have to spend some time going over all the details to give my comments. Though the changes are simple and straightforward, we have to dig into the details on why
Thank you, that would be very, very helpful. Please check it and let us know. As you can see in the comments, the actual behavior is being discussed in different places, but this one seems to be the last one left. Maybe you have also read through moby/swarmkit#1242; I've explained it there and in the other linked issues, e.g. swarmkit issuecomment-319949075.
Just to let you all know: I've chatted with @mavenugo on the docker-networking channel on Slack. He is under heavy load at the moment, but he will have a look at this PR soon.
@mavenugo I can't reach you on Slack either... are you finally lost in work? 😄 Any status update on this one?
@kleinsasserm :) thanks for the follow-ups. This is certainly in my stack and I will get my investigation going this weekend. Sorry about the delay and thanks for your patience.
No problem! The main point is that you are OK 👍
@mavenugo one more week has passed 😄 - any news on this?
@mavenugo 😁 one more week; did you find a free time slot to have a look? 🙈
@mavenugo Maduh?
@kleinsasserm apologies for the delay again. I went through all the discussions across the different repos and I understand the requirement. It is indeed very useful if we can identify a task individually without having to call the Docker API to get the task ID and add extra magic. IMHO, there are a few reasons why reusing the
As you can see, my concern here is backward compatibility: existing applications make assumptions about the hostname for their own operation, and changing the operational nature of DNS resolution might impact them in unforeseen ways. But since we are discussing functionality to better identify a task individually, we should consider the

Also, I am very skeptical of the success of even the

Having said that, we could implement the

Again, sorry about the delayed response. But I think this is a much safer approach considering all the legacy that Docker carries atm.
I recently noticed that there already is an
Sorry @mavenugo for my late answer, I was busy the last weeks too. Now I have read myself back into this, and I'm pretty sure that your suggestion to implement something like a template-aware

The mentioned network aliasing probably won't work, because it would still assign the

If we go the way with

What is your opinion on this?
@mavenugo We wrote a little tool (a service wrapper) that is able to retrieve the needed information directly from the Docker Swarm API via the network inspect and service inspect calls. So I am able to get information about the containers of a Docker network, especially the IP addresses and of course their corresponding names and service. In conjunction with the already existing hostname templating, the labeling possibilities of the service, and the Golang templating functionality, we now have a flexible way to dynamically configure and reconfigure nearly every service. I think you can close this one, as we made a workaround. If someone is interested, we have released it on GitHub: https://github.com/n0r1sk/bosnd. It's far from perfect, but it works. 😄 Thanks!
@kleinsasserm I don't think that your workaround is a good one. As you write in the
Yes, that is basically true, but the host where bosnd runs doesn't have to be a manager host. Second, you can add something like Kong between bosnd and the manager interface to get some kind of API security. Third, yes, it would be nice to have RBAC in the Docker Engine, like Casbin provides, to allow only GET requests by default (yes, I know about Docker EE). Mounting the SSL certificates is always a suboptimal solution, but sometimes you have no options (SSL won't work without them). Furthermore, you cannot expose an "ipaddress:port:port" combination with Docker Swarm, and you would like to share the Docker host with a lot of explicit external IPs to get your external DNS mappings done (for many services). If that worked, you would be able to use Docker secrets. On the other hand, if you use Docker Compose you cannot use external Docker secrets; only secrets created during the Docker Compose up work, and these (file-based) secrets are bind mounts from the file system. Sadly, no solution. What is possible is that Docker Compose can map "ipaddress:port:port" correctly. And finally, docker run cannot use any secrets at all, even if it is running on a swarm. Why? There is no combination that works for all of these cases. Conclusion: you are correct, but real life is not black and white, it is gray, and you need the flexibility to dynamically get the container IPs into, for example, the load balancer configuration of Apache httpd, because only Apache httpd can talk AJP13 with Apache Tomcats. Without it, you are not able to do MTA (Modernize Traditional Applications (c)Docker). 😏
I'm failing to see why this is not merged, as it seems to solve a lot of issues with stack deployments for the different types of services mentioned above, and for others like distributed Erlang apps (RabbitMQ & others). Or... is there a way to replace this embedded DNS with another implementation (batteries included but replaceable), or to extend it? @m4r10k besides this being accepted here, what else is needed to have it deployed with docker-ce?
@lazedo you read it correctly; there are no big concerns about it, but there might be problems, as @mavenugo said. As I wrote above, I personally can't see any serious problems with it, including garbled hostnames for example, but I am far from knowing the code well enough to rule out side effects. This PR is just hacked straight into it, without fuses.

One discussed problem is that during scaling it is not guaranteed that new containers fill up the gaps in the line. For example, you start with three containers, ct-1, ct-2, ct-3, and after some up- and down-scaling you end up with ct-3, ct-8, ct-10; the gaps need not be filled. This could be a problem if you do not know about it. If you use this PR for a cluster service where you won't voluntarily autoscale, it might work. But even then it is not certain, because if a Docker host dies and Swarm starts the missing container (replica) on another Swarm node, the old DNS entry (e.g. host-03) might still be there, and you will end up with host-04, which might not be part of your cluster config (and probably never will be): a timing problem. Therefore, this functionality should be optional and switchable by config.

As for the merge: no, there would be nothing more to do than to merge this (or cleaner code) into libnetwork; afterwards, the resulting git hash has to be updated in the vendor list of docker-ce etc. to be picked up in the next build cycle, e.g. 18.01. I think the greatest problem at the moment is that all the libnetwork people are very busy, as I can see on Slack, so an ongoing discussion is a little hard to establish. I am glad to see that there are others out there who share my point of view.

For replacing DNS: no, there is no direct way to replace the internal DNS-based service discovery. The only way to go is to use something like HashiCorp Consul, but in my personal view such setups are hard to build and maintain. A built-in solution, using the already existing DNS service discovery, would be preferable. For our needs, we wrote a simple configuration tool which is based on the
👍
@thaJeztah is there any chance you can bring this issue to a resolution? I just needed replicated swarm service tasks to be able to talk to each other. If you look around, you'll see too many example compose files hard-code the IP addresses for the tasks into the compose file, or worse, create 4 tasks for a service with identical YAML by adding 4 services... for example:
FYI: afaiu my PR moby/moby#39204 (which is already merged) implements the same thing, just in a different way. So it will be part of the next Docker version.
This closes #1854 which is related to moby/moby#34239 and
moby/swarmkit#2325.
Signed-off-by: Mario Kleinsasser mario.kleinsasser@gmail.com
- What I did
As mentioned in moby/swarmkit#2325 and in #34239, it is sometimes necessary to
resolve the hostnames of the containers from within a service. This is useful as
it gives the service software running inside the containers the possibility
to resolve each other through DNS. I've looked into the source code and found
that only some minor changes must be made to vendor/github.com/docker/libnetwork/agent.go.
- How I did it
I've looked at the mentioned file; there are two functions that are
responsible for creating the DNS entries in the service discovery.
I've patched these two functions so that in any case, regardless of whether
it is a service task or a simple container that joined a network, the hostname
is added to the DNS service discovery. For the hostname of the container
I use the sandbox struct, which is available in the context of the code.
- How to verify it
- Impact
Even if the hostname of all containers were the same, this would not be a
problem, because the result is a DNS entry like `test_srv` that will be
resolved to all IP addresses (multiple A-records), just like the
`tasks.servicename` entry is resolved. The benefit is that every container can
reach each other container of the same or another service on the same network
by knowing the hostname pattern, which is deterministic and predictable.
Therefore, configuration possibilities for cluster services are given -> see
#34239 for details.
- Description for the changelog
Add container hostname to DNS service discovery