Propagate ip #121
Conversation
I haven't been able to test this yet because I accidentally removed my local wire app config and I forgot which hacks I needed to get it working. Something with content security policies didn't work properly. Will try to set it up again tomorrow.
This works on my installation of wire!
I'm fine making that the default. Do I understand correctly that setting this to …? Another question: in the demo value, you set …
I was reluctant to change it there, because for a "prod" environment running it as a … I saw in our …:

```yaml
# deployment.yaml
nodeSelector:
  wire.com/role: ingress
```

Maybe I should add a comment about that. This sounded like a saner choice than running … But again, every prod environment is going to be different with regards to load balancing, and it's hard to give a "right" answer for that use case. So we should probably instead link to https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ or a more "wire-specific" doc on https://docs.wire.com/ that explains what choices a person should make whilst setting up a load balancer and/or ingress for their k8s cluster.
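For reference, a minimal sketch of the node side of that selector. The node name `node1` is a placeholder; in practice the label would be applied with `kubectl label node` rather than by editing the Node object:

```yaml
# Sketch: the corresponding label on a Node object, as the scheduler
# sees it. "node1" is a placeholder; the label is normally applied with
# `kubectl label node node1 wire.com/role=ingress`.
apiVersion: v1
kind: Node
metadata:
  name: node1
  labels:
    wire.com/role: ingress
```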
Your assumption is correct. There is one pod running on each worker node.
We can of course link to, or provide a brief description of, the choices available in the docs. Is there a way to do the …?
It is exposed as a variable in …:

```ini
# inventory.ini
[ingress]
node1
node2
node3
```

```yaml
# group_vars.yaml
ingress:
  node_labels:
    wire.com/role: ingress
```

then when replacing nodes or whatever, …
Sounds like a plan!
Maybe then just do that by default? In the case that all nodes are in the …

Well, it sort of does. The only problem I can think of is: we have a dumb load balancer on the outside, like a fixed set of DNS records that doesn't change, and then one of the nodes is not running the nginx ingress. Because the policy is `Local`, such a node will drop the traffic rather than forward it to a node that does have a pod.
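For reference, a minimal sketch of the Service shape under discussion. The name, labels, and port numbers are placeholders; the relevant line is `externalTrafficPolicy: Local`:

```yaml
# Sketch: a NodePort Service with externalTrafficPolicy: Local.
# Each node only forwards to ingress pods running on that same node,
# which preserves the client source IP (no SNAT through kube-proxy),
# but a node without a local ingress pod will not answer on the port.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress             # placeholder label
  ports:
    - name: http
      port: 80
      nodePort: 31772              # example value in the NodePort range
    - name: https
      port: 443
      nodePort: 31773
```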
Why slightly higher? As said, being a daemonset forces it to run on every node unless we run out of resources, no? I also think it's preferable to have a default which works out of the box, even if it's perhaps a little wasteful; nginx is pretty light on resources in general. You have documented it nicely in the examples and we can add it to our docs as well, but for bare metal, daemonset + NodePort seems like a sane thing to do.
A deployment has a spread "attempt" but no guarantee. A daemonset has a spread guarantee, but that doesn't mean the pods it has running on each node are actually functioning; a pod might be in a crash loop the entire time due to a wrong nginx config, for example. The point I was trying to make is that you still need something "smart" in front that will not route traffic to dead pods (e.g. a load balancer, or a health-check-aware DNS server), even if nginx is running on each node.
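A rough sketch of the daemonset variant being agreed on, with a readiness probe. The image tag is an example, and `/healthz` on port 10254 is the usual ingress-nginx health endpoint; the probe is what gives the load balancer in front something to act on:

```yaml
# Sketch: ingress controller as a DaemonSet. Placement on every node is
# guaranteed, but health is not; the readinessProbe lets an external
# load balancer or health-check-aware DNS stop routing to a node whose
# local pod is unhealthy.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1  # example tag
          ports:
            - containerPort: 80
            - containerPort: 443
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
```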
Let's go with Tiago's suggestion: by default (in the chart) use a daemonset. In the docs, and overridable, allow disabling that and instead using a tagged-node approach. Would you be fine with that, @arianvp?
Oh right I see, good point 👍
Yup, clear :)
Sounds good to me!
…ource IP addresses

TODO: Maybe we should let the production config 'inherit' from the demo config, as the comments are now duplicated, which is prone to error
Explains all the kinds of configuration that we think are useful, and adds links with extra information
Force-pushed from b9ed4d4 to f14d05c
OK, I have added the new and shiny explanations to the …
Either way works; that file has not yet migrated to …
@@ -0,0 +1,80 @@
# Normally, NodePort will listen to traffic on all nodes, and uses kube-proxy |
This file now looks great, with all the explanations. Could you make this the default? By "making it the default", I mean actually moving this to `charts/nginx-ingress-controller/values.yaml`. That way, we will not have any file under `values/nginx-ingress-controller` at all, and installing with `helm install wire/nginx-ingress-controller` will provide one working solution out of the box.
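Assuming the chart wraps the upstream nginx-ingress chart and follows its value names (an assumption, not confirmed in this thread), the resulting default `values.yaml` might look roughly like:

```yaml
# Sketch of charts/nginx-ingress-controller/values.yaml, assuming the
# upstream nginx-ingress chart's value layout is used as-is.
nginx-ingress:
  controller:
    kind: DaemonSet                  # one controller pod per node by default
    service:
      type: NodePort
      externalTrafficPolicy: Local   # preserve client source IPs
```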
* Configure nginx-controller to run on each node, so that we preserve source IP addresses
* Add a heavily commented example of values.yaml for nginx ingress
* Add example of how to set up inventory file
* Make our suggested nginx ingress config the default config
This makes sure that `nginx-controller` has access to the source IP address of a request.

Fixes #119