Propagate ip #121

Merged (8 commits, Sep 23, 2019)
85 changes: 78 additions & 7 deletions charts/nginx-ingress-controller/values.yaml
@@ -9,10 +9,81 @@ nginx-ingress:
http2-max-header-size: 32k
proxy-buffer-size: 16k
proxy-body-size: 1024m

# service:
# type: NodePort
# nodePorts:
# # NOTE: This has to match the port forwarded from the firewall
# https: 31773
# http: 31772
# Normally, a NodePort service listens for traffic on all nodes and uses
# kube-proxy to redirect it to the node that actually runs the
# nginx-ingress-controller. However, one problem with this is that the
# traffic is NAT'ed. This means that nginx will not have access to the source
# IP address from which the request originated. We want this source IP
# address for potential logging and rate-limiting. By setting
# externalTrafficPolicy: Local, nodes will no longer forward requests to
# other nodes when they receive a request that they cannot handle themselves.
# The upside is that the traffic is no longer NAT'ed, and we get access to
# the source IP address. The downside is that you need to know beforehand
# which nodes run a given pod. However, with Kubernetes a pod can be
# rescheduled to any node at any time, so we cannot rely on this. We could
# use node affinities to decide a priori which set of nodes will be publicly
# reachable and make sure the nginx controller pods only run there, but for
# now that sounds like overkill. Instead, we simply run the ingress
# controller on every node using a DaemonSet. This means that any node in the
# cluster can receive requests and route them to the correct service, while
# preserving the source IP address. The ingress controller effectively takes
# over the role that kube-proxy played before.
# More information:
# https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport
# https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
#
# There are also downsides to setting externalTrafficPolicy: Local. The
# following blog post clearly explains the upsides and downsides of this
# setting:
# https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies
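#
# As a quick sanity check that the source IP is actually preserved, you can
# follow the source-ip tutorial linked above. A rough sketch (the deployment
# name, service name, and image are taken from that tutorial and are only
# examples):
# $ kubectl create deployment source-ip-app --image=gcr.io/google-containers/echoserver:1.4
# $ kubectl expose deployment source-ip-app --name=clientip --port=80 --target-port=8080 --type=NodePort
# $ kubectl patch svc clientip -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# $ curl http://<node-ip>:<node-port> | grep client_address
# With externalTrafficPolicy: Local, client_address should be the real client
# IP; with the default Cluster policy you will typically see a
# cluster-internal (NAT'ed) address instead.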
kind: DaemonSet
# By default, each node will now be configured to accept ingress traffic. You should add
# all the nodes to your external load balancer, or add them to DNS records.
#
# You can also decide to only run the nginx controller on a subset of worker
# nodes in your cluster. This is especially useful in large clusters, where
# you probably don't want all worker nodes reachable from the internet or
# behind your company's load balancer. Instead, you probably have a subset of
# nodes that are allowed to receive traffic from the outside world.
#
# You can set node labels in an ad-hoc way with kubectl:
# $ kubectl label nodes mynode wire.com/role=ingress
#
# Or, in Ansible, you can set the `node_labels` variable declaratively:
# https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md#other-service-variables
#
# In your inventory file you could for example set:
#
# [ingress]
# mynode1
# mynode2
#
# [ingress:vars]
# node_labels = "wire.com/role=ingress"
#
# If you have labelled the nodes that are fit for receiving ingress, you
# can uncomment the following nodeSelector to make sure that only those
# nodes actually have the ingress controller running:
#
# nodeSelector:
# wire.com/role: ingress
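#
# You can check which nodes carry the label (and will therefore run the
# controller once the nodeSelector above is enabled) with, for example:
# $ kubectl get nodes -l wire.com/role=ingress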
service:
# If your kubernetes installation has support for LoadBalancers, set
# type: LoadBalancer
#
# This will automatically add healthy nodes to (and remove unhealthy nodes
# from) the set of servers that receive traffic, and if you add a new node to
# the cluster, it will automatically be added to the LoadBalancer. If you set
# it to NodePort, you need to manually add the node IP addresses to the
# external load balancer when you add nodes, and remove them when you remove
# nodes.
type: NodePort # or LoadBalancer
externalTrafficPolicy: Local
nodePorts:
# The nginx instance is exposed on ports 31773 (https) and 31772 (http) on
# the node on which it runs. You should add a port-forwarding rule on the
# node or on the load balancer that forwards ports 443 and 80 to these
# respective ports. See ansible/iptables.yml for how to do this with Ansible
# and iptables.
https: 31773
http: 31772
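# As a rough sketch of the kind of rules ansible/iptables.yml sets up (this
# is not the literal contents of that file), the forwarding on a node could
# look like:
# $ iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 31773
# $ iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 31772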
8 changes: 0 additions & 8 deletions values/nginx-ingress-controller/demo-values.example.yaml

This file was deleted.

21 changes: 0 additions & 21 deletions values/nginx-ingress-controller/prod-values.example.yaml

This file was deleted.