This is a guide to the different deployment styles of an Ingress controller.
GCP: On GCE/GKE, the Ingress controller runs on the master. If you wish to stop this controller and run another instance on your nodes instead, you can do so by following this example.
OSS: You can deploy an OSS Ingress controller by simply running it as a pod in your cluster, as shown in the examples. Please note that you must specify the ingress.class annotation if you're running on a cloud provider, or the cloud provider's controller will fight the OSS controller for the Ingress.
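As a sketch, an Ingress claimed by the nginx controller would carry the annotation below; the backend Service name is a placeholder, and the annotation value depends on which OSS controller you run:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    # Claim this Ingress for the nginx controller; the cloud provider's
    # controller ignores Ingresses with a foreign class.
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: echoserver   # placeholder application Service
    servicePort: 80
```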
AWS: Until we have an AWS ALB Ingress controller, you can deploy the nginx Ingress controller behind an ELB on AWS, as shown in the next section.
Behind a LoadBalancer Service: You can deploy an OSS controller behind a Service of Type=LoadBalancer by following this example. More specifically, first create a LoadBalancer Service that selects the OSS controller pods, then start the OSS controller with the --publish-service flag.
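A minimal sketch of such a Service follows; the Service name, namespace, and pod labels are illustrative:

```yaml
# LoadBalancer Service that selects the OSS (here: nginx) controller pods.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller   # must match the controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

The controller would then be started with --publish-service=default/nginx-ingress-lb, so it reports the Service's external IP, rather than its own pod IPs, in the status of the Ingresses it manages.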
Behind another Ingress: Sometimes it is desirable to deploy a stack of Ingresses, like GCE Ingress -> nginx Ingress -> application. You might want to do this because the GCE HTTP LB offers some features that the GCE network LB does not, like a global static IP or CDN, but doesn't offer all the features of nginx, like URL rewriting or redirects.
TODO: Write an example
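Pending an official example, the top of such a stack might be sketched as a GCE Ingress whose backend is the Service fronting the nginx controller pods (the Service name is hypothetical, and it would need to be of Type=NodePort for the GCE controller to pick it up):

```yaml
# GCE Ingress that sends all traffic to the nginx controller Service;
# the nginx controller then routes to applications via its own Ingresses.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gce-frontend
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  backend:
    serviceName: nginx-ingress-svc   # hypothetical Service selecting the nginx controller pods
    servicePort: 80
```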
Neither a single pod nor a bank of OSS controllers scales with the cluster size. If you create a DaemonSet of OSS Ingress controllers, every new node automatically gets an instance of the controller listening on the specified ports.
TODO: Write an example
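Pending an official example, a sketch of such a DaemonSet for the nginx controller might look like the following; the image tag and flags are illustrative and assume a default-http-backend Service exists in the same namespace:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      containers:
      - name: nginx-ingress-controller
        # Illustrative image/tag; use the current release of the controller.
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # hostPort exposes the controller on every node's ports 80/443,
        # so each new node added to the cluster serves Ingress traffic.
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
```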
Since OSS Ingress controllers run in pods, you can deploy them as intra-cluster proxies by simply not exposing them on a hostPort and putting them behind a Service of Type=ClusterIP.
TODO: Write an example
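Pending an official example, the fronting Service might be sketched as below; the Service name and pod labels are illustrative:

```yaml
# ClusterIP Service fronting controller pods that have no hostPort,
# so the proxy is reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal
spec:
  type: ClusterIP
  selector:
    app: nginx-ingress-controller   # must match the controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
```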