
Remove namespace from ingress-nginx, let user set it. #3994

Closed
phroiland opened this issue Apr 10, 2019 · 7 comments · Fixed by #4055

Comments

@phroiland

The problem I see is that if we install ingress-nginx using the two command-line entries provided (the Azure example), we have no way of changing the namespace. To most people this would imply that it is namespace-agnostic.

Remove the namespace from ingress-nginx so the user can set it with -n <namespace>.
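
(For reference, the two entries in question look roughly like this; the exact URLs are an assumption based on the deploy/mandatory.yaml and deploy/provider/cloud-generic.yaml files referenced later in this thread:)

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

Note that passing -n <namespace> to these commands would not relocate anything: the manifests pin their own namespace internally.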

@aledbf
Member

aledbf commented Apr 10, 2019

Closing. One of the reasons why we cannot do this is related to RBAC. Please check https://github.com/kubernetes/ingress-nginx/blob/master/deploy/rbac.yaml
You need to specify a namespace there, and it is not possible to use the -n flag.
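
(To illustrate the point: the bindings in that file name a ServiceAccount together with its namespace inside the resource itself, roughly as in the sketch below; the exact names here are illustrative rather than copied from the file.)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx  # hard-coded in the manifest; kubectl -n cannot override this field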

@aledbf aledbf closed this as completed Apr 10, 2019
@phroiland
Author

phroiland commented Apr 11, 2019

So everyone has to use this namespace? Then take out rbac!


kubectl create namespace <namespace>
kubectl create serviceaccount <serviceaccount> --namespace <namespace>
kubectl create clusterrolebinding <serviceaccount> --clusterrole cluster-admin --serviceaccount=<namespace>:<serviceaccount>

It's not mandatory to have RBAC for every user, and if they need it, it's a few simple command-line entries--WHERE THEY CAN SPECIFY A NAMESPACE. Imagine that! Wow! Such kube! Many namespace!

What isn't simple is having to go through each file and manually change namespaces and names, but that's only after you've figured out that it has to be specific to a namespace. At no point in https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md is it made clear that you can only use it for one namespace.

Do you not see how mind-numbingly stupid this is? You can have the user specify the namespace, or it can be the default. It doesn't need to be hard-coded into some weirdly specific namespace name that is only meant to serve one namespace.

This just perpetuates the idea that ingress is namespace agnostic. Your docs imply it, tutorials imply it, and your dismissive response implies it.

It's been almost four years since this: kubernetes/kubernetes#17088
And people picking up kubernetes today are still having issues. This is clearly YOUR problem. And in every issue I seem to come across, it's some member, like yourself, offering little help and basically saying "that's just how it is".

Either make the change, or provide more clarification on how a user can apply it to a specific namespace--and make it clear that they pretty much should. I mean, LITERALLY APPLY IT, not --watch-namespace.
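
(For context, --watch-namespace is a flag on the controller binary itself, set in the Deployment's container args; it scopes which namespace's Ingress resources the controller watches, not where the controller's own resources are installed. A hypothetical excerpt:)

containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
    args:
      - /nginx-ingress-controller
      # illustrative only; mandatory.yaml does not set this flag by default
      - --watch-namespace=my-namespace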

@kfox1111

Configurability in static kubernetes yaml files has always been a problem. It's been kind of rare in my experience that static YAMLs provided by a project are directly usable without changes.

There have been many tools built on top of k8s to try and address this problem;
kustomize is the most recent one that comes to mind that may work here.
I typically use helm to deploy ingress-nginx, as it solves this problem: I have many ingress controller instances, each watching specific namespaces, deployed configurably with helm.

The goal of the static manifests, I think, is to let the user, as much as possible, just kubectl apply -f thingy.

You could document stuff like:

kubectl create namespace <namespace>
kubectl create serviceaccount <serviceaccount> --namespace <namespace>
kubectl create clusterrolebinding <serviceaccount> --clusterrole cluster-admin --serviceaccount=<namespace>:<serviceaccount>

But then you get down the rabbit hole of: maybe that should be documented as bash:

namespace=<namespace>
serviceaccount=<serviceaccount>
kubectl create namespace $namespace
kubectl create serviceaccount $serviceaccount --namespace $namespace
kubectl create clusterrolebinding $serviceaccount --clusterrole cluster-admin --serviceaccount=$namespace:$serviceaccount

And then, if you're going there, why not just provide a bash script in the git repo, deploy.sh:

#!/bin/bash
# create a namespace, a serviceaccount in it, and a cluster-admin binding for it
namespace="$1"
serviceaccount="$2"
kubectl create namespace "$namespace"
kubectl create serviceaccount "$serviceaccount" --namespace "$namespace"
kubectl create clusterrolebinding "$serviceaccount" --clusterrole cluster-admin --serviceaccount="$namespace:$serviceaccount"

# usage:
./deploy.sh <namespace> <serviceaccount>

and on and on...

At some point, we just need to stop reinventing the wheel and just start standardizing on some packaging tools.

@aledbf
Member

aledbf commented Apr 11, 2019

So everyone has to use this namespace?

No, you can change that.

Then take out rbac!

For most users, this is not an option.

It's been almost four years since this: kubernetes/kubernetes#17088

There is nothing we can do about it in ingress-nginx. That issue can only be solved if the Ingress spec, defined in the Kubernetes project, changes, and that is out of the scope of this project.
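
(Background for that link: an Ingress backend can only name a Service, not a namespace, so an Ingress can only route to Services in its own namespace; that is fixed by the spec, not by any controller. Illustrative v1beta1 shape:)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  namespace: my-namespace
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: my-service  # resolved in my-namespace; no namespace field exists here
              servicePort: 80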

Do you not see how mind-numbingly stupid this is? You can have the user specify the namespace, or it can be the default. It doesn't need to be hard-coded into some weirdly specific namespace name that is only meant to serve one namespace.

Why? YAML files are static; there is no way to make them dynamic. For that you have tools like kustomize or helm. Something like:

helm install --name nginx-ingress stable/nginx-ingress \
    --namespace myns \
    --set rbac.create=true \
    --set controller.service.type=LoadBalancer \
    --set controller.service.externalTrafficPolicy=Local \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-proxy-protocol"="*" \
    --set controller.config.use-proxy-protocol="true" \
    --set controller.publishService.enabled=true

Have you seen the number of options you can change? Plain YAML files are not enough.
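
(For what it's worth, the full list of those options can be dumped from the chart; with Helm 2, which the --name flag above implies, something like:)

# prints every configurable value of the chart, defaults included
helm inspect values stable/nginx-ingress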

@phroiland
Author

At some point, we just need to stop reinventing the wheel and just start standardizing on some packaging tools.

You literally just described Helm.

@nicknovitski
Contributor

Closing. One of the reasons why we cannot do this is related to RBAC. Please check https://github.com/kubernetes/ingress-nginx/blob/master/deploy/rbac.yaml
You need to specify a namespace there, and it is not possible to use the -n flag.

kustomize does this properly: a namespace set in a kustomization.yaml is propagated correctly to subjects.

So, for example, users can currently download the resource manifests from this repository (or make a submodule or whatever else), and then tie them together with a kustomization.yaml like so:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deploy/mandatory.yaml
- deploy/provider/cloud-generic.yaml

With a file like this, kubectl apply -k (or kustomize build | kubectl apply -f -) will deploy a complete, working nginx ingress in the my-namespace namespace.
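
(Putting that together, a minimal sketch, assuming the two manifests have been downloaded into a deploy/ directory next to the kustomization.yaml:)

kustomize build . | kubectl apply -f -
# or, with a kubectl recent enough to support -k (1.14+):
kubectl apply -k .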

It's also possible to make this even simpler by providing bases in this repository, so people could just make a file like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
bases:
- github.com/kubernetes/ingress-nginx/deploy/bases/cloud-generic?ref=nginx-0.24.1

I'll make a pull request to demonstrate.

@nicknovitski
Contributor

Ah, the only thing that approach cannot do automatically is create the correct namespace. So my example would actually need to be changed to this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deploy/mandatory.yaml
- deploy/provider/cloud-generic.yaml
- namespace.yaml

And you'd need to also create a namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

And even then, deployment would still create the ingress-nginx namespace resource described in deploy/mandatory.yaml. That is not a huge problem for my purposes, but I'll still try to propose a fix.
