Kubernetes clusters should disable automounting API credentials #9735
/remove-kind bug |
@SunilDSK The PR to add a value is straightforward, and there are many examples of how to add it in the current helm chart. /triage accepted |
@strongjz: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@strongjz could you review my PR? |
If you disable auto-mount, are you adding the token a different way? The controller still needs to access the k8s API. |
This is stale, but we won't close it automatically. Just bear in mind the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any question or request to prioritize this, please reach out. |
@longwuyuan @strongjz The Kubernetes documentation link provides an option to opt out of automounting as part of the ServiceAccount template file. In the same link there is an option to manually create the token instead of automounting it. I guess a similar option is expected by the OP as part of the ingress-nginx ServiceAccount config. |
Yep, that is what is required so that the controller still has the token available through a manual mount; e.g. the mount mode can then be adjusted so that there are no write permissions on that token (the default mount mode is 644, but 444 is usually sufficient). |
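A minimal sketch of the ServiceAccount-level opt-out described in the Kubernetes docs; the name and namespace here are illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx       # illustrative name
  namespace: ingress-nginx  # illustrative namespace
# Opt out of automounting the API token for pods using this ServiceAccount
automountServiceAccountToken: false
```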
@longwuyuan @strongjz @rajagopalsom @SunilDSK For the manual approach, the token can be created as a Secret with `type: kubernetes.io/service-account-token`. |
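For reference, a sketch of such a manually created token Secret, assuming it is bound to a ServiceAccount named `ingress-nginx` (the names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ingress-nginx-token  # illustrative name
  namespace: ingress-nginx   # illustrative namespace
  annotations:
    # Binds this Secret to the ServiceAccount; the control plane populates the token data.
    kubernetes.io/service-account.name: ingress-nginx
type: kubernetes.io/service-account-token
```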
ATM there's a /close |
@Gacko: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Is this solution working? I too face a similar issue on my AKS cluster with the NGINX ingress controller. |
@Gacko What is the expectation on how users can make this work and disable auto-mounting of the service account token? |
Apart from the fact that you're running a quite old version of the controller: Ingress NGINX needs to communicate with the API server and therefore requires a service account token. So you either auto-mount it or have it mounted manually, but you cannot run Ingress NGINX without API access. |
@Gacko yes that makes sense, but I guess my question is what is the point of allowing that to be specified in the Helm chart if the controller does not work when it is set? Thanks for the reminder on upgrading the ingress. Will do! |
Well, that's also something I cannot explain. I only found the PR introducing the flag, and it doesn't have any reasoning. |
Apparently you can also manually mount the service account token and other related information with custom settings. So in case one wants to do so using the extra volume mounts in the chart, they would need to disable the auto-mounting. |
Yeah, that makes sense. However, I am experiencing the same issue as the original poster on AKS with Azure Policy, and the only way to satisfy the policy would be if the pod itself had auto-mounting disabled, not the ServiceAccount. For now I've excluded ingress-nginx from Azure Policy, as I see no way with the helm chart to satisfy its requirements. Thanks for the help! |
That's an interesting insight, thank you for sharing! Actually, I don't think you can simply disable that; I guess this Azure Policy in particular is targeted at workloads not requiring API server access, which probably matches most of their customers' workloads. |
For those that came here with the Azure Policy complaining about the automounted token: update your Helm chart values to include the following.

```yaml
# Disable automount for the ServiceAccount
serviceAccount:
  automountServiceAccountToken: false
controller:
  # Add extra volumes to mount the ServiceAccount token
  extraVolumes:
    - name: serviceaccount-token
      projected:
        defaultMode: 0444
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
  # Mount the volume to the appropriate directory
  extraVolumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: serviceaccount-token
      readOnly: true
```

Notes: This configuration ensures the controller still gets a read-only token, CA certificate, and namespace at the default path, so it keeps API server access while automounting is disabled on the ServiceAccount. |
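For completeness, applying these values might look like the following; the release name, namespace, and values file name are assumptions:

```console
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml
```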
This is all well and good for user deployments. Problem is, Defender for Cloud complains about the NMI and MIC pods in the default namespace. Are we supposed to be patching Microsoft-deployed pods now? |
What happened:
We deployed the NGINX ingress controller on an AKS cluster. We need to disable automounting of the service account token by setting `automountServiceAccountToken: false` on the pod spec, but there is no parameter to set this flag from the values file.
What you expected to happen:
We want to disable automounting of the service account token using `automountServiceAccountToken: false`. Please provide a way to set `automountServiceAccountToken: false` in the pod spec (see the sketch after the environment details below).
NGINX Ingress controller version - 1.5.1
Kubernetes version - 1.24.6
Environment:
Azure AKS:
Basic cluster-related info:

```console
kubectl version
kubectl get nodes -o wide
helm ls -A | grep -i ingress
helm -n tools get values ingress-nginx
kubectl describe ingressclasses
kubectl -n tools get all -A -o wide
kubectl -n tools describe po ingress-nginx-controller-59cffe56y9-gx5m4
kubectl -n tools describe svc ingress-nginx-controller
```
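For illustration, a minimal sketch of what the requested flag would look like on the rendered controller Pod spec; `spec.automountServiceAccountToken` is a standard Kubernetes field, while the names and image tag here are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ingress-nginx-controller  # illustrative name
spec:
  # The flag requested in this issue, set at the pod level
  automountServiceAccountToken: false
  containers:
    - name: controller
      image: registry.k8s.io/ingress-nginx/controller:v1.5.1  # assumed image
```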