
Unable to get the pods up when trying to run akv2k8s as non-root #111

Open
archittsc opened this issue Jun 23, 2023 · 1 comment

Comments

@archittsc

> Sorry for the late reply.

My AKS is setup using managed identities (--enable-managed-identity). To get AKV2K8S running with customAuth, I had to specify the client id of the "<cluster_name>-agentpool" identity to the chart:

helm upgrade -i akv2k8s spv-charts/akv2k8s \
          --namespace akv2k8s \
          --set controller.keyVault.customAuth.enabled=true \
          --set controller.env.AZURE_CLIENT_ID={{.AKS_USER_MANAGED_IDENTITY}} \
          --set env_injector.enabled=false 

@tschuettel do you have any idea how we can achieve this using the values in the helm charts? I am struggling to get the pods up as non-root after adding my MSI details. I am getting this error:
"failed to create cloud config provider for azure key vault" err="Failed reading azure config from /etc/kubernetes/azure.json, error: failed reading cloud config, error: read /etc/kubernetes/azure.json: is a directory" file="/etc/kubernetes/azure.json"

I can see that cloudConfig is defined as "/etc/kubernetes/azure.json" in the values.yaml and it's being picked up as an argument for the container. Since the container is now trying to start as non-root, the path /etc/kubernetes/azure.json obviously won't be accessible to it, so how do I mitigate this? Am I missing something here?

Originally posted by @archittsc in #25 (comment)
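As a diagnostic aside: the "is a directory" part of the error suggests the file is missing on the node (a hostPath mount silently creates an empty directory when the source file does not exist). A hedged sketch for checking this from a node debug pod, with a placeholder node name (pick a real one from `kubectl get nodes`):

```shell
# The node's root filesystem is mounted at /host inside the debug pod.
# If this prints a directory rather than a regular file, the hostPath
# mount created it because /etc/kubernetes/azure.json was absent on the node.
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=busybox -- \
  ls -ld /host/etc/kubernetes/azure.json
```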

@georgejdli
Copy link

I got the akv2k8s 2.6.0 chart working with runAsNonRoot=true, allowPrivilegeEscalation=false, readOnlyRootFilesystem=true.
There is a typo in the values.yaml: it states that global.userDefinedMSI.msi is the object_id, when in fact it should be the client_id.

In my case I'm using MSI on the AKS cluster:

  • AKS needs to have managed identity enabled
  • Get the user-assigned client_id for the identity on the aks-agentpool--vmss from the MC_* resource group (node pool)
  • Make sure that identity has GET permissions on Certificates, Secrets, and Keys
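The client_id lookup and Key Vault access-policy steps above can be sketched with the Azure CLI; the resource group, cluster, and vault names below are placeholders to replace with your own:

```shell
# Fetch the kubelet (agentpool) identity's client and object IDs.
CLIENT_ID=$(az aks show -g MY_RG -n MY_CLUSTER \
  --query identityProfile.kubeletidentity.clientId -o tsv)
OBJECT_ID=$(az aks show -g MY_RG -n MY_CLUSTER \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# Grant that identity GET on certificates, secrets, and keys.
az keyvault set-policy -n MY_KEYVAULT --object-id "$OBJECT_ID" \
  --certificate-permissions get --secret-permissions get --key-permissions get
```

$CLIENT_ID is then what goes into global.userDefinedMSI.msi in the helm command below.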
helm upgrade --install akv2k8s spv-charts/akv2k8s \
  --namespace extensions \
  --set global.userDefinedMSI.enabled=true \
  --set global.userDefinedMSI.msi=$CLIENT_ID \
  --set global.userDefinedMSI.subscriptionId=$SUB_ID \
  --set global.userDefinedMSI.tenantId=$TENANT_ID \
  --set global.userDefinedMSI.azureCloudType=AzurePublicCloud \
  --set controller.keyVaultAuth=azureCloudConfig \
  --set controller.securityContext.allowPrivilegeEscalation=false \
  --set controller.securityContext.runAsNonRoot=true \
  --set controller.securityContext.runAsUser=65534 \
  --set controller.securityContext.readOnlyRootFilesystem=true \
  --set env_injector.keyVaultAuth=azureCloudConfig \
  --set env_injector.securityContext.allowPrivilegeEscalation=false \
  --set env_injector.securityContext.runAsNonRoot=true \
  --set env_injector.securityContext.runAsUser=65534 \
  --set env_injector.securityContext.readOnlyRootFilesystem=true
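After the upgrade, a quick sanity check that the controller actually came up as non-root; the label selector here is an assumption, so adjust it to whatever `kubectl get pods -n extensions --show-labels` reports for your release:

```shell
# Confirm the controller pod is Running.
kubectl -n extensions get pods -l app.kubernetes.io/name=akv2k8s-controller

# Confirm the effective UID matches the securityContext set above (expect 65534).
kubectl -n extensions get pod -l app.kubernetes.io/name=akv2k8s-controller \
  -o jsonpath='{.items[0].spec.containers[0].securityContext.runAsUser}'
```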
