
automountServiceAccountToken #1189

Closed
jinxingwang opened this issue Mar 3, 2021 · 10 comments · Fixed by #2298

Comments

@jinxingwang
Contributor

Does anyone want me to add support for the automountServiceAccountToken feature to the Spark operator?

@hopper-signifyd

@jinxingwang That would be great to have!

@sparsh-95

Can any maintainer share an update on when this will be prioritized? TIA :)

sparsh-95 added a commit to sparsh-95/spark-on-k8s-operator that referenced this issue Aug 23, 2023
Need support for automountServiceAccountToken: true by default to access the kube APIs.
Fixes kubeflow#1189
@Aransh
Contributor

Aransh commented Sep 16, 2024

Any updates on this issue? We need this for compliance with Azure Government cloud.
@jinxingwang @sparsh-95

CC @EliranTurgeman

@jacobsalway
Member

Could you go into a bit more detail on the specific compliance requirements? Is credential auto-mounting disabled in these clusters?

@Aransh
Contributor

Aransh commented Sep 16, 2024

@jacobsalway Sure.
Microsoft Defender for Cloud reported the following High Severity security finding:

"Kubernetes clusters should disable automounting API credentials" - Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see https://aka.ms/kubepolicydoc.

To mitigate this finding, we need to disable automounting of the service account token by setting automountServiceAccountToken: false on all pods.
This is currently not possible with the chart, nor with the operator in general (for the generated Spark application pods).
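For reference, the field in question lives at the top level of the pod spec (it can also be set on a ServiceAccount object, where it acts as a default for pods using that account). A minimal illustration; the pod name and image are placeholders:

```yaml
# Illustrative pod spec only -- name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
spec:
  # Prevents the kubelet from injecting the service account token
  # volume into this pod's containers.
  automountServiceAccountToken: false
  containers:
    - name: main
      image: busybox
```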

I will mention that the idea of the policy is: while some apps obviously require service account tokens, the token shouldn't be mounted by default (to avoid misuse); instead, it should only be mounted manually and explicitly as a volume when needed. An example of mounting it manually into a pod:

```yaml
volumes:
- name: kube-api-access
  projected:
    defaultMode: 420
    sources:
    - serviceAccountToken:
        expirationSeconds: 3607
        path: token
    - configMap:
        items:
        - key: ca.crt
          path: ca.crt
        name: kube-root-ca.crt
    - downwardAPI:
        items:
        - fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
          path: namespace
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  name: kube-api-access
  readOnly: true
```

Anyway, this policy is required for working with Azure Government cloud.

@jacobsalway
Member

jacobsalway commented Sep 18, 2024

@Aransh Thanks, appreciate the details and the links. It would be easy enough to add this as a configurable field to the controller and webhook Deployment specs in the Helm chart. However, for the actual Spark driver pod to carry this field, it would require a change to Spark core, to the webhook in the operator, or to a pod template spec.

Are both required for compliance in this environment? I would imagine so given the driver also needs a service account in order to request and watch executor pods.

@Aransh
Contributor

Aransh commented Sep 19, 2024

@jacobsalway Yup, both are required for compliance.

@EliranTurgeman

Hi team, any update on this case?

@jacobsalway
Member

jacobsalway commented Oct 19, 2024

On the controller side: I'd suggest modifying the chart to add the automountServiceAccountToken field e.g. with a Kustomize patch. I'm happy to give this a go and provide a patch YAML in another comment on this issue.
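As a sketch of what such a patch could look like, applied over the rendered chart output: this is illustrative only, and the Deployment name spark-operator-controller and the rendered-chart.yaml filename are assumptions; check the actual resource names in your release.

```yaml
# kustomization.yaml -- assumed workflow: render the chart first with
# `helm template ... > rendered-chart.yaml`, then build with kustomize.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rendered-chart.yaml
patches:
  - target:
      kind: Deployment
      name: spark-operator-controller  # assumed name; check your release
    patch: |-
      # JSON6902 patch: set the field on the pod template spec.
      - op: add
        path: /spec/template/spec/automountServiceAccountToken
        value: false
```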

On the Spark app side: I would suggest solving this with a pod template. We will support this within the CR in whichever release #2141 ends up in.
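For anyone who needs this before that release lands: Spark core (3.0+) already accepts pod template files via the spark.kubernetes.driver.podTemplateFile and spark.kubernetes.executor.podTemplateFile properties, which can be set through sparkConf. A minimal template to disable auto-mounting might look like the sketch below. Caution: the driver normally needs API access to launch and watch executor pods, so if you disable the auto-mount you must mount the token explicitly as a projected volume, as shown earlier in this thread.

```yaml
# driver-pod-template.yaml -- illustrative sketch; Spark merges this
# template with the pod spec it generates for the driver.
apiVersion: v1
kind: Pod
spec:
  # Don't auto-mount the service account token. Mount it explicitly
  # (projected volume) if the driver needs Kubernetes API access.
  automountServiceAccountToken: false
```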

@Aransh
Contributor

Aransh commented Oct 28, 2024

Thanks @jacobsalway.
I'm working on a PR for the chart now to allow this on the operator itself without workarounds; it should be straightforward.
I'll take a look at the pod template feature when it's out, sounds interesting!

6 participants