
GKE with serviceaccount

Goals

  • Create a service account, NODE_SA, to use instead of the default service account when creating the GKE cluster; practice downloading the credential file and activating the service account.
  • Create a service account, DEPLOY_SA, used for deploying applications; practice getting cluster credentials and setting the context automatically (and manually, replacing the auth provider with a token); give that service account restricted access and test its permission boundaries.
  • The service account NODE_SA is never granted permission to download objects from gcr; this example goes as far as setting ACLs on a remote bucket that can live in another project.
  • Create a service account, APP_SA, and give it permission to interact with GCP; download the service account's key file, create a workload mounting that key, and make calls to the GCP API from the pod.

Create Node's Service Account

export PROJECT=`gcloud config get-value project`

export NODE_SA=gke-node-sa

gcloud iam service-accounts create $NODE_SA --display-name "Node Service Account" \
&& sleep 5 && \
export NODE_SA_ID=`gcloud iam service-accounts list --format='value(email)' --filter='displayName:Node Service Account'`
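The lookup by display name works, but service account emails are also deterministic, so an equivalent shortcut (assuming the account lives in $PROJECT) is:

# Equivalent: service account emails follow a fixed pattern
export NODE_SA_ID=${NODE_SA}@${PROJECT}.iam.gserviceaccount.com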

gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:${NODE_SA_ID} --role=roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:${NODE_SA_ID} --role=roles/monitoring.viewer
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:${NODE_SA_ID} --role=roles/logging.logWriter
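To double-check that the bindings landed, filter the project's IAM policy for the new account (a read-only verification):

gcloud projects get-iam-policy $PROJECT \
  --flatten="bindings[].members" \
  --filter="bindings.members:${NODE_SA_ID}" \
  --format="value(bindings.role)"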

Create Cluster

export CLUSTER_NAME=serviceaccount-test
export ZONE=us-west1-c
export VERSION=`gcloud container get-server-config --zone=$ZONE --format="value(validMasterVersions[0])"`

gcloud container clusters create $CLUSTER_NAME \
  --service-account=$NODE_SA_ID \
  --zone=$ZONE \
  --cluster-version=$VERSION
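Once the cluster is up, you can confirm the nodes run as NODE_SA rather than the default service account:

gcloud container clusters describe $CLUSTER_NAME --zone $ZONE \
  --format="value(nodeConfig.serviceAccount)"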

Create Deploy's Service Account

# Resolve current project
export PROJECT=`gcloud config get-value project`

# Create service account
export DEPLOY_SA=gke-deploy-sa
gcloud iam service-accounts create $DEPLOY_SA --display-name "Deploy Service Account" \
&& sleep 5 && \
export DEPLOY_SA_ID=`gcloud iam service-accounts list --format='value(email)' --filter='displayName:Deploy Service Account'`

gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:${DEPLOY_SA_ID} --role=roles/container.developer

# Create service account key
gcloud iam service-accounts keys create \
    /home/$USER/$DEPLOY_SA-key.json \
    --iam-account $DEPLOY_SA_ID

NOTE: The created key, $DEPLOY_SA-key.json, can now be exported and used from another machine.

gcloud auth activate-service-account $DEPLOY_SA_ID --key-file=/home/$USER/$DEPLOY_SA-key.json
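Confirm that DEPLOY_SA is now the active account:

gcloud auth list --filter=status:ACTIVE --format="value(account)"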

NOTE: get-credentials appends to the .kube/config file the context for the GKE master we want to auth against; it is formatted like this: gke_${PROJECT}_${ZONE}_${CLUSTER_NAME}

GOOGLE_APPLICATION_CREDENTIALS="/home/$USER/$DEPLOY_SA-key.json" gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE --project $PROJECT

Test Service Account permissions

$DEPLOY_SA has only roles/container.developer, which is known not to include the container.clusters.update permission... Testing this boundary:

gcloud container clusters update $CLUSTER_NAME --zone $ZONE --project $PROJECT --maintenance-window=12:43

As expected, an error occurs:

$ gcloud container clusters update $CLUSTER_NAME --zone $ZONE --project $PROJECT --maintenance-window=12:43
ERROR: (gcloud.container.clusters.update) ResponseError: code=403, message=Required "container.clusters.update" permission(s) for "projects/$PROJECT/zones/$ZONE/clusters/$CLUSTER_NAME". See https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted for more info.
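As a positive check of the same boundary, container.clusters.get is part of roles/container.developer (the get-credentials call above already relied on it), so a read-only describe should succeed:

gcloud container clusters describe $CLUSTER_NAME --zone $ZONE --project $PROJECT --format="value(status)"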

Token Alternative

This section only demonstrates how to use tokens instead of an auth provider. To skip it, proceed with the rest of the tutorial at Testing Context.

# Generate token from key
export GOOGLE_APPLICATION_CREDENTIALS="/home/$USER/$DEPLOY_SA-key.json"
gcloud auth application-default print-access-token > /home/$USER/$DEPLOY_SA-token

gcloud container clusters get-credentials in the previous section set the current context (whose user entry shares the same name). Confirm it by running:

export context_user=`kubectl config current-context`
echo $context_user

Next, swap the auth-provider in your /home/$USER/.kube/config for the token itself.

Unset the auth-provider, then set the content of your token as the credential; $context_user here is the user entry that get-credentials generated:

kubectl config unset users.$context_user.auth-provider
kubectl config set-credentials $context_user --token=$(cat /home/$USER/$DEPLOY_SA-token)

It should have changed your config file this way (note that access tokens expire after roughly an hour, so this step has to be repeated with a fresh token):

18,26c18
<                 "auth-provider": {
<                     "name": "gcp",
<                     "config": {
<                         "cmd-args": "config config-helper --format=json",
<                         "cmd-path": "/google/google-cloud-sdk/bin/gcloud",
<                         "expiry-key": "{.credential.token_expiry}",
<                         "token-key": "{.credential.access_token}"
<                     }
<                 }
---
>                 "token": "<TOKEN>"
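To confirm the swap took effect, print the stored token; --raw is needed because kubectl config view redacts tokens by default:

kubectl config view --minify --raw -o jsonpath='{.users[0].user.token}'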

Testing Context

The context was generated by gcloud container clusters get-credentials, or by manually setting the token above.

GOOGLE_APPLICATION_CREDENTIALS="/home/$USER/$DEPLOY_SA-key.json" gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE --project $PROJECT

There is no need to activate the context, since gcloud container clusters get-credentials did it for you, but just to be explicit, first list the contexts available to you:

kubectl config get-contexts

Activate the context you want this way, where $context_name is the context generated by get-credentials; at the time of this tutorial it is formatted as gke_${PROJECT}_${ZONE}_${CLUSTER_NAME}, using variables already set.

kubectl config use-context $context_name

Use the token to delete a pod of your choosing:

kubectl get pods
kubectl delete pods/$podname --namespace $namespace
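If you need concrete values for $podname and $namespace, one way is to pick the first pod in a namespace (a throwaway example; make sure it is something safe to delete):

export namespace=default
export podname=$(kubectl get pods -n $namespace -o jsonpath='{.items[0].metadata.name}')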

Verify in Stackdriver with a log filter like this:

resource.type="k8s_cluster"
resource.labels.location="$ZONE"
resource.labels.cluster_name="$CLUSTER_NAME"
protoPayload.methodName:"delete"
protoPayload.resourceName="core/v1/namespaces/$namespace/pods/$podname"
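The same audit entries can be queried from the CLI (assuming the Cloud Logging API is enabled on the project), for example:

gcloud logging read 'resource.type="k8s_cluster" AND protoPayload.methodName:"delete"' --limit 5 --project $PROJECT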

Run Workload

This is to practice setting the right pull permissions on NODE_SA: we never set the permissions needed to pull images from gcr. If you do not use gcr, skip to [Create Application's Service Account](#create-applications-service-account).

This part of the tutorial assumes the workload pulls an image from gcr inside the same project: gcr.io/$PROJECT/$PREFIX/$APPLICATION

In your shell, set this variable, fetch credentials, and run the workload:

export APPLICATION=web-app
gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE --project $PROJECT

kubectl run $APPLICATION --image=gcr.io/$PROJECT/$PREFIX/$APPLICATION

Workload Fails (ImagePullBackOff)

export POD_NAME=`kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.run=='\""$APPLICATION"\"')].metadata.name}'`

$ kubectl describe pod $POD_NAME
...
Events:
  Type     Reason                 Age               From                                                          Message
  ----     ------                 ----              ----                                                          -------
  Normal   Scheduled              7m                default-scheduler                                             Successfully assigned $APPLICATION-764784b488-kcgvv to $CLUSTER_NAME-default-pool-a262a520-7dw5
  Normal   SuccessfulMountVolume  7m                kubelet, $CLUSTER_NAME-default-pool-a262a520-7dw5  MountVolume.SetUp succeeded for volume "default-token-t8sg8"
  Normal   Pulling                5m (x4 over 7m)   kubelet, $CLUSTER_NAME-default-pool-a262a520-7dw5  pulling image "gcr.io/$PROJECT/$PREFIX/$APPLICATION"
  Warning  Failed                 5m (x4 over 7m)   kubelet, $CLUSTER_NAME-default-pool-a262a520-7dw5  Failed to pull image "gcr.io/$PROJECT/$PREFIX/$APPLICATION": rpc error: code = Unknown desc = Error response from daemon: repository gcr.io/$PROJECT/$PREFIX/$APPLICATION not found: does not exist or no pull access
  Warning  Failed                 5m (x4 over 7m)   kubelet, $CLUSTER_NAME-default-pool-a262a520-7dw5  Error: ErrImagePull
  Normal   BackOff                5m (x6 over 7m)   kubelet, $CLUSTER_NAME-default-pool-a262a520-7dw5  Back-off pulling image "gcr.io/$PROJECT/$PREFIX/$APPLICATION"
  Warning  Failed                 2m (x20 over 7m)  kubelet, $CLUSTER_NAME-default-pool-a262a520-7dw5  Error: ImagePullBackOff

Setting the right pull permissions

NOTE: DEPLOY_SA was not configured to be able to set IAM permissions; these steps are done with the same admin account that created DEPLOY_SA. Images pushed to gcr.io/$PROJECT are stored in the artifacts.$PROJECT.appspot.com bucket, so granting NODE_SA objectViewer on that bucket grants pull access.

BUCKET_NAME=artifacts.$PROJECT.appspot.com
gsutil iam ch serviceAccount:$NODE_SA_ID:objectViewer gs://$BUCKET_NAME

POD_NAME=`kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.run=='\""$APPLICATION"\"')].metadata.name}'`

kubectl describe pod $POD_NAME
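The kubelet keeps retrying the pull with backoff, so the pod should eventually recover on its own. To force an immediate retry, delete the pod; assuming kubectl run created a controller (the replica-set-style pod name above suggests it did), it will be recreated:

kubectl delete pod $POD_NAME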

Create Application's Service Account

There are two schools of thought here. RBAC is more granular when it comes to permissions on cluster resources; see kmassada/gke-rbac-test. Essentially, it mounts your service account into a namespace and allows kubectl access via the default token that is mounted.

The second school of thought is to use gcloud container clusters get-credentials, as in the DEPLOY_SA example.

For the sake of this exercise, however, we want the workload to have permissions to access other GCP resources.

Set the correct admin account before proceeding:

$ gcloud auth list
                 Credentialed Accounts
ACTIVE  ACCOUNT
*       NODE_SA@$PROJECT.iam.gserviceaccount.com
        you@example.com
$ gcloud config set account you@example.com
Updated property [core/account].

Proceed to create the GCP service account:

export APPLICATION=web-app

# Create service account
export APP_SA=gke-$APPLICATION-sa
gcloud iam service-accounts create $APP_SA --display-name "GKE $APPLICATION Application Service Account" \
&& sleep 5 && \
export APP_SA_ID=`gcloud iam service-accounts list --format='value(email)' --filter="displayName:GKE $APPLICATION Application Service Account"`

# Resolve current project and bind service account policy
export PROJECT=`gcloud config get-value project`

gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:${APP_SA_ID} --role=roles/compute.viewer

# Create service account key
gcloud iam service-accounts keys create \
    /home/$USER/$APP_SA-key.json \
    --iam-account $APP_SA_ID
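You can verify the key was created; this lists key IDs, while the json file itself stays on your machine:

gcloud iam service-accounts keys list --iam-account $APP_SA_ID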

Configure application

kubectl create configmap project-id --from-literal "project-id=${PROJECT}"
kubectl create configmap $APPLICATION-zone --from-literal "$APPLICATION-zone=${ZONE}"
kubectl create configmap $APPLICATION-sa --from-literal "sa-email=${APP_SA_ID}"
kubectl create secret generic $APPLICATION --from-file /home/$USER/$APP_SA-key.json
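A quick sanity check that the objects exist, using the names from the commands above:

kubectl get configmap project-id $APPLICATION-zone $APPLICATION-sa
kubectl get secret $APPLICATION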

deployment.yaml adds these environment variables:

  • GOOGLE_APPLICATION_CREDENTIALS
  • PROJECT_ID
  • APP_SA_ID
  • ZONE

Those environment variables are local to the container.
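The repository ships deployment.template.yaml; what follows is only a minimal sketch of what such a template might contain -- the label, volume name, and mount path are assumptions, not the repo's actual file. The heredoc is quoted so the ${...} placeholders stay literal for envsubst to fill in:

cat > deployment.template.yaml <<'EOF'
# Hypothetical sketch -- the real template lives in the repo
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APPLICATION}
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ${APPLICATION}
  template:
    metadata:
      labels:
        run: ${APPLICATION}
    spec:
      volumes:
      - name: sa-key
        secret:
          secretName: ${APPLICATION}    # the secret created above
      containers:
      - name: ${APPLICATION}
        image: gcr.io/${PROJECT}/${PREFIX}/${APPLICATION}
        volumeMounts:
        - name: sa-key
          mountPath: /var/run/secret/cloud.google.com
        env:
        # file name matches the --from-file key of the secret
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/run/secret/cloud.google.com/${APP_SA}-key.json
        - name: PROJECT_ID
          valueFrom:
            configMapKeyRef: {name: project-id, key: project-id}
        - name: APP_SA_ID
          valueFrom:
            configMapKeyRef: {name: ${APPLICATION}-sa, key: sa-email}
        - name: ZONE
          valueFrom:
            configMapKeyRef: {name: ${APPLICATION}-zone, key: ${APPLICATION}-zone}
EOF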

envsubst < deployment.template.yaml > deployment.yaml
kubectl apply -f deployment.yaml
export POD_NAME=`kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.run=='\""$APPLICATION"\"')].metadata.name}'`

kubectl describe pod $POD_NAME

Drop into a shell:

kubectl exec -it $POD_NAME -- bash

In our pod, install the google-cloud-sdk:

apt-get update -qy && apt-get -qy install curl dnsutils gnupg lsb-release && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && apt-get install -qy google-cloud-sdk

Now we see our container is authenticated as NODE_SA, inherited from the node's metadata server:

gcloud auth list
                   Credentialed Accounts
ACTIVE  ACCOUNT
*       $NODE_SA@$PROJECT.iam.gserviceaccount.com

Instead, we force our container to authenticate as APP_SA, using the mounted key:

gcloud auth activate-service-account $APP_SA_ID --key-file=$GOOGLE_APPLICATION_CREDENTIALS

Now we can test. Since we added the role roles/compute.viewer, this account can list Compute Engine instances:

# gcloud compute instances list
NAME                                       ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
builder                                    $ZONE  n1-standard-1 ......

But it cannot list clusters in a zone:

# gcloud container clusters list --zone $ZONE
ERROR: (gcloud.container.clusters.list) ResponseError: code=403, message=Required "container.clusters.list" permission for "projects/$PROJECT"

In the auth list, we can see that the proper service account is selected:

# gcloud auth list
                     Credentialed Accounts
ACTIVE  ACCOUNT
        $NODE_SA@$PROJECT.iam.gserviceaccount.com
*       $APP_SA@$PROJECT.iam.gserviceaccount.com

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
