
feat(pod-identity): add option for pod-identity #1459

Merged 2 commits on Aug 14, 2024

Conversation

@haarchri (Member) commented Aug 13, 2024

Description of your changes

Fixes #1249 #1254 #1308 #1252

I have:

  • Read and followed Crossplane's contribution process.
  • Run make reviewable to ensure this PR is ready for review.
  • Added backport release-x.y labels to auto-backport this PR if necessary.

How has this code been tested?

Spin up a network and an EKS cluster:

kubectl apply -f examples/pat/network-xr.yaml
kubectl apply -f examples/pat/eks-xr.yaml

Connect to your EKS cluster:

aws sso login --profile login
aws eks update-kubeconfig --region us-west-2 --name configuration-aws-eks-v2lbr --profile AdministratorAccess-12345678910
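
A quick sanity check that the kubeconfig works, e.g.:

kubectl get nodes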

Apply the following resources:

kubectl apply -f examples/provider.yaml
kubectl apply -f examples/providerconfig.yaml
kubectl apply -f examples/vpc.yaml
kubectl get providerconfig.aws -o yaml
apiVersion: v1
items:
- apiVersion: aws.upbound.io/v1beta1
  kind: ProviderConfig
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"aws.upbound.io/v1beta1","kind":"ProviderConfig","metadata":{"annotations":{},"name":"default"},"spec":{"credentials":{"source":"PodIdentity"}}}
    creationTimestamp: "2024-08-13T15:07:56Z"
    finalizers:
    - in-use.crossplane.io
    generation: 1
    name: default
    resourceVersion: "11161"
    uid: e6eb0224-8300-4568-b617-68aa6ee35b82
  spec:
    credentials:
      source: PodIdentity
  status:
    users: 1
kind: List
metadata:
  resourceVersion: ""
kubectl get vpc -o yaml
apiVersion: v1
items:
- apiVersion: ec2.aws.upbound.io/v1beta1
  kind: VPC
  metadata:
    annotations:
      crossplane.io/external-create-pending: "2024-08-13T15:07:58Z"
      crossplane.io/external-create-succeeded: "2024-08-13T15:07:58Z"
      crossplane.io/external-name: vpc-0e424d8d4ab0015db
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"ec2.aws.upbound.io/v1beta1","kind":"VPC","metadata":{"annotations":{},"name":"vpc"},"spec":{"forProvider":{"cidrBlock":"10.1.0.0/16","region":"eu-west-1"}}}
    creationTimestamp: "2024-08-13T15:07:56Z"
    finalizers:
    - finalizer.managedresource.crossplane.io
    generation: 3
    name: vpc
    resourceVersion: "11202"
    uid: 08785497-5747-481a-b81d-9af709abf679
  spec:
    deletionPolicy: Delete
    forProvider:
      cidrBlock: 10.1.0.0/16
      enableDnsSupport: true
      instanceTenancy: default
      region: eu-west-1
      tags:
        crossplane-kind: vpc.ec2.aws.upbound.io
        crossplane-name: vpc
        crossplane-providerconfig: default
    initProvider: {}
    managementPolicies:
    - '*'
    providerConfigRef:
      name: default
  status:
    atProvider:
      arn: arn:aws:ec2:eu-west-1:123456789101:vpc/vpc-0e424d8d4ab0015db
      assignGeneratedIpv6CidrBlock: false
      cidrBlock: 10.1.0.0/16
      defaultNetworkAclId: acl-0b90aeb9eae7d1855
      defaultRouteTableId: rtb-0166e1e7e0903e59a
      defaultSecurityGroupId: sg-0408394cac0e658b6
      dhcpOptionsId: dopt-77034211
      enableDnsHostnames: false
      enableDnsSupport: true
      enableNetworkAddressUsageMetrics: false
      id: vpc-0e424d8d4ab0015db
      instanceTenancy: default
      ipv6AssociationId: ""
      ipv6CidrBlock: ""
      ipv6CidrBlockNetworkBorderGroup: ""
      ipv6IpamPoolId: ""
      ipv6NetmaskLength: 0
      mainRouteTableId: rtb-0166e1e7e0903e59a
      ownerId: "123456789101"
      tags:
        crossplane-kind: vpc.ec2.aws.upbound.io
        crossplane-name: vpc
        crossplane-providerconfig: default
      tagsAll:
        crossplane-kind: vpc.ec2.aws.upbound.io
        crossplane-name: vpc
        crossplane-providerconfig: default
    conditions:
    - lastTransitionTime: "2024-08-13T15:07:58Z"
      reason: ReconcileSuccess
      status: "True"
      type: Synced
    - lastTransitionTime: "2024-08-13T15:08:04Z"
      reason: Available
      status: "True"
      type: Ready
    - lastTransitionTime: "2024-08-13T15:08:00Z"
      reason: Success
      status: "True"
      type: LastAsyncOperation
kind: List
metadata:
  resourceVersion: ""

Signed-off-by: Christopher Haar <christopher.haar@upbound.io>
@haarchri (Member Author) commented:

To use this, you need some prerequisites, which are implemented here: https://github.com/haarchri/configuration-aws-eks-uxp-podidentity

You need to deploy an EKS AddOn like:

apiVersion: eks.aws.upbound.io/v1beta1
kind: Addon
spec:
  forProvider:
    addonName: eks-pod-identity-agent
    clusterNameSelector:
      matchControllerRef: true
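
Once the AddOn is active, the agent runs as a DaemonSet in kube-system; a quick check (DaemonSet name per the upstream agent's defaults, adjust if your install differs):

kubectl -n kube-system get daemonset eks-pod-identity-agent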

Prepare the IAM Role and the PodIdentityAssociation:

apiVersion: iam.aws.upbound.io/v1beta1
kind: Role
metadata:
  labels:
    role: provider
spec:
  forProvider:
    forceDetachPolicies: true
    managedPolicyArns:
      - arn:aws:iam::aws:policy/AdministratorAccess
    assumeRolePolicy: |
      {
        "Version":"2012-10-17",
        "Statement":[
          {
            "Effect":"Allow",
            "Principal":{
              "Service":"pods.eks.amazonaws.com"
            },
            "Action":[
              "sts:AssumeRole",
              "sts:TagSession"
            ]
          }
        ]
      }
---
apiVersion: eks.aws.upbound.io/v1beta1
kind: PodIdentityAssociation
spec:
  forProvider:
    clusterNameSelector:
      matchControllerRef: true
    namespace: upbound-system
    serviceAccount: provider-aws
    roleArnSelector:
      matchLabels:
        role: provider
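
You can verify the association from the AWS side (assuming an AWS CLI recent enough to support the EKS Pod Identity APIs; cluster name taken from the test run above):

aws eks list-pod-identity-associations --cluster-name configuration-aws-eks-v2lbr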

So that the provider can use the PodIdentity feature, you need a DeploymentRuntimeConfig like:

apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: upbound-provider-aws
spec:
  serviceAccountTemplate:
    metadata:
      name: provider-aws
  deploymentTemplate: {}
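
The fixed service account name is what ties the provider pod to the PodIdentityAssociation above; after the provider is installed you can confirm it exists:

kubectl -n upbound-system get serviceaccount provider-aws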

When you install the provider, you need to reference the DeploymentRuntimeConfig:

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-ec2
spec:
  package: index.docker.io/haarchri/provider-aws-ec2:v0.18.0-1847.g5ae95a12e
  skipDependencyResolution: true
  runtimeConfigRef:
    name: upbound-provider-aws
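
After installation the provider should report installed and healthy:

kubectl get provider.pkg.crossplane.io provider-aws-ec2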

and then create a simple ProviderConfig:

apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: PodIdentity
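
With source: PodIdentity the provider relies on the AWS SDK's default credential chain picking up the agent's container-credentials endpoint from the injected environment (shown later in this thread). You can check the injection on the running provider pod (the pod name here is a placeholder):

kubectl -n upbound-system exec <provider-aws-ec2-pod> -- env | grep AWS_CONTAINER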

@erhancagirici (Collaborator) left a comment:

Thanks @haarchri for taking this. LGTM in general.
Do you think we need a caching mechanism for PodIdentity like with IRSA? As you might already know, IRSA previously suffered from excessive STS calls before the cache was implemented.

Also a side note: we were on the verge of merging #1320, which adds e2e tests for different provider configs. We should fold your test setup into those e2e tests as a follow-up.

@haarchri (Member Author) commented Aug 13, 2024

@erhancagirici comparing with the official documentation, I can see:

The EKS Pod Identity agent (which is running in the EKS Cluster as an AddOn) will do SigV4 signing and make a call to the new EKS Auth API AssumeRoleForPodIdentity to exchange the projected token for temporary IAM credentials, which are then made available to the pod.

and on the pods we will see:

  volumes:
  - name: eks-pod-identity-token 
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: pods.eks.amazonaws.com
          expirationSeconds: 86400
          path: eks-pod-identity-token

so I don't see a need for a cache at the moment. WDYT?

@erhancagirici (Collaborator) commented:

exchange the projected token for temporary IAM credentials, which are then made available to the pod.

  1. Does the projected volume (or some other directory) include the resulting temporary credentials? In other words, is the exchange handled outside the provider code?
  2. Or is the exchange for temporary credentials done by the ProviderConfig code via the AWS client?

If 1, I think we are good. If 2, that seems like the IRSA case, where AWS injects a similar token for exchanging and we do the exchange at each reconcile. In that case, we would need to cache the credentials so that we do not lose them between reconciles.

If you have some setup already, it would be nice to check the contents of the projected volume and possibly check CloudTrail logs for AssumeRoleForPodIdentity operations after triggering several manual reconciles.
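
For reference, a CloudTrail query along these lines should surface the token exchanges (standard AWS CLI; region and profile as appropriate):

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRoleForPodIdentity \
  --max-results 10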

I also realized some UX edge cases: IRSA and PodIdentity should be mutually exclusive. The code loads the default AWS client config for both IRSA and PodIdentity, which leaves the config resolution to the AWS code.

@haarchri (Member Author) commented:

https://docs.aws.amazon.com/eks/latest/userguide/pod-id-how-it-works.html

So what gets injected into the container is the following:

...
AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
...

Kubernetes selects which node to run the pod on. Then, the Amazon EKS Pod Identity Agent on the node uses the AssumeRoleForPodIdentity action to retrieve temporary credentials from the EKS Auth API.

The EKS Pod Identity Agent makes these credentials available for the AWS SDKs that you run inside your containers.

This means we call the EKS Pod Identity Agent in every reconcile loop, and the caching of credentials happens in that agent. Nothing we need to take care of, from my point of view.

The agent that runs inside the EKS cluster is located here: https://github.com/aws/eks-pod-identity-agent/
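
For illustration, what the SDK's container-credentials provider does under the hood amounts to roughly this HTTP call (run inside the provider pod; URI and token file are the injected values above):

curl -s -H "Authorization: $(cat $AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE)" \
  "$AWS_CONTAINER_CREDENTIALS_FULL_URI"

The agent answers with short-lived credentials and handles caching and refresh on its side.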

@erhancagirici (Collaborator) left a comment:

Thanks for the detailed explanation! I approve the current state with follow-ups.

IRSA and PodIdentity should be mutually exclusive. The code loads the default AWS client config for both IRSA and PodIdentity, which leaves the config resolution to the AWS code.

We should follow up with documentation that an IRSA and a PodIdentity configuration cannot co-exist in a provider.

Also, #1320 should be updated with PodIdentity.

Successfully merging this pull request may close these issues:

Support Pod Identity Controller associations for provider IAM permissions