eks: authentication mode failed to update #31032
Comments
Thank you. Yes, this could happen. I will discuss this with the team today.
OK, I tried to reproduce this scenario using the native eks.CfnCluster L1 construct to see how it behaves. It looks like CFN simply does nothing when updating the authentication mode from one mode to the same mode. I guess we should implement that as well in CDK and gracefully ignore the SDK error.
We should implement a similar check to the one here:
aws-cdk/packages/@aws-cdk/custom-resource-handlers/lib/aws-eks/cluster-resource-handler/cluster.ts (lines 290 to 296 in 7eae4d1)
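For context, here is a minimal sketch of the kind of guard being discussed, written against the AWS SDK v3 EKS client; the function name, parameters, and flow are illustrative assumptions and not the actual code at the referenced lines:

```ts
// Illustrative sketch only -- not the CDK handler code at the lines referenced above.
import { EKS } from '@aws-sdk/client-eks';

const client = new EKS({});

type AuthMode = 'API' | 'API_AND_CONFIG_MAP' | 'CONFIG_MAP';

async function updateAuthModeIfNeeded(clusterName: string, desiredMode: AuthMode): Promise<void> {
  // Look up the mode the cluster is actually in; it may already have been changed
  // by a deployment that CloudFormation later rolled back.
  const { cluster } = await client.describeCluster({ name: clusterName });
  if (cluster?.accessConfig?.authenticationMode === desiredMode) {
    // Already at the desired mode: skip the update, since EKS rejects a same-mode
    // change with "Unsupported authentication mode update from X to X".
    return;
  }
  await client.updateClusterConfig({
    name: clusterName,
    accessConfig: { authenticationMode: desiredMode },
  });
}
```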
Comments on closed issues and PRs are hard for our team to see.
The cluster resource handler would fail when updating the authMode to exactly the same mode. This could happen as described in aws#31032. We need to check whether the cluster is already at the desired authMode and gracefully ignore the update.

### Issue # (if applicable)

Closes aws#31032

### Reason for this change

### Description of changes

### Description of how you validated changes

This PR essentially addresses a very special case described in aws#31032, and it is not easy to cover with a unit test or integ test. Instead, I validated it using manual deployment.

step 1: initial deployment of a default eks cluster with undefined authenticationMode
step 2: update the cluster and add an s3 bucket that would fail and trigger the rollback. At this point, the eks auth mode would update but can't be rolled back. This makes the resource state out of sync with CFN.
step 3: re-deploy the same stack without the s3 bucket but with the same auth mode as in step 2. As the cluster has already modified its auth mode, this step should gracefully succeed.

```ts
import {
  App, Stack, StackProps,
  aws_ec2 as ec2,
  aws_s3 as s3,
} from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { getClusterVersionConfig } from './integ-tests-kubernetes-version';

interface EksClusterStackProps extends StackProps {
  authMode?: eks.AuthenticationMode;
  withFailedResource?: boolean;
}

class EksClusterStack extends Stack {
  constructor(scope: App, id: string, props?: EksClusterStackProps) {
    super(scope, id, {
      ...props,
      stackName: 'integ-eks-update-authmod',
    });

    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2, natGateways: 1, restrictDefaultSecurityGroup: false });

    const cluster = new eks.Cluster(this, 'Cluster', {
      vpc,
      ...getClusterVersionConfig(this, eks.KubernetesVersion.V1_30),
      defaultCapacity: 0,
      authenticationMode: props?.authMode,
    });

    if (props?.withFailedResource) {
      // intentionally failing resource used to trigger a rollback
      const bucket = new s3.Bucket(this, 'Bucket', { bucketName: 'aws' });
      bucket.node.addDependency(cluster);
    }
  }
}

const app = new App();

// create a simple eks cluster for the initial deployment
// new EksClusterStack(app, 'create-stack');

// 1st attempt to update with an intentional failure
new EksClusterStack(app, 'update-stack', {
  authMode: eks.AuthenticationMode.API_AND_CONFIG_MAP,
  withFailedResource: true,
});

// 2nd attempt to update using the same authMode
// (only one 'update-stack' block can be active per deployment since they share the
// same construct id; for step 3, uncomment this block and comment out the one above)
// new EksClusterStack(app, 'update-stack', {
//   authMode: eks.AuthenticationMode.API_AND_CONFIG_MAP,
//   withFailedResource: false,
// });
```

And it's validated in `us-east-1`.

### Checklist
- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Describe the bug
A CloudFormation stack (created using the AWS CDK) that manages an EKS cluster is no longer in sync with the actual resource due to a failed deployment.
Steps that caused this issue:

1. Initial deployment of the EKS cluster with `accessConfig: {}`;
2. Updated the stack to `accessConfig: {"authenticationMode":"API_AND_CONFIG_MAP"}`, along with some other resource updates;
3. The other updates failed, so CloudFormation rolled the template back to `accessConfig: {}`, while the authentication mode change already applied to the actual cluster could not be rolled back.

Additional points:
Resources:
https://github.com/aws/aws-cdk/blob/main/packages/%40aws-cdk/custom-resource-handlers/lib/aws-eks/cluster-resource-handler/cluster.ts
Expected Behavior
The custom resource should handle errors gracefully. If a setting or configuration is already set to the requested value in the target resource, the custom resource should send a success signal to the CloudFormation (CFN) service instead of failing.
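As a rough sketch of what such graceful handling could look like (the helper name and the error-message matching are assumptions for illustration, not the handler's actual implementation):

```ts
// Minimal sketch, assuming the no-op case can be recognized from the SDK error message.
async function applyAuthModeUpdate(update: () => Promise<void>): Promise<void> {
  try {
    await update();
  } catch (e: any) {
    // EKS rejects a no-op mode change with a message like
    // "Unsupported authentication mode update from API_AND_CONFIG_MAP to API_AND_CONFIG_MAP".
    // If the source and target modes are identical, the cluster is already in the
    // desired state, so report success to CloudFormation instead of failing.
    const match = /Unsupported authentication mode update from (\S+) to (\S+)/.exec(e?.message ?? '');
    if (match && match[1] === match[2]) {
      return;
    }
    throw e;
  }
}
```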
Current Behavior
Trying to deploy the stack again (without any other resource updates) with the new EKS cluster configuration now fails with "Unsupported authentication mode update from API_AND_CONFIG_MAP to API_AND_CONFIG_MAP", because the actual resource already has the new configuration.
Reproduction Steps
Mentioned in the description
Possible Solution
No response
Additional Information/Context
No response
CDK CLI Version
2.148.0
Framework Version
No response
Node.js Version
NA
OS
NA
Language
TypeScript, .NET
Language Version
No response
Other information
No response