(aws-eks): kubectl layer is not compatible with k8s v1.22.0 #19843
Comments
@akefirad Yesterday I had the same issue. As a temporary solution, you can create your own lambda layer version and pass it as a parameter to the Cluster construct. Here is my solution in Python; it's just a combination of AwsCliLayer and KubectlLayer. My code builds layer.zip on every synth, but you can build it once when you need it and keep layer.zip in your repository.
assets/kubectl-layer/build.sh
#!/bin/bash
set -euo pipefail
cd $(dirname $0)
echo ">> Building AWS Lambda layer inside a docker image..."
TAG='kubectl-lambda-layer'
docker build -t ${TAG} .
echo ">> Extrating layer.zip from the build container..."
CONTAINER=$(docker run -d ${TAG} false)
docker cp ${CONTAINER}:/layer.zip layer.zip
echo ">> Stopping container..."
docker rm -f ${CONTAINER}
echo ">> layer.zip is ready" assets/kubectl-layer/Dockerfile # base lambda image
FROM public.ecr.aws/sam/build-python3.7
#
# versions
#
# KUBECTL_VERSION should not be changed at the moment, see https://github.com/aws/aws-cdk/issues/15736
# Version 1.21.0 is not compatible with version 1.20 (and lower) of the server.
ARG KUBECTL_VERSION=1.22.0
ARG HELM_VERSION=3.8.1
USER root
RUN mkdir -p /opt
WORKDIR /tmp
#
# tools
#
RUN yum update -y \
&& yum install -y zip unzip wget tar gzip
#
# aws cli
#
COPY requirements.txt ./
RUN python -m pip install -r requirements.txt -t /opt/awscli
# organize for self-contained usage
RUN mv /opt/awscli/bin/aws /opt/awscli
# cleanup
RUN rm -rf \
/opt/awscli/pip* \
/opt/awscli/setuptools* \
/opt/awscli/awscli/examples
#
# Test that the CLI works
#
RUN yum install -y groff
RUN /opt/awscli/aws help
#
# kubectl
#
RUN mkdir -p /opt/kubectl
RUN cd /opt/kubectl && curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
RUN chmod +x /opt/kubectl/kubectl
#
# helm
#
RUN mkdir -p /tmp/helm && wget -qO- https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar -xvz -C /tmp/helm
RUN mkdir -p /opt/helm && cp /tmp/helm/linux-amd64/helm /opt/helm/helm
#
# create the bundle
#
RUN cd /opt \
&& zip --symlinks -r ../layer.zip * \
&& echo "/layer.zip is ready" \
&& ls -alh /layer.zip;
WORKDIR /
ENTRYPOINT [ "/bin/bash" ] assets/kubectl-layer/requirements.txt awscli==1.22.92 kubectl_layer.py import builtins
import typing
import subprocess
import aws_cdk as cdk
from aws_cdk import (
aws_lambda as lambda_
)
from constructs import Construct
class KubectlLayer(lambda_.LayerVersion):
    def __init__(self, scope: Construct, construct_id: builtins.str, *,
                 compatible_architectures: typing.Optional[typing.Sequence[lambda_.Architecture]] = None,
                 compatible_runtimes: typing.Optional[typing.Sequence[lambda_.Runtime]] = None,
                 layer_version_name: typing.Optional[builtins.str] = None,
                 license: typing.Optional[builtins.str] = None,
                 removal_policy: typing.Optional[cdk.RemovalPolicy] = None
                 ) -> None:
        # Build layer.zip every run; drop this call if you commit a prebuilt layer.zip instead.
        subprocess.check_call(["<path to assets/kubectl-layer/build.sh>"])
        super().__init__(scope, construct_id,
            code=lambda_.AssetCode(
                path="<path to created assets/kubectl-layer/layer.zip>",
                # Hash the source directory (excluding the zip) so the asset only changes when the inputs change.
                asset_hash=cdk.FileSystem.fingerprint(
                    file_or_directory="<path to assets/kubectl-layer/ dir>",
                    exclude=["*.zip"]
                )
            ),
            description="/opt/awscli/aws, /opt/kubectl/kubectl and /opt/helm/helm",
            compatible_architectures=compatible_architectures,
            compatible_runtimes=compatible_runtimes,
            layer_version_name=layer_version_name,
            license=license,
            removal_policy=removal_policy
        )
@peterwoodworth Check out the commit message on #20000. After talking this over with Rico, we've decided that it's a much greater effort, thus it would break backward compatibility with |
|
Reopening as a feature request |
Linking aws/containers-roadmap#1595... EKS |
FYI, a workaround is to set Prune to false. This of course has some side effects, but you can mitigate that by ensuring there's only one kubernetes object per manifest. |
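For example, a minimal sketch of that workaround (assuming `self` is a Stack; construct names are illustrative):
from aws_cdk import aws_eks as eks

# Disable pruning so manifest updates no longer rely on `kubectl apply --prune`.
cluster = eks.Cluster(self, "Cluster",
    version=eks.KubernetesVersion.V1_22,
    prune=False,  # side effect: removed objects are no longer cleaned up automatically
)

# Mitigation: keep a single Kubernetes object per manifest so each can be replaced independently.
cluster.add_manifest("MyNamespace", {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "my-namespace"},
})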
hitting the same issue 😢 |
same for me |
* remove prune setting for eks cluster, see aws/aws-cdk#19843
* empty commit to trigger CI
same here. CDK version 2.37.0 |
A solution is announced for mid-September; see this issue. |
Have been struggling with this error for the last two days! Just today I noticed this issue log. I would appreciate it if AWS can provide a solution. For the meantime I will destroy my 1.23 stack and deploy with 1.21. I hope that works! |
Hitting the same issue. Can anyone from AWS please tell us when this issue will be resolved? Since we upgraded EKS to version 1.22.0 we have been facing this issue. The workaround by @chlunde works well, but not in all cases. Currently we cannot create new Nodegroups, because these require an update of the aws-auth resource, which keeps failing with an error message.
Currently we are blocked and can't proceed with our deployment. |
I have used the instructions posted by @Obirah and that works so far. See here |
Release this week should have a way to use an updated kubectl layer. |
@cgarvis Thank you for the update. We are waiting impatiently for the Release. |
Hello, I see 1.23 support has been merged! 🎉 Thanks for the effort there. Re: KubectlV23Layer - is this still an experimental feature? We'd like to implement a |
Hello, thank you for the new release to support EKS 1.23. But when I deploy the stack to create EKS 1.23, I get the warning:
Then I tried to follow the document:
But there seems to be no package lambda-layer-kubectl-v23 under aws-cdk-lib v2.50.0. |
Hi, you need to add the package |
@Obirah |
I am using aws-cdk-go and wasn't able to find lambda-layer-kubectl-v23 in the Go package dependencies |
I also wasn't able to import lambda_layer_kubectl_v23 in the Python package ( |
There is a separate module you need to install |
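For the Python case above, a rough sketch of what that looks like once the separately published package is installed (pip install aws-cdk.lambda-layer-kubectl-v23; the module and class names below are my reading of the v2 docs, so double-check them for your CDK version and language):
# Requires: pip install aws-cdk.lambda-layer-kubectl-v23
from aws_cdk import aws_eks as eks
from aws_cdk.lambda_layer_kubectl_v23 import KubectlV23Layer

# Assuming `self` is a Stack; construct names are illustrative.
cluster = eks.Cluster(self, "Cluster",
    version=eks.KubernetesVersion.V1_23,
    kubectl_layer=KubectlV23Layer(self, "KubectlLayer"),
)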
|
@samhopwell Will there be a v2 of this coming soon, or would we need to just use the v1 or build our own? |
Any docs/guidance on how to proceed using Golang? Can't find a proper module to import... update: the go module is buried here for anyone else hunting: https://github.com/cdklabs/awscdk-kubectl-go/tree/kubectlv22/v2.0.3/kubectlv22 |
Thanks @jaredhancock31! This helped me a lot. ^_^ If anyone needs it, here is my example implementation in Go that I tweaked from the original cdk init file. Complete code here: https://gist.github.com/andrewbulin/e23c313008372d4e5149899817bebe32
Snippet here:
cluster := awseks.NewCluster(
    stack,
    jsii.String("UpgradeMe"),
    &awseks.ClusterProps{
        Version:      awseks.KubernetesVersion_V1_22(),
        KubectlLayer: kubectlv22.NewKubectlV22Layer(stack, jsii.String("kubectl")),
        ClusterName:  jsii.String("upgrade-me"),
        ClusterLogging: &[]awseks.ClusterLoggingTypes{
            awseks.ClusterLoggingTypes_AUDIT,
        },
    },
)
Describe the bug
Running an empty update on an empty EKS cluster fails while updating the resource EksClusterAwsAuthmanifest12345678 (Custom::AWSCDK-EKS-KubernetesResource).
Expected Behavior
The update should succeed.
Current Behavior
It fails with the error:
Reproduction Steps
This is what I did:
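(The original steps were not captured here; the sketch below is only an illustrative reconstruction based on the bug description, written in Python for consistency with the workaround code above even though the report is in TypeScript, with all names assumed: define a 1.22 cluster with the default kubectl layer and deploy it twice, where the second, empty update fails on the aws-auth manifest resource.)
from aws_cdk import App, Stack, aws_eks as eks
from constructs import Construct

class EksStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # No kubectl_layer is given, so the bundled kubectl 1.20 layer talks to the 1.22 API server.
        eks.Cluster(self, "EksCluster", version=eks.KubernetesVersion.V1_22)

app = App()
EksStack(app, "EksStack")
app.synth()
# Run `cdk deploy` once to create the cluster, then `cdk deploy` again with no changes:
# the second (empty) update fails while updating the Custom::AWSCDK-EKS-KubernetesResource for aws-auth.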
Possible Solution
No response
Additional Information/Context
I checked the version of kubectl in the lambda handler and it's 1.20.0, which AFAIK is not compatible with cluster version 1.22.0. I'm not entirely sure how the lambda is created. I thought it matches the kubectl with whatever version the cluster has. It is not the case indeed (#15736).
CDK CLI Version
2.20.0 (build 738ef49)
Framework Version
No response
Node.js Version
v16.13.0
OS
Darwin 21.3.0
Language
Typescript
Language Version
3.9.10
Other information
Similar to #15072?