For EKS, CAPI truncates the version field in AWSManagedControlPlane spec after apply #3594
@yogeek: This issue is currently awaiting triage. If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
MachinePool version looks like it has a validation check for updates as well, so it should not be changeable to v1.22, looking at this: Could you confirm if `MachinePool.spec.template.spec.version` is set to v1.22?
Hi @sedefsavas, thanks for your help. Here is the Git manifest:

and the result after Cluster API created the cluster:
(Title changed from "For EKS, CAPI truncates the version field in MachinePool spec after apply" to "For EKS, CAPI truncates the version field in AWSManagedControlPlane spec after apply".)
I'm going to re-create this, so: /assign
If a version number is specified on the `AWSManagedControlPlane`, we only need to do this normalizing of the version number at the point of creating resources in EKS (i.e. the EKS cluster). My suggestion is that we do the following (see the sketch below):

@sedefsavas, @Skarlso, @pydctw, @Ankitasw - wdyt?
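As a rough sketch of what "normalize only when creating resources in EKS" could look like (a hypothetical helper, not CAPA's actual code; parsing via `blang/semver` is an assumption):

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

// normalizeToEKSVersion is a hypothetical helper: it converts a
// user-supplied version such as "v1.22", "v1.22.6", or "1.22.10" into
// the MAJOR.MINOR form the EKS API expects. Applying it only at the
// point of calling EKS leaves spec.version exactly as the user wrote it.
func normalizeToEKSVersion(version string) (string, error) {
	// ParseTolerant strips a leading "v" and tolerates a missing
	// patch component, so "v1.22" parses as 1.22.0.
	v, err := semver.ParseTolerant(version)
	if err != nil {
		return "", fmt.Errorf("parsing version %q: %w", version, err)
	}
	return fmt.Sprintf("%d.%d", v.Major, v.Minor), nil
}

func main() {
	for _, in := range []string{"v1.22", "v1.22.6", "1.22.10"} {
		out, err := normalizeToEKSVersion(in)
		fmt.Println(in, "->", out, err)
	}
}
```

Because the spec value is never rewritten, a GitOps tool comparing the applied object against Git would see no diff.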
@richardcase I just noticed this. :OOOO This was back in July 😁. I shall read this and answer.
Yeah, I agree with that fix. I can take that on. I'm sure there was a good reason for that normalization?
/assign Skarlso
So, the validation also has to be updated, right? Otherwise you can't set any value other than MAJOR.MINOR, which might be okay, because we would do that anyway... I can't pass in a different value during validation, because that's an internal process of the webhook service using the regex defined on the field:

So, what should we do about that?
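For context, the field-level regex comes from a kubebuilder validation marker that is compiled into the CRD's OpenAPI schema and enforced by the API server itself. A sketch of such a field definition (illustrative; the exact marker and pattern in CAPA may differ):

```go
package v1beta1

// AWSManagedControlPlaneSpec is reduced here to the one field that
// matters for this discussion; the pattern shown is illustrative.
type AWSManagedControlPlaneSpec struct {
	// Version defines the desired Kubernetes version. The Pattern
	// marker below becomes part of the generated CRD schema, so the
	// API server rejects non-matching values at admission time.
	// +kubebuilder:validation:Pattern=`^v(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$`
	// +optional
	Version *string `json:"version,omitempty"`
}
```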
This is this part in the Handler (it matches controller-runtime's defaulting webhook handler):

```go
// Default the object
obj.Default()
marshalled, err := json.Marshal(obj)
if err != nil {
	return Errored(http.StatusInternalServerError, err)
}

// Create the patch
return PatchResponseFromRaw(req.Object.Raw, marshalled)
```

It fails during marshalling, so I can't really influence that unless I overwrite `spec.Version`, which we don't want. So we will have to actually enforce our restrictions. :D
It appears that just removing the normalise function works... going to run some tests. At least there are no diffs... So the managed control plane will now require the version to be passed in correctly.
Yep, cluster appears to be working:

With the following config:

```yaml
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "managed-cluster"
spec:
  infrastructureRef:
    kind: AWSManagedControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    name: "managed-cluster-control-plane"
  controlPlaneRef:
    kind: AWSManagedControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    name: "managed-cluster-control-plane"
---
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "managed-cluster-control-plane"
spec:
  region: "eu-central-1"
  sshKeyName: "capa-key"
  version: "v1.22"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1"
      conflictResolution: "overwrite"
    - name: "coredns"
      version: "v1.8.7-eksbuild.1"
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1"
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: "managed-cluster-md-0"
spec:
  clusterName: "managed-cluster"
  replicas: 1
  selector:
    matchLabels:
  template:
    spec:
      clusterName: "managed-cluster"
      version: "v1.22.0"
      bootstrap:
        configRef:
          name: "managed-cluster-md-0"
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: EKSConfigTemplate
      infrastructureRef:
        name: "managed-cluster-md-0"
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: "managed-cluster-md-0"
spec:
  template:
    spec:
      instanceType: "t3.small"
      iamInstanceProfile: "nodes.cluster-api-provider-aws.sigs.k8s.io"
      sshKeyName: "capa-key"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: EKSConfigTemplate
metadata:
  name: "managed-cluster-md-0"
spec:
  template: {}
```

Will create a PR later on.
Another option is to relax the regex and edit the version after it has been provided. I think that's what you suggested, Richard, but I might not have understood it. So: change the regex in the cluster config so the version can take a different form and existing configs don't break, and then normalise the version internally whenever it's needed.
I'm going to relax the version regex to accommodate 4 scenarios:

These will make sure no existing configs break, and internally we normalise the version to the needed format anyway.
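A sketch of a relaxed pattern along those lines (illustrative only; it assumes the four scenarios are the v-prefixed and unprefixed forms, each with and without a patch component, which may not match the change that was actually merged):

```go
package main

import (
	"fmt"
	"regexp"
)

// versionRe accepts "v1.22", "1.22", "v1.22.0", and "1.22.0": an
// optional "v" prefix and an optional patch component.
var versionRe = regexp.MustCompile(`^v?(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(\.(0|[1-9][0-9]*))?$`)

func main() {
	for _, v := range []string{"v1.22", "1.22", "v1.22.0", "1.22.0", "1.22.6-eksbuild.1"} {
		fmt.Printf("%-18s matches: %v\n", v, versionRe.MatchString(v))
	}
}
```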
/kind bug
What steps did you take and what happened:
When setting the k8s version for nodegroups in the CAPI YAML manifest (`AWSManagedControlPlane.spec.template.spec.version`), EKS only allows MAJOR.MINOR (e.g. `1.22`), but Cluster API requires a "semantic version" MAJOR.MINOR.PATCH (e.g. `1.22.10`):

```
invalid: spec.template.spec.version: Invalid value: "v1.22": must be a valid semantic version
```

After apply, the version is truncated back to `1.22`, which causes a diff in a GitOps tool like ArgoCD (and if we ignore this diff, we will not be able to detect a version change later).

What did you expect to happen:
I expected Cluster API not to change a resource after it has been applied, and to leave the version the same as declared in the initial manifest (in Git).
Anything else you would like to add:
This issue has already been discussed verbally with @richardcase.
Environment: