A blockDeviceMapping change in EC2NodeClass does not trigger drift replace #5447
Comments
This looks like a regression related to #3330
So, given that updating the hash here to capture the volume size (since it wasn't being captured before) is going to be a breaking change, we are going to have to rely on some in-progress work that allows us to version the hash that we use to evaluate drift, so that we don't drift all of the existing nodes on the cluster at once as soon as we introduce #5454. That work is going to take a bit to go in, so until that point, I'd recommend making another change to the EC2NodeClass that forces the drift of all of the NodeClaims associated with it, or changing a field in the NodePool (such as the …)
There's some discussion in this issue here around versioning the hash that we use for evaluating drift: kubernetes-sigs/karpenter#909
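To make the versioning idea concrete, here is a minimal sketch of how a versioned drift hash could avoid drifting every node at once. The annotation key `karpenter.k8s.aws/ec2nodeclass-hash-version`, the version string, and the function shape are assumptions for illustration, not Karpenter's actual implementation:

```go
package drift

// Hypothetical annotation keys and version constant, assumed for this sketch.
const (
	hashAnnotation        = "karpenter.k8s.aws/ec2nodeclass-hash"
	hashVersionAnnotation = "karpenter.k8s.aws/ec2nodeclass-hash-version"
	currentHashVersion    = "v2" // bumped whenever the hash function itself changes
)

// IsDrifted compares the stored hash against the current one, but only when
// both were produced by the same hash version. A version mismatch means the
// hash algorithm changed, not the spec, so the stored hash is re-stamped
// instead of treating the node as drifted.
func IsDrifted(nodeAnnotations map[string]string, currentHash string) bool {
	if nodeAnnotations[hashVersionAnnotation] != currentHashVersion {
		// Stale hash version: refresh rather than drift every node at once.
		nodeAnnotations[hashAnnotation] = currentHash
		nodeAnnotations[hashVersionAnnotation] = currentHashVersion
		return false
	}
	return nodeAnnotations[hashAnnotation] != currentHash
}
```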
Description
Observed Behavior:
When changing `spec.blockDeviceMapping[0].ebs.volumeSize`, no drift replace is triggered. The `karpenter.k8s.aws/ec2nodeclass-hash` annotation also remains unchanged.
However, when I flip `spec.detailedMonitoring`, a drift replace is triggered and the `karpenter.k8s.aws/ec2nodeclass-hash` annotation gets updated.
Since the hash value does not get updated for block device mapping changes, I assume the bug is inside the hashing code.
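Karpenter's nodeclass hash is computed with the reflection-based hasher github.com/mitchellh/hashstructure, which skips unexported struct fields. Since `volumeSize` is a `resource.Quantity`, whose value lives in unexported fields, one plausible mechanism for the reported behavior is sketched below. The simplified types are assumptions for illustration, not the actual EC2NodeClass types:

```go
package main

import (
	"fmt"

	"github.com/mitchellh/hashstructure/v2"
)

// quantity mimics k8s.io/apimachinery's resource.Quantity, which keeps its
// value in unexported fields. Reflection-based hashers such as hashstructure
// skip unexported fields, so changes to them never reach the hash.
type quantity struct {
	i int64 // unexported: invisible to hashstructure
}

// spec is a toy stand-in for EC2NodeClassSpec.
type spec struct {
	DetailedMonitoring bool
	VolumeSize         *quantity
}

func main() {
	small, _ := hashstructure.Hash(spec{VolumeSize: &quantity{i: 20}}, hashstructure.FormatV2, nil)
	large, _ := hashstructure.Hash(spec{VolumeSize: &quantity{i: 100}}, hashstructure.FormatV2, nil)
	fmt.Println(small == large) // true: the volume-size change never reaches the hash

	monitored, _ := hashstructure.Hash(spec{DetailedMonitoring: true, VolumeSize: &quantity{i: 20}}, hashstructure.FormatV2, nil)
	fmt.Println(small == monitored) // false: exported field changes do update the hash
}
```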
Expected Behavior:
Changing `spec.blockDeviceMapping[0].ebs.volumeSize` triggers a drift replace.
Reproduction Steps (Please include YAML):
- Change `spec.blockDeviceMapping[0].ebs.volumeSize` in an EC2NodeClass.
- Observe that no drift replace is triggered and that the `karpenter.k8s.aws/ec2nodeclass-hash` annotation on the EC2NodeClass resource is unchanged.
To ensure Karpenter drift does work: flip `spec.detailedMonitoring` and observe that a drift replace is triggered.
Versions:
- Kubernetes (`kubectl version`): v1.27.7