feat: switch to health checks for init package waits #2964
Conversation
Signed-off-by: Austin Abro <AustinAbro321@gmail.com>
- name: zarf-docker-registry
  namespace: zarf
  kind: Deployment
  apiVersion: apps/v1
One interesting thing I'm seeing now that health checks are added is that we sometimes see the following error on the image push for the agent (the component right after the registry). The failure happens nearly instantaneously and we retry right away, so there is no effect on the user besides seeing the text. It seems the service is trying to forward to the pod just as it's dying. This didn't happen before because we didn't wait for the seed registry to terminate. It still happens if I add a health check on the service.
Pushing ghcr.io/zarf-dev/zarf/agent:local
E0904 09:28:20.759620 3387486 portforward.go:413] an error occurred forwarding 39239 -> 5000: error forwarding port 5000 to pod 8b4ba41141648cc01c39f674e94dfd83f36755ee5416d118cd012a88d0b46476, uid : failed to execute portforward in network namespace "/var/run/netns/cni-41dd6540-8191-9e97-19ed-a1ca0d90316c": failed to connect to localhost:5000 inside namespace "8b4ba41141648cc01c39f674e94dfd83f36755ee5416d118cd012a88d0b46476", IPv4: dial tcp4 127.0.0.1:5000: connect: connection refused IPv6 dial tcp6 [::1]:5000: connect: connection refused
✔ Pushed 1 images
I don't think this should stop us from merging, but wanted to take a note of it.
Is this because we are now too fast to do a port-forward?
I'm thinking that it's trying to port-forward to the old pod after it died, but the timing is very tight. It always works on the second try, if it fails at all. The UID in the error doesn't match the new pod, so I'm guessing it matches the old one; I'm not sure if there's an easy way to get deleted pod UIDs.
This is strange. I did re-runs and watched the UIDs of the registry pods. The UID in the error message is neither the old registry pod nor the new registry pod.
The current theory is that the deployment and service can be ready while the old endpoint slice for the seed registry still exists. This would be consistent with what's written in the kstatus docs about resources that create other resources.
Leaving this here: this is what the error message looks like. It originates from this line in the Kubernetes port-forward code. I think it gets put on the progress bar somehow, since we aren't returning the error here. The retry doesn't surface an error because it always works on the next attempt.
I changed the code to get the pods directly instead of using the service. As far as I can tell from a script that runs init on my local PC 10 times, we no longer hit the above error message. It can still fail once, but it works on the next try, and since the retry function returns nil we never see the error.
Closing this as #3043 is going to add this functionality to all deployments by default.
Description
By using health checks we can ensure that resources are fully reconciled before continuing to the next component. This is especially important for the registry component: since the registry is a re-deploy of the same Helm chart as the seed registry, the deployment is already available with extra pods, and the current wait condition passes immediately.
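As a sketch of what this looks like in a package definition (the component name here is illustrative; the health check entry mirrors the one shown in the diff above), a component can declare a kstatus-style health check on the registry Deployment:

```yaml
components:
  - name: zarf-registry # illustrative component name
    healthChecks:
      - name: zarf-docker-registry
        namespace: zarf
        kind: Deployment
        apiVersion: apps/v1
```

Deployment of the next component then waits until kstatus reports the Deployment as fully reconciled, rather than relying on a wait condition that the leftover seed-registry pods already satisfy.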
Related Issue
Fixes #2855
Checklist before merging