
feat: switch to health checks for init package waits #2964

Closed · wants to merge 3 commits

Conversation

AustinAbro321 (Contributor)

Description

By using health checks we can ensure that resources are fully reconciled before continuing to the next component. This is especially important for the registry component: since the registry is a re-deploy of the same Helm chart as the seed registry, the deployment is already available (with extra pods) and the current wait condition passes immediately.
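
For illustration, here is a minimal sketch of the kind of kstatus-based readiness wait this describes (a hypothetical helper, not the code in this PR): it polls a resource until kstatus computes its status as Current, i.e. fully reconciled, rather than stopping as soon as the Deployment reports available.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"sigs.k8s.io/cli-utils/pkg/kstatus/status"
)

// waitForCurrent polls a single resource until kstatus reports it as Current,
// meaning fully reconciled, not merely "available". The function name and
// polling interval are illustrative assumptions.
func waitForCurrent(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource, namespace, name string) error {
	for {
		obj, err := client.Resource(gvr).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		res, err := status.Compute(obj)
		if err != nil {
			return err
		}
		if res.Status == status.CurrentStatus {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up waiting for %s/%s: %s", namespace, name, res.Message)
		case <-time.After(time.Second):
		}
	}
}
```

For the registry component this would be evaluated against the zarf-docker-registry Deployment (apps/v1) in the zarf namespace, matching the health check shown further down in this PR.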

Related Issue

Fixes #2855


Signed-off-by: Austin Abro <AustinAbro321@gmail.com>
AustinAbro321 requested review from a team as code owners on September 4, 2024 13:33

netlify bot commented Sep 4, 2024

Deploy Preview for zarf-docs canceled.

🔨 Latest commit: 5b16e94
🔍 Latest deploy log: https://app.netlify.com/sites/zarf-docs/deploys/66d8ad55a197300008ea3350


codecov bot commented Sep 4, 2024

Codecov Report

Attention: Patch coverage is 0% with 12 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
src/pkg/cluster/tunnel.go | 0.00% | 12 Missing ⚠️

Files with missing lines | Coverage Δ
src/pkg/cluster/tunnel.go | 11.35% <0.00%> (-0.48%) ⬇️

- name: zarf-docker-registry
  namespace: zarf
  kind: Deployment
  apiVersion: apps/v1
AustinAbro321 (Contributor Author)
One interesting thing I'm seeing now that health checks are added is that we sometimes see the following error on the image push for the agent (the component right after the registry). The failure happens nearly instantaneously and we retry right away, so there is no effect on the user besides seeing the text. It seems the service is trying to forward to the pod right as it's dying. This didn't happen before because we didn't wait for the seed registry to terminate. It still happens if I add a health check on the service.

    Pushing ghcr.io/zarf-dev/zarf/agent:local  0sE0904 09:28:20.759620 3387486 portforward.go:413] an error occurred forwarding 39239 -> 5000: error forwarding port 5000 to pod 8b4ba41141648cc01c39f674e94dfd83f36755ee5416d118cd012a88d0b46476, uid : failed to execute portforward in network namespace "/var/run/netns/cni-41dd6540-8191-9e97-19ed-a1ca0d90316c": failed to connect to localhost:5000 inside namespace "8b4ba41141648cc01c39f674e94dfd83f36755ee5416d118cd012a88d0b46476", IPv4: dial tcp4 127.0.0.1:5000: connect: connection refused IPv6 dial tcp6 [::1]:5000: connect: connection refused 
  ✔  Pushed 1 images

I don't think this should stop us from merging, but I wanted to make a note of it.
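
For reference, the retry behavior described above could look roughly like this hand-rolled sketch (a hypothetical stand-in, not the actual retry helper Zarf uses): the first attempt hits the terminating seed-registry pod and fails fast with connection refused, and the immediate second attempt succeeds, so the overall push still returns nil.

```go
package main

import (
	"context"
	"time"
)

// retryPush is a hypothetical wrapper around the image push. A transient
// "connection refused" from the dying seed-registry pod fails the first
// attempt almost instantly; the next attempt lands on the new pod and
// succeeds, so the caller never sees an error, only the logged text.
func retryPush(ctx context.Context, attempts int, push func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = push(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
	return err
}
```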

Member

Is this because we are now too fast to do a port-forward?

AustinAbro321 (Contributor Author)

I'm thinking that it's trying to do a port-forward to the old pod after it died, but the timing is very tight. It always works on the second try, if it fails at all. The UID in the error doesn't match the new pod, so I'm guessing it matches the old one; I'm not sure if there's an easy way to get deleted pod UIDs.

AustinAbro321 (Contributor Author)

This is strange: I did re-runs and watched the UIDs of the registry pods. The UID in the error message matches neither the old registry pod nor the new registry pod.

AustinAbro321 (Contributor Author)

My current theory is that the deployment and service can be ready while the old endpoint slice for the seed registry still exists. This would be consistent with what's written in the kstatus docs about resources that create other resources.
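
If that theory is right, one way to check it would be to list the EndpointSlices behind the registry Service and see whether a stale slice from the seed registry is still present. A hypothetical sketch with client-go (the namespace, service name, and helper name are assumptions):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listRegistryEndpointSlices prints each EndpointSlice behind the registry
// Service so a stale slice left over from the seed registry is easy to spot.
func listRegistryEndpointSlices(ctx context.Context, clientset kubernetes.Interface) error {
	slices, err := clientset.DiscoveryV1().EndpointSlices("zarf").List(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=zarf-docker-registry",
	})
	if err != nil {
		return err
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			ready := ep.Conditions.Ready != nil && *ep.Conditions.Ready
			fmt.Printf("slice %s endpoints %v ready=%t\n", s.Name, ep.Addresses, ready)
		}
	}
	return nil
}
```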

AustinAbro321 (Contributor Author) commented Sep 4, 2024

Leaving this here: this is what the error message looks like. It originates from this line in the Kubernetes port-forward code. I think it gets put on the progress bar somehow, since we aren't returning the error here. The retry doesn't error because it always works on the next attempt.
[screenshot of the error message]

AustinAbro321 (Contributor Author)

I changed the code to get the pods instead of using the service. As far as I can tell from running a script that inits on my local PC 10 times, we don't end up with the above error message. It can still error once, but it works on the next try, and since the retry function returns nil we never see the error message.
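
Roughly, that change amounts to picking a ready registry pod and port-forwarding to it directly instead of going through the Service. A hypothetical sketch with client-go (the label selector and helper name are assumptions, not the actual change in this PR):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// readyRegistryPod returns the name of a running, ready registry pod to
// port-forward to, instead of forwarding through the Service (which may
// still route to the terminating seed-registry pod).
func readyRegistryPod(ctx context.Context, clientset kubernetes.Interface) (string, error) {
	pods, err := clientset.CoreV1().Pods("zarf").List(ctx, metav1.ListOptions{
		// Label selector is an assumption; the real chart labels may differ.
		LabelSelector: "app=docker-registry",
	})
	if err != nil {
		return "", err
	}
	for _, pod := range pods.Items {
		if pod.Status.Phase != corev1.PodRunning {
			continue
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return pod.Name, nil
			}
		}
	}
	return "", fmt.Errorf("no ready registry pod found")
}
```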

AustinAbro321 marked this pull request as draft on September 6, 2024 16:01

AustinAbro321 (Contributor Author)

Closing this, as #3043 is going to add this functionality to all deployments by default.

Successfully merging this pull request may close these issues.

Image Race Condition Between Zarf Seed Registry and Zarf Permanent Registry With Multiple Registry Replicas