[werft] Add sweeper cleanup logic for k3s ws cluster #4746
Conversation
Force-pushed from cba6362 to 60412e9
.werft/wipe-devstaging.ts (Outdated)
- async function wipeDevstaging() {
+ async function wipeDevstaging(pathToKubeConfig: string) {
I won't find time for a full review today, but: please let's try to find another way to switch between kubectl contexts. This really decreases the signal-to-noise ratio a lot.
I don't have a coherent view on this or what might be better, but I will try to come up with something tomorrow.
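For context, the pattern the comment refers to looks roughly like the sketch below: every helper takes a pathToKubeConfig and repeats it in each kubectl invocation. The body is hypothetical and only illustrates the threading of the parameter, not the PR's actual cleanup logic.

```ts
import { execSync } from "child_process";

// Hypothetical sketch: the kubeconfig path is threaded through as a parameter
// and repeated in every kubectl call via --kubeconfig.
async function wipeDevstaging(pathToKubeConfig: string) {
    const namespaces = execSync(
        `kubectl --kubeconfig ${pathToKubeConfig} get ns -o jsonpath='{.items[*].metadata.name}'`
    ).toString().trim().split(" ");
    for (const ns of namespaces.filter(n => n.startsWith("staging-"))) {
        execSync(`kubectl --kubeconfig ${pathToKubeConfig} delete namespace ${ns}`, { stdio: "inherit" });
    }
}
```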
One way I can think of is to merge all the kubeconfigs we have into the default one and then use context switching. That would involve downloading a kubeconfig, updating the context name in it, and then adding it to the default kubeconfig.
Since pathToKubeConfig is how I have done it in all the other places, I would let this PR merge and raise a separate PR to use a new approach across all the files.
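A minimal sketch of that merge idea, assuming the default kubeconfig lives at ~/.kube/config and that kubectl's `config view --flatten` does the merging; file paths and context names are placeholders:

```ts
import { execSync } from "child_process";
import { writeFileSync } from "fs";
import { homedir } from "os";

// Merge an extra kubeconfig (e.g. the one for the k3s ws cluster) into the
// default ~/.kube/config so that later steps only need `kubectl config use-context`.
function mergeKubeconfig(extraKubeconfig: string) {
    const defaultConfig = `${homedir()}/.kube/config`;
    // KUBECONFIG may list several files; `config view --flatten` merges them into one document
    const env = { ...process.env, KUBECONFIG: `${defaultConfig}:${extraKubeconfig}` };
    const merged = execSync("kubectl config view --flatten", { env }).toString();
    // if both files define the same context name, rename one first with
    // `kubectl config rename-context <old> <new>` to avoid collisions
    writeFileSync(defaultConfig, merged);
}

function useContext(context: string) {
    execSync(`kubectl config use-context ${context}`, { stdio: "inherit" });
}
```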
> Since pathToKubeConfig is how I have done it in all the other places, I would let this PR merge and raise a separate PR to use a new approach across all the files.

Ok, I don't want to block this PR as it fixes the sweeper. But let's not forget this.

> One way I can think of is to merge all the kubeconfigs we have into the default one and then use context switching.

This was the way we used to do it in the past, albeit it was simpler back then because clusters were static. But we can also keep the separate kubectl configs and pass a more generic env around, which exec supports (we already have that in some places).
> This was the way we used to do it in the past, albeit it was simpler back then because clusters were static. But we can also keep the separate kubectl configs and pass a more generic env around, which exec supports (we already have that in some places).

This seems like a much better solution! I will try this out and raise a PR.
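A sketch of that env-based alternative: instead of threading a pathToKubeConfig argument through every function, the kubeconfig is handed to the child process via the KUBECONFIG environment variable. The helper name and kubeconfig path below are made up for illustration; the werft exec helper mentioned above may have a different shape.

```ts
import { execSync, ExecSyncOptions } from "child_process";

// Run kubectl against a specific cluster by pointing KUBECONFIG at the right
// file through the child process environment.
function kubectlWith(kubeconfig: string, args: string) {
    const opts: ExecSyncOptions = {
        env: { ...process.env, KUBECONFIG: kubeconfig },
        stdio: "inherit",
    };
    execSync(`kubectl ${args}`, opts);
}

// e.g. wipe a preview namespace in the k3s ws cluster (values are placeholders)
kubectlWith("/workspace/k3s-external.yaml", "delete namespace staging-my-branch --ignore-not-found=true");
```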
/werft run 👍 started the job as gitpod-build-prs-sweeper-n-meta-ws-disable.3
.werft/wipe-devstaging.ts (Outdated)
async function deleteExternalIp(k3sWsProxyIP: string, namespace: string) {
Note: In general it would be good to find a common place for code like this, like a k3s-cluster.ts or deployment.ts or so. wipe-staging.ts should just be calling that stuff.
Makes sense. Let me move this to another file.
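A minimal sketch of the split being agreed on here: wipe-devstaging.ts stays a thin orchestrator and the cluster-specific helpers live in a shared module such as k3s-cluster.ts. The module path, the console-only body, and the usage comments are placeholders, not the PR's final code.

```ts
// .werft/k3s-cluster.ts (hypothetical shared module)
export async function deleteExternalIp(k3sWsProxyIP: string, namespace: string): Promise<void> {
    // cluster-specific cleanup (e.g. releasing the ws-proxy IP) would live here
    console.log(`releasing external IP ${k3sWsProxyIP} for namespace ${namespace}`);
}

// .werft/wipe-devstaging.ts would then only call into it:
// import { deleteExternalIp } from './k3s-cluster';
// await deleteExternalIp(k3sWsProxyIP, namespace);
```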
/werft run 👍 started the job as gitpod-build-prs-sweeper-n-meta-ws-disable.7
/werft run 👍 started the job as gitpod-build-prs-sweeper-n-meta-ws-disable.8
Force-pushed from 58d69da to 73c5811
/werft run 👍 started the job as gitpod-build-prs-sweeper-n-meta-ws-disable.10
LGTM!
* Update sweeper logic to delete k3s ws preview env and refactor methods and files
What?
When running the wipe job, do the following:
Testing
To test this out, I manually edited the sweeper deployment and added the arg --timeout=5m to trigger the wipe job instantly. The cleanup succeeded. Reference job in a different branch.