[werft] Install gitpod in k3s ws cluster #4664
Conversation
Force-pushed from a427d66 to 687463d.
/werft run 👍 started the job as gitpod-build-prs-install-gitpod-k3s.4
/werft run 👍 started the job as gitpod-build-prs-install-gitpod-k3s.5
Prior to merging this PR I'd love to see it in action, i.e. have the aforementioned change in the "Gitpod infra repo" merged/applied. Lastly, I'm a bit surprised to see this hybrid workspace cluster approach. It strikes me that the design we had originally discussed (DNS entries for all preview-environments, just change kube context for k3s installation) is
(** the ingress problem: because all our traffic goes through the core-dev ingress, we're not seeing the same behaviour in core-dev as we'd see in prod, because the additional nginx imposes its own behaviour. We need this core-dev ingress nginx for some paths like authentication and payment, but the vast majority of requests don't need it.)
I am manually editing the labels to check whether the pods get scheduled. I will update the repo after that.
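For context, such a label edit would look roughly like the sketch below; the node name and label key are placeholders, not necessarily the values used in this PR.

```bash
# Placeholder node name and label key; the actual values from this setup are not shown in the thread.
kubectl label node <node-name> gitpod.io/workload_workspace=true --overwrite

# Then check whether the workspace pods get scheduled onto the labelled node.
kubectl get pods -n <namespace> -o wide
```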
ATM we only have the workspace module for the Gitpod installation. Deploying only the ws components on the k3s cluster will give us the same behaviour that we anticipate on the staging/production clusters. The core-dev preview env makes use of an ingress to route traffic to the ws components, and we want to do the same here: the k3s cluster has an ingress which will route traffic directly to the ws components. About having one cluster per preview env: we would need to test deploying the meta components on a k3s cluster. It is possible to try that out here, but then it becomes harder to figure out where a problem comes from, i.e. whether it is caused by how we have set up the preview env or by an inherent compatibility issue between k3s and the meta components.
🙏
I'm not sure I follow. Does that affect more than the labels on the node pool?
That is a good point indeed, yet it adds considerable complexity in core-dev/during development and also increases the effort to get k3s working in core-dev.
My point exactly: we do not want that ingress where it isn't strictly necessary (everywhere except for auth and payment). We don't run it in staging/prod and it has caused many a problem in the past.
Meta doesn't really care what Kubernetes cluster it runs on. In fact, all meta components would do just fine without Kubernetes to begin with.
Force-pushed from 1f7f1d6 to 3bfbac6.
I tried the meta installation on k3s and did encounter some issues wrt volume mounts (e.g. minio). I am skeptical about building an env which is not close to what we have in prod/staging.
Ack. I will get rid of the ingress in the k3s ws cluster. In fact, I faced issues while doing this.
I have made slow progress because of some complexity, but I believe this complexity needs to be solved only once. This gets us closer to a prod-like setup, so solving it would be a good idea IMHO.
/werft run 👍 started the job as gitpod-build-prs-install-gitpod-k3s.54
I made a copy of this branch (for testing purposes; name is
When I run
and later during
I think this is because the cert is taking time to get created. I am adding a wait to see if that is in fact the case.
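A minimal sketch of such a wait, assuming the cert is a cert-manager Certificate resource; the resource name, namespace and timeout below are placeholders.

```bash
# Block until the certificate reports Ready before the deployment continues.
kubectl wait --for=condition=Ready certificate/<certificate-name> \
  -n <namespace> --timeout=300s
```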
Force-pushed from 41ac7d8 to 63c2e50.
/werft run 👍 started the job as gitpod-build-prs-install-gitpod-k3s.119
I have fixed all the issues wrt the circular dependency. I have also tested it in a separate branch, which passed in one go: https://werft.gitpod-dev.com/job/gitpod-build-prs-test-2.0/results I was able to create a ws too.
Force-pushed from dc93b08 to 2fe02f8.
I have rebased the branch and tested both the flag-enabled and flag-disabled cases. Refer to https://werft.gitpod-dev.com/job/gitpod-build-prs-rebased-k3s.1/raw and https://werft.gitpod-dev.com/job/gitpod-build-prs-rebased-k3s.0/raw
Tried it and worked for me.
NIT: I needed to run the job twice, because on the first run ws-manager-bridge wasn't ready for the `gpctl clusters register`. Since you (@princerachit) said you want to fix that in a follow-up PR, I'm approving this one.
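One possible shape for that follow-up fix, assuming that waiting on the ws-manager-bridge rollout before registering is sufficient; this is a sketch, not the actual change.

```bash
# Block until ws-manager-bridge is rolled out, then register the k3s workspace cluster.
kubectl rollout status deployment/ws-manager-bridge -n <namespace> --timeout=300s
gpctl clusters register ...   # registration flags elided
```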
…ate k3s cluster (gitpod-io#4664) * Support workspace deployment in a separate k3s cluster using flag k3s-ws
What?
This PR broadly does two things: it refactors the werft deployment code, and it adds support for deploying the workspace components into a separate k3s cluster behind the `k3s-ws` flag.
How?
The code has been refactored in such a way that if you provide the werft flag `k3s-ws`, the deployment will occur in two different clusters:

Dev cluster
The meta component will be deployed along with the ws components in the dev cluster. However, the static config which used to add the self cluster as a workspace target is disabled, so the meta component only works as a meta cluster.
K3s ws cluster
A workspace cluster deployment will occur in a dedicated namespace in the k3s cluster (similar to the dev cluster). In this deployment we use an external IP for ws-proxy, hence there is no ingress involved when accessing the workspace. The external IP is created by werft using a gcloud command. I have added the relevant create, get and delete permissions to the gitpod-deployer SA.
The static external IP is named the same as the namespace.
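Roughly, that step could look like the sketch below, assuming a regional static address; the exact gcloud flags and region used by the job may differ.

```bash
# The static IP is named after the deployment namespace (placeholder value below).
NAMESPACE="staging-my-branch"

# Reserve the external IP, then read back the assigned address for ws-proxy.
gcloud compute addresses create "${NAMESPACE}" --region=<region>
gcloud compute addresses describe "${NAMESPACE}" --region=<region> --format='value(address)'
```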
Registration
Once the deployments have succeeded, we explicitly build the gpctl binary and then use it to register our workspace cluster with the meta cluster.
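As a rough sketch of that step (assuming gpctl lives under dev/gpctl in this repo; the registration flags are elided rather than guessed):

```bash
# Build gpctl from source (path is an assumption about the repo layout).
cd dev/gpctl
go build -o gpctl .

# Register the freshly deployed k3s workspace cluster with the meta cluster.
./gpctl clusters register ...   # cluster name, URL and TLS flags elided
```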
I have introduced the subdomain `*.ws-k3s.` for the k3s ws cluster. For the meta (i.e. dev) cluster the subdomain `*.ws-dev.` remains the same.

What's next
Unlike the dev cluster deployments, the k3s cluster deployments are not cleaned up. I will raise another PR for this.
`--werft k3s-ws`
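For reference, one assumed way of passing the flag when triggering the job manually via the werft CLI; how the build job actually consumes the annotation is not shown in this PR.

```bash
# Assumed invocation: pass k3s-ws as a werft annotation on a manually triggered job.
werft run github -a k3s-ws=true
```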