Deployment Issues on Existing Kubernetes cluster #518
Pasting my response from gitter here:
Regarding the namespace stuff, this is likely a permissions issue, but it also seems like something that can be worked around depending on how terraform does things. In my case I had a
So depending on how qhub/terraform check for the existence of a namespace on this cluster, they might get a permission error. [screenshot: QHub Deploy Error]
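For context, a minimal sketch of how a Terraform module typically declares a namespace with the stock `kubernetes` provider (the resource name `qhub` and namespace name `qhub-test` are assumptions, not qhub's actual module). Applying this needs both read and create permissions on namespaces, so a restricted service account can hit a permission error even when the namespace already exists:

```hcl
# Hypothetical sketch using the Terraform kubernetes provider.
# If the namespace already exists but was not created by Terraform,
# `terraform apply` fails with an "already exists" error unless the
# resource is first imported into Terraform state.
resource "kubernetes_namespace" "qhub" {
  metadata {
    name = "qhub-test" # assumed namespace name
  }
}
```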
@djhoese we are a bit swamped right now tracking down a few bugs in the 0.3 release and with internal work. Once we get through that, we'll see how we can help you on this issue.
Small update on this: I was given more permissions on our test cluster where I'm playing with this. I pulled the current main branch and followed the updated local-testing instructions. Unfortunately, terraform is still mad that my namespace already exists:
Does anyone more familiar with terraform know of a way I can work around this by letting terraform know that yes, the namespace exists, but it shouldn't worry about it?
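One common workaround (not confirmed in this thread) is to import the pre-existing namespace into Terraform state so the provider treats it as already managed instead of trying to create it. The resource address `kubernetes_namespace.qhub` and the namespace name `qhub-test` below are assumptions; the real address depends on how the qhub modules name the resource:

```shell
# Hypothetical sketch: import an existing namespace into Terraform state.
# Run from the directory holding the relevant Terraform configuration.
# "kubernetes_namespace.qhub" is an assumed resource address; find the
# real one with `terraform plan` or by grepping the modules.
terraform import kubernetes_namespace.qhub qhub-test

# After the import, `terraform plan` should no longer try to create it.
terraform plan
```

This requires a live cluster and an initialized Terraform working directory, so it is shown for illustration rather than as a copy-paste command.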
Ok, so I got a little further on this. I let QHub/terraform create the namespace and it got into actually creating resources. The problem I ran into then is that QHub assumes use of MetalLB (or some other external load balancer) to give it external IP addresses. My cluster uses haproxy externally and does load balancing by passing traffic to different nodes (again, this is a test cluster). Our other cluster, where we're experimenting with MetalLB, uses it in Layer 2 mode, and MetalLB is only handing out one IP address to ingress-nginx and passing all traffic to that. This MetalLB instance is not configured with any additional IP addresses, so even if I were to deploy QHub on this second cluster there wouldn't be any IP addresses for it to request. I could modify all the terraform modules to use ClusterIPs, but given the work in #577 I'm not sure that is worth the time right now. Is there anything that requires the various QHub services to have different IP addresses?

Regarding the above namespace issue: hashicorp/terraform-provider-kubernetes#613 (comment)

Edit: I see in the terraform modules the hard dependency on a traefik ingress controller. So the nginx ingress controller that already exists on my cluster won't work. Darn. It seems like it only really makes sense to deploy QHub on a single-purpose cluster. I had hoped that everything would be dumped into a single namespace and use existing cluster infrastructure (e.g. the ingress controller). It may be time to just close this and stop trying to use QHub on this particular cluster.

Edit 2: Correction. I see the terraform modules actually create the traefik ingress controller. Maybe this isn't completely impossible, but it's more than I care to customize at this point.
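To illustrate the LoadBalancer-vs-ClusterIP distinction above, here is a hypothetical sketch using the Terraform kubernetes provider (not qhub's actual module; all names and ports are made up). A `LoadBalancer` service sits with a pending external IP forever when nothing like MetalLB is available to assign one, while a `ClusterIP` service needs no external address and can be fronted by an existing ingress controller:

```hcl
# Hypothetical sketch, not qhub's actual terraform module.
# type = "LoadBalancer" would wait for MetalLB (or similar) to assign
# an external IP; "ClusterIP" keeps the service internal so an
# existing ingress controller can route traffic to it instead.
resource "kubernetes_service" "proxy" {
  metadata {
    name      = "qhub-proxy" # assumed name
    namespace = "qhub-test"  # assumed namespace
  }
  spec {
    type = "ClusterIP" # instead of "LoadBalancer"
    selector = {
      app = "qhub-proxy" # assumed pod label
    }
    port {
      port        = 80
      target_port = 8080 # assumed container port
    }
  }
}
```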
This issue has been automatically marked as stale because there was no recent activity in 60 days. Remove the stale label or add a comment; otherwise, this issue will automatically be closed in 7 days if no further activity occurs.
This issue was closed because it has been stalled for 7 days with no activity. |
Theoretically if we can deploy on Minikube for testing, we should be able to deploy to an existing Kubernetes cluster as well.
https://docs.qhub.dev/en/latest/source/06_developers_contrib_guide/04_tests.html#local-testing
@djhoese raised some issues he had trying to do the same, here are his comments:
Complications faced by him
Some complications I ran into while trying to do this: