NodePort magnum deployment #43
On Jetstream we don't have a load balancer either, so I think I should also switch to NodePort, and that should work. I am going to start working on the Magnum deployment again soon and will test this. About your second question: you need an NGINX Ingress to expose the services you run inside Kubernetes (for example JupyterHub) to the internet.
Thanks! Well, for some reason when I follow your tutorial, after
I check it with:
and I get:
rather than:
So I naively thought that I had to set up something else. Would you know why? Best regards,
No, sorry.
Hi, my issue was that I did not enable pod scheduling on the Kubernetes master at the beginning of the deployment, so the ingress controller was deployed on a worker node instead. Once I allowed pod scheduling on the master:
it worked. Thanks again for sharing your notes! Best regards,
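The exact command used is not shown in the thread; allowing pods on the master is typically done by removing the default taint from the control-plane node. A hedged sketch (the taint key depends on the Kubernetes version; newer releases use `node-role.kubernetes.io/control-plane` instead of `master`):

```shell
# Sketch, not taken from the thread: remove the NoSchedule taint so regular
# pods (like the ingress controller) can be scheduled on the master node.
kubectl taint nodes --all node-role.kubernetes.io/master-
```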
Hi, I scaled up my Kubernetes cluster with several worker nodes and realized that user pods scheduled on nodes other than the master were not able to communicate with the hub. Users can log in, select their profile and create containers, but after several minutes they get an error. Do you know why only user pods scheduled on the master node work properly? Is this related to the ingress configuration? Many thanks,
If they can log in, then the nginx ingress is working fine.
Thanks. In case it helps: switching from nginx-ingress to NodePort solved the issue in my JupyterHub deployment, and now user pods scheduled on Kubernetes workers are able to communicate correctly. Many thanks,
Documentation about nginx configuration with the host network is at: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network Also, in the kubespray deployment I use the host network, see:
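For context, the host-network approach described on that page amounts to setting `hostNetwork: true` on the ingress controller pods so they bind directly to the node's ports. A minimal sketch of the relevant fragment (values illustrative, not taken from the linked manifests):

```yaml
# Fragment of an nginx ingress controller Deployment/DaemonSet pod spec:
spec:
  template:
    spec:
      hostNetwork: true                    # bind directly to the node's ports 80/443
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working
```

Note that with `hostNetwork: true` only one controller pod can run per node, since two pods on the same node would try to bind the same host ports (the conflict described in the next comment).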
Thanks! I did try out the steps in https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network However, I ended up having issues with multiple pods trying to bind to the same port. On second thought, I think I am going to manually deploy a separate VM to act as a reverse proxy in front of the Kubernetes cluster. We are currently running OpenStack Train, and in principle we could deploy Octavia, but it doesn't look like an easy task and I don't have the time right now to explore it.
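A separate reverse-proxy VM of the kind described could use a plain nginx configuration along these lines. This is a sketch only; the server name, upstream address, and NodePort are assumptions, not values from the thread:

```nginx
# /etc/nginx/conf.d/jupyterhub.conf (hypothetical)
server {
    listen 80;
    server_name hub.example.org;            # assumed DNS name

    location / {
        # Forward to the NodePort exposed on a cluster node (address assumed)
        proxy_pass http://10.0.0.10:30080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # JupyterHub needs websocket support for kernels and terminals:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

In production this VM would also terminate TLS (e.g. with Let's Encrypt) and proxy port 443 rather than plain HTTP.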
@sebastian-luna-valero I was just thinking about a reverse proxy in #45; it would be really helpful if you wanted to share the configuration (or just tips) on how to deploy that separate VM there.
Hi,
Via the "Gateways 2020" conference I found your great talk at: https://youtu.be/D5ZrbB2KtXw
Thank you very much for sharing all the details to work with JupyterHub on OpenStack Magnum.
We do not have a load balancer service in our OpenStack deployment and therefore I would be interested in deploying JupyterHub without one. I wanted to ask, have you ever tried deploying the nginx ingress with NodePort instead?
https://github.com/zonca/jupyterhub-deploy-kubernetes-jetstream/blob/master/kubernetes_magnum/nginx_ingress.yaml#L216
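Switching the ingress controller's Service from `LoadBalancer` to `NodePort` is a small change to the Service manifest. A hedged sketch (names, labels, and port numbers are illustrative, not taken from the linked file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # name assumed
  namespace: ingress-nginx
spec:
  type: NodePort                 # instead of LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080            # must fall in the NodePort range (default 30000-32767)
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```

With this, the ingress is reachable on every node's IP at the chosen NodePorts, with no cloud load balancer required.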
Also, I am very new to Kubernetes; could you please confirm whether I need an nginx reverse proxy for these to work? Many thanks,
Sebastian