
NodePort magnum deployment #43

Closed

sebastian-luna-valero opened this issue Nov 18, 2020 · 10 comments
@sebastian-luna-valero

Hi,

Via the "Gateways 2020" conference I found your great talk at: https://youtu.be/D5ZrbB2KtXw

Thank you very much for sharing all the details to work with JupyterHub on OpenStack Magnum.

We do not have a load balancer service in our OpenStack deployment and therefore I would be interested in deploying JupyterHub without one. I wanted to ask, have you ever tried deploying the nginx ingress with NodePort instead?

https://github.com/zonca/jupyterhub-deploy-kubernetes-jetstream/blob/master/kubernetes_magnum/nginx_ingress.yaml#L216
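
For context, I mean changing the controller Service to something like this (a rough sketch; the names mirror the upstream ingress-nginx manifests and the nodePort values are just examples):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort               # instead of LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080            # example; Kubernetes assigns one if omitted
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443            # example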

Also, I am very new to Kubernetes; could you please confirm whether I need an nginx reverse proxy for this to work?

Many thanks,
Sebastian

@zonca
Owner

zonca commented Nov 18, 2020

On Jetstream we don't have a load balancer either; I think I should also switch to NodePort, and that should work.
Currently I am setting hostNetwork: true instead, which I think has a very similar effect to using NodePort.
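
For reference, hostNetwork mode boils down to this in the controller pod spec (a sketch, not the exact manifest; the dnsPolicy setting is what the upstream ingress-nginx docs recommend alongside hostNetwork):

spec:
  template:
    spec:
      hostNetwork: true                    # bind ports 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet   # keep in-cluster DNS working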

Soon I am going to start working again on the Magnum deployment and I will test this.

About your second question: you need an NGINX ingress controller to expose the services you run inside Kubernetes (for example JupyterHub) to the internet.
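
For example, a minimal Ingress routing a hostname to JupyterHub's public proxy Service would look roughly like this (a sketch; proxy-public is the Service name used by the zero-to-jupyterhub chart, while the namespace and hostname are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyterhub
  namespace: jhub
spec:
  rules:
  - host: hub.example.org          # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: proxy-public     # z2jh's public proxy Service
            port:
              number: 80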

@sebastian-luna-valero
Author

Thanks!

Well, for some reason when I follow your tutorial, after

bash install_nginx_ingress.sh

I check it with:

curl localhost

and I get:

curl: (7) Failed to connect to localhost port 80: Connection refused

rather than:

Default backend: 404

So I naively thought that I had to set up something else.

Would you know why?

Best regards,
Sebastian

@zonca
Owner

zonca commented Nov 18, 2020

no, sorry

@sebastian-luna-valero
Author

Hi,

My issue was that I did not enable pod scheduling on the Kubernetes master at the beginning of the deployment, so the ingress controller was deployed on a worker node instead. Once I allowed pod scheduling on the master:

# make the master node schedulable
kubectl edit node <master-node>

# remove
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

It worked.
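
In case it helps others, the same thing can be checked and done from the command line (the namespace will differ depending on where the ingress controller is installed):

# see which node each ingress controller pod landed on
kubectl get pods --all-namespaces -o wide | grep ingress

# equivalent one-liner to remove the master taint
kubectl taint nodes <master-node> node-role.kubernetes.io/master:NoSchedule-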

Thanks again for sharing your notes!

Best regards,
Sebastian

@sebastian-luna-valero
Author

Hi,

I scaled up my Kubernetes cluster with several worker nodes and realized that user pods scheduled on nodes other than the master were not able to communicate with the hub. Users can log in, select their profile, and create containers, but after several minutes they get a 504 Gateway Time-out.

Do you know why only user pods scheduled on the master node work properly? Is this related to the hostNetwork: true config in the NGINX ingress?

Many thanks,
Sebastian

@zonca
Owner

zonca commented Nov 23, 2020

If they can log in, then the nginx ingress is working fine.

@sebastian-luna-valero
Author

Thanks.

In case it helps, switching the nginx ingress from hostNetwork: true to a NodePort Service solved the issue in my JupyterHub deployment, and now user pods scheduled on Kubernetes workers are able to communicate correctly.
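
For anyone following along, a quick way to verify the switch (the namespace and port numbers below are examples; yours may differ):

# the controller Service should now be of type NodePort
kubectl get svc --all-namespaces | grep ingress

# every node should answer on the assigned HTTP NodePort, e.g. 30080,
# returning the ingress default backend 404
curl http://<any-node-ip>:30080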

Many thanks,
Sebastian

@zonca
Owner

zonca commented Feb 5, 2021

@sebastian-luna-valero
Author

Thanks!

I did try out the steps in https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network. However, I ended up having issues with multiple pods trying to bind to the same port (presumably because, with hostNetwork: true, each controller pod tries to bind ports 80/443 directly on its node, so two replicas cannot share a node).

On second thought, I think I am going to manually deploy a separate VM to act as a reverse proxy in front of the Kubernetes cluster.
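
As a first sketch of that proxy VM, I am thinking of an nginx stream (TCP pass-through) config along these lines, assuming the ingress is exposed on NodePorts 30080/30443 (all IPs and ports below are examples):

# /etc/nginx/nginx.conf on the proxy VM
stream {
    upstream k8s_http {
        server 10.0.0.11:30080;   # worker node IP : HTTP NodePort (example)
        server 10.0.0.12:30080;
    }
    upstream k8s_https {
        server 10.0.0.11:30443;   # worker node IP : HTTPS NodePort (example)
        server 10.0.0.12:30443;
    }
    server {
        listen 80;
        proxy_pass k8s_http;
    }
    server {
        listen 443;               # TLS terminates at the ingress, not here
        proxy_pass k8s_https;
    }
}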

We are currently running OpenStack Train, and in principle we could deploy Octavia, but it doesn't look like an easy task and I don't have time to explore it right now.

@zonca
Owner

zonca commented Feb 9, 2021

@sebastian-luna-valero I was just thinking about a reverse proxy in #45, it would be really helpful if you wanted to share the configuration (or just tips) on how to deploy that separate VM there.
