Support for NetworkPolicy ingress/egress #534
I 100% think we should apply network policies here. The one thing I was bitten by in jupyterhub/mybinder.org-deploy#146 is that Kubernetes doesn't check at all for unsupported features, so you can happily load an egress network policy that is completely ignored, without any warnings or errors. I haven't checked in a bit, but I assume gke-1.9 will have a sufficiently recent Calico (>= 2.6.1) to enable egress policies. We should start by sketching out a network policy of who needs to talk to whom, and what the default should be. To get started:

ingress:

egress:

Figuring out the right labeling scheme / default policies for that would be great!
Thanks, I'll try to come up with something. A barrier I've already run into is that the DNS server is in the kube-system namespace, so the obvious thing is to allow egress to port 53 in kube-system; however …
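For illustration, an egress rule of that shape might look like the sketch below. It assumes kube-system carries a `name: kube-system` label, which may need to be added manually (namespaces had no default labels in Kubernetes releases of this era), and it assumes the chart's `component: singleuser-server` pod label.

```yaml
# Sketch only: allow singleuser pods to reach DNS in kube-system.
# Assumes kube-system is labeled name: kube-system, and that the
# singleuser pods carry component: singleuser-server.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: singleuser-allow-dns
spec:
  podSelector:
    matchLabels:
      component: singleuser-server
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```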
This is what I've come up with so far, based on @minrk's outline: https://gist.github.com/manics/ccc96169e4ce22f4cf434c7c7f9b9630 I've been testing on OpenStack (deployed with Kubespray, Canal network plugin). It covers:
- proxy
- hub
- singleuser-servers
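As a rough illustration of one rule in such a scheme, the hub could accept ingress only from the proxy. The component labels and port below are assumptions modeled on the chart's pod labels, not copied from the gist.

```yaml
# Hypothetical sketch: only the proxy pod may reach the hub.
# Labels and port 8081 are assumptions, not taken from the gist.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hub
spec:
  podSelector:
    matchLabels:
      component: hub
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              component: proxy
      ports:
        - port: 8081
```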
This is great, since I need something like this for a course starting in a month :D I agree this belongs in z2jh. I think the only knob we should turn to begin with is 'who can the single-user server talk to?', since you might have plenty of other in-cluster services that you might want to expose to the singleuser server. What you have sounds like a good start, @manics!
@manics awesome! Feel free to make a PR with what you have, and we can iron it out. My only question based on your gist is about best practices in terms of how to organize the information: when giving pods access to the proxy, should we be putting those pods in the proxy's network policy explicitly, or should we be using a single label like 'proxy-api-access' in the network policy and applying that label to all of the pods that need access? I.e. when granting a new pod access to the proxy, do I modify the pod's labels, or the proxy's network policy? It seems like the former makes the most sense for ingress. I'm not sure if egress is best treated the same or not.
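The label-based variant could be sketched as below: the policy on the proxy stays fixed, and access is granted by adding the label to a client pod. The label name `proxy-api-access` follows the suggestion above; the port is an assumption.

```yaml
# Sketch of the label-based approach: any pod carrying the
# proxy-api-access: "true" label may reach the proxy's API port.
# Granting access then means labeling the client pod, not
# editing this policy. Port 8001 is an assumption.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: proxy-api
spec:
  podSelector:
    matchLabels:
      component: proxy
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              proxy-api-access: "true"
      ports:
        - port: 8001
```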
I think the ingress policy is fairly easy:

Egress is definitely going to be more complicated, so I suggest we tackle ingress first and then go from there?
Thanks for the feedback, I'll open a PR today or tomorrow.
Implemented in #546 |
@manics @minrk @yuvipanda awesome!!! I learned a lot by reading this thread! @manics I think the pod will talk to the k8s API through the kube-proxy pods (one per node), so if we manage to target those I think we are good. Also, I think they communicate over HTTP by default.
Do you know how we define egress for the singleuser pod to access the outside world?
Hello @menendes! To be clear: I'm trying to deploy JupyterHub on a Kubernetes cluster. I used the official JupyterHub Helm chart (version 2.0.0). I can access my notebook, but when working with it I can't communicate with the outside world (can't pip install, can't make requests, etc.). Has anyone faced this kind of issue?
Please ask at discourse.jupyter.org, but my best guess is that you don't have networking set up in a way that lets pods send traffic to the internet; for example, the nodes may not have public IPs, in which case some public IP is needed, which could be provided via a service like Google's Cloud NAT. Please refrain from following up here, and open a discourse.jupyter.org question about it instead.
I'm interested in using a singleuser pod NetworkPolicy to limit egress. The use-case I have in mind is a public but very locked-down JupyterHub deployment that gives people a taster with minimal barriers, with full access provided behind a second, authenticated deployment.
I found a related discussion on jupyterhub/mybinder.org-deploy#146 and can see pros and cons to including it in this chart:
If you think it's worthwhile, here's a proposal:
applied to
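As a rough illustration of the locked-down use-case (not the elided proposal above), a starting point could be a default-deny egress policy on the singleuser pods, to which rules for DNS and the hub API would then be whitelisted explicitly. The label is an assumption modeled on the chart's pod labels.

```yaml
# Sketch: with policyTypes including Egress and no egress rules,
# all outbound traffic from the selected pods is denied.
# The component label is an assumption.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: singleuser-deny-egress
spec:
  podSelector:
    matchLabels:
      component: singleuser-server
  policyTypes:
    - Egress
```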