Add common labels for Pods & PVCs #161
kubespawner/spawner.py (outdated)
'heritage': 'jupyterhub',
'component': 'singleuser-server',
}
`heritage` was already defined in `_build_common_labels`, and `component` was added explicitly in `get_pvc_manifest` and `_build_pod_labels` anyway.
Oh... but perhaps they were used by something outside the kubespawner project that I didn't know of? I did not find an internal reference to them, though.
Nah... This should be fine I think.
The `labels` field in the `PodReflector` overrides the `NamespacedResourceReflector` field, and I didn't fully realize that until recently. Since those labels serve the purpose of selecting the pods within the namespace that carry them, they must remain.
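A minimal sketch of the selection mechanism described above, assuming the reflector joins its `labels` dict into a standard Kubernetes label-selector string (the helper name here is hypothetical, not kubespawner's API):

```python
def make_label_selector(labels):
    """Join label key/value pairs into a 'k1=v1,k2=v2' selector string."""
    return ','.join('{}={}'.format(k, v) for k, v in sorted(labels.items()))

# The labels the PodReflector uses to find singleuser-server pods
selector = make_label_selector({
    'component': 'singleuser-server',
    'heritage': 'jupyterhub',
})
# selector == 'component=singleuser-server,heritage=jupyterhub'
```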
kubespawner/spawner.py (outdated)
'release': os.getenv('LABEL_RELEASE', 'unknown'),
'chart': os.getenv('LABEL_CHART', 'unknown'),
'heritage': os.getenv('LABEL_HERITAGE', 'jupyterhub'),
})
I make sure to add these after the extra labels are added.
labels.update(self.pod_reflector.labels)
return self._build_common_labels(labels)
})
return labels
This part contained redundant labels before: first `component` was added, then the extra labels, then `heritage` and `component` again from `PodReflector`, and finally `heritage` again and `app` from `_build_common_labels`.
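The simplified layering can be illustrated with plain dicts (the extra-label values are hypothetical; this is not the actual kubespawner code):

```python
common_labels = {'app': 'jupyterhub', 'heritage': 'jupyterhub'}
extra_labels = {'team': 'astro', 'heritage': 'not-jupyterhub'}  # hypothetical user extras; one tries to override

# Layering order: component first, then user extras, then common
# labels last so they cannot be clobbered by the extras.
labels = {'component': 'singleuser-server'}
labels.update(extra_labels)
labels.update(common_labels)

# labels['heritage'] is 'jupyterhub' — the override did not stick.
```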
@@ -1008,6 +1003,9 @@ def get_pvc_manifest(self):
Make a pvc manifest that will spawn current user's pvc.
"""
labels = self._build_common_labels(self._expand_all(self.user_storage_extra_labels))
labels.update({
'component': 'singleuser-storage'
})
The call to `_build_common_labels` here is the only other place it is used, so the PVCs will get all the recommended labels as well.
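A rough illustration of the PVC label flow, using a simplified stand-in for `_build_common_labels` (the function body and the extra label are hypothetical):

```python
def build_common_labels(extra_labels):
    # Simplified stand-in for _build_common_labels: start from the
    # user-provided extras, then apply the common labels on top so
    # they cannot be overridden.
    labels = dict(extra_labels)
    labels.update({'app': 'jupyterhub', 'heritage': 'jupyterhub'})
    return labels

# Mirrors get_pvc_manifest: common labels first, then the
# storage-specific component label.
pvc_labels = build_common_labels({'owner': 'alice'})  # hypothetical extra label
pvc_labels.update({'component': 'singleuser-storage'})
```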
kubespawner/spawner.py (outdated)
labels.update(extra_labels)
labels.update({
'app': os.getenv('LABEL_APP', 'jupyterhub'),
I don't think we should be using environment variables directly. We should use traitlets here. KubeSpawner can also be used without Helm, so `chart` and `release` shouldn't be here (they are Helm-specific).
I think a generic way to do this would be to have the concept of 'identifying labels', which will be added to every pod / PVC. This can be set as a traitlet, and we can add appropriate values for it in the helm chart.
We must also be extremely careful here about backwards compatibility. We must test the following scenarios:
A pod is started with an older version of KubeSpawner (without this patch). Then KubeSpawner is updated to include this patch. KubeSpawner can still see the old pod and is able to start / poll / stop it appropriately.
It might also be useful to introduce a 'kubespawner labeling scheme version' label, which will allow us to switch labeling schemes in an easier fashion in the future.
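The compatibility scenario above can be sketched as a plain-Python check (the helper and the label values are hypothetical illustrations, not kubespawner's implementation):

```python
def matches(pod_labels, selector_labels):
    """True if the pod carries every label in the selector."""
    return all(pod_labels.get(k) == v for k, v in selector_labels.items())

# Selector the patched spawner keeps using for start / poll / stop
selector = {'heritage': 'jupyterhub', 'component': 'singleuser-server'}

# A pod created before the patch carries only the old labels;
# a pod created after the patch carries additional common labels.
old_pod = {'heritage': 'jupyterhub', 'component': 'singleuser-server'}
new_pod = dict(old_pod, app='jupyterhub', release='demo-release')

# Extra labels on new pods do not break matching, so both
# generations remain visible to the reflector.
assert matches(old_pod, selector)
assert matches(new_pod, selector)
```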
Thank you @yuvipanda for the feedback on this! I'll put in the effort to think this through, make sure we are backwards compatible without requiring a restart of all singleuser-server pods, and stop using environment variables!
Should the pod culler also support traitlets? It was using the environment variable LABEL_SELECTOR to know which pods were singleuser-server pods.
I have some learning to do regarding traitlets, but if you talk to me as if I get it all, I eventually will :)
This was made as preparation for [z2jh#625](jupyterhub/zero-to-jupyterhub-k8s#625).
Label selection on all labels may end up being a bit much, so in the zero-to-jupyterhub-k8s repo I reduced it to `component`, `app`, and `release`. KubeSpawner is not Helm-dependent and should not be tied to `release`, though, and custom configuration of label selection could be troublesome, as all singleuser servers could require a restart or a label update whenever the kubespawner setting changed. Perhaps we will tackle this in the future?
@yuvipanda since your last review:
This PR is a prerequisite for z2jh#625. I want to do additional testing before I consider this merge-ready.
Tested this PR on my cluster for a while without issues, and also verified labeling works as it should in conjunction with z2jh#625. If this kubespawner prerequisite PR is merged, I'll bump to utilize it in z2jh#625 and remove WIP for that as well.
Thanks, @consideRatio!
Summary
- The `PodReflector` is still hardcoded to `component=singleuser-server` and still does not support multiple deployments in the same namespace.
- Through `common_labels`, this allows z2jh to set the `chart` and `release` labels on spawned Pods and PVCs.
Background
In jupyterhub/zero-to-jupyterhub-k8s#625, I wanted to make all objects created within the z2jh helm chart follow Helm's best practices for labels. This PR helps ensure that and thereby fixes #159.
Details
These are the labels of relevance in this PR. Note that the `PodReflector` is hardcoded to find only the pods with a `singleuser-server` label.

| Label | Set in | Value |
| --- | --- | --- |
| `component` | `get_pvc_manifest` / `_build_pod_labels` | `'singleuser-storage'` / `'singleuser-server'` |
| `app` | `common_labels` | `'jupyterhub'` |
| `heritage` | `common_labels` | `'jupyterhub'` |
History
I thought of using environment variables to configure certain labels, but relying on specific variables would couple kubespawner unnecessarily (to z2jh and Helm, for example).
Implementation notes
- `common_labels` couldn't be overridden by extra labels provided to the kubespawner.
PR TODO
- Consider backwards-compatibility breakage with `common_labels` and how to avoid it. Update: I decided to keep a hardcoded pod selection based on the namespace and `heritage=jupyterhub,component=singleuser-server` for now.
- Test `start`, `poll`, and `stop`, using the `PodReflector` to find the pods.
Excluded from this PR
- Use of the `common_labels` trait with `KubeIngressProxy`.