
Enable custom pod labels for helm chart #326

Merged

Conversation

MarkDPierce
Contributor

Hey, we leverage tooling that requires certain labels to be present. While deploying into our ecosystem, I noticed that the pods cannot be given custom labels. This PR adds that capability for others who require specific pod labels.

@CLAassistant

CLAassistant commented Oct 30, 2023

CLA assistant check
All committers have signed the CLA.

@yorugac
Collaborator

yorugac commented Oct 30, 2023

Hi @MarkDPierce, why can't customLabels be used? They should be applied to the Deployment as well, here.

@MarkDPierce
Contributor Author

MarkDPierce commented Oct 30, 2023

When I use a custom label of environment: test and generate the manifest, I get the following.

# Source: k6-operator/charts/k6-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-k6-operator-controller-manager
  namespace: release-name-system
  labels:
    control-plane: "controller-manager"
    app.kubernetes.io/component: controller
    helm.sh/chart: k6-operator-1.2.0
    app.kubernetes.io/name: k6-operator
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.0.11"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: k6-operator
    environment: test
  annotations:

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: k6-operator
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: k6-operator
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: kube-rbac-proxy
          image: IMAGE LOCATION
          imagePullPolicy: IfNotPresent
          args:
            - --secure-listen-address=0.0.0.0:8443
            - --upstream=http://127.0.0.1:8080/
            - --logtostderr=true
            - --v=10
          ports:
            - containerPort: 8443
              name: https
        - name: manager
          image: IMAGE LOCATION
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 50m
              memory: 50Mi
            requests:
              cpu: 50m
              memory: 50Mi
          command:
            - /manager
          args:
            - --enable-leader-election
            - --metrics-addr=127.0.0.1:8080
      serviceAccount: k6-operator-controller
      terminationGracePeriodSeconds: 10

These customLabels are not included in the template metadata. For security reasons we need labels at the pod level, so that every templated pod gets these custom labels.

Thanks for asking, because I totally forgot to add {{- include "k6-operator.podLabels" . | nindent 8 }} to the deployment.yaml template.

My change should have it looking like this:

# Source: k6-operator/charts/k6-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-k6-operator-controller-manager
  namespace: release-name-system
  labels:
    control-plane: "controller-manager"
    app.kubernetes.io/component: controller
    helm.sh/chart: k6-operator-1.2.0
    app.kubernetes.io/name: k6-operator
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.0.11"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: k6-operator
    environment: test
  annotations:

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: k6-operator
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: k6-operator
        app.kubernetes.io/instance: release-name
        environment: test
    spec:
      containers:
        - name: kube-rbac-proxy
          image: IMAGE LOCATION
          imagePullPolicy: IfNotPresent
          args:
            - --secure-listen-address=0.0.0.0:8443
            - --upstream=http://127.0.0.1:8080/
            - --logtostderr=true
            - --v=10
          ports:
            - containerPort: 8443
              name: https
        - name: manager
          image: IMAGE LOCATION
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 50m
              memory: 50Mi
            requests:
              cpu: 50m
              memory: 50Mi
          command:
            - /manager
          args:
            - --enable-leader-election
            - --metrics-addr=127.0.0.1:8080
      serviceAccount: k6-operator-controller
      terminationGracePeriodSeconds: 10

@MarkDPierce
Contributor Author

Sorry, that was a long-winded explanation just to say that deployment.yaml only has

spec:
  replicas: 1
  selector:
    matchLabels:
      {{- include "k6-operator.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "k6-operator.selectorLabels" . | nindent 8 }}
    spec:

So customLabels wouldn't work in this scenario.
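
For reference, a minimal sketch of how that section could look with the podLabels include mentioned above added to deployment.yaml (the helper name comes from this PR; the rest of the template is assumed unchanged):

spec:
  replicas: 1
  selector:
    matchLabels:
      {{- include "k6-operator.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "k6-operator.selectorLabels" . | nindent 8 }}
        {{- /* sketch: render user-supplied pod labels into the pod template metadata */}}
        {{- include "k6-operator.podLabels" . | nindent 8 }}
    spec: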

@yorugac
Collaborator

yorugac commented Oct 31, 2023

Thanks for the full explanation. After some looking around, I'm not fully certain we should have customLabels at all 😕 E.g. these charts don't seem to propagate user-defined labels at the level of Deployments or StatefulSets:
https://github.com/grafana/mimir/blob/main/operations/helm/charts/mimir-distributed/values.yaml
https://github.com/grafana/loki/blob/main/production/helm/loki/values.yaml

In the k6-operator case, there's only one Deployment, which makes it easier; though I suspect that in the future we might need to make further breaking changes here.

I'd appreciate some feedback from the community here about how everyone uses labels in their charts 🙂

@@ -2,7 +2,7 @@ apiVersion: v1
 appVersion: "0.0.11"
 description: A Helm chart to install the k6-operator
 name: k6-operator
-version: 1.2.0
+version: 2.0.0
Collaborator

Why do you think it should be a breaking change? This seems like a "pure" addition: old deployments should continue to work as is. I.e. why not 1.3.0?

FYI, I'll need to test it a bit and probably return to this PR ~ tomorrow.

Collaborator

Thanks for sharing your experience with the versioning of other Helm charts. The chart in k6-operator was added very recently, and we're still figuring out the "right way" 😅
Let's go with a major bump this time indeed 👍

@yorugac yorugac added this to the 0.12 milestone Oct 31, 2023
@MarkDPierce
Contributor Author

MarkDPierce commented Oct 31, 2023 via email

@yorugac yorugac left a comment (Collaborator)

Sorry for the delay!
The generated chart looks fine and works 🙌 I have a small change request: could you please add podLabels to the example in charts/k6-operator/samples/customAnnotationsAndLabels.yaml? It could be an additional sample (file) too, btw.
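
For illustration, a hedged sketch of what such a sample might contain; podLabels is the key added in this PR, while the other key names are assumptions based on the sample file name and the foo: bar / bar: foo values shown in the rendered output later in this thread:

# charts/k6-operator/samples/customAnnotationsAndLabels.yaml (illustrative sketch)
# podLabels is the new key from this PR; the other keys are assumed, not confirmed against the chart
customAnnotations:
  bar: foo
customLabels:
  foo: bar
podLabels:
  environment: test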

And another thing: I've noticed that there is an additional empty line in the manifest when one omits podLabels, like this:

  ...
    metadata:
      labels:
        app.kubernetes.io/name: k6-operator
        app.kubernetes.io/instance: release-name
        
    spec:
      ...

😕 The same empty line is inserted with custom labels, which use a similar definition and default to {}, so it's not just a change in this PR. This is a minor thing, but if you or someone else knows how to fix it, I'd appreciate a PR here 😄
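
One common Helm pattern for avoiding that stray line is to guard the include site with a with block, so that nothing is emitted when the value is unset; a minimal sketch of the idea, not necessarily how this chart will end up fixing it:

  template:
    metadata:
      labels:
        {{- include "k6-operator.selectorLabels" . | nindent 8 }}
        {{- /* emit pod labels only when .Values.podLabels is set, so no padded blank line appears otherwise */}}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}

With that guard, omitting podLabels produces no extra line, while setting it renders each label indented under labels:.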


@MarkDPierce
Contributor Author

MarkDPierce commented Nov 7, 2023

Hey, sorry for the delay. Work took priority.

I took a stab at the line breaks in your labels and annotations, but it might require a bit of refactoring. At least both of them have | default "" |, which suggests it's by design. With a bit of refactoring I managed to achieve the following:

---
# Source: k6-operator/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: release-name-system
  labels:
    app.kubernetes.io/name: release-name-k6-operator
    control-plane: "controller-manager"
    foo: bar
  annotations:
    bar: foo
---
# Source: k6-operator/templates/serviceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k6-operator-controller
  namespace: release-name-system
  labels:
    app.kubernetes.io/component: controller
    helm.sh/chart: k6-operator-2.0.0
    app.kubernetes.io/name: k6-operator
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.0.11"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: k6-operator
    foo: bar
  annotations:
    bar: foo

But I do think those changes would be out of scope for this PR.

@yorugac
Collaborator

yorugac commented Nov 8, 2023

@MarkDPierce yes, the line break would be out of scope, but thanks for taking a look. If you see a solution, please feel free to open a PR 🙂 ATM, I don't think default "" is a must, so long as the values are passed or omitted correctly 🤔

Either way, the podLabels change looks good, thank you for this!

@yorugac yorugac merged commit 696fead into grafana:main Nov 8, 2023
6 checks passed