
Option to override default volumeclaimtemplate for statefulsets #2554

Conversation

@ThelonKarrde commented Jul 26, 2022

What this PR does

This PR adds an option to the Helm chart to override the default volumeClaimTemplates, which is very limited in terms of customization; for example, you can't use local-volume storage with it.

This is not a breaking change and it's fully compatible with current installations.
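
For illustration, an override via the new value could look roughly like this in values.yaml. The claim name "storage" matches the existing claim mentioned later in this thread, while the storage class and size are placeholders, and the exact shape depends on how the template consumes the value:

alertmanager:
  persistentVolume:
    enabled: true
    volumeClaimTemplates:
      - metadata:
          name: storage
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: local-storage
          resources:
            requests:
              storage: 10Gi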

Which issue(s) this PR fixes or relates to

Fixes #

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

@CLAassistant commented Jul 26, 2022

CLA assistant check
All committers have signed the CLA.

@krajorama added the helm label Jul 27, 2022
@krajorama
Contributor

Hi,

Thank you for the contribution. What is it that you're trying to achieve? Is it adding an extra PVC, or replacing the current "storage" claim with something different?

thanks, krajo

@ThelonKarrde
Author

> Hi,
>
> Thank you for the contribution. What is it that you're trying to achieve? Is it adding an extra PVC, or replacing the current "storage" claim with something different?
>
> thanks, krajo

Yes, the idea is to be able to replace the original claims and/or add more claims directly in the StatefulSet template. I understand it's possible to override the directories Mimir writes to, and to attach additional, self-made claims, but I believe that if it's all nested in the original StatefulSet it's far easier to manage, since everything is in one place.
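
For reference, the workaround mentioned above (self-made claims attached from outside the chart) would look roughly like the sketch below. The extraVolumes/extraVolumeMounts value names are an assumption, in the spirit of the extraEnv-style values discussed later in this thread, and the names and paths are placeholders:

ingester:
  extraVolumes:
    # assumed value name; attaches a self-made PVC created outside the chart
    - name: extra-storage
      persistentVolumeClaim:
        claimName: ingester-extra-storage
  extraVolumeMounts:
    - name: extra-storage
      mountPath: /data/extra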

@dimitarvdimitrov
Contributor

An alternative I see to this is to extend the configuration set under *.persistentVolume to support your use case. I think this is worth considering, because then when using the chart you only need to override what you care about with regard to PVCs, rather than redefine the whole PVC block. These are the fields of a PersistentVolumeClaim (taken from the API spec):

* accessModes ([]string)                     # configurable

* selector (LabelSelector)                   # not configurable

* resources (ResourceRequirements)

  * resources.limits (map[string]Quantity)   # not configurable, but does not seem related to your use case

  * resources.requests (map[string]Quantity) # configurable

* volumeName (string)                        # not configurable

* storageClassName (string)                  # configurable

* volumeMode (string)                        # not configurable, but I doubt someone would want to run with a Block volume mode instead of Filesystem. Mimir requires a filesystem

@ThelonKarrde in your PR description you mention that you cannot use local volume storage. Is it the volumeName or the selector that you need in order to use local volumes? Or is it something I missed?
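
To make this alternative concrete, a sketch of such an extension in values.yaml might look like the following; the volumeName and selector keys are hypothetical additions, and the existing key names and values are indicative only, not the chart's exact schema:

alertmanager:
  persistentVolume:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 10Gi
    storageClass: local-storage
    # hypothetical new keys discussed here:
    volumeName: mimir-alertmanager
    selector:
      matchLabels:
        app: mimir-alertmanager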

@@ -18,6 +18,10 @@ spec:
{{- toYaml .Values.alertmanager.statefulStrategy | nindent 4 }}
serviceName: {{ template "mimir.fullname" . }}-alertmanager
{{- if .Values.alertmanager.persistentVolume.enabled }}
{{- if .Values.alertmanager.persistentVolume.volumeClaimTemplates }}
Contributor

To be more consistent (with extraEnv, extraEnvFrom, etc.) the new value (if we decide to add it) should be called extraVolumeClaimTemplates, and it should be added after the existing conditional section.
It could be added from line 42 of the original file as something like:

    {{- with .Values.alertmanager.persistentVolume.extraVolumeClaimTemplates }}
    {{- toYaml . | nindent 4 }}
    {{- end }}

The same would apply to the other components.
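
With that shape, a user wanting an additional claim on the alertmanager would set something like the following in values.yaml (the names, storage class, and size are illustrative):

alertmanager:
  persistentVolume:
    extraVolumeClaimTemplates:
      - metadata:
          name: extra-storage
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: local-storage
          resources:
            requests:
              storage: 5Gi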

@ThelonKarrde
Author

@dimitarvdimitrov actually that's my bad. I mixed up PVs and PVCs, so with the initial goal this PR is pointless, since I'm able to do what I need. Essentially, I can generate local-volume PVs for the PVCs in a separate template. But maybe I should include that here as an option to generate them with correct naming, i.e. $pv-name-$pod-name-$count?

Other than that, I can refactor the PR to what @krajorama suggested and just keep it as an extraPersistentVolumeTemplates option if needed.

@dimitarvdimitrov
Contributor

> Essentially, I can generate local-volume PVs for the PVCs in a separate template. But maybe I should include that here as an option to generate them with correct naming, i.e. $pv-name-$pod-name-$count?

Aren't PVs resources that k8s operators manage outside of PVCs? My impression was that the separation between PVs and PVCs exists so that they can be decoupled. Having said that, I don't think PVs are in scope of the mimir helm chart. But my k8s understanding is not super up-to-date, I'm happy to be corrected.

@ThelonKarrde
Author

> Aren't PVs resources that k8s operators manage outside of PVCs? My impression was that the separation between PVs and PVCs exists so that they can be decoupled. Having said that, I don't think PVs are in scope of the mimir helm chart. But my k8s understanding is not super up-to-date, I'm happy to be corrected.

Yes, generally PVs are managed outside of the installation. I'm only considering the specific case where it's a local mount of directories from the host.
Something like:

{{- range until (int (index .Values "mimir-distributed" "ingester" "replicas")) }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-store-ingester-{{ . }}
spec:
  capacity:
    # use $ to reach the root context; inside the range, . is the replica index
    storage: {{ index $.Values "mimir-distributed" "ingester" "persistentVolume" "size" }}
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mimir/ingester
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: role
          operator: In
          values:
          - mimir
---
{{- end }}

The only reason it would be nice to have this in the main chart is the ability to reuse the naming helpers, e.g. {{ include "mimir.resourceName" (dict "ctx" . "component" "ingester") }}, so it's easier to predict the names for the PVs.

But that is a very specific use case, I think, so overall this PR can be closed, or alternatively repurposed to just add an extraPersistentVolumeClaims option.

@ThelonKarrde
Author

@dimitarvdimitrov @krajorama just a follow-up here: should this be closed, or remain open with changes to discuss?

@dimitarvdimitrov
Contributor

I think we are ok to close this since it affects your use case only marginally. If it turns out that there's a bigger need for it, we can consider reopening and merging. Thanks for the proposal and patience :)

@ThelonKarrde deleted the feature/option_to_override_volumeclaimtemplates branch August 10, 2022 15:07
@matagyula

Please excuse the necro-bump; this was the only remotely related topic I could find for an issue I am facing.

I am struggling to configure our Mimir deployment on EKS with Fargate profiles: EBS volumes can't be used, so I have set up EFS. I have also created the PVs in advance for the Mimir components that need them (alertmanager, compactor, ingester, and store-gateway).

The problem is that I am unable to provide the volumeName and storageClassName parameters to the aforementioned components via the values.yaml file. The helm install command processes the parameters (they are visible when I do a dry run), but the resulting Kubernetes resources (PVCs) disregard them completely. Looking at the discussion thread, I believe it's because this part of the configuration (the StatefulSets' volumeClaimTemplates) isn't templated.

Thank you for any and all inputs in advance.

Snippet of the values.yaml:

alertmanager:
  persistentVolume:
    enabled: true
    storageClassName: efs-sc
    volumeName: mimir-alertmanager
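
For comparison, this is roughly what the rendered volumeClaimTemplates entry would need to contain for the pre-created EFS PVs to bind; the field names come from the core PersistentVolumeClaim API, and the values here are illustrative, not the chart's current output:

volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      accessModes:
        - ReadWriteMany              # EFS volumes are typically mounted RWX
      resources:
        requests:
          storage: 10Gi
      storageClassName: efs-sc       # must match the pre-created PV
      volumeName: mimir-alertmanager # pins the claim to that specific PV

Note that a PV can bind to only one claim, so pinning volumeName in a claim template only works cleanly for single-replica components.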
