
ClickHouseKeeperInstallation doesn't have a dedicated data volume (PVC) #1565

Closed
meektechie opened this issue Nov 19, 2024 · 13 comments

Comments

@meektechie

meektechie commented Nov 19, 2024

ClickHouseKeeperInstallation doesn't have a dedicated data volume (PVC); it uses an overlay volume, so when a pod is rotated the ClickHouse metadata is lost. Is there a particular reason for running the clickhouse-keeper pods without dedicated volumes?
[Screenshot 2024-11-19 at 2:45:59 PM]

Error:
last_error_message: Table is in readonly mode since table metadata was not found in zookeeper:
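A quick way to confirm that the keeper pods are writing to ephemeral storage rather than a PVC (a sketch; the namespace and StatefulSet name below are placeholders):

  # List PVCs in the keeper namespace - none are expected if no data volume template is set
  kubectl -n <namespace> get pvc

  # Inspect the generated StatefulSet: an empty result means no volumeClaimTemplates were rendered
  kubectl -n <namespace> get sts <keeper-statefulset> -o jsonpath='{.spec.volumeClaimTemplates}'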

@bakavets

bakavets commented Nov 19, 2024

Hi! Try adding dataVolumeClaimTemplate: default here:

...
  defaults:
    templates:
      # Templates are specified as default for all clusters
      podTemplate: default
+     dataVolumeClaimTemplate: default

  templates:
    podTemplates:
...
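For context, a minimal sketch of how that referenced template could be declared under spec.templates.volumeClaimTemplates (assuming the 0.24.x CRDs discussed below; the size and access mode here are placeholders, not values from this thread):

spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: default
  templates:
    volumeClaimTemplates:
    - name: default
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi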

@meektechie
Author

@bakavets I got the response below. My operator chart version is 0.23.7.

Error from server (BadRequest): error when creating "chk.yaml": ClickHouseKeeperInstallation in version "v1" cannot be handled as a ClickHouseKeeperInstallation: strict decoding error: unknown field "spec.defaults"
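That strict decoding error usually means the installed CRD predates the spec.defaults field. One way to check which fields the installed ClickHouseKeeperInstallation CRD actually exposes (a sketch, assuming the CRD publishes a structural schema and the chk short name used later in this thread):

  # Show the top-level spec fields known to the installed CRD;
  # spec.defaults should be listed once the 0.24.x CRDs are applied
  kubectl explain chk.spec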

@Slach
Collaborator

Slach commented Nov 20, 2024

@meektechie if you use Helm, upgrade your CRDs separately and upgrade the operator Helm chart to 0.24.0:

  kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml
  kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml
  kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml
  kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/clickhouse-operator/crds/CustomResourceDefinition-clickhousekeeperinstallations.clickhouse-keeper.altinity.com.yaml
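After applying the CRDs, a quick sanity check that the keeper CRD is registered (a sketch; the exact output depends on your kubectl version):

  # Confirm the ClickHouseKeeperInstallation CRD exists and list its served versions
  kubectl get crd clickhousekeeperinstallations.clickhouse-keeper.altinity.com \
    -o jsonpath='{.spec.versions[*].name}'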

@alex-zaitsev
Member

DO NOT use CHK with operator 0.23.7. It was experimental in that release (as stated in the release notes). GA support is in 0.24.x, and it is not compatible with the previous version. See the migration guide; the migration is not trivial: https://github.com/Altinity/clickhouse-operator/blob/master/docs/keeper_migration_from_23_to_24.md

@meektechie
Copy link
Author

@alex-zaitsev Does the new CRD support using private images? With the previous CRD we had that facility.

@Slach
Collaborator

Slach commented Nov 27, 2024

@meektechie did you try

apiVersion: v1
kind: Secret
metadata:
  name: image-pull-secret
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "<registry-url>": {
          "username": "<your-username>",
          "password": "<your-password>",
          "email": "<your-email>",
          "auth": "<base64-encoded-credentials>"
        }
      }
    }

---
apiVersion: clickhouse-keeper.altinity.com/v1
kind: ClickHouseKeeperInstallation
metadata:
  name: custom-image
spec:
  defaults:
    templates:
      podTemplate: private-image
  templates:
    podTemplates:
    - name: private-image
      spec:
        imagePullSecrets:
        - name: image-pull-secret
        containers:
        - name: clickhouse-keeper
          image: your-registry/repo/clickhouse-keeper:tag

?
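Instead of hand-writing the .dockerconfigjson, the same pull secret can also be created imperatively (a sketch; the registry URL and credentials are placeholders):

  kubectl -n <namespace> create secret docker-registry image-pull-secret \
    --docker-server=<registry-url> \
    --docker-username=<your-username> \
    --docker-password=<your-password> \
    --docker-email=<your-email>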

@meektechie
Author

@Slach This is my pod template. With 0.23.7 it was working perfectly, but with 0.24.0 it is not. I went through the CRD, but I couldn't find it there either.

    podTemplates:
    - metadata:
        creationTimestamp: null
      name: default
      spec:
        containers:
        - image: pvt/clickhouse-keeper:24.3.13.40-alpine
          imagePullPolicy: IfNotPresent
          name: clickhouse-keeper
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 501Mi
        imagePullSecrets:
        - name: dockerhub

From the STS generated by the ClickHouseKeeperInstallation:

spec:
  containers:
  - env:
    - name: CLICKHOUSE_DATA_DIR
      value: /var/lib/clickhouse-keeper
    image: clickhouse/clickhouse-keeper:latest
    imagePullPolicy: Always

@Slach
Collaborator

Slach commented Nov 27, 2024

@meektechie try 0.24.1. I checked the manifest and got

  imagePullSecrets:
  - name: image-pull-secret

as expected

@meektechie
Author

@Slach I thought 0.24.0 was the latest release. When I try 0.24.1 it throws an error.

Error: can't get a valid version for repositories altinity-clickhouse-operator. Try changing the version constraint in Chart.yaml
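That error typically means the locally cached chart index has no version matching the constraint. A quick way to check what the repo actually serves (a sketch, assuming the repo was added under the name altinity-clickhouse-operator as the error suggests):

  # Refresh the local chart index, then list the versions published for the operator chart
  helm repo update
  helm search repo altinity-clickhouse-operator --versions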

@Slach
Collaborator

Slach commented Nov 27, 2024

@meektechie

git clone https://github.com/Altinity/clickhouse-operator.git
cd clickhouse-operator
git fetch
git checkout 0.24.1
helm install -n <your-namespace> <your-release-name> ./deploy/helm/clickhouse-operator/
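After installing from the checked-out chart, it may be worth confirming which operator image is actually running (a sketch; the deployment name clickhouse-operator is an assumption and may differ depending on your release name):

  # Print the image used by the operator deployment to confirm the 0.24.1 upgrade took effect
  kubectl -n <your-namespace> get deployment clickhouse-operator \
    -o jsonpath='{.spec.template.spec.containers[*].image}'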

@meektechie
Author

@Slach Thanks for the immediate response. Let me check and update here.

@meektechie
Author

meektechie commented Nov 28, 2024

@Slach I have deployed 0.24.1, but the clickhouse-keeper pods still get the "clickhouse/clickhouse-keeper:latest" image. Below are the configuration for "chk" and the STS generated from the "chk" CR.

kubectl -n ch get chk clickhouse-keeper -o yaml | grep image

    - image: mypvtrepo/clickhouse-keeper:24.3.13.40-alpine
      imagePullPolicy: IfNotPresent

kubectl -n ch get sts chk-clickhouse-keeper-keeper-0-0 -o yaml | grep image
image: clickhouse/clickhouse-keeper:latest
imagePullPolicy: Always

chk.yaml

apiVersion: clickhouse-keeper.altinity.com/v1
kind: ClickHouseKeeperInstallation
metadata:
  name: clickhouse-keeper
spec:
  configuration:
    clusters:
    - layout:
        replicasCount: 3
      name: keeper
    settings:
      keeper_server/coordination_settings/raft_logs_level: information
      keeper_server/four_letter_word_white_list: '*'
      keeper_server/raft_configuration/server/port: "9444"
      keeper_server/storage_path: /var/lib/clickhouse-keeper
      keeper_server/tcp_port: "2181"
      listen_host: 0.0.0.0
      logger/console: "true"
      logger/level: trace
      prometheus/asynchronous_metrics: "true"
      prometheus/endpoint: /metrics
      prometheus/events: "true"
      prometheus/metrics: "true"
      prometheus/port: "7000"
      prometheus/status_info: "false"
  defaults:
    templates:
      dataVolumeClaimTemplate: data-volume
  templates:
    podTemplates:
    - name: default
      spec:
        containers:
        - image: mypvtrepo/clickhouse-keeper:24.3.13.40-alpine
          imagePullPolicy: IfNotPresent
          name: clickhouse-keeper
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 501Mi
        imagePullSecrets:
        - name: dockerhub
    volumeClaimTemplates:
    - name: data-volume
      reclaimPolicy: Retain
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: gp3

@Slach
Collaborator

Slach commented Nov 29, 2024

Try explicitly linking the pod template in defaults.templates:

spec:
  defaults:
    templates:
      podTemplate: default
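Put together with the dataVolumeClaimTemplate already present in the chk.yaml above, the defaults section would then look like this (a sketch restating the thread's own names, not a verified manifest):

spec:
  defaults:
    templates:
      podTemplate: default
      dataVolumeClaimTemplate: data-volume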
