
chore(deps): update helm release cluster to v0.1.3 #904

Open — wants to merge 1 commit into base: main
Conversation

@renovate renovate bot commented Dec 7, 2024

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| cluster (source) | patch | 0.1.0 -> 0.1.3 |

Release Notes

cloudnative-pg/charts (cluster)

v0.1.3

Compare Source

Deploys and manages a CloudNativePG cluster and its associated resources.

What's Changed

Full Changelog: cloudnative-pg/charts@cluster-v0.1.2...cluster-v0.1.3

v0.1.2

Compare Source

Deploys and manages a CloudNativePG cluster and its associated resources.

What's Changed
New Contributors

Full Changelog: cloudnative-pg/charts@cluster-v0.1.1...cluster-v0.1.2

v0.1.1

Compare Source

Deploys and manages a CloudNativePG cluster and its associated resources.

What's Changed
In the next cloudnative-pg release
New Contributors

Full Changelog: cloudnative-pg/charts@cluster-v0.1.0...cluster-v0.1.1


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

github-actions bot commented Dec 7, 2024

Changes Default Values
diff -U 4 -r out-default-values/target/backstage_cluster_default-values.out out-default-values/pr/backstage_cluster_default-values.out
--- out-default-values/target/backstage_cluster_default-values.out	2024-12-07 18:43:58.780763399 +0000
+++ out-default-values/pr/backstage_cluster_default-values.out	2024-12-07 18:43:29.636891957 +0000
@@ -200,8 +200,12 @@
   # and then blank the password of the postgres user by setting it to NULL.
   enableSuperuserAccess: true
   superuserSecret: ""
 
+  # -- Allow to disable PDB, mainly useful for upgrade of single-instance clusters or development purposes
+  # See: https://cloudnative-pg.io/documentation/current/kubernetes_upgrade/#pod-disruption-budgets
+  enablePDB: true
+
   # -- This feature enables declarative management of existing roles, as well as the creation of new roles if they are not
   # already present in the database.
   # See: https://cloudnative-pg.io/documentation/current/declarative_role_management/
   roles: []
@@ -280,8 +284,11 @@
     #   - CREATE EXTENSION IF NOT EXISTS vector;
     # postInitApplicationSQL: []
     # postInitTemplateSQL: []
 
+  # -- Configure the metadata of the generated service account
+  serviceAccountTemplate: {}
+
   additionalLabels: {}
   annotations: {}
 
 

github-actions bot commented Dec 7, 2024

Changes Rendered Chart
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-07 18:43:58.424765034 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-07 18:43:29.284893614 +0000
@@ -6,9 +6,9 @@
   name: release-name-cluster
   annotations:
     argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.1
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -35,8 +35,9 @@
   enableSuperuserAccess: true
   superuserSecret:
     name: cnpg-superuser-secret
   
+  enablePDB: true
   postgresql:
     shared_preload_libraries:
     pg_hba:
       []
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-07 18:43:58.424765034 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-07 18:43:29.284893614 +0000
@@ -3,9 +3,9 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.1
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -163,9 +163,9 @@
         - alert: CNPGClusterOffline
           annotations:
             summary: CNPG Cluster has no running instances!
             description: |-
-              CloudNativePG Cluster "{{ $labels.job }}" has no ready instances.
+              CloudNativePG Cluster "default/release-name-cluster" has no ready instances.
           
               Having an offline cluster means your applications will not be able to access the database, leading to
               potential service disruption and/or data loss.
             runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-07 18:43:58.424765034 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-07 18:43:29.288893594 +0000
@@ -31,9 +31,15 @@
               valueFrom:
                 secretKeyRef:
                   name: release-name-cluster-app
                   key: password
+            - name: PGDBNAME
+              valueFrom:
+                secretKeyRef:
+                  name: release-name-cluster-app
+                  key: dbname
+                  optional: true
           args:
             - "-c"
             - >-
               apk add postgresql-client &&
-              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432" -c 'SELECT 1'
+              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432/${PGDBNAME:-$PGUSER}" -c 'SELECT 1'
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-07 18:43:58.564764391 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-07 18:43:29.428892936 +0000
@@ -6,9 +6,9 @@
   name: release-name-cluster
   annotations:
     argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.1
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -35,8 +35,9 @@
   enableSuperuserAccess: true
   superuserSecret:
     name: cnpg-superuser-secret
   
+  enablePDB: true
   postgresql:
     shared_preload_libraries:
     pg_hba:
       []
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-07 18:43:58.564764391 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-07 18:43:29.428892936 +0000
@@ -3,9 +3,9 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.1
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -163,9 +163,9 @@
         - alert: CNPGClusterOffline
           annotations:
             summary: CNPG Cluster has no running instances!
             description: |-
-              CloudNativePG Cluster "{{ $labels.job }}" has no ready instances.
+              CloudNativePG Cluster "default/release-name-cluster" has no ready instances.
           
               Having an offline cluster means your applications will not be able to access the database, leading to
               potential service disruption and/or data loss.
             runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-07 18:43:58.564764391 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-07 18:43:29.428892936 +0000
@@ -31,9 +31,15 @@
               valueFrom:
                 secretKeyRef:
                   name: release-name-cluster-app
                   key: password
+            - name: PGDBNAME
+              valueFrom:
+                secretKeyRef:
+                  name: release-name-cluster-app
+                  key: dbname
+                  optional: true
           args:
             - "-c"
             - >-
               apk add postgresql-client &&
-              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432" -c 'SELECT 1'
+              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432/${PGDBNAME:-$PGUSER}" -c 'SELECT 1'

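The ping-test change above swaps a hard-coded connection target for one that appends a database path, using POSIX default expansion (`${PGDBNAME:-$PGUSER}`) so the test still connects when the app secret lacks the optional `dbname` key. A minimal shell sketch of that fallback (variable values here are hypothetical):

```shell
# ${VAR:-fallback} yields $VAR when it is set and non-empty, else the fallback.
PGUSER=app          # hypothetical user name taken from the secret
unset PGDBNAME      # simulate a secret without the optional dbname key
echo "db=${PGDBNAME:-$PGUSER}"   # prints "db=app" (falls back to the user)

PGDBNAME=backstage  # now the key exists in the secret
echo "db=${PGDBNAME:-$PGUSER}"   # prints "db=backstage"
```

Because the env var comes from a `secretKeyRef` marked `optional: true`, an unset `PGDBNAME` is the expected state for secrets created by older chart versions; the `:-` form (rather than `-`) also covers the case where the key exists but is empty.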
@renovate renovate bot changed the title from chore(deps): update helm release cluster to v0.1.1 to chore(deps): update helm release cluster to v0.1.2 on Dec 17, 2024
@renovate renovate bot force-pushed the renovate/cluster-0.x branch from 95f3246 to 80f863a on December 17, 2024 15:16
Changes Default Values
diff -U 4 -r out-default-values/target/backstage_cluster_default-values.out out-default-values/pr/backstage_cluster_default-values.out
--- out-default-values/target/backstage_cluster_default-values.out	2024-12-17 15:17:20.074841399 +0000
+++ out-default-values/pr/backstage_cluster_default-values.out	2024-12-17 15:16:50.753340258 +0000
@@ -1,8 +1,10 @@
 # -- Override the name of the chart
 nameOverride: ""
 # -- Override the full name of the chart
 fullnameOverride: ""
+# -- Override the namespace of the chart
+namespaceOverride: ""
 
 ###
 # -- Type of the CNPG database. Available types:
 # * `postgresql`
@@ -200,8 +202,12 @@
   # and then blank the password of the postgres user by setting it to NULL.
   enableSuperuserAccess: true
   superuserSecret: ""
 
+  # -- Allow to disable PDB, mainly useful for upgrade of single-instance clusters or development purposes
+  # See: https://cloudnative-pg.io/documentation/current/kubernetes_upgrade/#pod-disruption-budgets
+  enablePDB: true
+
   # -- This feature enables declarative management of existing roles, as well as the creation of new roles if they are not
   # already present in the database.
   # See: https://cloudnative-pg.io/documentation/current/declarative_role_management/
   roles: []
@@ -255,8 +261,12 @@
   postgresql:
     # -- PostgreSQL configuration options (postgresql.conf)
     parameters: {}
       # max_connections: 300
+    # -- Quorum-based Synchronous Replication
+    synchronous: {}
+     # method: any
+     # number: 1
     # -- PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file)
     pg_hba: []
       # - host all all 10.244.0.0/16 md5
     # -- PostgreSQL User Name Maps rules (lines to be appended to the pg_ident.conf file)
@@ -280,8 +290,11 @@
     #   - CREATE EXTENSION IF NOT EXISTS vector;
     # postInitApplicationSQL: []
     # postInitTemplateSQL: []
 
+  # -- Configure the metadata of the generated service account
+  serviceAccountTemplate: {}
+
   additionalLabels: {}
   annotations: {}
 
 

Changes Rendered Chart
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 15:17:19.693834898 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 15:16:50.387334284 +0000
@@ -3,12 +3,13 @@
 apiVersion: postgresql.cnpg.io/v1
 kind: Cluster
 metadata:
   name: release-name-cluster
+  namespace: default
   annotations:
     argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.2
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -20,33 +21,22 @@
   postgresUID: 26
   postgresGID: 26
   storage:
     size: 8Gi
-    storageClass: 
   walStorage:
     size: 1Gi
-    storageClass: 
   affinity:
     topologyKey: topology.kubernetes.io/zone
-  priorityClassName: 
 
   primaryUpdateMethod: switchover
   primaryUpdateStrategy: unsupervised
   logLevel: info
   enableSuperuserAccess: true
   superuserSecret:
     name: cnpg-superuser-secret
   
+  enablePDB: true
   postgresql:
-    shared_preload_libraries:
-    pg_hba:
-      []
-    pg_ident:
-      []
-    parameters:
-      {}
-    
-
   managed:
     roles:
       - comment: backstage-admin-user
         createdb: true
@@ -65,5 +55,4 @@
   
   
   bootstrap:
     initdb:
-      postInitApplicationSQL:
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 15:17:19.693834898 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 15:16:50.387334284 +0000
@@ -3,14 +3,15 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.2
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
   name: release-name-cluster-alert-rules
+  namespace: default
 spec:
   groups:
     - name: cloudnative-pg/release-name-cluster
       rules:
@@ -163,9 +164,9 @@
         - alert: CNPGClusterOffline
           annotations:
             summary: CNPG Cluster has no running instances!
             description: |-
-              CloudNativePG Cluster "{{ $labels.job }}" has no ready instances.
+              CloudNativePG Cluster "default/release-name-cluster" has no ready instances.
           
               Having an offline cluster means your applications will not be able to access the database, leading to
               potential service disruption and/or data loss.
             runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 15:17:19.693834898 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 15:16:50.387334284 +0000
@@ -3,8 +3,9 @@
 apiVersion: batch/v1
 kind: Job
 metadata:
   name: release-name-cluster-ping-test
+  namespace: default
   labels:
     app.kubernetes.io/component: database-ping-test
   annotations:
     "helm.sh/hook": test
@@ -31,9 +32,15 @@
               valueFrom:
                 secretKeyRef:
                   name: release-name-cluster-app
                   key: password
+            - name: PGDBNAME
+              valueFrom:
+                secretKeyRef:
+                  name: release-name-cluster-app
+                  key: dbname
+                  optional: true
           args:
             - "-c"
             - >-
               apk add postgresql-client &&
-              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432" -c 'SELECT 1'
+              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432/${PGDBNAME:-$PGUSER}" -c 'SELECT 1'
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 15:17:19.775836297 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 15:16:50.466335573 +0000
@@ -3,12 +3,13 @@
 apiVersion: postgresql.cnpg.io/v1
 kind: Cluster
 metadata:
   name: release-name-cluster
+  namespace: default
   annotations:
     argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.2
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -20,33 +21,22 @@
   postgresUID: 26
   postgresGID: 26
   storage:
     size: 8Gi
-    storageClass: 
   walStorage:
     size: 1Gi
-    storageClass: 
   affinity:
     topologyKey: topology.kubernetes.io/zone
-  priorityClassName: 
 
   primaryUpdateMethod: switchover
   primaryUpdateStrategy: unsupervised
   logLevel: info
   enableSuperuserAccess: true
   superuserSecret:
     name: cnpg-superuser-secret
   
+  enablePDB: true
   postgresql:
-    shared_preload_libraries:
-    pg_hba:
-      []
-    pg_ident:
-      []
-    parameters:
-      {}
-    
-
   managed:
     roles:
       - comment: backstage-admin-user
         createdb: true
@@ -65,5 +55,4 @@
   
   
   bootstrap:
     initdb:
-      postInitApplicationSQL:
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 15:17:19.775836297 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 15:16:50.466335573 +0000
@@ -3,14 +3,15 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.2
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
   name: release-name-cluster-alert-rules
+  namespace: default
 spec:
   groups:
     - name: cloudnative-pg/release-name-cluster
       rules:
@@ -163,9 +164,9 @@
         - alert: CNPGClusterOffline
           annotations:
             summary: CNPG Cluster has no running instances!
             description: |-
-              CloudNativePG Cluster "{{ $labels.job }}" has no ready instances.
+              CloudNativePG Cluster "default/release-name-cluster" has no ready instances.
           
               Having an offline cluster means your applications will not be able to access the database, leading to
               potential service disruption and/or data loss.
             runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 15:17:19.775836297 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 15:16:50.466335573 +0000
@@ -3,8 +3,9 @@
 apiVersion: batch/v1
 kind: Job
 metadata:
   name: release-name-cluster-ping-test
+  namespace: default
   labels:
     app.kubernetes.io/component: database-ping-test
   annotations:
     "helm.sh/hook": test
@@ -31,9 +32,15 @@
               valueFrom:
                 secretKeyRef:
                   name: release-name-cluster-app
                   key: password
+            - name: PGDBNAME
+              valueFrom:
+                secretKeyRef:
+                  name: release-name-cluster-app
+                  key: dbname
+                  optional: true
           args:
             - "-c"
             - >-
               apk add postgresql-client &&
-              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432" -c 'SELECT 1'
+              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432/${PGDBNAME:-$PGUSER}" -c 'SELECT 1'

@renovate renovate bot changed the title from chore(deps): update helm release cluster to v0.1.2 to chore(deps): update helm release cluster to v0.1.3 on Dec 17, 2024
@renovate renovate bot force-pushed the renovate/cluster-0.x branch from 80f863a to 89f8fec on December 17, 2024 19:41
Changes Default Values
diff -U 4 -r out-default-values/target/backstage_cluster_default-values.out out-default-values/pr/backstage_cluster_default-values.out
--- out-default-values/target/backstage_cluster_default-values.out	2024-12-17 19:42:44.143438299 +0000
+++ out-default-values/pr/backstage_cluster_default-values.out	2024-12-17 19:42:14.273219938 +0000
@@ -1,8 +1,10 @@
 # -- Override the name of the chart
 nameOverride: ""
 # -- Override the full name of the chart
 fullnameOverride: ""
+# -- Override the namespace of the chart
+namespaceOverride: ""
 
 ###
 # -- Type of the CNPG database. Available types:
 # * `postgresql`
@@ -200,8 +202,12 @@
   # and then blank the password of the postgres user by setting it to NULL.
   enableSuperuserAccess: true
   superuserSecret: ""
 
+  # -- Allow to disable PDB, mainly useful for upgrade of single-instance clusters or development purposes
+  # See: https://cloudnative-pg.io/documentation/current/kubernetes_upgrade/#pod-disruption-budgets
+  enablePDB: true
+
   # -- This feature enables declarative management of existing roles, as well as the creation of new roles if they are not
   # already present in the database.
   # See: https://cloudnative-pg.io/documentation/current/declarative_role_management/
   roles: []
@@ -255,8 +261,12 @@
   postgresql:
     # -- PostgreSQL configuration options (postgresql.conf)
     parameters: {}
       # max_connections: 300
+    # -- Quorum-based Synchronous Replication
+    synchronous: {}
+     # method: any
+     # number: 1
     # -- PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file)
     pg_hba: []
       # - host all all 10.244.0.0/16 md5
     # -- PostgreSQL User Name Maps rules (lines to be appended to the pg_ident.conf file)
@@ -280,8 +290,11 @@
     #   - CREATE EXTENSION IF NOT EXISTS vector;
     # postInitApplicationSQL: []
     # postInitTemplateSQL: []
 
+  # -- Configure the metadata of the generated service account
+  serviceAccountTemplate: {}
+
   additionalLabels: {}
   annotations: {}
 
 

Changes Rendered Chart
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 19:42:43.783435781 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 19:42:13.912217542 +0000
@@ -3,12 +3,13 @@
 apiVersion: postgresql.cnpg.io/v1
 kind: Cluster
 metadata:
   name: release-name-cluster
+  namespace: default
   annotations:
     argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.3
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -20,33 +21,22 @@
   postgresUID: 26
   postgresGID: 26
   storage:
     size: 8Gi
-    storageClass: 
   walStorage:
     size: 1Gi
-    storageClass: 
   affinity:
     topologyKey: topology.kubernetes.io/zone
-  priorityClassName: 
 
   primaryUpdateMethod: switchover
   primaryUpdateStrategy: unsupervised
   logLevel: info
   enableSuperuserAccess: true
   superuserSecret:
     name: cnpg-superuser-secret
   
+  enablePDB: true
   postgresql:
-    shared_preload_libraries:
-    pg_hba:
-      []
-    pg_ident:
-      []
-    parameters:
-      {}
-    
-
   managed:
     roles:
       - comment: backstage-admin-user
         createdb: true
@@ -65,5 +55,4 @@
   
   
   bootstrap:
     initdb:
-      postInitApplicationSQL:
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 19:42:43.783435781 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 19:42:13.912217542 +0000
@@ -3,14 +3,15 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.3
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
   name: release-name-cluster-alert-rules
+  namespace: default
 spec:
   groups:
     - name: cloudnative-pg/release-name-cluster
       rules:
@@ -163,9 +164,9 @@
         - alert: CNPGClusterOffline
           annotations:
             summary: CNPG Cluster has no running instances!
             description: |-
-              CloudNativePG Cluster "{{ $labels.job }}" has no ready instances.
+              CloudNativePG Cluster "default/release-name-cluster" has no ready instances.
           
               Having an offline cluster means your applications will not be able to access the database, leading to
               potential service disruption and/or data loss.
             runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
diff -U 4 -r out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml
--- out/target/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 19:42:43.783435781 +0000
+++ out/pr/backstage/values-demo-metalstack.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 19:42:13.913217549 +0000
@@ -3,8 +3,9 @@
 apiVersion: batch/v1
 kind: Job
 metadata:
   name: release-name-cluster-ping-test
+  namespace: default
   labels:
     app.kubernetes.io/component: database-ping-test
   annotations:
     "helm.sh/hook": test
@@ -31,9 +32,15 @@
               valueFrom:
                 secretKeyRef:
                   name: release-name-cluster-app
                   key: password
+            - name: PGDBNAME
+              valueFrom:
+                secretKeyRef:
+                  name: release-name-cluster-app
+                  key: dbname
+                  optional: true
           args:
             - "-c"
             - >-
               apk add postgresql-client &&
-              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432" -c 'SELECT 1'
+              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432/${PGDBNAME:-$PGUSER}" -c 'SELECT 1'
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 19:42:43.857436298 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/cluster.yaml	2024-12-17 19:42:13.991218065 +0000
@@ -3,12 +3,13 @@
 apiVersion: postgresql.cnpg.io/v1
 kind: Cluster
 metadata:
   name: release-name-cluster
+  namespace: default
   annotations:
     argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.3
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
@@ -20,33 +21,22 @@
   postgresUID: 26
   postgresGID: 26
   storage:
     size: 8Gi
-    storageClass: 
   walStorage:
     size: 1Gi
-    storageClass: 
   affinity:
     topologyKey: topology.kubernetes.io/zone
-  priorityClassName: 
 
   primaryUpdateMethod: switchover
   primaryUpdateStrategy: unsupervised
   logLevel: info
   enableSuperuserAccess: true
   superuserSecret:
     name: cnpg-superuser-secret
   
+  enablePDB: true
   postgresql:
-    shared_preload_libraries:
-    pg_hba:
-      []
-    pg_ident:
-      []
-    parameters:
-      {}
-    
-
   managed:
     roles:
       - comment: backstage-admin-user
         createdb: true
@@ -65,5 +55,4 @@
   
   
   bootstrap:
     initdb:
-      postInitApplicationSQL:
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 19:42:43.858436305 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/prometheus-rule.yaml	2024-12-17 19:42:13.991218065 +0000
@@ -3,14 +3,15 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
-    helm.sh/chart: cluster-0.1.0
+    helm.sh/chart: cluster-0.1.3
     app.kubernetes.io/name: cluster
     app.kubernetes.io/instance: release-name
     app.kubernetes.io/part-of: cloudnative-pg
     app.kubernetes.io/managed-by: Helm
   name: release-name-cluster-alert-rules
+  namespace: default
 spec:
   groups:
     - name: cloudnative-pg/release-name-cluster
       rules:
@@ -163,9 +164,9 @@
         - alert: CNPGClusterOffline
           annotations:
             summary: CNPG Cluster has no running instances!
             description: |-
-              CloudNativePG Cluster "{{ $labels.job }}" has no ready instances.
+              CloudNativePG Cluster "default/release-name-cluster" has no ready instances.
           
               Having an offline cluster means your applications will not be able to access the database, leading to
               potential service disruption and/or data loss.
             runbook_url: https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/docs/runbooks/CNPGClusterOffline.md
diff -U 4 -r out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml
--- out/target/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 19:42:43.858436305 +0000
+++ out/pr/backstage/values-uibklab.yaml/sx-backstage/charts/cluster/templates/tests/ping.yaml	2024-12-17 19:42:13.991218065 +0000
@@ -3,8 +3,9 @@
 apiVersion: batch/v1
 kind: Job
 metadata:
   name: release-name-cluster-ping-test
+  namespace: default
   labels:
     app.kubernetes.io/component: database-ping-test
   annotations:
     "helm.sh/hook": test
@@ -31,9 +32,15 @@
               valueFrom:
                 secretKeyRef:
                   name: release-name-cluster-app
                   key: password
+            - name: PGDBNAME
+              valueFrom:
+                secretKeyRef:
+                  name: release-name-cluster-app
+                  key: dbname
+                  optional: true
           args:
             - "-c"
             - >-
               apk add postgresql-client &&
-              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432" -c 'SELECT 1'
+              psql "postgresql://$PGUSER:$PGPASS@release-name-cluster-rw.default.svc.cluster.local:5432/${PGDBNAME:-$PGUSER}" -c 'SELECT 1'
