chore: deprecate ce helm charts and publish ee helm charts #37794
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

This pull request introduces significant enhancements to the Helm chart deployment configuration for Appsmith, focusing on improved dependency management, authentication options, and infrastructure flexibility. The changes span multiple configuration files, adding support for PostgreSQL, external secrets, metrics, KEDA scaling, and more robust service configurations. The update increases the application version from 2.3.0 to 3.6.0, reflecting the substantial architectural improvements.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Helm as Helm Chart
    participant K8s as Kubernetes
    participant DB as PostgreSQL
    participant Metrics as Prometheus
    Helm->>K8s: Deploy Application
    K8s->>DB: Initialize Database
    K8s->>Metrics: Configure Monitoring
    K8s->>K8s: Set Up Scaling Rules
    K8s->>K8s: Configure Secrets
```
Possibly Related PRs
Suggested Labels
Suggested Reviewers
Poem
📜 Recent review details

Configuration used: .coderabbit.yaml
📒 Files selected for processing (3)
Actionable comments posted: 14
🧹 Outside diff range and nitpick comments (27)
deploy/helm/templates/trustedCA.yaml (2)
Lines 4-8: Consider adding annotations for better management

While the current metadata is correct, consider adding annotations to track the certificate's origin and purpose.

```diff
 metadata:
   name: {{ include "appsmith.fullname" . }}-trustedca
   namespace: {{ include "appsmith.namespace" . }}
   labels: {{- include "appsmith.labels" . | nindent 4 }}
+  annotations:
+    description: "Custom CA certificates for secure connections"
+    "helm.sh/hook": "pre-install,pre-upgrade"
```
Lines 1-13: Add mount instructions in documentation

The ConfigMap is correctly defined, but users need to know how to properly mount and use these certificates.
Consider documenting:
- Required certificate format
- How to mount the certificates in pods (see the sketch after this list)
- Environment variables needed to use the certificates
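A minimal sketch of one way to mount the ConfigMap, assuming `appsmith.fullname` renders to `appsmith`; the volume name is illustrative, and the review notes elsewhere that certificates land in /etc/ssl/certs:

```yaml
# Hypothetical pod spec fragment mounting the trusted-CA ConfigMap.
spec:
  containers:
    - name: appsmith
      volumeMounts:
        - name: trusted-ca            # illustrative volume name
          mountPath: /etc/ssl/certs   # directory the application reads CA certs from
          readOnly: true
  volumes:
    - name: trusted-ca
      configMap:
        name: appsmith-trustedca      # rendered from appsmith.fullname + "-trustedca"
```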
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/secret.yaml (2)
Lines 1-9: LGTM! Consider adding annotations for secret rotation.

The Secret resource structure is well-defined with proper templating. Consider adding annotations to support secret rotation if using external secret management solutions.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
Lines 1-16: Consider adding security metadata and type validation.

Enhance the template with:
- an `immutable: true` field to prevent runtime modifications
- schema validation for `.Values.secrets` in values.schema.json

```diff
 metadata:
   name: {{ include "appsmith.fullname" . }}
   namespace: {{ include "appsmith.namespace" . }}
   labels: {{- include "appsmith.labels" . | nindent 4 }}
+  {{- if .Values.secretAnnotations }}
+  annotations:
+    {{- toYaml .Values.secretAnnotations | nindent 4 }}
+  {{- end }}
 type: Opaque
+immutable: true
 data:
```
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 13-13: wrong indentation: expected 0 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 0 but found 2
(indentation)
[warning] 15-15: wrong indentation: expected 0 but found 2
(indentation)
deploy/helm/templates/pdb.yml (1)
Lines 12-15: Consider adding comments to document PDB configuration.

Adding comments would help users understand the impact of the `minAvailable` setting on pod evictions during cluster maintenance.

```diff
 spec:
+  # Ensure at least this many pods are available during voluntary disruptions
   minAvailable: {{ .Values.podDisruptionBudgets.minAvailable }}
   selector:
```
deploy/helm/templates/external-secrets.yaml (1)
Lines 1-6: Consider API stability and secretstore configuration

The external-secrets.io API is in beta (v1beta1). Additionally, the secretstore name is hardcoded, which reduces flexibility.
Consider:
- Adding a comment about potential API changes
- Making the secretstore name configurable via values.yaml
```diff
 secretStoreRef:
-  name: secretstore
+  name: {{ .Values.externalSecrets.secretStore | default "secretstore" }}
```
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/scaledobject.yml (1)
Lines 18-21: Add comments explaining fallback behavior

Consider adding template comments to explain when and how the fallback configuration is used.

```diff
+{{/* Fallback configuration is used when triggers cannot be evaluated */}}
 {{- with .Values.keda.fallback }}
 fallback:
   {{- toYaml . | nindent 4 }}
 {{- end }}
```
deploy/helm/templates/headless-svc.yaml (2)
Lines 26-29: Consider templating the port configuration and specifying the protocol.

For better flexibility and clarity:

```diff
 ports:
   - name: http
-    port: 8080
-    targetPort: 8080
+    port: {{ .Values.service.port | default 8080 }}
+    targetPort: {{ .Values.service.targetPort | default 8080 }}
+    protocol: TCP
```
Lines 22-25: Consider adding topology spread constraints for better pod distribution.

The current configuration handles basic networking, but for improved resilience in multi-zone clusters, consider adding topology keys in your deployment configuration to ensure proper pod distribution across zones when using this headless service for pod-to-pod communication.
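A hedged sketch of such constraints in the pod template; the label value is an assumption and must match the chart's selector labels:

```yaml
# Hypothetical deployment fragment: spread replicas evenly across zones.
spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway   # prefer, but do not block, scheduling
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: appsmith   # assumed selector label
```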
deploy/helm/templates/persistentVolumeClaim.yaml (1)
Lines 7-10: Improve annotations indentation

While functional, the annotations section could be more readable with consistent indentation.

```diff
 {{- with .Values.persistence.annotations }}
 annotations:
-{{ toYaml . | indent 4 }}
+  {{- toYaml . | nindent 4 }}
 {{- end }}
```
deploy/helm/templates/service-metrics.yaml (3)
Lines 1-11: Consider adding descriptive annotations

While the Prometheus annotations are correct, consider adding standard Kubernetes annotations like `description` and `service.beta.kubernetes.io/description` to improve service discoverability and documentation.

```diff
 annotations:
   prometheus.io/port: {{ quote .Values.metrics.port }}
   prometheus.io/scrape: "true"
+  description: "Metrics endpoint for Prometheus scraping"
+  service.beta.kubernetes.io/description: "Exposes application metrics for monitoring"
```
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
Lines 13-19: Separate metrics service configuration from main service

The metrics service is currently sharing configuration with the main service (`.Values.service.*`). This coupling could lead to unintended configuration changes. Consider using dedicated metrics service configuration.

```diff
-  type: {{ .Values.service.type }}
-  {{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
-  clusterIP: {{ .Values.service.clusterIP }}
+  type: {{ .Values.metrics.service.type | default "ClusterIP" }}
+  {{- if and (eq .Values.metrics.service.type "ClusterIP") .Values.metrics.service.clusterIP }}
+  clusterIP: {{ .Values.metrics.service.clusterIP }}
```
Lines 20-28: Improve port configuration

Two suggestions for the ports configuration:
- Use a metrics-specific port name instead of the generic service.portName
- Consider simplifying the nodePort configuration

```diff
 ports:
-  - name: {{ .Values.service.portName }}
+  - name: metrics
     port: {{ .Values.metrics.port }}
     targetPort: metrics
-    {{- if and (or (eq .Values.service.type "LoadBalancer") (eq .Values.service.type "NodePort")) .Values.service.nodePort }}
-    nodePort: {{ .Values.service.nodePort }}
-    {{- else if eq .Values.service.type "ClusterIP" }}
-    nodePort: null
-    {{- end }}
+    {{- with .Values.metrics.service.nodePort }}
+    nodePort: {{ . }}
+    {{- end }}
```
deploy/helm/templates/service.yaml (1)
Lines 16-20: Consider parameterizing metrics configuration

The metrics port and path are hardcoded. Consider making these configurable through values.yaml for better flexibility.

```diff
 {{- if .Values.metrics.enabled }}
-prometheus.io/scrape: "true"
-prometheus.io/port: "8080"
-prometheus.io/path: "/actuator/prometheus"
+prometheus.io/scrape: {{ .Values.metrics.scrape | default "true" | quote }}
+prometheus.io/port: {{ .Values.metrics.port | default "8080" | quote }}
+prometheus.io/path: {{ .Values.metrics.path | default "/actuator/prometheus" | quote }}
 {{- end }}
```
deploy/helm/templates/hpa.yml (2)
Lines 10-15: Consider configuring scaling behavior

The HPA could benefit from explicit scaling behavior to dampen scaling churn. Note that with the autoscaling/v2 API this is configured via `spec.behavior` rather than annotations:

```diff
 spec:
+  behavior:
+    scaleDown:
+      stabilizationWindowSeconds: 300
+    scaleUp:
+      stabilizationWindowSeconds: 60
```
Lines 21-27: Consider adding custom metrics support

The current implementation only supports CPU and memory metrics. Consider adding extension points for custom metrics, such as queue length or request latency.
Also applies to: 33-39
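For reference, a hedged sketch of a custom-metric entry under `spec.metrics` in the autoscaling/v2 API; the metric name assumes a custom metrics adapter is installed, which is not part of this chart:

```yaml
# Hypothetical addition to the HPA metrics list: scale on per-pod request rate.
- type: Pods
  pods:
    metric:
      name: http_requests_per_second   # assumed custom metric exposed via an adapter
    target:
      type: AverageValue
      averageValue: "100"              # target average per pod
```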
deploy/helm/templates/configMap.yaml (3)
Lines 23-25: Add a comment explaining the h2 fallback

While the logic is correct, it would be helpful to add a comment explaining why h2 is used as a fallback driver.

```diff
+{{/* Use h2 as fallback when PostgreSQL is disabled */}}
 {{- if and (eq "APPSMITH_KEYCLOAK_DB_DRIVER" $key) ( not $value) }}
 {{ $key }}: {{ $.Values.postgresql.enabled | ternary "postgresql" "h2" | quote }}
 {{- end }}
```
Lines 27-29: Consider adding SSL and port configuration options

The PostgreSQL connection URL could be more configurable:
- The port should be parameterized
- SSL options should be configurable for production environments

```diff
 {{- if and (eq "APPSMITH_KEYCLOAK_DB_URL" $key) ( not $value) }}
-{{ $key }}: {{ $.Values.postgresql.enabled | ternary (printf "%s-postgresql.%s.svc.cluster.local:5432/%s" $releaseName $nameSpace $postgresqlDatabase) "${jboss.server.data.dir}" | quote }}
+{{- $pgPort := $.Values.postgresql.service.port | default 5432 }}
+{{- $sslMode := $.Values.postgresql.ssl.enabled | ternary "?sslmode=verify-full" "" }}
+{{ $key }}: {{ $.Values.postgresql.enabled | ternary (printf "%s-postgresql.%s.svc.cluster.local:%d/%s%s" $releaseName $nameSpace $pgPort $postgresqlDatabase $sslMode) "${jboss.server.data.dir}" | quote }}
 {{- end }}
```
Line range hint 1-44: Consider extracting database configuration to a separate ConfigMap

As the database configuration grows more complex, consider splitting it into a separate ConfigMap template for better maintainability and separation of concerns.
deploy/helm/templates/import.yaml (2)
Lines 1-6: Consider enhancing the error message with configuration examples

The validation logic is correct, but the error message could be more helpful by including example configurations.

```diff
-{{- fail "To enable autoscaling on Appsmith, MongoDB needs to be enabled or an external MongoDB needs to be configured. Refer: https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes#install-appsmith" }}
+{{- fail "To enable autoscaling on Appsmith, either: 1. Enable MongoDB by setting mongodb.enabled=true, or 2. Configure external MongoDB using APPSMITH_DB_URL or APPSMITH_MONGODB_URI. Refer: https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes#install-appsmith" }}
```
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
Lines 8-11: Clarify the relationship between PostgreSQL and Keycloak

The validation prevents unnecessary PostgreSQL deployment, but the error message could better explain the dependency.

```diff
-{{- fail "Keycloak is disabled therefore postgresql is not required. Please disable postgres or to enable keycloak on Appsmith, set APPSMITH_DISABLE_EMBEDDED_KEYCLOAK to \"0\" Refer: https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes#install-appsmith" }}
+{{- fail "PostgreSQL is only used by Keycloak in this deployment. Since Keycloak is disabled (APPSMITH_DISABLE_EMBEDDED_KEYCLOAK=\"1\"), PostgreSQL should also be disabled. To fix: 1. Set postgresql.enabled=false, or 2. Enable Keycloak by setting APPSMITH_DISABLE_EMBEDDED_KEYCLOAK=\"0\". Refer: https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes#install-appsmith" }}
```
deploy/helm/templates/deployment.yaml (3)
Lines 14-24: Document update strategy differences

The update strategy configuration correctly handles both StatefulSet and Deployment cases, but would benefit from a comment explaining the different strategies and their implications.

```diff
 {{- if not .Values.autoscaling.enabled }}
+# StatefulSet uses updateStrategy for controlled pod updates
 replicas: 1
 serviceName: {{ include "appsmith.fullname" . }}
 updateStrategy:
 {{- else }}
+# Deployment uses rolling update strategy with configurable surge and unavailability
 strategy:
```
Lines 104-106: Consider adding health checks for metrics endpoint

The metrics port is exposed but lacks health checks. Consider adding a probe to ensure the metrics endpoint is functioning correctly.
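A hedged sketch of such a probe; the path and port follow the actuator endpoint referenced elsewhere in this review, and the timings are placeholders:

```yaml
# Hypothetical container fragment: probe the Prometheus metrics endpoint.
readinessProbe:
  httpGet:
    path: /actuator/prometheus   # assumed metrics path
    port: 8080                   # assumed metrics port
  initialDelaySeconds: 30
  periodSeconds: 15
```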
Line range hint 1-206: Verify template rendering for all scenarios

The template uses multiple conditional blocks and Helm functions. Ensure it renders correctly for all combinations of configuration values.
Consider adding a CI job to test template rendering with different value combinations to catch potential issues early.
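One possible shape for such a job, as a sketch; the fixture files under deploy/helm/ci/ are assumptions, not files in this PR:

```yaml
# Hypothetical GitHub Actions job: render the chart against several value sets.
jobs:
  helm-template-matrix:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        values: [default, autoscaling, keda, external-postgres]
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-helm@v4
      - run: helm dependency update deploy/helm
      - run: helm template appsmith deploy/helm -f "deploy/helm/ci/${{ matrix.values }}-values.yaml"
```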
🧰 Tools
🪛 yamllint (1.35.1)
[error] 169-169: trailing spaces
(trailing-spaces)
deploy/helm/values.yaml (1)
Line 10: Fix trailing spaces

Remove trailing spaces from lines 10 and 16.

Also applies to: 16-16
🧰 Tools
🪛 yamllint (1.35.1)
[error] 10-10: trailing spaces
(trailing-spaces)
deploy/helm/README.md (2)
Lines 177-178: LGTM! Consider adding OIDC configuration examples

The new OIDC client parameters are properly documented and follow the existing OAuth2 parameter pattern. Consider adding a configuration example for OIDC setup, similar to the existing examples in the file.
Documentation needs updates for Enterprise Edition specifics
The helm chart documentation requires updates to reflect Enterprise Edition deployment:
- Add documentation for the `APPSMITH_LICENSE_KEY` configuration parameter (see the sketch after this list)
- Document that the default image repository is `appsmith/appsmith-ee`
- Include a section highlighting differences between CE and EE deployments
- Add instructions for license key management and validation
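As a starting point, a hedged values.yaml fragment for the documentation; the key names come from this review (APPSMITH_LICENSE_KEY, appsmith/appsmith-ee), while sourcing the key outside version control is an assumption:

```yaml
# Hypothetical EE configuration example for the README.
image:
  repository: appsmith/appsmith-ee   # EE default per this PR
applicationConfig:
  APPSMITH_LICENSE_KEY: ""           # set via --set or an external secret; avoid committing it
```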
🔗 Analysis chain
Line range hint 1-200: Verify documentation completeness for enterprise features

Since this PR is about deprecating CE helm charts and publishing EE helm charts, ensure that:
- All enterprise-specific configuration parameters are documented
- Any differences between CE and EE deployments are clearly highlighted
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for any undocumented EE-specific environment variables or configurations
rg -i "ee|enterprise" --type yaml
```
Length of output: 3714
Script:
```bash
#!/bin/bash
# Check for EE-specific environment variables in deployment templates
ast-grep --pattern 'APPSMITH_LICENSE_KEY'
# Search for any EE-specific documentation or configuration differences
rg -i "enterprise|license|ee" deploy/helm/README.md
# Look for any EE-specific configuration in values.yaml
rg -i "license|enterprise|ee" deploy/helm/values.yaml -A 5
```
Length of output: 2745
🧰 Tools
🪛 Markdownlint (0.35.0)
174: Column 65: Hard tabs (MD010, no-hard-tabs)
175: Columns 55 and 63: Hard tabs (MD010, no-hard-tabs)
176: Column 65: Hard tabs (MD010, no-hard-tabs)
179: Columns 48 and 60: Hard tabs (MD010, no-hard-tabs)
180: Columns 44 and 58: Hard tabs (MD010, no-hard-tabs)
181: Columns 41 and 56: Hard tabs (MD010, no-hard-tabs)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
deploy/helm/Chart.lock is excluded by `!**/*.lock`
📒 Files selected for processing (18)
- .github/workflows/helm-release.yml (1 hunks)
- deploy/helm/Chart.yaml (1 hunks)
- deploy/helm/README.md (1 hunks)
- deploy/helm/templates/configMap.yaml (2 hunks)
- deploy/helm/templates/deployment.yaml (4 hunks)
- deploy/helm/templates/external-secrets.yaml (1 hunks)
- deploy/helm/templates/headless-svc.yaml (1 hunks)
- deploy/helm/templates/hpa.yml (1 hunks)
- deploy/helm/templates/import.yaml (1 hunks)
- deploy/helm/templates/pdb.yml (1 hunks)
- deploy/helm/templates/persistentVolume.yaml (3 hunks)
- deploy/helm/templates/persistentVolumeClaim.yaml (1 hunks)
- deploy/helm/templates/scaledobject.yml (1 hunks)
- deploy/helm/templates/secret.yaml (1 hunks)
- deploy/helm/templates/service-metrics.yaml (1 hunks)
- deploy/helm/templates/service.yaml (1 hunks)
- deploy/helm/templates/trustedCA.yaml (1 hunks)
- deploy/helm/values.yaml (8 hunks)
🧰 Additional context used
🪛 yamllint (1.35.1)
deploy/helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[error] 169-169: trailing spaces
(trailing-spaces)
deploy/helm/templates/external-secrets.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 10-10: wrong indentation: expected 5 but found 4
(indentation)
[warning] 13-13: wrong indentation: expected 5 but found 4
(indentation)
[warning] 17-17: wrong indentation: expected 7 but found 6
(indentation)
deploy/helm/templates/headless-svc.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
deploy/helm/templates/hpa.yml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/import.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/pdb.yml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/persistentVolume.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/persistentVolumeClaim.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/scaledobject.yml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/secret.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 13-13: wrong indentation: expected 0 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 0 but found 2
(indentation)
[warning] 15-15: wrong indentation: expected 0 but found 2
(indentation)
deploy/helm/templates/service-metrics.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/trustedCA.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/values.yaml
[error] 10-10: trailing spaces
(trailing-spaces)
[error] 16-16: trailing spaces
(trailing-spaces)
🔇 Additional comments (30)
deploy/helm/templates/trustedCA.yaml (2)
Line 1: LGTM: Proper conditional check

The conditional check ensures the ConfigMap is only created when custom CA certificates are provided.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
Lines 9-12: Verify certificate data handling

The current implementation assumes the certificates are properly formatted. Consider adding validation or documentation for the expected format.
✅ Verification successful
Certificate format validation is handled at runtime

The implementation is correct as-is. The `customCAcert` configuration is properly documented in values.yaml, and the certificates are mounted as files in the container through the ConfigMap. Certificate validation occurs at runtime when the application reads these files.

- The ConfigMap is only created when certificates are provided (`{{- if .Values.customCAcert }}`)
- Certificates are mounted at /etc/ssl/certs in the container
- The format validation is handled by the application's certificate loading mechanism
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if there are any existing references to this ConfigMap
rg -l "{{ include \"appsmith.fullname\" . }}-trustedca" --type yaml
# Look for any documentation about certificate format
fd -e md -e txt . -x grep -l "customCAcert"
```
Length of output: 232
Script:
```bash
#!/bin/bash
# Search for ConfigMap references with proper escaping
rg -l "\{\{ include \"appsmith\.fullname\" \. \}\}-trustedca" --type yaml
# Look for any documentation mentioning certificates or CA
rg -l "customCAcert|trustedca|certificate" --type md --type txt
# Check the values file for certificate configuration
fd "values.yaml" -x cat {}
# Look for any validation logic in helm templates
rg "customCAcert" --type yaml
```
Length of output: 13672
deploy/helm/templates/secret.yaml (1)
Lines 10-15: Protect against accidental secret exposure in Helm debug output

While the secret handling is correct, be cautious with `helm template --debug` or `helm install --debug`, as they might expose secret values in logs.
✅ Verification successful
Consider using base64 encoding for secret values in values.yaml

The current secret handling in the Helm chart is safe for production use, and there is no risk of accidental exposure in debug output, since values.yaml doesn't contain any actual secrets, only configuration placeholders. The applicationConfig section uses empty strings as defaults for sensitive values.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if any values files contain unencoded secrets
fd -e yaml -e yml --exec grep -l 'secrets:' {}
```
Length of output: 118
Script:
```bash
#!/bin/bash
# Check the content of values.yaml to see how secrets are handled
cat deploy/helm/values.yaml
# Check if there are any debug or logging statements in Helm templates
rg -g '*.yaml' -g '*.yml' -g '*.tpl' 'debug|toYaml|tpl' deploy/helm/templates/
```
Length of output: 16222
🧰 Tools
🪛 yamllint (1.35.1)
[warning] 13-13: wrong indentation: expected 0 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 0 but found 2
(indentation)
[warning] 15-15: wrong indentation: expected 0 but found 2
(indentation)
deploy/helm/templates/pdb.yml (2)
Lines 1-6: LGTM! Good handling of API version compatibility.

The conditional logic for API version selection ensures compatibility across different Kubernetes versions.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
Lines 9-10: Verify the helper template functions.

The template uses several helper functions. Let's verify they are properly defined.
Also applies to: 14-15
✅ Verification successful
Helper template functions are properly defined and used correctly
The `_helpers.tpl` file contains all the required template functions:

- `appsmith.fullname`: generates a fully qualified app name
- `appsmith.namespace`: handles namespace overrides correctly
- `appsmith.selectorLabels`: defines standard Kubernetes selector labels
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the helper template functions are defined
# Expected: Find definitions of appsmith.fullname, appsmith.namespace, and appsmith.selectorLabels
# Search for helper template definitions
fd -e yaml -e tpl '_helpers' deploy/helm/templates/ -x cat {}
```
Length of output: 3682
deploy/helm/templates/scaledobject.yml (2)
Lines 22-27: Validate required trigger metadata

Different trigger types require specific metadata fields. Consider adding validation to ensure required fields are present.
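As an illustration, a hedged sketch of a trigger entry with its required metadata, using KEDA's built-in CPU scaler; the values are placeholders and the exact values key for triggers is chart-specific:

```yaml
# Hypothetical KEDA trigger: scale on average CPU utilization.
triggers:
  - type: cpu
    metricType: Utilization   # required for the cpu scaler
    metadata:
      value: "80"             # target average CPU utilization percent
```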
Lines 1-8: Consider the stability implications of using an alpha API version

The `keda.sh/v1alpha1` API version indicates this is an alpha feature which may have breaking changes in future releases.
✅ Verification successful
Let me analyze the KEDA versioning across the codebase to provide a comprehensive recommendation about the API version usage.
I need to verify how KEDA is configured in values.yaml and its relationship with HPA before making a final recommendation about the API version usage.
KEDA v1alpha1 API version is appropriate for the current implementation

The use of `keda.sh/v1alpha1` is acceptable because:

- KEDA is disabled by default (`enabled: false`) and serves as an optional alternative to standard HPA
- The implementation is properly guarded with feature flags that prevent conflicts with standard HPA
- The latest KEDA release (v2.16.0) still uses this API version for ScaledObjects
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if there are newer, more stable versions of the KEDA API
gh api graphql -f query='
{
  repository(owner: "kedacore", name: "keda") {
    releases(first: 1, orderBy: {field: CREATED_AT, direction: DESC}) {
      nodes {
        tagName
        description
      }
    }
  }
}'
```
Length of output: 2859
Script:
```bash
#!/bin/bash
# Check the current KEDA API version in use across the codebase
rg "keda\.sh/v\d+" -g '*.{yaml,yml}'
# Check if there are any KEDA CRD definitions or version references
fd -e yaml -e yml --exec grep -l "keda" {}
```
Length of output: 250
Script:
```bash
#!/bin/bash
# Let's check the values.yaml to understand the KEDA configuration
cat deploy/helm/values.yaml | grep -A 10 "keda:"
# Check HPA template to understand the relationship with KEDA
cat deploy/helm/templates/hpa.yml
```
Length of output: 1791
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/headless-svc.yaml (1)
Lines 1-31: LGTM! The headless service configuration looks solid.

The implementation follows Kubernetes best practices for headless services with proper templating.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
deploy/helm/Chart.yaml (3)
Lines 31-35: Verify Prometheus integration requirements

As Prometheus is a new dependency, ensure monitoring configurations and service endpoints are properly set up.
✅ Verification successful
Prometheus integration is properly configured
The verification shows that Prometheus integration is well-configured with:

- Service annotations for metrics scraping in service.yaml and service-metrics.yaml
- Configurable metrics port and endpoint path (/actuator/prometheus)
- Default configuration in values.yaml with toggleable features
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for Prometheus-related configurations
rg -A 10 "prometheus:|metrics:|monitoring:"
# Look for metric endpoints in the application
rg "/metrics|prometheus.io"
```
Length of output: 4863
Lines 26-30: Verify PostgreSQL configuration requirements

Since PostgreSQL is a new dependency, ensure that all required configurations are documented in the values.yaml file.
✅ Verification successful
Based on the search results, PostgreSQL configurations are well documented in the `values.yaml` file, including:

- Authentication settings (username, password, database name)
- Image configuration
- Basic PostgreSQL settings

The configuration appears to be complete and follows standard Helm chart practices for PostgreSQL deployment.
PostgreSQL configuration is properly documented
The values.yaml file contains all the necessary PostgreSQL configurations:
- Authentication credentials
- Database name (keycloak)
- Image settings with specific version tags
- Default port configurations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for PostgreSQL-related configurations
rg -A 10 "postgresql:"
# Look for migration scripts or documentation
fd -t f "MIGRATION|README|UPGRADE" -X grep -l "postgresql" {} \;
```
Length of output: 31568
Line 14: Verify the major version bump implications

The version change from 2.3.0 to 3.4.6 represents a major version increment. This typically indicates breaking changes.
deploy/helm/templates/persistentVolumeClaim.yaml (1)
Lines 29-33: Verify storage class configuration in values.yaml

The empty string default for storageClassName is correct for using the cluster's default storage class. However, let's verify the values.yaml configuration.
✅ Verification successful
Let me check the complete persistence configuration to verify the default value for storageClass.
Storage class configuration is correctly aligned

The implementation in the Helm template matches the default configuration in values.yaml, where `storageClass` defaults to an empty string. This ensures consistent behavior of using the cluster's default storage class when no specific storage class is provided.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if values.yaml has corresponding persistence configuration
rg -A 5 "persistence:" deploy/helm/values.yaml
```
Length of output: 208
Script:
```bash
#!/bin/bash
# Get more context from values.yaml for persistence configuration
rg -A 10 "persistence:" deploy/helm/values.yaml
```
Length of output: 394
deploy/helm/templates/service-metrics.yaml (1)
Lines 29-30: LGTM!

The selector configuration correctly uses the standard template function for consistent pod selection.
deploy/helm/templates/persistentVolume.yaml (2)
Lines 28-34: Storage class configuration looks good

The implementation correctly handles both default and custom storage class scenarios.
Line range hint 1-41: Verify PV configuration with database requirements

Since this is part of a larger change involving PostgreSQL and Prometheus, ensure the PV configuration meets the storage requirements of these components.
✅ Verification successful
PV configuration is correctly aligned with application requirements
The PV/PVC configuration is properly set up for the application's needs:

- Matches the volume mount at /appsmith-stacks in the deployment
- Supports configurable storage class and size through values.yaml
- Correctly handles both local and EFS storage options
- Works with the autoscaling configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for database-specific storage configurations
rg -A 5 'persistence:|storage:|volumeClaimTemplates:' deploy/helm/values.yaml
# Check for any other PV/PVC templates that might conflict
fd -e yaml -e tpl . deploy/helm/templates/ --exec grep -l 'kind: Persistent'
```
Length of output: 812
Script:
```bash
#!/bin/bash
# Check the PVC template and its configuration
cat deploy/helm/templates/persistentVolumeClaim.yaml
# Check values.yaml for storage-related configurations
cat deploy/helm/values.yaml | grep -A 20 "persistence:"
# Check if there are any database-specific templates
fd -e yaml -e tpl . deploy/helm/templates/ --exec grep -l 'postgresql\|postgres\|prometheus'
```
Length of output: 2055
Script:
```bash
#!/bin/bash
# Check the database and metrics configurations in the deployment
cat deploy/helm/templates/deployment.yaml | grep -A 10 -B 10 'volumeMounts\|volumes'
# Check service configurations for storage requirements
cat deploy/helm/templates/service.yaml
cat deploy/helm/templates/service-metrics.yaml
# Check if there are any volume-related environment variables
cat deploy/helm/templates/configMap.yaml | grep -A 5 -B 5 'POSTGRES\|PROMETHEUS'
```
Length of output: 4640
deploy/helm/templates/service.yaml (2)
Line 8: LGTM: Clean condition extension

The condition is properly extended to include the metrics check while maintaining backward compatibility with existing annotation sources.
Lines 16-20: Verify metrics endpoint configuration

Ensure that the application is configured to expose metrics on port 8080 and path /actuator/prometheus.
✅ Verification successful
Metrics configuration is properly aligned
The metrics configuration in the service.yaml is correctly configured with the application settings:
- The application.properties shows server.port=8080 which matches the prometheus.io/port in service.yaml
- The actuator endpoint "/actuator/prometheus" in service.yaml matches Spring Boot's default actuator configuration
- The metrics configuration is properly guarded by the .Values.metrics.enabled flag
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for metrics configuration in other deployment manifests and values files
# Check for port 8080 configuration in deployment
rg -l "containerPort: 8080" "deploy/helm/templates/"
# Check for actuator configuration
rg -l "actuator" "deploy/helm/templates/"
rg -l "management\.endpoints\.web\.exposure\.include" "deploy/helm/templates/"
```
Length of output: 206
Script:
```bash
#!/bin/bash
# Let's check the values file and application properties for metrics configuration
fd "values.yaml|application.*properties|application.*yml" --exec cat {}
# Check deployment template for port configuration
cat deploy/helm/templates/deployment.yaml
# Check if there are any other service configurations
fd "service.*yaml|service.*yml" --exec cat {}
```
Length of output: 31819
deploy/helm/templates/hpa.yml (1)
Lines 1-2: Verify KEDA configuration in values.yaml

The template is conditionally rendered based on `autoscaling.enabled` and `keda.enabled`. Ensure these values are properly documented in the values.yaml file.
✅ Verification successful
KEDA and autoscaling configurations are properly defined in values.yaml

Both `autoscaling` and `keda` configurations are well-documented in values.yaml with appropriate default values:

- `autoscaling.enabled` defaults to false, with min/max replicas and a CPU target
- `keda.enabled` defaults to false, with polling, cooldown, and replica settings
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if the autoscaling and keda configuration exists in values.yaml
rg -A 5 "autoscaling:|keda:" deploy/helm/values.yaml
```
Length of output: 266
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
deploy/helm/templates/configMap.yaml (2)
Lines 5-7: LGTM: PostgreSQL variable declarations are well-structured

Variables are properly scoped and follow Helm naming conventions.
Lines 31-37: Review default credentials usage

While using "sa" as a default credential for h2 is common, consider:
- Adding a comment explaining this is for development only
- Making the default configurable through values.yaml
✅ Verification successful
Let me gather more context about the h2 database configuration and usage.
Default credentials are properly handled

The "sa" credentials are only used as a fallback when PostgreSQL is disabled, which implies H2 database usage. This is standard practice for H2 embedded databases in development mode. The configuration is already made configurable through:

- `applicationConfig.APPSMITH_KEYCLOAK_DB_USERNAME`
- `applicationConfig.APPSMITH_KEYCLOAK_DB_PASSWORD`
- `applicationConfig.APPSMITH_KEYCLOAK_DB_URL`

The helm chart also includes proper validation to prevent misconfigurations between PostgreSQL and external database setups.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if these credentials are used in other files
rg "sa" --type yaml
```
Length of output: 1991
Script:
```bash
#!/bin/bash
# Search for h2 database related configurations
rg -i "h2|jdbc:h2" --type yaml -A 2
# Check for any documentation about h2 database
fd README.md -x cat {} \; | rg -i "h2|database"
# Look for any values that might override these defaults
rg "APPSMITH_KEYCLOAK_DB" --type yaml
```
Length of output: 3718
deploy/helm/templates/import.yaml (2)
Lines 18-23: LGTM! Clear validation of MongoDB configuration

The validation correctly prevents conflicting MongoDB configurations.
Lines 25-28: Verify the necessity of the APPSMITH_KEYCLOAK_DB_DRIVER check

The validation includes APPSMITH_KEYCLOAK_DB_DRIVER in the condition, but it might not be necessary if APPSMITH_KEYCLOAK_DB_URL is sufficient.
.github/workflows/helm-release.yml (1)
Line 72: LGTM! Verify GitHub Secrets configuration

The change to use `HELM_S3_BUCKET_URL` for the Helm repository index is correct.

Let's verify the GitHub secrets configuration:
deploy/helm/templates/deployment.yaml (3)
Lines 143-161: Verify external secrets operator compatibility

The external secrets integration looks good, but verify compatibility with the external secrets operator version in use.
✅ Verification successful
External Secrets integration is properly implemented
The implementation follows the External Secrets Operator v1beta1 API spec correctly. The configuration includes:

- Proper CRD usage with the `external-secrets.io/v1beta1` API version
- Configurable refresh interval and remote secret name
- Default disabled state with clear configuration options
- Correct integration with Helm templating
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for external secrets operator configuration
rg -A 10 'externalSecrets:' values.yaml
# Look for external secrets CRD usage
rg -l 'ExternalSecret' .
```
Length of output: 162
Script:
```bash
#!/bin/bash
# Check the external secrets template content
cat ./deploy/helm/templates/external-secrets.yaml
# Check if there's a values file in the helm directory
fd -g 'values.yaml' deploy/helm/
```
Length of output: 662
Script:
```bash
#!/bin/bash
# Check the values file for external secrets configuration
cat deploy/helm/values.yaml | grep -A 10 'externalSecrets:'
```
Length of output: 260
Lines 167-176: Validate CA certificate configuration

The custom CA certificate mounting lacks validation. Consider adding checks for certificate presence and format.
✅ Verification successful
Let me search for any existing certificate validation logic before making a final determination about whether additional validation is needed.
I need to see the actual certificate configuration and deployment template to understand how certificates are being handled before making a final determination about validation needs.
Certificate validation is handled appropriately for this use case

The current implementation is sufficient for mounting custom CA certificates in a Kubernetes environment. Validation of the certificate format happens at the infrastructure level:

- Certificates must be provided in the correct format in values.yaml
- Kubernetes ConfigMap validation ensures valid certificate data
- The .crt extension enforces proper certificate formatting
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if any .crt files exist in the values
rg -l '\.crt$' values.yaml
# Check if the certificates are properly referenced in the values
ast-grep --pattern 'customCAcert:
  $$$
  $_: $_
  $$$'
```
Length of output: 130965
Script:
```bash
#!/bin/bash
# Check for customCAcert configuration in values files
fd -e yaml -e yml values | xargs grep -l "customCAcert"
# Check for any CA certificate validation logic
rg -l "customCAcert.*validate"
# Look for any certificate validation functions
ast-grep --pattern 'function $FUNC($_) {
  $$$
  cert$$$
  $$$
}'
```
Length of output: 180
Script:
```bash
#!/bin/bash
# Let's check the actual values file content to see the certificate configuration
cat deploy/helm/values.yaml
# Check if there's any validation in the deployment template
cat deploy/helm/templates/deployment.yaml
```
Length of output: 21499
🧰 Tools
🪛 yamllint (1.35.1)
[error] 169-169: trailing spaces
(trailing-spaces)
Lines 180-206: Verify persistence configuration compatibility

The persistence configuration handles multiple scenarios (StatefulSet vs Deployment), but should be verified for compatibility with different storage classes and providers.
✅ Verification successful
Let me search for storage class and persistence configurations to verify the compatibility concerns. The previous command failed to find values.yaml, so let's try a broader search across helm files.
Let me check the storage class implementation and persistence configuration in the deployment templates to verify the compatibility concerns.
Storage configuration is properly implemented with multiple provider support

The persistence configuration in the deployment template is well-structured and handles various storage scenarios:

- Supports both global and local storage class configuration
- Implements proper volume claim templates with configurable selectors
- Handles existing claims through `.Values.persistence.existingClaim`
- Includes a storage class helper template for consistent storage class resolution
- Provides fallback mechanisms and flexibility for different storage providers

The implementation follows Kubernetes best practices for storage configuration and should work well across different storage providers and scenarios.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for storage class definitions
rg -A 5 'storageClassName:' values.yaml
# Check for persistence configuration patterns
ast-grep --pattern 'persistence:
  $$$
  enabled: $_
  $$$'
```
Length of output: 130980
Script:
```bash
#!/bin/bash
# Check storage class and persistence configuration in values.yaml and other helm files
fd values.yaml --exec cat {} \;
fd -e yaml -p "helm/.*values" --exec cat {} \;
# Check for storage class definitions in helm templates
fd -e yaml -p "helm/.*storage" --exec cat {} \;
# Look for persistence configurations in deployment templates
rg -A 5 "persistence:" deploy/helm/templates/deployment.yaml
```
Length of output: 27298
Script:
```bash
#!/bin/bash
# Check for storage class usage in deployment template
rg -A 10 "volumeClaimTemplates|storageClassName" deploy/helm/templates/deployment.yaml
# Check for persistence configuration in deployment
rg -A 10 "persistence:|volumes:" deploy/helm/templates/deployment.yaml
# Look for storage class helper template
rg -A 5 "storage.class" deploy/helm/templates/
```
Length of output: 2461
deploy/helm/values.yaml (4)
Lines 8-10: LGTM: Redis configuration improvements

The added configuration parameters for Redis master and replica nodes enhance pod scheduling control.

Also applies to: 14-16
🧰 Tools
🪛 yamllint (1.35.1)
[error] 10-10: trailing spaces
(trailing-spaces)
Line 311: Verify storage provider compatibility

The change to the `ReadWriteMany` access mode and the addition of volume claim templates require verification of your storage provider's capabilities. Not all providers support the ReadWriteMany access mode.

Also applies to: 315-339
Line 125: Verify Enterprise Edition license configuration

The change from CE to EE requires proper license configuration. Ensure the `APPSMITH_LICENSE_KEY` is properly configured in your deployment environment.

Also applies to: 427-427
Lines 33-34: Verify MongoDB version compatibility

MongoDB version 6.0.13 is specified. Please ensure this version is compatible with your production requirements and has no known critical security vulnerabilities.

Also applies to: 41-42, 45-46, 48-50
✅ Verification successful
Let me check the MongoDB version in the repository first.
Let me try to find MongoDB-related configurations in the Helm charts.
MongoDB 6.0.13 is compatible and secure

The MongoDB version 6.0.13 specified in the Helm chart is:

- Part of the 6.0.x stable release series
- Free of critical security vulnerabilities in the GitHub Security Advisory database
- Used with Bitnami's official MongoDB container image, which receives regular security updates
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for MongoDB 6.0.13 security advisories
gh api graphql -f query='
{
  securityVulnerabilities(first: 5, ecosystem: MAVEN, package: "mongodb") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'
```
Length of output: 342
Script:
```bash
#!/bin/bash
# Search for MongoDB version specifications in the repository
rg -i "mongodb.*version|version.*mongodb" --type yaml
```
Length of output: 55
Script:
```bash
#!/bin/bash
# Search for MongoDB-related configurations in Helm values and templates
fd -e yaml -e yml . deploy/helm -x cat {}
```
Length of output: 45471
```yaml
spec:
  refreshInterval: {{ .Values.externalSecrets.refreshInterval }}
  secretStoreRef:
    name: secretstore
    kind: SecretStore
  target:
    name: "{{ include "appsmith.fullname" . }}-external-secret"
    creationPolicy: Owner
```
Fix indentation for YAML compliance
The YAML indentation is inconsistent which could cause parsing issues.
Apply this diff to fix the indentation:
```diff
 spec:
   refreshInterval: {{ .Values.externalSecrets.refreshInterval }}
   secretStoreRef:
-    name: secretstore
-    kind: SecretStore
+     name: secretstore
+     kind: SecretStore
   target:
-    name: "{{ include "appsmith.fullname" . }}-external-secret"
-    creationPolicy: Owner
+     name: "{{ include "appsmith.fullname" . }}-external-secret"
+     creationPolicy: Owner
```
🧰 Tools
🪛 yamllint (1.35.1)
[warning] 10-10: wrong indentation: expected 5 but found 4
(indentation)
[warning] 13-13: wrong indentation: expected 5 but found 4
(indentation)
```yaml
dataFrom:
  - extract:
      key: {{ .Values.externalSecrets.remoteSecretName }}
{{- end }}
```
💡 Codebase verification
Remote secret path configuration needs validation
Based on the values.yaml content, `remoteSecretName` is documented under the `externalSecrets` parameters but with an incorrect name: the actual parameter in values.yaml is `remoteNameSecret`, which could cause configuration issues.

- `deploy/helm/values.yaml`: update the parameter name from `remoteNameSecret` to `remoteSecretName` to match the template usage
🔗 Analysis chain
Verify secret path configuration
The remote secret path is critical for security. Ensure it's properly validated and documented in values.yaml.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if remoteSecretName is documented in values.yaml
rg -A 5 "remoteSecretName:" deploy/helm/values.yaml
# Check for any hardcoded secret paths in the codebase
rg -g '*.yaml' -g '*.yml' "key: .*secret.*"
```
Length of output: 97
Script:
```bash
#!/bin/bash
# Let's check the values.yaml file content directly
cat deploy/helm/values.yaml
# Check for any external secrets configuration patterns
rg -g '*.yaml' -g '*.yml' "externalSecrets:"
# Look for any documentation about external secrets
fd README.md --exec cat {}
```
Length of output: 108393
🧰 Tools
🪛 yamllint (1.35.1)
[warning] 17-17: wrong indentation: expected 7 but found 6
(indentation)
```yaml
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "appsmith.fullname" . }}
  pollingInterval: {{ .Values.keda.pollingInterval }}
  cooldownPeriod: {{ .Values.keda.cooldownPeriod }}
  minReplicaCount: {{ .Values.keda.minReplicaCount }}
  maxReplicaCount: {{ .Values.keda.maxReplicaCount }}
```
🛠️ Refactor suggestion
Add validation for scaling parameters
The scaling parameters should be validated to ensure:

- `minReplicaCount` is less than `maxReplicaCount`
- `pollingInterval` and `cooldownPeriod` are positive integers
Add these validations in the template:
```diff
 spec:
   scaleTargetRef:
     apiVersion: apps/v1
     kind: Deployment
     name: {{ include "appsmith.fullname" . }}
+  {{- if gt .Values.keda.minReplicaCount .Values.keda.maxReplicaCount }}
+  {{- fail "minReplicaCount must be less than maxReplicaCount" }}
+  {{- end }}
+  {{- if le .Values.keda.pollingInterval 0 }}
+  {{- fail "pollingInterval must be positive" }}
+  {{- end }}
+  {{- if le .Values.keda.cooldownPeriod 0 }}
+  {{- fail "cooldownPeriod must be positive" }}
+  {{- end }}
   pollingInterval: {{ .Values.keda.pollingInterval }}
```
Committable suggestion skipped: line range outside the PR's diff.
```yaml
- condition: redis.enabled
  name: redis
  version: 16.11.2
  appVersion: 6.2.7
  repository: https://charts.bitnami.com/bitnami
- condition: mongodb.enabled
  name: mongodb
  version: 12.1.16
  appVersion: 6.0.10
  repository: https://charts.bitnami.com/bitnami
- condition: postgresql.enabled
  name: postgresql
  version: 11.9.5
  appVersion: 14.12.0
  repository: https://charts.bitnami.com/bitnami
- condition: prometheus.enabled
  name: prometheus
  version: 25.27.0
  appVersion: 2.54.1
  repository: https://prometheus-community.github.io/helm-charts
```
💡 Codebase verification
PostgreSQL and Prometheus versions need updates in Chart.yaml
The current versions in Chart.yaml are outdated compared to the latest stable releases:
- PostgreSQL chart version 11.9.5 (appVersion: 14.12.0) should be updated to 16.2.2 (appVersion: 17.2.0)
- Prometheus chart version 25.27.0 (appVersion: 2.54.1) should be updated to 26.0.0 (appVersion: v3.0.0)
🔗 Analysis chain
Dependencies configuration looks good with a suggestion
The dependencies are well-structured with specific versions and proper conditions. However, it's recommended to verify the compatibility of these versions, especially for the new PostgreSQL and Prometheus integrations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for potential version compatibility issues in Helm dependencies
echo "Checking PostgreSQL compatibility..."
curl -s https://raw.githubusercontent.com/bitnami/charts/master/bitnami/postgresql/Chart.yaml | grep -E "version:|appVersion:"
echo "Checking Prometheus compatibility..."
curl -s https://raw.githubusercontent.com/prometheus-community/helm-charts/main/charts/prometheus/Chart.yaml | grep -E "version:|appVersion:"
# Check if values.yaml contains corresponding configurations
echo "Verifying values.yaml configurations..."
rg -A 5 "postgresql:|prometheus:"
```
Length of output: 23365
```diff
@@ -0,0 +1,34 @@
+{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim.enabled) ( .Values.autoscaling.enabled) }}
```
Remove autoscaling condition from PVC creation
The PVC creation should not be tied to autoscaling as they serve different purposes. A PVC might be needed regardless of whether autoscaling is enabled or not.
Apply this diff:
```diff
-{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim.enabled) ( .Values.autoscaling.enabled) }}
+{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim.enabled) }}
```
🧰 Tools
🪛 yamllint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
```yaml
{{- /* Fail helm chart if autoscaling, keycloak is enabled and postgresql is disabled */ -}}
{{- if and ( .Values.autoscaling.enabled ) ( not .Values.postgresql.enabled ) ( not .Values.applicationConfig.APPSMITH_KEYCLOAK_DB_URL ) (eq .Values.applicationConfig.APPSMITH_DISABLE_EMBEDDED_KEYCLOAK "0" )}}
{{- fail "To enable autoscaling on Appsmith, PostgreSQL needs to be enabled or an external PostgreSQL has to be configured. Refer: https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes#install-appsmith" }}
{{- end }}
```
🛠️ Refactor suggestion
Consider breaking down the complex condition for better maintainability
The validation logic combines multiple conditions which might be harder to maintain.
```diff
-{{- if and ( .Values.autoscaling.enabled ) ( not .Values.postgresql.enabled ) ( not .Values.applicationConfig.APPSMITH_KEYCLOAK_DB_URL ) (eq .Values.applicationConfig.APPSMITH_DISABLE_EMBEDDED_KEYCLOAK "0" )}}
+{{- /* First check if autoscaling and Keycloak are enabled */ -}}
+{{- if and .Values.autoscaling.enabled (eq .Values.applicationConfig.APPSMITH_DISABLE_EMBEDDED_KEYCLOAK "0") }}
+{{- /* Then verify PostgreSQL configuration */ -}}
+{{- if and (not .Values.postgresql.enabled) (not .Values.applicationConfig.APPSMITH_KEYCLOAK_DB_URL) }}
 {{- fail "To enable autoscaling on Appsmith, PostgreSQL needs to be enabled or an external PostgreSQL has to be configured. Refer: https://docs.appsmith.com/getting-started/setup/installation-guides/kubernetes#install-appsmith" }}
+{{- end }}
 {{- end }}
```
```yaml
{{- $postgrespass := .Values.postgresql.auth.password }}
{{- $postgrespass := .Values.postgresql.auth.password }}
```
Remove duplicate variable declaration
The `$postgrespass` variable is declared twice consecutively.
```diff
 {{- $postgrespass := .Values.postgresql.auth.password }}
-{{- $postgrespass := .Values.postgresql.auth.password }}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 {{- $postgrespass := .Values.postgresql.auth.password }}
-{{- $postgrespass := .Values.postgresql.auth.password }}
{{- if .Values.postgresql.enabled }}
- name: psql-init-container
{{- if ((.Values.initContainer.postgresql).image) }}
  image: {{ .Values.initContainer.postgresql.image }}
{{- else}}
  image: docker.io/bitnami/postgresql:14.5.0-debian-11-r21
{{- end}}
  command: ['sh', '-c', "until pg_isready -U $postgresuser -d $postgresdb -h {{.Release.Name}}-postgresql.{{.Release.Namespace}}.svc.cluster.local; do echo waiting for postgresql; sleep 2; done"]
{{- end }}
🛠️ Refactor suggestion
Avoid hardcoded image versions in init containers
The PostgreSQL init container uses a hardcoded image version. Consider moving this to values.yaml for better maintainability.
- image: docker.io/bitnami/postgresql:14.5.0-debian-11-r21
+ image: {{ .Values.initContainer.postgresql.defaultImage | default "docker.io/bitnami/postgresql:14.5.0-debian-11-r21" }}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 {{- if .Values.postgresql.enabled }}
 - name: psql-init-container
 {{- if ((.Values.initContainer.postgresql).image) }}
   image: {{ .Values.initContainer.postgresql.image }}
 {{- else}}
-  image: docker.io/bitnami/postgresql:14.5.0-debian-11-r21
+  image: {{ .Values.initContainer.postgresql.defaultImage | default "docker.io/bitnami/postgresql:14.5.0-debian-11-r21" }}
 {{- end}}
   command: ['sh', '-c', "until pg_isready -U $postgresuser -d $postgresdb -h {{.Release.Name}}-postgresql.{{.Release.Namespace}}.svc.cluster.local; do echo waiting for postgresql; sleep 2; done"]
 {{- end }}
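For operators who would rather not touch the template at all, the same pin can already be applied from values, since the template reads .Values.initContainer.postgresql.image first. A sketch; the tag is illustrative only:

# values.yaml override for the PostgreSQL readiness init container
initContainer:
  postgresql:
    image: docker.io/bitnami/postgresql:14.12.0  # illustrative tag; match the PostgreSQL version you run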
enabled: true
auth:
  username: root
  password: password
  postgresPassword: password
  database: keycloak
image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 14.12.0
Security: Consider using external secrets for PostgreSQL credentials
The PostgreSQL credentials are currently in plain text. Consider utilizing the external secrets feature that's been added to manage these credentials more securely.
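One possible shape for that, using the Bitnami subchart's existingSecret hook; the secret name is hypothetical and the key names follow the Bitnami convention, so verify both against the subchart version in use:

apiVersion: v1
kind: Secret
metadata:
  name: appsmith-postgresql-auth  # hypothetical; created out of band or by an external-secrets operator
type: Opaque
stringData:
  postgres-password: "<admin password>"    # placeholder
  password: "<application user password>"  # placeholder
---
# values.yaml: reference the secret instead of inlining passwords
postgresql:
  enabled: true
  auth:
    username: root
    database: keycloak
    existingSecret: appsmith-postgresql-auth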
requests:
  cpu: 500m
  memory: 3000Mi
💡 Codebase verification
Resource requests appear to be significantly higher than the default recommendations
The codebase's default resource recommendations in the Helm chart (commented values) suggest much lower resource requirements:
- Default CPU: 100m (5x lower than specified)
- Default Memory: 128Mi (23x lower than specified)
Unless there's concrete evidence from load testing or production metrics supporting these high resource allocations, I recommend the following; a values sketch follows the list:
- Reduce CPU request to 200m-300m initially
- Start with memory request of 1000Mi and adjust based on actual usage
- Enable monitoring to track actual resource utilization
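Put together, a conservative starting point could look like this in values; the numbers are the suggestions above, not measured requirements:

resources:
  requests:
    cpu: 250m       # within the suggested 200m-300m band
    memory: 1000Mi  # revisit once monitoring shows real usage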
🔗 Analysis chain
Review resource requests
The specified resource requests (500m CPU, 3000Mi memory) are relatively high. Please verify these requirements against your application's actual needs.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any existing resource specifications in the codebase
rg -g '*.yaml' 'resources:' -A 5
Length of output: 3173
This PR has not seen activity for a while. It will be closed in 7 days unless further activity is detected.
This PR has been closed because of inactivity. |
Actionable comments posted: 1
🔭 Outside diff range comments (1)
scripts/deploy_preview.sh (1)
Move sensitive configuration to Kubernetes secrets
The repository already has proper Kubernetes secret management in place:
- Values file (deploy/helm/values.yaml) supports external secrets
- Secret templates exist for managing sensitive data
- ConfigMaps are used for non-sensitive configurations
To align with existing practices:
- Move the DB connection string to a Kubernetes secret using the existing secret templates
- Use the secret reference in the helm command instead of passing sensitive values directly (a sketch of such a secret follows)
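As an illustration, the connection string could live in a Secret such as the following, with the helm command referencing the secret rather than the raw value; the name and key here are hypothetical:

apiVersion: v1
kind: Secret
metadata:
  name: appsmith-preview-db  # hypothetical name
type: Opaque
stringData:
  db-url: "<connection string>"  # placeholder; never commit the real value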
🔗 Analysis chain
Line range hint 105-107: Secure sensitive configuration handling
The script directly exposes sensitive configuration in the helm command. Consider:
- Moving sensitive values to a separate values file
- Using Kubernetes secrets for sensitive data
Let's verify if there are any other instances of sensitive data exposure:
Let's verify if there are existing Kubernetes secrets or values files that could be used for sensitive data.
Let's verify the repository structure to locate deployment-related configurations.
Let's check if there are any values files or Kubernetes secret configurations in the deploy directory.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for potential sensitive data patterns in deployment scripts
rg -g "*.sh" "(password|secret|key|token|credential)" ./scripts/

Length of output: 1068
Script:
#!/bin/bash
# Check if sensitive values are stored in Kubernetes secrets or ConfigMaps
rg -g "*.y*ml" "(secret|configmap)" ./helm/

# Check if there's a values file containing sensitive configurations
fd "values" -e yaml -e yml ./helm/

Length of output: 223
Script:
#!/bin/bash
# Let's check the repository structure to find deployment configurations
fd -t d

# And look for any existing Kubernetes manifests or Helm charts
fd -e yaml -e yml

Length of output: 73503
Script:
#!/bin/bash
# Check for values files in deploy directory
fd "values" -e yaml -e yml ./deploy/

# Check for Kubernetes secrets usage in deployment files
rg -g "*.y*ml" "(Secret|ConfigMap)" ./deploy/

Length of output: 1444
♻️ Duplicate comments (3)
deploy/helm/Chart.yaml (1)
26-30: ⚠️ Potential issue: Update PostgreSQL and Prometheus chart versions
The current versions are outdated compared to the latest stable releases.
Apply these updates:
  - condition: postgresql.enabled
    name: postgresql
-   version: 11.9.5
-   appVersion: 14.12.0
+   version: 16.2.2
+   appVersion: 17.2.0
    repository: https://charts.bitnami.com/bitnami
  - condition: prometheus.enabled
    name: prometheus
-   version: 25.27.0
-   appVersion: 2.54.1
+   version: 26.0.0
+   appVersion: v3.0.0
    repository: https://prometheus-community.github.io/helm-charts

Also applies to: 31-35
deploy/helm/templates/deployment.yaml (2)
3-4: ⚠️ Potential issue: Remove duplicate PostgreSQL password variable declaration
The $postgrespass variable is declared twice consecutively.
 {{- $postgrespass := .Values.postgresql.auth.password }}
-{{- $postgrespass := .Values.postgresql.auth.password }}
81-89: 🛠️ Refactor suggestion: Move PostgreSQL init container image version to values.yaml
The PostgreSQL init container uses a hardcoded image version. This should be configurable via values.yaml for better maintainability.
- image: docker.io/bitnami/postgresql:14.5.0-debian-11-r21
+ image: {{ .Values.initContainer.postgresql.defaultImage | default "docker.io/bitnami/postgresql:14.5.0-debian-11-r21" }}
🧹 Nitpick comments (3)
scripts/deploy_preview.sh (1)
102-104: Review resource allocation settings
The current configuration shows:
- Minimal CPU request (1m)
- High memory request (2048Mi)
- No resource limits defined
Consider adding resource limits to prevent potential resource exhaustion in the preview cluster.
  --set podDisruptionBudgets.enabled=false \
  --set resources.requests.memory="2048Mi" \
+ --set resources.limits.cpu="1000m" \
+ --set resources.limits.memory="2560Mi" \

deploy/helm/templates/deployment.yaml (2)
7-7: Document deployment strategy implications
The resource kind switches between StatefulSet and Deployment based on autoscaling. This has important implications for state management and scaling behavior.
Consider adding a comment explaining:
- Why StatefulSet is used when autoscaling is disabled
- The implications of the different update strategies
Also applies to: 18-23
111-126: Review probe timing configuration
The default probe timing values (a 60-second period) seem high for typical web applications. Consider reducing them for faster failure detection; a values sketch follows the list below.
Suggested defaults:
- startupProbe: periodSeconds: 10
- livenessProbe: periodSeconds: 30
- readinessProbe: periodSeconds: 10
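A values sketch with those defaults, assuming the chart exposes the standard probe keys; the field names mirror the Kubernetes probe spec, so confirm them against this chart's values.yaml:

startupProbe:
  periodSeconds: 10   # down from 60; fail fast during boot
livenessProbe:
  periodSeconds: 30   # down from 60; balances detection speed against probe load
readinessProbe:
  periodSeconds: 10   # down from 60; pulls unhealthy pods from rotation quickly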
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- deploy/helm/Chart.yaml (1 hunks)
- deploy/helm/templates/deployment.yaml (4 hunks)
- scripts/deploy_preview.sh (1 hunks)
🧰 Additional context used
🪛 yamllint (1.35.1)
deploy/helm/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[error] 169-169: trailing spaces
(trailing-spaces)
🔇 Additional comments (5)
deploy/helm/Chart.yaml (2)
16-20: LGTM: Redis and MongoDB configurations
The Redis and MongoDB dependency configurations look good with specific versions and proper conditions.
Also applies to: 21-25
14-14: Verify the major version bump from 2.3.0 to 3.5.0
This is a significant version increase that might indicate breaking changes.
✅ Verification successful
Version bump to 3.5.0 appears to be a standard chart version update
The version change from 2.3.0 to 3.5.0 is a Helm chart version update, not related to the application itself. The chart maintains proper semantic versioning and includes well-defined dependencies with specific version constraints from official repositories. All dependencies are properly conditioned with .enabled flags, allowing for optional installation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for breaking changes in the changelog or release notes
echo "Checking for breaking changes documentation..."
fd -t f "CHANGELOG.md|BREAKING-CHANGES.md" -x cat {}
Length of output: 159
Script:
#!/bin/bash
# Check for version-related changes in git history
git log --oneline -S "version:" -- deploy/helm/Chart.yaml
# Check for any upgrade or migration guides
fd -t f -e md upgrade migration guide -x cat {}
# Look for any documentation about versioning
fd -t f -e md version versioning -x cat {}
# Check the Chart.yaml content for any annotations or descriptions
cat deploy/helm/Chart.yaml
Length of output: 1422
scripts/deploy_preview.sh (1)
102-102: LGTM: Appropriate for preview environments
Disabling Pod Disruption Budgets for preview deployments is a sensible choice as these are temporary environments that don't require high availability guarantees.
deploy/helm/templates/deployment.yaml (2)
180-206: Verify persistence configuration compatibility
The persistence configuration varies based on autoscaling and existing claims. Ensure that:
- Data persistence works correctly when switching between StatefulSet and Deployment
- Existing claims are properly handled during upgrades
Consider documenting the data migration process when switching between deployment types.
167-176: Validate CA certificate handling
The custom CA certificate configuration uses a ConfigMap. Ensure that:
- Certificate files are properly formatted
- Certificates are updated when ConfigMap changes
✅ Verification successful
Let me generate additional verification scripts to check the certificate handling implementation.
Let me check the ConfigMap implementation to complete the verification.
CA certificate configuration is properly implemented
The implementation correctly handles custom CA certificates through:
- Proper ConfigMap creation with certificate data
- Correct volume mounting at /appsmith-stacks/ca-certs
- Automatic ConfigMap updates through Kubernetes' built-in configuration management
The configuration follows Kubernetes best practices for certificate management (the volume wiring is sketched after the list below), and no additional validation is needed as:
- Certificates are validated by Kubernetes during ConfigMap creation
- Invalid certificates will fail at runtime, preventing security risks
- ConfigMap updates are automatically propagated to pods
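For reference, the wiring verified above typically takes this shape in the pod spec (abridged); the ConfigMap name is an assumption derived from the -trustedca suffix in the template, where it is actually built by the fullname helper:

volumes:
  - name: trusted-ca
    configMap:
      name: appsmith-trustedca  # assumed release-derived name
containers:
  - name: appsmith
    volumeMounts:
      - name: trusted-ca
        mountPath: /appsmith-stacks/ca-certs
        readOnly: true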
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for proper certificate configuration
rg -l "customCAcert" | xargs rg "tls|ssl|cert"
Length of output: 2745
Script:
#!/bin/bash
# Check for certificate volume mounting and configuration
ast-grep --pattern 'volumeMounts:
$$$
- mountPath: "/appsmith-stacks/ca-certs"
$$$'
# Check for ConfigMap updates handling
rg -B5 -A5 "trustedca|ca-cert" deploy/helm/templates/deployment.yaml
Length of output: 984
Script:
#!/bin/bash
# Check ConfigMap implementation
cat deploy/helm/templates/trustedCA.yaml
# Check for any certificate validation logic
rg -l "certificate.*validation|cert.*check"
Length of output: 702
🧰 Tools
🪛 yamllint (1.35.1)
[error] 169-169: trailing spaces
(trailing-spaces)
{{- $customImage := .Values._image | default dict }}
image: {{ dig "registry" "index.docker.io" $customImage }}/{{ dig "repository" "appsmith/appsmith-ee" $customImage }}:{{ dig "tag" (.Values.image.tag | default "latest") $customImage }}
imagePullPolicy: {{ dig "pullPolicy" "IfNotPresent" $customImage }}
💡 Codebase verification
Image pull secrets configuration required for EE repository
The deployment template expects image pull secrets to be configured via .Values.image.pullSecrets, but the default value is empty (""). Since you're using the enterprise edition repository (appsmith/appsmith-ee), you must configure the pull secrets to access the private repository.
- Add image pull secrets configuration in your values override (the secret it references is sketched below):
image:
  pullSecrets: "your-ee-registry-secret"
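The referenced secret must exist as a docker-registry secret in the release namespace. Its shape, with the payload left as a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: your-ee-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config.json>  # placeholder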
🔗 Analysis chain
Verify EE image repository access
The image configuration points to the enterprise edition repository. Ensure that proper credentials are configured for accessing the EE images.
The previous results show that imagePullSecrets exist, but I need to verify their actual configuration to ensure they're properly set up for the EE repository.
I need to verify the default configuration for image pull secrets to ensure they're properly documented and configured for the EE repository.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify if image pull secrets are properly configured when EE repository is used
rg -l "appsmith/appsmith-ee" | xargs rg "imagePullSecrets"
Length of output: 244
Script:
#!/bin/bash
# Check the values.yaml for image pull secrets configuration
cat deploy/helm/values.yaml | grep -A 10 "imagePullSecrets"
# Check if there are any other image pull secrets configurations in the deployment template
cat deploy/helm/templates/deployment.yaml | grep -A 5 "imagePullSecrets"
Length of output: 766
Script:
#!/bin/bash
# Check the default values for image pull secrets in values.yaml
cat deploy/helm/values.yaml | grep -A 15 "^image:"
# Check if there's any documentation about image pull secrets
cat deploy/helm/README.md 2>/dev/null || cat deploy/helm/values.yaml | grep -A 2 -B 2 "pullSecrets"
Length of output: 16873
Summary by CodeRabbit
Release Notes
New Features
Configuration Improvements
Deployment Enhancements
Version Update
Warning
Tests have not run on the HEAD 0e76b6a yet
Tue, 31 Dec 2024 12:31:29 UTC