From 5d1edf096ee2cbba331458ff0f85360e40c95380 Mon Sep 17 00:00:00 2001 From: Yong Date: Fri, 4 Jul 2025 23:09:02 -0500 Subject: [PATCH 1/7] Fix helm doc --- helm/polaris/README.md | 436 +++++++++++++++------------------- helm/polaris/README.md.gotmpl | 114 +++------ run.sh | 11 +- 3 files changed, 230 insertions(+), 331 deletions(-) diff --git a/helm/polaris/README.md b/helm/polaris/README.md index 2647c30bf2..662de98dcc 100644 --- a/helm/polaris/README.md +++ b/helm/polaris/README.md @@ -27,7 +27,7 @@ # Polaris Helm chart -![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.0.0-incubating-SNAPSHOT](https://img.shields.io/badge/AppVersion-1.0.0--incubating--SNAPSHOT-informational?style=flat-square) +![Version: 1.1.0-incubating-SNAPSHOT](https://img.shields.io/badge/Version-1.1.0--incubating--SNAPSHOT-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.1.0-incubating-SNAPSHOT](https://img.shields.io/badge/AppVersion-1.1.0--incubating--SNAPSHOT-informational?style=flat-square) A Helm chart for Apache Polaris (incubating). @@ -39,15 +39,6 @@ A Helm chart for Apache Polaris (incubating). ## Installation -### Prerequisites - -When using the (deprecated) EclipseLink-backed metastore, a custom `persistence.xml` is required, -and a Kubernetes Secret must be created for it. Below is a sample command: - -```bash -kubectl create secret generic polaris-secret -n polaris --from-file=persistence.xml -``` - ### Running locally with a Kind cluster The below instructions assume Kind and Helm are installed. @@ -58,20 +49,17 @@ Simply run the `run.sh` script from the Polaris repo root: ./run.sh ``` -If using the EclipseLink-backed metastore, make sure to specify the `--eclipse-link-deps` option. 
-
-This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker
-images with support for Postgres and load them into the Kind cluster. (It will also create an
-example Deployment and Service with in-memory storage.)
+This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker images, and load them into the Kind cluster. (It will also create an example Deployment and Service with in-memory storage.)
 
 ### Running locally with a Minikube cluster
 
-The below instructions assume a Minikube cluster is already running and Helm is installed.
+The below instructions assume Minikube and Helm are installed.
 
-If necessary, build and load the Docker images with support for Postgres into Minikube:
+Start the Minikube cluster, then build the Polaris image and load it into the cluster:
 
 ```bash
-eval $(minikube -p minikube docker-env)
+minikube start
+eval $(minikube docker-env)
 
 ./gradlew \
   :polaris-server:assemble \
   ...
   -Dquarkus.container-image.build=true
 ```
 
-### Installing the chart locally
-
-The below instructions assume a local Kubernetes cluster is running and Helm is installed.
-
-#### Common setup
-
-Create the target namespace:
-
-```bash
-kubectl create namespace polaris
-```
-
-Create all the required resources in the `polaris` namespace. This usually includes a Postgres
-database and a Kubernetes Secret for the `persistence.xml` file. The Polaris chart does not create
-these resources automatically, as they are not required for all Polaris deployments. The chart will
-fail if these resources are not created beforehand.
+#### Installing the Helm chart
 
 Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend.
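For production installs, a custom values file is the usual starting point. Below is a minimal sketch of what such a file might look like, using only keys documented in the Values section of this README; the Secret name `polaris-db-credentials` is a hypothetical placeholder for a Secret you would create yourself, and the whole fragment should be adapted to your environment:

```yaml
# custom-values.yaml -- illustrative only; adapt before use.
image:
  repository: apache/polaris
  tag: latest

persistence:
  # Supported types: in-memory, eclipse-link, relational-jdbc
  type: relational-jdbc
  relationalJdbc:
    secret:
      # Name of a pre-created Secret holding database credentials (hypothetical)
      name: polaris-db-credentials
      # Keys inside that Secret
      username: username
      password: password
      jdbcUrl: jdbcUrl
```

Such a file would then be passed to Helm in the usual way, e.g. `helm upgrade --install --namespace polaris --values custom-values.yaml polaris helm/polaris`.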
@@ -105,24 +78,22 @@ Below are two sample deployment models for installing the chart: one with a non-
 > **These files are intended for testing purposes primarily, and may not be suitable for production use**.
 > For production deployments, create your own values files based on the provided examples.
 
-#### Non-persistent backend
+##### Non-persistent backend
 
 Install the chart with a non-persistent backend. From Polaris repo root:
-
 ```bash
 helm upgrade --install --namespace polaris \
-  --debug --values helm/polaris/ci/simple-values.yaml \
-  polaris helm/polaris
+  --values helm/polaris/ci/simple-values.yaml \
+  polaris helm/polaris --create-namespace
 ```
 
 Note: if you are running the tests on a Kind cluster started with the `run.sh` command explained
 above, then you need to run `helm upgrade` as follows:
-
 ```bash
 helm upgrade --install --namespace polaris \
-  --debug --values helm/polaris/ci/simple-values.yaml \
+  --values helm/polaris/ci/simple-values.yaml \
   --set=image.repository=localhost:5001/apache/polaris \
-  polaris helm/polaris
+  polaris helm/polaris --create-namespace
 ```
 
-#### Persistent backend
+##### Persistent backend
 
 > [!WARNING]
 > The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use. For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance.
 
-Install the chart with a persistent backend. From Polaris repo root:
+Install the dependencies from the fixtures directory. From Polaris repo root:
+```bash
+kubectl create namespace polaris
+kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
+```
+Install the chart with a persistent backend.
From Polaris repo root:
 ```bash
 helm upgrade --install --namespace polaris \
-  --debug --values helm/polaris/ci/persistence-values.yaml \
+  --values helm/polaris/ci/persistence-values.yaml \
   polaris helm/polaris
-
 kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=polaris --timeout=120s
 ```
 
-After deploying the chart with a persistent backend, the `persistence.xml` file, originally loaded into the Kubernetes pod via a secret, can be accessed locally if needed. This file contains the persistence configuration required for the next steps. Use the following command to retrieve it:
-
-```bash
-kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- cat /deployments/config/persistence.xml > persistence.xml
-```
-
-The `persistence.xml` file references the Postgres hostname as postgres. Update it to localhost to enable local connections:
-
-```bash
-sed -i .bak 's/postgres:/localhost:/g' persistence.xml
-```
-
-To access Polaris and Postgres locally, set up port forwarding for both services:
+To access Polaris and Postgres locally, set up port forwarding for both services (this is needed for the bootstrap process below):
 
 ```bash
 kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') 8181:8181
 kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=postgres -o jsonpath='{.items[0].metadata.name}') 5432:5432
 ```
 
 Run the catalog bootstrap using the Polaris admin tool.
This step initializes the catalog with the required configuration:
-
 ```bash
-java -Dpolaris.persistence.eclipselink.configuration-file=./persistence.xml \
-  -Dpolaris.persistence.eclipselink.persistence-unit=polaris \
-  -jar runtime/admin/build/polaris-admin-*-runner.jar \
-  bootstrap -c POLARIS,root,pass -r POLARIS
+container_envs=$(kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- env)
+export QUARKUS_DATASOURCE_USERNAME=$(echo "$container_envs" | grep quarkus.datasource.username | awk -F '=' '{print $2}' | tr -d '\n\r')
+export QUARKUS_DATASOURCE_PASSWORD=$(echo "$container_envs" | grep quarkus.datasource.password | awk -F '=' '{print $2}' | tr -d '\n\r')
+export QUARKUS_DATASOURCE_JDBC_URL=$(echo "$container_envs" | grep quarkus.datasource.jdbc.url | sed 's/postgres/localhost/2' | awk -F '=' '{print $2}' | tr -d '\n\r')
+
+java -jar runtime/admin/build/quarkus-app/quarkus-run.jar bootstrap -c POLARIS,root,pass -r POLARIS
 ```
 
 ### Uninstalling
@@ -190,27 +155,18 @@ The following tools are required to run the tests:
 * [Chart Testing](https://github.com/helm/chart-testing)
 
 Quick installation instructions for these tools:
-
 ```bash
 helm plugin install https://github.com/helm-unittest/helm-unittest.git
 brew install chart-testing
 ```
 
-The integration tests also require some fixtures to be deployed. The `ci/fixtures` directory
-contains the required resources. To deploy them, run the following command:
-
-```bash
-kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
-kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
-```
+The integration tests also require some fixtures to be deployed. Follow the commands above to set up the required resources.
 
-The `helm/polaris/ci` contains a number of values files that will be used to install the chart with
-different configurations.
+The `helm/polaris/ci` directory contains a number of values files that will be used to install the chart with different configurations.
 
 ### Running the unit tests
 
-Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from
-the Polaris repo root:
+Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from the Polaris repo root:
 
 ```bash
 helm unittest helm/polaris
@@ -224,13 +180,11 @@ ct lint --charts helm/polaris
 
 ### Running the integration tests
 
-Integration tests require a Kubernetes cluster. See installation instructions above for setting up
-a local cluster.
+Integration tests require a Kubernetes cluster. See the installation instructions above for setting up a local cluster.
 
 Integration tests are run with the Chart Testing tool:
-
 ```bash
-ct install --namespace polaris --debug --charts ./helm/polaris
+ct install --namespace polaris --charts ./helm/polaris
 ```
 
 Note: if you are running the tests on a Kind cluster started with the `run.sh` command explained
@@ -243,166 +197,166 @@ ct install --namespace polaris --debug --charts ./helm/polaris \
 
 ## Values
 
-| Key | Type | Default | Description |
-|-----------------------------------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| advancedConfig | object | `{}` | Advanced configuration. You can pass here any valid Polaris or Quarkus configuration property. Any property that is defined here takes precedence over all the other configuration values generated by this chart. Properties can be passed "flattened" or as nested YAML objects (see examples below). Note: values should be strings; avoid using numbers, booleans, or other types. | -| affinity | object | `{}` | Affinity and anti-affinity for polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity. 
| -| authentication | object | `{"authenticator":{"type":"default"},"tokenBroker":{"maxTokenGeneration":"PT1H","secret":{"name":null,"privateKey":"private.pem","publicKey":"public.pem","secretKey":"secret"},"type":"rsa-key-pair"},"tokenService":{"type":"default"}}` | Polaris authentication configuration. | -| authentication.authenticator | object | `{"type":"default"}` | The type of authentication to use. Two built-in types are supported: default and test; test is not recommended for production. | -| authentication.tokenBroker | object | `{"maxTokenGeneration":"PT1H","secret":{"name":null,"privateKey":"private.pem","publicKey":"public.pem","secretKey":"secret"},"type":"rsa-key-pair"}` | The type of token broker to use. Two built-in types are supported: rsa-key-pair and symmetric-key. | -| authentication.tokenBroker.secret | object | `{"name":null,"privateKey":"private.pem","publicKey":"public.pem","secretKey":"secret"}` | The secret name to pull the public and private keys, or the symmetric key secret from. | -| authentication.tokenBroker.secret.name | string | `nil` | The name of the secret to pull the keys from. If not provided, a key pair will be generated. This is not recommended for production. | -| authentication.tokenBroker.secret.privateKey | string | `"private.pem"` | The private key file to use for RSA key pair token broker. Only required when using rsa-key-pair. | -| authentication.tokenBroker.secret.publicKey | string | `"public.pem"` | The public key file to use for RSA key pair token broker. Only required when using rsa-key-pair. | -| authentication.tokenBroker.secret.secretKey | string | `"secret"` | The symmetric key file to use for symmetric key token broker. Only required when using symmetric-key. | -| authentication.tokenService | object | `{"type":"default"}` | The type of token service to use. Two built-in types are supported: default and test; test is not recommended for production. 
| -| autoscaling.enabled | bool | `false` | Specifies whether automatic horizontal scaling should be enabled. Do not enable this when using in-memory version store type. | -| autoscaling.maxReplicas | int | `3` | The maximum number of replicas to maintain. | -| autoscaling.minReplicas | int | `1` | The minimum number of replicas to maintain. | -| autoscaling.targetCPUUtilizationPercentage | int | `80` | Optional; set to zero or empty to disable. | -| autoscaling.targetMemoryUtilizationPercentage | string | `nil` | Optional; set to zero or empty to disable. | -| configMapLabels | object | `{}` | Additional Labels to apply to polaris configmap. | -| containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"runAsNonRoot":true,"runAsUser":10000,"seccompProfile":{"type":"RuntimeDefault"}}` | Security context for the polaris container. See https://kubernetes.io/docs/tasks/configure-pod-container/security-context/. | -| containerSecurityContext.runAsUser | int | `10000` | UID 10000 is compatible with Polaris OSS default images; change this if you are using a different image. | -| cors | object | `{"accessControlAllowCredentials":null,"accessControlMaxAge":null,"allowedHeaders":[],"allowedMethods":[],"allowedOrigins":[],"exposedHeaders":[]}` | Polaris CORS configuration. | -| cors.accessControlAllowCredentials | string | `nil` | The `Access-Control-Allow-Credentials` response header. The value of this header will default to `true` if `allowedOrigins` property is set and there is a match with the precise `Origin` header. | -| cors.accessControlMaxAge | string | `nil` | The `Access-Control-Max-Age` response header value indicating how long the results of a pre-flight request can be cached. Must be a valid duration. | -| cors.allowedHeaders | list | `[]` | HTTP headers allowed for CORS, ex: X-Custom, Content-Disposition. If this is not set or empty, all requested headers are considered allowed. 
| -| cors.allowedMethods | list | `[]` | HTTP methods allowed for CORS, ex: GET, PUT, POST. If this is not set or empty, all requested methods are considered allowed. | -| cors.allowedOrigins | list | `[]` | Origins allowed for CORS, e.g. http://polaris.apache.org, http://localhost:8181. In case an entry of the list is surrounded by forward slashes, it is interpreted as a regular expression. | -| cors.exposedHeaders | list | `[]` | HTTP headers exposed to the client, ex: X-Custom, Content-Disposition. The default is an empty list. | -| extraEnv | list | `[]` | Advanced configuration via Environment Variables. Extra environment variables to add to the Polaris server container. You can pass here any valid EnvVar object: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#envvar-v1-core This can be useful to get configuration values from Kubernetes secrets or config maps. | -| extraInitContainers | list | `[]` | Add additional init containers to the polaris pod(s) See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/. | -| extraServices | list | `[]` | Additional service definitions. All service definitions always select all Polaris pods. Use this if you need to expose specific ports with different configurations, e.g. expose polaris-http with an alternate LoadBalancer service instead of ClusterIP. | -| extraVolumeMounts | list | `[]` | Extra volume mounts to add to the polaris container. See https://kubernetes.io/docs/concepts/storage/volumes/. | -| extraVolumes | list | `[]` | Extra volumes to add to the polaris pod. See https://kubernetes.io/docs/concepts/storage/volumes/. | -| features | object | `{"realmOverrides":{}}` | Polaris features configuration. | -| features.realmOverrides | object | `{}` | Features to enable or disable per realm. This field is a map of maps. The realm name is the key, and the value is a map of feature names to values. 
If a feature is not present in the map, the default value from the 'defaults' field is used. | -| fileIo | object | `{"type":"default"}` | Polaris FileIO configuration. | -| fileIo.type | string | `"default"` | The type of file IO to use. Two built-in types are supported: default and wasb. The wasb one translates WASB paths to ABFS ones. | -| image.configDir | string | `"/deployments/config"` | The path to the directory where the application.properties file, and other configuration files, if any, should be mounted. Note: if you are using EclipseLink, then this value must be at least two folders down to the root folder, e.g. `/deployments/config` is OK, whereas `/deployments` is not. | -| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | -| image.repository | string | `"apache/polaris"` | The image repository to pull from. | -| image.tag | string | `"latest"` | The image tag. | -| imagePullSecrets | list | `[]` | References to secrets in the same namespace to use for pulling any of the images used by this chart. Each entry is a LocalObjectReference to an existing secret in the namespace. The secret must contain a .dockerconfigjson key with a base64-encoded Docker configuration file. See https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ for more information. | -| ingress.annotations | object | `{}` | Annotations to add to the ingress. | -| ingress.className | string | `""` | Specifies the ingressClassName; leave empty if you don't want to customize it | -| ingress.enabled | bool | `false` | Specifies whether an ingress should be created. | -| ingress.hosts | list | `[{"host":"chart-example.local","paths":[]}]` | A list of host paths used to configure the ingress. | -| ingress.tls | list | `[]` | A list of TLS certificates; each entry has a list of hosts in the certificate, along with the secret name used to terminate TLS traffic on port 443. 
| -| livenessProbe | object | `{"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"terminationGracePeriodSeconds":30,"timeoutSeconds":10}` | Configures the liveness probe for polaris pods. | -| livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded. Minimum value is 1. | -| livenessProbe.initialDelaySeconds | int | `5` | Number of seconds after the container has started before liveness probes are initiated. Minimum value is 0. | -| livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the probe. Minimum value is 1. | -| livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1. | -| livenessProbe.terminationGracePeriodSeconds | int | `30` | Optional duration in seconds the pod needs to terminate gracefully upon probe failure. Minimum value is 1. | -| livenessProbe.timeoutSeconds | int | `10` | Number of seconds after which the probe times out. Minimum value is 1. | -| logging | object | `{"categories":{"org.apache.iceberg.rest":"INFO","org.apache.polaris":"INFO"},"console":{"enabled":true,"format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"threshold":"ALL"},"file":{"enabled":false,"fileName":"polaris.log","format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"logsDir":"/deployments/logs","rotation":{"fileSuffix":null,"maxBackupIndex":5,"maxFileSize":"100Mi"},"storage":{"className":"standard","selectorLabels":{},"size":"512Gi"},"threshold":"ALL"},"level":"INFO","mdc":{},"requestIdHeaderName":"Polaris-Request-Id"}` | Logging configuration. 
| -| logging.categories | object | `{"org.apache.iceberg.rest":"INFO","org.apache.polaris":"INFO"}` | Configuration for specific log categories. | -| logging.console | object | `{"enabled":true,"format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"threshold":"ALL"}` | Configuration for the console appender. | -| logging.console.enabled | bool | `true` | Whether to enable the console appender. | -| logging.console.format | string | `"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n"` | The log format to use. Ignored if JSON format is enabled. See https://quarkus.io/guides/logging#logging-format for details. | -| logging.console.json | bool | `false` | Whether to log in JSON format. | -| logging.console.threshold | string | `"ALL"` | The log level of the console appender. | -| logging.file | object | `{"enabled":false,"fileName":"polaris.log","format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"logsDir":"/deployments/logs","rotation":{"fileSuffix":null,"maxBackupIndex":5,"maxFileSize":"100Mi"},"storage":{"className":"standard","selectorLabels":{},"size":"512Gi"},"threshold":"ALL"}` | Configuration for the file appender. | -| logging.file.enabled | bool | `false` | Whether to enable the file appender. | -| logging.file.fileName | string | `"polaris.log"` | The log file name. | -| logging.file.format | string | `"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n"` | The log format to use. Ignored if JSON format is enabled. See https://quarkus.io/guides/logging#logging-format for details. | -| logging.file.json | bool | `false` | Whether to log in JSON format. 
| -| logging.file.logsDir | string | `"/deployments/logs"` | The local directory where log files are stored. The persistent volume claim will be mounted here. | -| logging.file.rotation | object | `{"fileSuffix":null,"maxBackupIndex":5,"maxFileSize":"100Mi"}` | Log rotation configuration. | -| logging.file.rotation.fileSuffix | string | `nil` | An optional suffix to append to the rotated log files. If present, the rotated log files will be grouped in time buckets, and each bucket will contain at most maxBackupIndex files. The suffix must be in a date-time format that is understood by DateTimeFormatter. If the suffix ends with .gz or .zip, the rotated files will also be compressed using the corresponding algorithm. | -| logging.file.rotation.maxBackupIndex | int | `5` | The maximum number of backup files to keep. | -| logging.file.rotation.maxFileSize | string | `"100Mi"` | The maximum size of the log file before it is rotated. Should be expressed as a Kubernetes quantity. | -| logging.file.storage | object | `{"className":"standard","selectorLabels":{},"size":"512Gi"}` | The log storage configuration. A persistent volume claim will be created using these settings. | -| logging.file.storage.className | string | `"standard"` | The storage class name of the persistent volume claim to create. | -| logging.file.storage.selectorLabels | object | `{}` | Labels to add to the persistent volume claim spec selector; a persistent volume with matching labels must exist. Leave empty if using dynamic provisioning. | -| logging.file.storage.size | string | `"512Gi"` | The size of the persistent volume claim to create. | -| logging.file.threshold | string | `"ALL"` | The log level of the file appender. | -| logging.level | string | `"INFO"` | The log level of the root category, which is used as the default log level for all categories. | -| logging.mdc | object | `{}` | Configuration for MDC (Mapped Diagnostic Context). 
Values specified here will be added to the log context of all incoming requests and can be used in log patterns. | -| logging.requestIdHeaderName | string | `"Polaris-Request-Id"` | The header name to use for the request ID. | -| managementService | object | `{"annotations":{},"clusterIP":"None","externalTrafficPolicy":null,"internalTrafficPolicy":null,"ports":[{"name":"polaris-mgmt","nodePort":null,"port":8182,"protocol":null,"targetPort":null}],"sessionAffinity":null,"trafficDistribution":null,"type":"ClusterIP"}` | Management service settings. These settings are used to configure liveness and readiness probes, and to configure the dedicated headless service that will expose health checks and metrics, e.g. for metrics scraping and service monitoring. | -| managementService.annotations | object | `{}` | Annotations to add to the service. | -| managementService.clusterIP | string | `"None"` | By default, the management service is headless, i.e. it does not have a cluster IP. This is generally the right option for exposing health checks and metrics, e.g. for metrics scraping and service monitoring. | -| managementService.ports | list | `[{"name":"polaris-mgmt","nodePort":null,"port":8182,"protocol":null,"targetPort":null}]` | The ports the management service will listen on. At least one port is required; the first port implicitly becomes the HTTP port that the application will use for serving management requests. By default, it's 8182. Note: port names must be unique and no more than 15 characters long. | -| managementService.ports[0] | object | `{"name":"polaris-mgmt","nodePort":null,"port":8182,"protocol":null,"targetPort":null}` | The name of the management port. Required. | -| managementService.ports[0].nodePort | string | `nil` | The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If not specified, a port will be allocated if this Service requires one. 
If this field is specified when creating a Service which does not need it, creation will fail. | -| managementService.ports[0].port | int | `8182` | The port the management service listens on. By default, the management interface is exposed on HTTP port 8182. | -| managementService.ports[0].protocol | string | `nil` | The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. | -| managementService.ports[0].targetPort | string | `nil` | Number or name of the port to access on the pods targeted by the service. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used. | -| managementService.type | string | `"ClusterIP"` | The type of service to create. Valid values are: ExternalName, ClusterIP, NodePort, and LoadBalancer. The default value is ClusterIP. | -| metrics.enabled | bool | `true` | Specifies whether metrics for the polaris server should be enabled. | -| metrics.tags | object | `{}` | Additional tags (dimensional labels) to add to the metrics. | -| nodeSelector | object | `{}` | Node labels which must match for the polaris pod to be scheduled on that node. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector. | -| persistence | object | `{"eclipseLink":{"persistenceUnit":"polaris","secret":{"key":"persistence.xml","name":null}},"relationalJdbc":{"secret":{"jdbcUrl":null,"name":null,"password":null,"username":null}},"type":"eclipse-link"}` | Polaris persistence configuration. | -| persistence.eclipseLink | object | `{"persistenceUnit":"polaris","secret":{"key":"persistence.xml","name":null}}` | The configuration for the eclipse-link persistence manager. | -| persistence.eclipseLink.persistenceUnit | string | `"polaris"` | The persistence unit name to use. | -| persistence.eclipseLink.secret | object | `{"key":"persistence.xml","name":null}` | The secret name to pull persistence.xml from. 
| -| persistence.eclipseLink.secret.key | string | `"persistence.xml"` | The key in the secret to pull persistence.xml from. | -| persistence.eclipseLink.secret.name | string | `nil` | The name of the secret to pull persistence.xml from. If not provided, the default built-in persistence.xml will be used. This is probably not what you want. | -| persistence.relationalJdbc | object | `{"secret":{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}}` | The configuration for the relational-jdbc persistence manager. | -| persistence.relationalJdbc.secret | object | `{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}` | The secret containing database connection credentials. | -| persistence.relationalJdbc.secret.jdbcUrl | string | `"jdbcUrl"` | The key in the secret containing the JDBC connection URL. | -| persistence.relationalJdbc.secret.name | string | `nil` | The name of the secret containing database credentials. If not provided, you must configure database connection details via other means. | -| persistence.relationalJdbc.secret.password | string | `"password"` | The key in the secret containing the database password. | -| persistence.relationalJdbc.secret.username | string | `"username"` | The key in the secret containing the database username. | -| persistence.type | string | `"in-memory"` | The type of persistence to use. Supported types: in-memory, eclipse-link, relational-jdbc. | -| podAnnotations | object | `{}` | Annotations to apply to polaris pods. | -| podLabels | object | `{}` | Additional Labels to apply to polaris pods. | -| podSecurityContext | object | `{"fsGroup":10001,"seccompProfile":{"type":"RuntimeDefault"}}` | Security context for the polaris pod. See https://kubernetes.io/docs/tasks/configure-pod-container/security-context/. | -| podSecurityContext.fsGroup | int | `10001` | GID 10001 is compatible with Polaris OSS default images; change this if you are using a different image. 
| -| rateLimiter | object | `{"tokenBucket":{"requestsPerSecond":9999,"type":"default","window":"PT10S"},"type":"no-op"}` | Polaris rate limiter configuration. | -| rateLimiter.tokenBucket | object | `{"requestsPerSecond":9999,"type":"default","window":"PT10S"}` | The configuration for the default rate limiter, which uses the token bucket algorithm with one bucket per realm. | -| rateLimiter.tokenBucket.requestsPerSecond | int | `9999` | The maximum number of requests per second allowed for each realm. | -| rateLimiter.tokenBucket.type | string | `"default"` | The type of the token bucket rate limiter. Only the default type is supported out of the box. | -| rateLimiter.tokenBucket.window | string | `"PT10S"` | The time window. | -| rateLimiter.type | string | `"no-op"` | The type of rate limiter filter to use. Two built-in types are supported: default and no-op. | -| readinessProbe | object | `{"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10}` | Configures the readiness probe for polaris pods. | -| readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded. Minimum value is 1. | -| readinessProbe.initialDelaySeconds | int | `5` | Number of seconds after the container has started before readiness probes are initiated. Minimum value is 0. | -| readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the probe. Minimum value is 1. | -| readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1. | -| readinessProbe.timeoutSeconds | int | `10` | Number of seconds after which the probe times out. Minimum value is 1. | -| realmContext | object | `{"realms":["POLARIS"],"type":"default"}` | Realm context resolver configuration. 
| -| realmContext.realms | list | `["POLARIS"]` | List of valid realms, for use with the default realm context resolver. The first realm in the list is the default realm. Realms not in this list will be rejected. | -| realmContext.type | string | `"default"` | The type of realm context resolver to use. Two built-in types are supported: default and test; test is not recommended for production as it does not perform any realm validation. | -| replicaCount | int | `1` | The number of replicas to deploy (horizontal scaling). Beware that replicas are stateless; don't set this number > 1 when using in-memory meta store manager. | -| resources | object | `{}` | Configures the resources requests and limits for polaris pods. We usually recommend not to specify default resources and to leave this as a conscious choice for the user. This also increases chances charts run on environments with little resources, such as Minikube. If you do want to specify resources, uncomment the following lines, adjust them as necessary, and remove the curly braces after 'resources:'. | -| revisionHistoryLimit | string | `nil` | The number of old ReplicaSets to retain to allow rollback (if not set, the default Kubernetes value is set to 10). | -| service | object | `{"annotations":{},"clusterIP":null,"externalTrafficPolicy":null,"internalTrafficPolicy":null,"ports":[{"name":"polaris-http","nodePort":null,"port":8181,"protocol":null,"targetPort":null}],"sessionAffinity":null,"trafficDistribution":null,"type":"ClusterIP"}` | Polaris main service settings. | -| service.annotations | object | `{}` | Annotations to add to the service. | -| service.clusterIP | string | `nil` | You can specify your own cluster IP address If you define a Service that has the .spec.clusterIP set to "None" then Kubernetes does not assign an IP address. Instead, DNS records for the service will return the IP addresses of each pod targeted by the server. This is called a headless service. 
See https://kubernetes.io/docs/concepts/services-networking/service/#headless-services | -| service.externalTrafficPolicy | string | `nil` | Controls how traffic from external sources is routed. Valid values are Cluster and Local. The default value is Cluster. Set the field to Cluster to route traffic to all ready endpoints. Set the field to Local to only route to ready node-local endpoints. If the traffic policy is Local and there are no node-local endpoints, traffic is dropped by kube-proxy. | -| service.internalTrafficPolicy | string | `nil` | Controls how traffic from internal sources is routed. Valid values are Cluster and Local. The default value is Cluster. Set the field to Cluster to route traffic to all ready endpoints. Set the field to Local to only route to ready node-local endpoints. If the traffic policy is Local and there are no node-local endpoints, traffic is dropped by kube-proxy. | -| service.ports | list | `[{"name":"polaris-http","nodePort":null,"port":8181,"protocol":null,"targetPort":null}]` | The ports the service will listen on. At least one port is required; the first port implicitly becomes the HTTP port that the application will use for serving API requests. By default, it's 8181. Note: port names must be unique and no more than 15 characters long. | -| service.ports[0] | object | `{"name":"polaris-http","nodePort":null,"port":8181,"protocol":null,"targetPort":null}` | The name of the port. Required. | -| service.ports[0].nodePort | string | `nil` | The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. | -| service.ports[0].port | int | `8181` | The port the service listens on. By default, the HTTP port is 8181. | -| service.ports[0].protocol | string | `nil` | The IP protocol for this port. 
Supports "TCP", "UDP", and "SCTP". Default is TCP. | -| service.ports[0].targetPort | string | `nil` | Number or name of the port to access on the pods targeted by the service. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used. | -| service.sessionAffinity | string | `nil` | The session affinity for the service. Valid values are: None, ClientIP. The default value is None. ClientIP enables sticky sessions based on the client's IP address. This is generally beneficial to Polaris deployments, but some testing may be required in order to make sure that the load is distributed evenly among the pods. Also, this setting affects only internal clients, not external ones. If Ingress is enabled, it is recommended to set sessionAffinity to None. | -| service.trafficDistribution | string | `nil` | The traffic distribution field provides another way to influence traffic routing within a Kubernetes Service. While traffic policies focus on strict semantic guarantees, traffic distribution allows you to express preferences such as routing to topologically closer endpoints. The only valid value is: PreferClose. The default value is implementation-specific. | -| service.type | string | `"ClusterIP"` | The type of service to create. Valid values are: ExternalName, ClusterIP, NodePort, and LoadBalancer. The default value is ClusterIP. | -| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. | -| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. | -| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. | -| serviceMonitor.enabled | bool | `true` | Specifies whether a ServiceMonitor for Prometheus operator should be created. 
| -| serviceMonitor.interval | string | `""` | The scrape interval; leave empty to let Prometheus decide. Must be a valid duration, e.g. 1d, 1h30m, 5m, 10s. | -| serviceMonitor.labels | object | `{}` | Labels for the created ServiceMonitor so that Prometheus operator can properly pick it up. | -| serviceMonitor.metricRelabelings | list | `[]` | Relabeling rules to apply to metrics. Ref https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config. | -| storage | object | `{"secret":{"awsAccessKeyId":null,"awsSecretAccessKey":null,"gcpToken":null,"gcpTokenLifespan":null,"name":null}}` | Storage credentials for the server. If the following properties are unset, default credentials will be used, in which case the pod must have the necessary permissions to access the storage. | -| storage.secret | object | `{"awsAccessKeyId":null,"awsSecretAccessKey":null,"gcpToken":null,"gcpTokenLifespan":null,"name":null}` | The secret to pull storage credentials from. | -| storage.secret.awsAccessKeyId | string | `nil` | The key in the secret to pull the AWS access key ID from. Only required when using AWS. | -| storage.secret.awsSecretAccessKey | string | `nil` | The key in the secret to pull the AWS secret access key from. Only required when using AWS. | -| storage.secret.gcpToken | string | `nil` | The key in the secret to pull the GCP token from. Only required when using GCP. | -| storage.secret.gcpTokenLifespan | string | `nil` | The key in the secret to pull the GCP token expiration time from. Only required when using GCP. Must be a valid ISO 8601 duration. The default is PT1H (1 hour). | -| storage.secret.name | string | `nil` | The name of the secret to pull storage credentials from. | -| tasks | object | `{"maxConcurrentTasks":null,"maxQueuedTasks":null}` | Polaris asynchronous task executor configuration. | -| tasks.maxConcurrentTasks | string | `nil` | The maximum number of concurrent tasks that can be executed at the same time. 
The default is the number of available cores. | -| tasks.maxQueuedTasks | string | `nil` | The maximum number of tasks that can be queued up for execution. The default is Integer.MAX_VALUE. | -| tolerations | list | `[]` | A list of tolerations to apply to polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/. | -| tracing.attributes | object | `{}` | Resource attributes to identify the polaris service among other tracing sources. See https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/#service. If left empty, traces will be attached to a service named "Apache Polaris"; to change this, provide a service.name attribute here. | -| tracing.enabled | bool | `false` | Specifies whether tracing for the polaris server should be enabled. | -| tracing.endpoint | string | `"http://otlp-collector:4317"` | The collector endpoint URL to connect to (required). The endpoint URL must have either the http:// or the https:// scheme. The collector must talk the OpenTelemetry protocol (OTLP) and the port must be its gRPC port (by default 4317). See https://quarkus.io/guides/opentelemetry for more information. | -| tracing.sample | string | `"1.0d"` | Which requests should be sampled. Valid values are: "all", "none", or a ratio between 0.0 and "1.0d" (inclusive). E.g. "0.5d" means that 50% of the requests will be sampled. Note: avoid entering numbers here, always prefer a string representation of the ratio. | +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| advancedConfig | object | `{}` | Advanced configuration. You can pass here any valid Polaris or Quarkus configuration property. Any property that is defined here takes precedence over all the other configuration values generated by this chart. Properties can be passed "flattened" or as nested YAML objects (see examples below). Note: values should be strings; avoid using numbers, booleans, or other types. 
| +| affinity | object | `{}` | Affinity and anti-affinity for polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity. | +| authentication | object | `{"authenticator":{"type":"default"},"tokenBroker":{"maxTokenGeneration":"PT1H","secret":{"name":null,"privateKey":"private.pem","publicKey":"public.pem","secretKey":"secret"},"type":"rsa-key-pair"},"tokenService":{"type":"default"}}` | Polaris authentication configuration. | +| authentication.authenticator | object | `{"type":"default"}` | The type of authentication to use. Two built-in types are supported: default and test; test is not recommended for production. | +| authentication.tokenBroker | object | `{"maxTokenGeneration":"PT1H","secret":{"name":null,"privateKey":"private.pem","publicKey":"public.pem","secretKey":"secret"},"type":"rsa-key-pair"}` | The type of token broker to use. Two built-in types are supported: rsa-key-pair and symmetric-key. | +| authentication.tokenBroker.secret | object | `{"name":null,"privateKey":"private.pem","publicKey":"public.pem","secretKey":"secret"}` | The secret name to pull the public and private keys, or the symmetric key secret from. | +| authentication.tokenBroker.secret.name | string | `nil` | The name of the secret to pull the keys from. If not provided, a key pair will be generated. This is not recommended for production. | +| authentication.tokenBroker.secret.privateKey | string | `"private.pem"` | The private key file to use for RSA key pair token broker. Only required when using rsa-key-pair. | +| authentication.tokenBroker.secret.publicKey | string | `"public.pem"` | The public key file to use for RSA key pair token broker. Only required when using rsa-key-pair. | +| authentication.tokenBroker.secret.secretKey | string | `"secret"` | The symmetric key file to use for symmetric key token broker. Only required when using symmetric-key. 
| +| authentication.tokenService | object | `{"type":"default"}` | The type of token service to use. Two built-in types are supported: default and test; test is not recommended for production. | +| autoscaling.enabled | bool | `false` | Specifies whether automatic horizontal scaling should be enabled. Do not enable this when using in-memory version store type. | +| autoscaling.maxReplicas | int | `3` | The maximum number of replicas to maintain. | +| autoscaling.minReplicas | int | `1` | The minimum number of replicas to maintain. | +| autoscaling.targetCPUUtilizationPercentage | int | `80` | Optional; set to zero or empty to disable. | +| autoscaling.targetMemoryUtilizationPercentage | string | `nil` | Optional; set to zero or empty to disable. | +| configMapLabels | object | `{}` | Additional labels to apply to the polaris configmap. | +| containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"runAsNonRoot":true,"runAsUser":10000,"seccompProfile":{"type":"RuntimeDefault"}}` | Security context for the polaris container. See https://kubernetes.io/docs/tasks/configure-pod-container/security-context/. | +| containerSecurityContext.runAsUser | int | `10000` | UID 10000 is compatible with Polaris OSS default images; change this if you are using a different image. | +| cors | object | `{"accessControlAllowCredentials":null,"accessControlMaxAge":null,"allowedHeaders":[],"allowedMethods":[],"allowedOrigins":[],"exposedHeaders":[]}` | Polaris CORS configuration. | +| cors.accessControlAllowCredentials | string | `nil` | The `Access-Control-Allow-Credentials` response header. The value of this header will default to `true` if the `allowedOrigins` property is set and there is a match with the precise `Origin` header. | +| cors.accessControlMaxAge | string | `nil` | The `Access-Control-Max-Age` response header value indicating how long the results of a pre-flight request can be cached. Must be a valid duration. 
| +| cors.allowedHeaders | list | `[]` | HTTP headers allowed for CORS, ex: X-Custom, Content-Disposition. If this is not set or empty, all requested headers are considered allowed. | +| cors.allowedMethods | list | `[]` | HTTP methods allowed for CORS, ex: GET, PUT, POST. If this is not set or empty, all requested methods are considered allowed. | +| cors.allowedOrigins | list | `[]` | Origins allowed for CORS, e.g. http://polaris.apache.org, http://localhost:8181. In case an entry of the list is surrounded by forward slashes, it is interpreted as a regular expression. | +| cors.exposedHeaders | list | `[]` | HTTP headers exposed to the client, ex: X-Custom, Content-Disposition. The default is an empty list. | +| extraEnv | list | `[]` | Advanced configuration via environment variables. Extra environment variables to add to the Polaris server container. You can pass here any valid EnvVar object: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#envvar-v1-core. This can be useful to get configuration values from Kubernetes secrets or config maps. | +| extraInitContainers | list | `[]` | Add additional init containers to the polaris pod(s). See https://kubernetes.io/docs/concepts/workloads/pods/init-containers/. | +| extraServices | list | `[]` | Additional service definitions. All service definitions always select all Polaris pods. Use this if you need to expose specific ports with different configurations, e.g. expose polaris-http with an alternate LoadBalancer service instead of ClusterIP. | +| extraVolumeMounts | list | `[]` | Extra volume mounts to add to the polaris container. See https://kubernetes.io/docs/concepts/storage/volumes/. | +| extraVolumes | list | `[]` | Extra volumes to add to the polaris pod. See https://kubernetes.io/docs/concepts/storage/volumes/. | +| features | object | `{"realmOverrides":{}}` | Polaris features configuration. | +| features.realmOverrides | object | `{}` | Features to enable or disable per realm. 
This field is a map of maps. The realm name is the key, and the value is a map of feature names to values. If a feature is not present in the map, the default value from the 'defaults' field is used. | +| fileIo | object | `{"type":"default"}` | Polaris FileIO configuration. | +| fileIo.type | string | `"default"` | The type of file IO to use. Two built-in types are supported: default and wasb. The wasb one translates WASB paths to ABFS ones. | +| image.configDir | string | `"/deployments/config"` | The path to the directory where the application.properties file, and other configuration files, if any, should be mounted. Note: if you are using EclipseLink, then this value must be at least two directory levels below the root, e.g. `/deployments/config` is OK, whereas `/deployments` is not. | +| image.pullPolicy | string | `"IfNotPresent"` | The image pull policy. | +| image.repository | string | `"apache/polaris"` | The image repository to pull from. | +| image.tag | string | `"1.1.0-incubating-SNAPSHOT"` | The image tag. | +| imagePullSecrets | list | `[]` | References to secrets in the same namespace to use for pulling any of the images used by this chart. Each entry is a LocalObjectReference to an existing secret in the namespace. The secret must contain a .dockerconfigjson key with a base64-encoded Docker configuration file. See https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ for more information. | +| ingress.annotations | object | `{}` | Annotations to add to the ingress. | +| ingress.className | string | `""` | Specifies the ingressClassName; leave empty if you don't want to customize it. | +| ingress.enabled | bool | `false` | Specifies whether an ingress should be created. | +| ingress.hosts | list | `[{"host":"chart-example.local","paths":[]}]` | A list of host paths used to configure the ingress. 
| +| ingress.tls | list | `[]` | A list of TLS certificates; each entry has a list of hosts in the certificate, along with the secret name used to terminate TLS traffic on port 443. | +| livenessProbe | object | `{"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"terminationGracePeriodSeconds":30,"timeoutSeconds":10}` | Configures the liveness probe for polaris pods. | +| livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded. Minimum value is 1. | +| livenessProbe.initialDelaySeconds | int | `5` | Number of seconds after the container has started before liveness probes are initiated. Minimum value is 0. | +| livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the probe. Minimum value is 1. | +| livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1. | +| livenessProbe.terminationGracePeriodSeconds | int | `30` | Optional duration in seconds the pod needs to terminate gracefully upon probe failure. Minimum value is 1. | +| livenessProbe.timeoutSeconds | int | `10` | Number of seconds after which the probe times out. Minimum value is 1. 
| +| logging | object | `{"categories":{"org.apache.iceberg.rest":"INFO","org.apache.polaris":"INFO"},"console":{"enabled":true,"format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"threshold":"ALL"},"file":{"enabled":false,"fileName":"polaris.log","format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"logsDir":"/deployments/logs","rotation":{"fileSuffix":null,"maxBackupIndex":5,"maxFileSize":"100Mi"},"storage":{"className":"standard","selectorLabels":{},"size":"512Gi"},"threshold":"ALL"},"level":"INFO","mdc":{},"requestIdHeaderName":"Polaris-Request-Id"}` | Logging configuration. | +| logging.categories | object | `{"org.apache.iceberg.rest":"INFO","org.apache.polaris":"INFO"}` | Configuration for specific log categories. | +| logging.console | object | `{"enabled":true,"format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"threshold":"ALL"}` | Configuration for the console appender. | +| logging.console.enabled | bool | `true` | Whether to enable the console appender. | +| logging.console.format | string | `"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n"` | The log format to use. Ignored if JSON format is enabled. See https://quarkus.io/guides/logging#logging-format for details. | +| logging.console.json | bool | `false` | Whether to log in JSON format. | +| logging.console.threshold | string | `"ALL"` | The log level of the console appender. 
| +| logging.file | object | `{"enabled":false,"fileName":"polaris.log","format":"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n","json":false,"logsDir":"/deployments/logs","rotation":{"fileSuffix":null,"maxBackupIndex":5,"maxFileSize":"100Mi"},"storage":{"className":"standard","selectorLabels":{},"size":"512Gi"},"threshold":"ALL"}` | Configuration for the file appender. | +| logging.file.enabled | bool | `false` | Whether to enable the file appender. | +| logging.file.fileName | string | `"polaris.log"` | The log file name. | +| logging.file.format | string | `"%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] [%X{requestId},%X{realmId}] [%X{traceId},%X{parentId},%X{spanId},%X{sampled}] (%t) %s%e%n"` | The log format to use. Ignored if JSON format is enabled. See https://quarkus.io/guides/logging#logging-format for details. | +| logging.file.json | bool | `false` | Whether to log in JSON format. | +| logging.file.logsDir | string | `"/deployments/logs"` | The local directory where log files are stored. The persistent volume claim will be mounted here. | +| logging.file.rotation | object | `{"fileSuffix":null,"maxBackupIndex":5,"maxFileSize":"100Mi"}` | Log rotation configuration. | +| logging.file.rotation.fileSuffix | string | `nil` | An optional suffix to append to the rotated log files. If present, the rotated log files will be grouped in time buckets, and each bucket will contain at most maxBackupIndex files. The suffix must be in a date-time format that is understood by DateTimeFormatter. If the suffix ends with .gz or .zip, the rotated files will also be compressed using the corresponding algorithm. | +| logging.file.rotation.maxBackupIndex | int | `5` | The maximum number of backup files to keep. | +| logging.file.rotation.maxFileSize | string | `"100Mi"` | The maximum size of the log file before it is rotated. Should be expressed as a Kubernetes quantity. 
| +| logging.file.storage | object | `{"className":"standard","selectorLabels":{},"size":"512Gi"}` | The log storage configuration. A persistent volume claim will be created using these settings. | +| logging.file.storage.className | string | `"standard"` | The storage class name of the persistent volume claim to create. | +| logging.file.storage.selectorLabels | object | `{}` | Labels to add to the persistent volume claim spec selector; a persistent volume with matching labels must exist. Leave empty if using dynamic provisioning. | +| logging.file.storage.size | string | `"512Gi"` | The size of the persistent volume claim to create. | +| logging.file.threshold | string | `"ALL"` | The log level of the file appender. | +| logging.level | string | `"INFO"` | The log level of the root category, which is used as the default log level for all categories. | +| logging.mdc | object | `{}` | Configuration for MDC (Mapped Diagnostic Context). Values specified here will be added to the log context of all incoming requests and can be used in log patterns. | +| logging.requestIdHeaderName | string | `"Polaris-Request-Id"` | The header name to use for the request ID. | +| managementService | object | `{"annotations":{},"clusterIP":"None","externalTrafficPolicy":null,"internalTrafficPolicy":null,"ports":[{"name":"polaris-mgmt","nodePort":null,"port":8182,"protocol":null,"targetPort":null}],"sessionAffinity":null,"trafficDistribution":null,"type":"ClusterIP"}` | Management service settings. These settings are used to configure liveness and readiness probes, and to configure the dedicated headless service that will expose health checks and metrics, e.g. for metrics scraping and service monitoring. | +| managementService.annotations | object | `{}` | Annotations to add to the service. | +| managementService.clusterIP | string | `"None"` | By default, the management service is headless, i.e. it does not have a cluster IP. 
This is generally the right option for exposing health checks and metrics, e.g. for metrics scraping and service monitoring. | +| managementService.ports | list | `[{"name":"polaris-mgmt","nodePort":null,"port":8182,"protocol":null,"targetPort":null}]` | The ports the management service will listen on. At least one port is required; the first port implicitly becomes the HTTP port that the application will use for serving management requests. By default, it's 8182. Note: port names must be unique and no more than 15 characters long. | +| managementService.ports[0] | object | `{"name":"polaris-mgmt","nodePort":null,"port":8182,"protocol":null,"targetPort":null}` | The name of the management port. Required. | +| managementService.ports[0].nodePort | string | `nil` | The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. | +| managementService.ports[0].port | int | `8182` | The port the management service listens on. By default, the management interface is exposed on HTTP port 8182. | +| managementService.ports[0].protocol | string | `nil` | The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. | +| managementService.ports[0].targetPort | string | `nil` | Number or name of the port to access on the pods targeted by the service. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used. | +| managementService.type | string | `"ClusterIP"` | The type of service to create. Valid values are: ExternalName, ClusterIP, NodePort, and LoadBalancer. The default value is ClusterIP. | +| metrics.enabled | bool | `true` | Specifies whether metrics for the polaris server should be enabled. 
| +| metrics.tags | object | `{}` | Additional tags (dimensional labels) to add to the metrics. | +| nodeSelector | object | `{}` | Node labels which must match for the polaris pod to be scheduled on that node. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector. | +| persistence | object | `{"eclipseLink":{"persistenceUnit":"polaris","secret":{"key":"persistence.xml","name":null}},"relationalJdbc":{"secret":{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}},"type":"in-memory"}` | Polaris persistence configuration. | +| persistence.eclipseLink | object | `{"persistenceUnit":"polaris","secret":{"key":"persistence.xml","name":null}}` | The configuration for the eclipse-link persistence manager. | +| persistence.eclipseLink.persistenceUnit | string | `"polaris"` | The persistence unit name to use. | +| persistence.eclipseLink.secret | object | `{"key":"persistence.xml","name":null}` | The secret name to pull persistence.xml from. | +| persistence.eclipseLink.secret.key | string | `"persistence.xml"` | The key in the secret to pull persistence.xml from. | +| persistence.eclipseLink.secret.name | string | `nil` | The name of the secret to pull persistence.xml from. If not provided, the default built-in persistence.xml will be used. This is probably not what you want. | +| persistence.relationalJdbc | object | `{"secret":{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}}` | The configuration for the relational-jdbc persistence manager. | +| persistence.relationalJdbc.secret | object | `{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}` | The secret name to pull the database connection properties from. 
| +| persistence.relationalJdbc.secret.jdbcUrl | string | `"jdbcUrl"` | The secret key holding the database JDBC connection URL. | +| persistence.relationalJdbc.secret.name | string | `nil` | The secret name to pull database connection properties from. | +| persistence.relationalJdbc.secret.password | string | `"password"` | The secret key holding the database password for authentication. | +| persistence.relationalJdbc.secret.username | string | `"username"` | The secret key holding the database username for authentication. | +| persistence.type | string | `"in-memory"` | The type of persistence to use. Two built-in types are supported: in-memory and relational-jdbc. The eclipse-link type is also supported but is deprecated. | +| podAnnotations | object | `{}` | Annotations to apply to polaris pods. | +| podLabels | object | `{}` | Additional labels to apply to polaris pods. | +| podSecurityContext | object | `{"fsGroup":10001,"seccompProfile":{"type":"RuntimeDefault"}}` | Security context for the polaris pod. See https://kubernetes.io/docs/tasks/configure-pod-container/security-context/. | +| podSecurityContext.fsGroup | int | `10001` | GID 10001 is compatible with Polaris OSS default images; change this if you are using a different image. | +| rateLimiter | object | `{"tokenBucket":{"requestsPerSecond":9999,"type":"default","window":"PT10S"},"type":"no-op"}` | Polaris rate limiter configuration. | +| rateLimiter.tokenBucket | object | `{"requestsPerSecond":9999,"type":"default","window":"PT10S"}` | The configuration for the default rate limiter, which uses the token bucket algorithm with one bucket per realm. | +| rateLimiter.tokenBucket.requestsPerSecond | int | `9999` | The maximum number of requests per second allowed for each realm. | +| rateLimiter.tokenBucket.type | string | `"default"` | The type of the token bucket rate limiter. Only the default type is supported out of the box. | +| rateLimiter.tokenBucket.window | string | `"PT10S"` | The time window. 
| +| rateLimiter.type | string | `"no-op"` | The type of rate limiter filter to use. Two built-in types are supported: default and no-op. | +| readinessProbe | object | `{"failureThreshold":3,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10}` | Configures the readiness probe for polaris pods. | +| readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the probe to be considered failed after having succeeded. Minimum value is 1. | +| readinessProbe.initialDelaySeconds | int | `5` | Number of seconds after the container has started before readiness probes are initiated. Minimum value is 0. | +| readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the probe. Minimum value is 1. | +| readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1. | +| readinessProbe.timeoutSeconds | int | `10` | Number of seconds after which the probe times out. Minimum value is 1. | +| realmContext | object | `{"realms":["POLARIS"],"type":"default"}` | Realm context resolver configuration. | +| realmContext.realms | list | `["POLARIS"]` | List of valid realms, for use with the default realm context resolver. The first realm in the list is the default realm. Realms not in this list will be rejected. | +| realmContext.type | string | `"default"` | The type of realm context resolver to use. Two built-in types are supported: default and test; test is not recommended for production as it does not perform any realm validation. | +| replicaCount | int | `1` | The number of replicas to deploy (horizontal scaling). Beware that replicas are stateless; don't set this number > 1 when using in-memory meta store manager. | +| resources | object | `{}` | Configures the resources requests and limits for polaris pods. We usually recommend not to specify default resources and to leave this as a conscious choice for the user. 
This also increases chances charts run on environments with little resources, such as Minikube. If you do want to specify resources, uncomment the following lines, adjust them as necessary, and remove the curly braces after 'resources:'. | +| revisionHistoryLimit | string | `nil` | The number of old ReplicaSets to retain to allow rollback (if not set, the default Kubernetes value is set to 10). | +| service | object | `{"annotations":{},"clusterIP":null,"externalTrafficPolicy":null,"internalTrafficPolicy":null,"ports":[{"name":"polaris-http","nodePort":null,"port":8181,"protocol":null,"targetPort":null}],"sessionAffinity":null,"trafficDistribution":null,"type":"ClusterIP"}` | Polaris main service settings. | +| service.annotations | object | `{}` | Annotations to add to the service. | +| service.clusterIP | string | `nil` | You can specify your own cluster IP address If you define a Service that has the .spec.clusterIP set to "None" then Kubernetes does not assign an IP address. Instead, DNS records for the service will return the IP addresses of each pod targeted by the server. This is called a headless service. See https://kubernetes.io/docs/concepts/services-networking/service/#headless-services | +| service.externalTrafficPolicy | string | `nil` | Controls how traffic from external sources is routed. Valid values are Cluster and Local. The default value is Cluster. Set the field to Cluster to route traffic to all ready endpoints. Set the field to Local to only route to ready node-local endpoints. If the traffic policy is Local and there are no node-local endpoints, traffic is dropped by kube-proxy. | +| service.internalTrafficPolicy | string | `nil` | Controls how traffic from internal sources is routed. Valid values are Cluster and Local. The default value is Cluster. Set the field to Cluster to route traffic to all ready endpoints. Set the field to Local to only route to ready node-local endpoints. 
If the traffic policy is Local and there are no node-local endpoints, traffic is dropped by kube-proxy. | +| service.ports | list | `[{"name":"polaris-http","nodePort":null,"port":8181,"protocol":null,"targetPort":null}]` | The ports the service will listen on. At least one port is required; the first port implicitly becomes the HTTP port that the application will use for serving API requests. By default, it's 8181. Note: port names must be unique and no more than 15 characters long. | +| service.ports[0] | object | `{"name":"polaris-http","nodePort":null,"port":8181,"protocol":null,"targetPort":null}` | The name of the port. Required. | +| service.ports[0].nodePort | string | `nil` | The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. | +| service.ports[0].port | int | `8181` | The port the service listens on. By default, the HTTP port is 8181. | +| service.ports[0].protocol | string | `nil` | The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. | +| service.ports[0].targetPort | string | `nil` | Number or name of the port to access on the pods targeted by the service. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used. | +| service.sessionAffinity | string | `nil` | The session affinity for the service. Valid values are: None, ClientIP. The default value is None. ClientIP enables sticky sessions based on the client's IP address. This is generally beneficial to Polaris deployments, but some testing may be required in order to make sure that the load is distributed evenly among the pods. Also, this setting affects only internal clients, not external ones. 
If Ingress is enabled, it is recommended to set sessionAffinity to None. | +| service.trafficDistribution | string | `nil` | The traffic distribution field provides another way to influence traffic routing within a Kubernetes Service. While traffic policies focus on strict semantic guarantees, traffic distribution allows you to express preferences such as routing to topologically closer endpoints. The only valid value is: PreferClose. The default value is implementation-specific. | +| service.type | string | `"ClusterIP"` | The type of service to create. Valid values are: ExternalName, ClusterIP, NodePort, and LoadBalancer. The default value is ClusterIP. | +| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. | +| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. | +| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. | +| serviceMonitor.enabled | bool | `true` | Specifies whether a ServiceMonitor for Prometheus operator should be created. | +| serviceMonitor.interval | string | `""` | The scrape interval; leave empty to let Prometheus decide. Must be a valid duration, e.g. 1d, 1h30m, 5m, 10s. | +| serviceMonitor.labels | object | `{}` | Labels for the created ServiceMonitor so that Prometheus operator can properly pick it up. | +| serviceMonitor.metricRelabelings | list | `[]` | Relabeling rules to apply to metrics. Ref https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config. | +| storage | object | `{"secret":{"awsAccessKeyId":null,"awsSecretAccessKey":null,"gcpToken":null,"gcpTokenLifespan":null,"name":null}}` | Storage credentials for the server. If the following properties are unset, default credentials will be used, in which case the pod must have the necessary permissions to access the storage. 
| +| storage.secret | object | `{"awsAccessKeyId":null,"awsSecretAccessKey":null,"gcpToken":null,"gcpTokenLifespan":null,"name":null}` | The secret to pull storage credentials from. | +| storage.secret.awsAccessKeyId | string | `nil` | The key in the secret to pull the AWS access key ID from. Only required when using AWS. | +| storage.secret.awsSecretAccessKey | string | `nil` | The key in the secret to pull the AWS secret access key from. Only required when using AWS. | +| storage.secret.gcpToken | string | `nil` | The key in the secret to pull the GCP token from. Only required when using GCP. | +| storage.secret.gcpTokenLifespan | string | `nil` | The key in the secret to pull the GCP token expiration time from. Only required when using GCP. Must be a valid ISO 8601 duration. The default is PT1H (1 hour). | +| storage.secret.name | string | `nil` | The name of the secret to pull storage credentials from. | +| tasks | object | `{"maxConcurrentTasks":null,"maxQueuedTasks":null}` | Polaris asynchronous task executor configuration. | +| tasks.maxConcurrentTasks | string | `nil` | The maximum number of concurrent tasks that can be executed at the same time. The default is the number of available cores. | +| tasks.maxQueuedTasks | string | `nil` | The maximum number of tasks that can be queued up for execution. The default is Integer.MAX_VALUE. | +| tolerations | list | `[]` | A list of tolerations to apply to polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/. | +| tracing.attributes | object | `{}` | Resource attributes to identify the polaris service among other tracing sources. See https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/#service. If left empty, traces will be attached to a service named "Apache Polaris"; to change this, provide a service.name attribute here. | +| tracing.enabled | bool | `false` | Specifies whether tracing for the polaris server should be enabled. 
| +| tracing.endpoint | string | `"http://otlp-collector:4317"` | The collector endpoint URL to connect to (required). The endpoint URL must have either the http:// or the https:// scheme. The collector must talk the OpenTelemetry protocol (OTLP) and the port must be its gRPC port (by default 4317). See https://quarkus.io/guides/opentelemetry for more information. | +| tracing.sample | string | `"1.0d"` | Which requests should be sampled. Valid values are: "all", "none", or a ratio between 0.0 and "1.0d" (inclusive). E.g. "0.5d" means that 50% of the requests will be sampled. Note: avoid entering numbers here, always prefer a string representation of the ratio. | diff --git a/helm/polaris/README.md.gotmpl b/helm/polaris/README.md.gotmpl index c676663f56..af6ed5f3bb 100644 --- a/helm/polaris/README.md.gotmpl +++ b/helm/polaris/README.md.gotmpl @@ -35,27 +35,12 @@ {{ template "chart.homepageLine" . }} -{{ template "chart.maintainersHeader" . }} - -{{- range .Maintainers }} -* [{{ .Name }}]({{ if .Url }}{{ .Url }}{{ else }}https://github.com/{{ .Name }}{{ end }}) -{{- end }} - {{ template "chart.sourcesSection" . }} {{ template "chart.requirementsSection" . }} ## Installation -### Prerequisites - -When using the (deprecated) EclipseLink-backed metastore, a custom `persistence.xml` is required, -and a Kubernetes Secret must be created for it. Below is a sample command: - -```bash -kubectl create secret generic polaris-secret -n polaris --from-file=persistence.xml -``` - ### Running locally with a Kind cluster The below instructions assume Kind and Helm are installed. @@ -66,20 +51,17 @@ Simply run the `run.sh` script from the Polaris repo root: ./run.sh ``` -If using the EclipseLink-backed metastore, make sure to specify the `--eclipse-link-deps` option. - -This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker -images with support for Postgres and load them into the Kind cluster. 
(It will also create an -example Deployment and Service with in-memory storage.) +This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker images and load them into the Kind cluster (It will also create an example Deployment and Service with in-memory storage.) ### Running locally with a Minikube cluster -The below instructions assume a Minikube cluster is already running and Helm is installed. +The below instructions assume Minikube and Helm are installed. -If necessary, build and load the Docker images with support for Postgres into Minikube: +Start the Minikube cluster, build and load image into the Minikube cluster: ```bash -eval $(minikube -p minikube docker-env) +minikube start +eval $(minikube docker-env) ./gradlew \ :polaris-server:assemble \ @@ -89,22 +71,7 @@ eval $(minikube -p minikube docker-env) -Dquarkus.container-image.build=true ``` -### Installing the chart locally - -The below instructions assume a local Kubernetes cluster is running and Helm is installed. - -#### Common setup - -Create the target namespace: - -```bash -kubectl create namespace polaris -``` - -Create all the required resources in the `polaris` namespace. This usually includes a Postgres -database and a Kubernetes Secret for the `persistence.xml` file. The Polaris chart does not create -these resources automatically, as they are not required for all Polaris deployments. The chart will -fail if these resources are not created beforehand. +#### Installing the Helm chart Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend. @@ -113,24 +80,22 @@ Below are two sample deployment models for installing the chart: one with a non- > **These files are intended for testing purposes primarily, and may not be suitable for production use**. > For production deployments, create your own values files based on the provided examples. 
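As a sketch of what such a custom values file could contain: the key paths below come from the chart's values table earlier in this patch, while the image repository, secret name, and realm list are placeholders to adapt (this is an illustration, not a file shipped with the chart):

```yaml
# custom-values.yaml (hypothetical) -- a minimal starting point for a
# production-oriented install; each key path mirrors the documented values.
image:
  repository: apache/polaris        # placeholder; point at your own registry if needed
replicaCount: 1                     # keep at 1 while using the in-memory meta store
persistence:
  type: relational-jdbc             # in-memory (the default) does not survive restarts
  relationalJdbc:
    secret:
      name: polaris-persistence     # Secret holding the username/password/jdbcUrl keys
realmContext:
  type: default
  realms:
    - POLARIS                       # the first entry is the default realm
```

Passing this file via `helm upgrade --install --values custom-values.yaml ...` overrides only the listed keys; everything else keeps the chart defaults.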
-#### Non-persistent backend +##### Non-persistent backend Install the chart with a non-persistent backend. From Polaris repo root: - ```bash helm upgrade --install --namespace polaris \ - --debug --values helm/polaris/ci/simple-values.yaml \ - polaris helm/polaris + --values helm/polaris/ci/simple-values.yaml \ + polaris helm/polaris --create-namespace polaris ``` Note: if you are running the tests on a Kind cluster started with the `run.sh` command explained above, then you need to run `helm upgrade` as follows: - ```bash helm upgrade --install --namespace polaris \ - --debug --values helm/polaris/ci/simple-values.yaml \ + --values helm/polaris/ci/simple-values.yaml \ --set=image.repository=localhost:5001/apache/polaris \ - polaris helm/polaris + polaris helm/polaris --create-namespace polaris ``` #### Persistent backend @@ -138,29 +103,22 @@ helm upgrade --install --namespace polaris \ > [!WARNING] > The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use. For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance. -Install the chart with a persistent backend. From Polaris repo root: +Install the dependencies in fixtures directory. From Polaris repo root: +```bash +kubectl create namespace polaris +kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/ +kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s +``` +Install the chart with a persistent backend. 
From Polaris repo root:
```bash
helm upgrade --install --namespace polaris \
-  --debug --values helm/polaris/ci/persistence-values.yaml \
+  --values helm/polaris/ci/persistence-values.yaml \
  polaris helm/polaris
-
kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=polaris --timeout=120s
```

-After deploying the chart with a persistent backend, the `persistence.xml` file, originally loaded into the Kubernetes pod via a secret, can be accessed locally if needed. This file contains the persistence configuration required for the next steps. Use the following command to retrieve it:
-
-```bash
-kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- cat /deployments/config/persistence.xml > persistence.xml
-```
-
-The `persistence.xml` file references the Postgres hostname as postgres. Update it to localhost to enable local connections:
-
-```bash
-sed -i .bak 's/postgres:/localhost:/g' persistence.xml
-```
-
-To access Polaris and Postgres locally, set up port forwarding for both services:
+To access Polaris and Postgres locally, set up port forwarding for both services (this is needed for the bootstrap process):

```bash
kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') 8181:8181
@@ -168,12 +126,13 @@ kubectl port-forward -n polaris $(kubectl get pod -n polaris -l app.kubernetes.i
This step initializes the catalog with the required configuration: - ```bash -java -Dpolaris.persistence.eclipselink.configuration-file=./persistence.xml \ - -Dpolaris.persistence.eclipselink.persistence-unit=polaris \ - -jar runtime/admin/build/polaris-admin-*-runner.jar \ - bootstrap -c POLARIS,root,pass -r POLARIS +container_envs=$(kubectl exec -it -n polaris $(kubectl get pod -n polaris -l app.kubernetes.io/name=polaris -o jsonpath='{.items[0].metadata.name}') -- env) +export QUARKUS_DATASOURCE_USERNAME=$(echo "$container_envs" | grep quarkus.datasource.username | awk -F '=' '{print $2}' | tr -d '\n\r') +export QUARKUS_DATASOURCE_PASSWORD=$(echo "$container_envs" | grep quarkus.datasource.password | awk -F '=' '{print $2}' | tr -d '\n\r') +export QUARKUS_DATASOURCE_JDBC_URL=$(echo "$container_envs" | grep quarkus.datasource.jdbc.url | sed 's/postgres/localhost/2' | awk -F '=' '{print $2}' | tr -d '\n\r') + +java -jar runtime/admin/build/quarkus-app/quarkus-run.jar bootstrap -c POLARIS,root,pass -r POLARIS ``` ### Uninstalling @@ -198,27 +157,18 @@ The following tools are required to run the tests: * [Chart Testing](https://github.com/helm/chart-testing) Quick installation instructions for these tools: - ```bash helm plugin install https://github.com/helm-unittest/helm-unittest.git brew install chart-testing ``` -The integration tests also require some fixtures to be deployed. The `ci/fixtures` directory -contains the required resources. To deploy them, run the following command: +The integration tests also require some fixtures to be deployed. Follow the above commands to setup required resources. -```bash -kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/ -kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s -``` - -The `helm/polaris/ci` contains a number of values files that will be used to install the chart with -different configurations. 
+The `helm/polaris/ci` contains a number of values files that will be used to install the chart with different configurations.

### Running the unit tests

-Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from
-the Polaris repo root:
+Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from the Polaris repo root:
```bash
helm unittest helm/polaris
@@ -232,13 +182,11 @@ ct lint --charts helm/polaris

### Running the integration tests

-Integration tests require a Kubernetes cluster. See installation instructions above for setting up
-a local cluster.
+Integration tests require a Kubernetes cluster. See installation instructions above for setting upa local cluster.

Integration tests are run with the Chart Testing tool:
-
```bash
-ct install --namespace polaris --debug --charts ./helm/polaris
+ct install --namespace polaris --charts ./helm/polaris
```

Note: if you are running the tests on a Kind cluster started with the `run.sh` command explained
diff --git a/run.sh b/run.sh
index aeee47e93c..3d5c17fc3a 100755
--- a/run.sh
+++ b/run.sh
@@ -24,8 +24,6 @@
# Function to display usage information
usage() {
- echo "Usage: $0 [--eclipse-link-deps=] [-h|--help]"
- echo " --eclipse-link-deps= EclipseLink dependencies to use, e.g."
- echo " --eclipse-link-deps=com.h2database:h2:2.3.232"
+ echo "Usage: $0 [-h|--help]"
echo " -h, --help Display this help message"
exit 1
}
@@ -33,9 +31,6 @@ usage() {
# Parse command-line arguments
while [[ "$#" -gt 0 ]]; do
case $1 in
- --eclipse-link-deps=*)
- ECLIPSE_LINK_DEPS="-PeclipseLinkDeps=${1#*=}"
- ;;
-h|--help)
usage
;;
@@ -53,9 +48,11 @@ sh ./kind-registry.sh

# Build and deploy the server image
echo "Building polaris image..."
./gradlew \ - :polaris-server:build \ + :polaris-server:assemble \ :polaris-server:quarkusAppPartsBuild --rerun \ - $ECLIPSE_LINK_DEPS \ + :polaris-admin:assemble \ + :polaris-admin:quarkusAppPartsBuild --rerun \ + -Dquarkus.container-image.tag=postgres-latest \ -Dquarkus.container-image.build=true \ -Dquarkus.container-image.registry=localhost:5001 From 2e47ef29993702d44fd7fdc016dc5c5767c50f47 Mon Sep 17 00:00:00 2001 From: Yong Date: Fri, 4 Jul 2025 23:21:32 -0500 Subject: [PATCH 2/7] Remove persistent ref --- helm/polaris/README.md | 7 +-- helm/polaris/templates/_helpers.tpl | 7 --- helm/polaris/templates/configmap.yaml | 11 +---- helm/polaris/tests/configmap_test.yaml | 8 ---- helm/polaris/tests/deployment_test.yaml | 58 ------------------------- helm/polaris/values.yaml | 12 ----- 6 files changed, 3 insertions(+), 100 deletions(-) diff --git a/helm/polaris/README.md b/helm/polaris/README.md index 662de98dcc..b7472a389b 100644 --- a/helm/polaris/README.md +++ b/helm/polaris/README.md @@ -289,12 +289,7 @@ ct install --namespace polaris --debug --charts ./helm/polaris \ | metrics.enabled | bool | `true` | Specifies whether metrics for the polaris server should be enabled. | | metrics.tags | object | `{}` | Additional tags (dimensional labels) to add to the metrics. | | nodeSelector | object | `{}` | Node labels which must match for the polaris pod to be scheduled on that node. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector. | -| persistence | object | `{"eclipseLink":{"persistenceUnit":"polaris","secret":{"key":"persistence.xml","name":null}},"relationalJdbc":{"secret":{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}},"type":"in-memory"}` | Polaris persistence configuration. | -| persistence.eclipseLink | object | `{"persistenceUnit":"polaris","secret":{"key":"persistence.xml","name":null}}` | The configuration for the eclipse-link persistence manager. 
| -| persistence.eclipseLink.persistenceUnit | string | `"polaris"` | The persistence unit name to use. | -| persistence.eclipseLink.secret | object | `{"key":"persistence.xml","name":null}` | The secret name to pull persistence.xml from. | -| persistence.eclipseLink.secret.key | string | `"persistence.xml"` | The key in the secret to pull persistence.xml from. | -| persistence.eclipseLink.secret.name | string | `nil` | The name of the secret to pull persistence.xml from. If not provided, the default built-in persistence.xml will be used. This is probably not what you want. | +| persistence | object | `{"relationalJdbc":{"secret":{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}},"type":"in-memory"}` | Polaris persistence configuration. | | persistence.relationalJdbc | object | `{"secret":{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}}` | The configuration for the relational-jdbc persistence manager. | | persistence.relationalJdbc.secret | object | `{"jdbcUrl":"jdbcUrl","name":null,"password":"password","username":"username"}` | The secret name to pull the database connection properties from. | | persistence.relationalJdbc.secret.jdbcUrl | string | `"jdbcUrl"` | The secret key holding the database JDBC connection URL | diff --git a/helm/polaris/templates/_helpers.tpl b/helm/polaris/templates/_helpers.tpl index d16b1d6d46..dab6da3f2b 100644 --- a/helm/polaris/templates/_helpers.tpl +++ b/helm/polaris/templates/_helpers.tpl @@ -191,13 +191,6 @@ Prints the config volume definition for deployments and jobs. path: symmetric.key {{- end }} {{- end }} - {{- if and ( eq .Values.persistence.type "eclipse-link" ) .Values.persistence.eclipseLink.secret.name }} - - secret: - name: {{ tpl .Values.persistence.eclipseLink.secret.name . }} - items: - - key: {{ tpl .Values.persistence.eclipseLink.secret.key . 
}} - path: persistence.xml - {{- end }} {{- end -}} {{/* diff --git a/helm/polaris/templates/configmap.yaml b/helm/polaris/templates/configmap.yaml index 6d12fbd7e1..7b07f9e69c 100644 --- a/helm/polaris/templates/configmap.yaml +++ b/helm/polaris/templates/configmap.yaml @@ -6,9 +6,9 @@ to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - + http://www.apache.org/licenses/LICENSE-2.0 - + Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY @@ -47,13 +47,6 @@ data: {{- end -}} {{- end -}} - {{- /* Persistence */ -}} - {{- $_ = set $map "polaris.persistence.type" .Values.persistence.type -}} - {{- if and ( eq .Values.persistence.type "eclipse-link" ) .Values.persistence.eclipseLink.secret.name -}} - {{- $_ = set $map "polaris.persistence.eclipselink.persistence-unit" .Values.persistence.eclipseLink.persistenceUnit -}} - {{- $_ = set $map "polaris.persistence.eclipselink.configuration-file" (printf "%s/persistence.xml" .Values.image.configDir ) -}} - {{- end -}} - {{- /* File IO */ -}} {{- $_ = set $map "polaris.file-io.type" .Values.fileIo.type -}} diff --git a/helm/polaris/tests/configmap_test.yaml b/helm/polaris/tests/configmap_test.yaml index 875469dac5..3834982a5d 100644 --- a/helm/polaris/tests/configmap_test.yaml +++ b/helm/polaris/tests/configmap_test.yaml @@ -102,14 +102,6 @@ tests: - matchRegex: { path: 'data["application.properties"]', pattern: "polaris.features.realm-overrides.\"realm1\".\"feature1\"=false" } - matchRegex: { path: 'data["application.properties"]', pattern: "polaris.features.realm-overrides.\"realm2\".\"feature2\"=43" } - - it: should configure persistence - set: - persistence: { type: "eclipse-link", eclipseLink: { persistenceUnit: "polaris", secret: { name: "polaris-persistence" } } } - asserts: - - 
matchRegex: { path: 'data["application.properties"]', pattern: "polaris.persistence.type=eclipse-link" } - - matchRegex: { path: 'data["application.properties"]', pattern: "polaris.persistence.eclipselink.persistence-unit=polaris" } - - matchRegex: { path: 'data["application.properties"]', pattern: "polaris.persistence.eclipselink.configuration-file=/deployments/config/persistence.xml" } - - it: should configure relational-jdbc persistence set: persistence: { type: "relational-jdbc", relationalJdbc: { secret: { name: "polaris-persistence" } } } diff --git a/helm/polaris/tests/deployment_test.yaml b/helm/polaris/tests/deployment_test.yaml index 10f9fd7789..99e2ec35ba 100644 --- a/helm/polaris/tests/deployment_test.yaml +++ b/helm/polaris/tests/deployment_test.yaml @@ -356,31 +356,6 @@ tests: mountPath: /deployments/config readOnly: true - - it: should evaluate template expressions in persistence secret name - set: - persistence: - type: eclipse-link - eclipseLink: - secret: - name: "{{ .Release.Name }}-persistence-secret" - asserts: - - contains: - path: spec.template.spec.volumes - content: - name: config-volume - projected: - sources: - - configMap: - items: - - key: application.properties - path: application.properties - name: polaris-release - - secret: - items: - - key: persistence.xml - path: persistence.xml - name: polaris-release-persistence-secret - # spec.template.spec.containers[0].ports - it: should set container ports by default asserts: @@ -987,39 +962,6 @@ tests: - key: private.key path: private.pem - - it: should configure config volume with persistence secret - set: - image.configDir: /config/dir - persistence: - type: eclipse-link - eclipseLink: - secret: - name: polaris-persistence - key: custom.xml - asserts: - - contains: - path: spec.template.spec.containers[0].volumeMounts - content: - name: config-volume - mountPath: /config/dir - readOnly: true - - contains: - path: spec.template.spec.volumes - content: - name: config-volume - projected: - 
sources: - - configMap: - name: polaris-release - items: - - key: application.properties - path: application.properties - - secret: - name: polaris-persistence - items: - - key: custom.xml - path: persistence.xml - - it: should set relational-jdbc persistence environment variables set: persistence: { type: "relational-jdbc", relationalJdbc: { secret: { name: "polaris-persistence", username: "username", password: "password", jdbcUrl: "jdbcUrl" } } } diff --git a/helm/polaris/values.yaml b/helm/polaris/values.yaml index 0f79e87748..a0440fe2ec 100644 --- a/helm/polaris/values.yaml +++ b/helm/polaris/values.yaml @@ -537,18 +537,6 @@ persistence: # -- The secret key holding the database JDBC connection URL jdbcUrl: jdbcUrl - # -- The configuration for the eclipse-link persistence manager. - eclipseLink: - # -- The secret name to pull persistence.xml from. - secret: - # -- The name of the secret to pull persistence.xml from. - # If not provided, the default built-in persistence.xml will be used. This is probably not what you want. - name: ~ - # -- The key in the secret to pull persistence.xml from. - key: persistence.xml - # -- The persistence unit name to use. - persistenceUnit: polaris - # -- Polaris FileIO configuration. fileIo: # -- The type of file IO to use. Two built-in types are supported: default and wasb. The wasb one translates WASB paths to ABFS ones. 
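With the eclipse-link secret wiring removed by this patch, the `relational-jdbc` secret described in the values table is the remaining way to hand database credentials to the chart. Assuming the default key names (`username`, `password`, `jdbcUrl`) from `persistence.relationalJdbc.secret`, such a Secret might look like this sketch (the name and credential values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: polaris-persistence        # must match persistence.relationalJdbc.secret.name
  namespace: polaris
type: Opaque
stringData:                        # stringData avoids manual base64 encoding
  username: polaris                # key named by persistence.relationalJdbc.secret.username
  password: change-me              # key named by persistence.relationalJdbc.secret.password
  jdbcUrl: jdbc:postgresql://postgres:5432/polaris   # key named by ...secret.jdbcUrl
```

Judging by the chart's deployment tests, these keys are surfaced to the server container as datasource environment variables.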
From 597f99e22756677653743cc0379aeba01db8924a Mon Sep 17 00:00:00 2001 From: Yong Date: Fri, 4 Jul 2025 23:32:20 -0500 Subject: [PATCH 3/7] Remove persistent ref --- helm/polaris/templates/configmap.yaml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/helm/polaris/templates/configmap.yaml b/helm/polaris/templates/configmap.yaml index 7b07f9e69c..a69531ec4f 100644 --- a/helm/polaris/templates/configmap.yaml +++ b/helm/polaris/templates/configmap.yaml @@ -47,6 +47,9 @@ data: {{- end -}} {{- end -}} + {{- /* Persistence */ -}} + {{- $_ = set $map "polaris.persistence.type" .Values.persistence.type -}} + {{- /* File IO */ -}} {{- $_ = set $map "polaris.file-io.type" .Values.fileIo.type -}} From 0fc84529ff5dd3684fd53b2ddc7324b18bc88e7a Mon Sep 17 00:00:00 2001 From: Yong Date: Mon, 7 Jul 2025 12:06:00 -0500 Subject: [PATCH 4/7] Fixes based on feedback --- helm/polaris/README.md | 2 +- helm/polaris/README.md.gotmpl | 2 +- helm/polaris/templates/configmap.yaml | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/helm/polaris/README.md b/helm/polaris/README.md index b7472a389b..977cf91af0 100644 --- a/helm/polaris/README.md +++ b/helm/polaris/README.md @@ -180,7 +180,7 @@ ct lint --charts helm/polaris ### Running the integration tests -Integration tests require a Kubernetes cluster. See installation instructions above for setting upa local cluster. +Integration tests require a Kubernetes cluster. See installation instructions above for setting up a local cluster. Integration tests are run with the Chart Testing tool: ```bash diff --git a/helm/polaris/README.md.gotmpl b/helm/polaris/README.md.gotmpl index af6ed5f3bb..ad6635e5b2 100644 --- a/helm/polaris/README.md.gotmpl +++ b/helm/polaris/README.md.gotmpl @@ -182,7 +182,7 @@ ct lint --charts helm/polaris ### Running the integration tests -Integration tests require a Kubernetes cluster. See installation instructions above for setting upa local cluster. +Integration tests require a Kubernetes cluster. 
See installation instructions above for setting up a local cluster. Integration tests are run with the Chart Testing tool: ```bash diff --git a/helm/polaris/templates/configmap.yaml b/helm/polaris/templates/configmap.yaml index a69531ec4f..a3ec774419 100644 --- a/helm/polaris/templates/configmap.yaml +++ b/helm/polaris/templates/configmap.yaml @@ -6,9 +6,9 @@ to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - + http://www.apache.org/licenses/LICENSE-2.0 - + Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY From 6ec59bfd3130738ecaed7bf18943bddfb016db9d Mon Sep 17 00:00:00 2001 From: Yong Date: Mon, 7 Jul 2025 15:03:10 -0500 Subject: [PATCH 5/7] Fixes based on feedback --- helm/polaris/README.md | 57 ++++++++++++++++++++++------------- helm/polaris/README.md.gotmpl | 57 ++++++++++++++++++++++------------- 2 files changed, 72 insertions(+), 42 deletions(-) diff --git a/helm/polaris/README.md b/helm/polaris/README.md index 977cf91af0..72080b692d 100644 --- a/helm/polaris/README.md +++ b/helm/polaris/README.md @@ -62,14 +62,29 @@ minikube start eval $(minikube docker-env) ./gradlew \ - :polaris-server:assemble \ - :polaris-server:quarkusAppPartsBuild --rerun \ - :polaris-admin:assemble \ - :polaris-admin:quarkusAppPartsBuild --rerun \ - -Dquarkus.container-image.build=true + :polaris-server:assemble \ + :polaris-server:quarkusAppPartsBuild --rerun \ + :polaris-admin:assemble \ + :polaris-admin:quarkusAppPartsBuild --rerun \ + -Dquarkus.container-image.tag=postgres-latest \ + -Dquarkus.container-image.build=true ``` -#### Installing the Helm chart +### Installing the chart locally + +The below instructions assume a local Kubernetes cluster is running and Helm is installed. 
+
+#### Common setup
+
+Create the target namespace:
+```bash
+kubectl create namespace polaris
+```
+
+Create all the required resources in the `polaris` namespace. This usually includes a Postgres
+database and a Kubernetes Secret for the Polaris service certificate files. The Polaris chart does not create
+these resources automatically, as they are not required for all Polaris deployments. The chart will
+fail if these resources are not created beforehand. You can find a reference for these resources in the `Prerequisites` section within `Development & Testing`.

Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend.

@@ -78,13 +93,13 @@ Below are two sample deployment models for installing the chart: one with a non-
> **These files are intended for testing purposes primarily, and may not be suitable for production use**.
> For production deployments, create your own values files based on the provided examples.

-##### Non-persistent backend
+#### Non-persistent backend

Install the chart with a non-persistent backend. From Polaris repo root:
```bash
helm upgrade --install --namespace polaris \
  --values helm/polaris/ci/simple-values.yaml \
-  polaris helm/polaris --create-namespace polaris
+  polaris helm/polaris
```

Note: if you are running the tests on a Kind cluster started with the `run.sh` command explained
@@ -93,7 +108,7 @@ above, then you need to run `helm upgrade` as follows:
helm upgrade --install --namespace polaris \
  --values helm/polaris/ci/simple-values.yaml \
  --set=image.repository=localhost:5001/apache/polaris \
-  polaris helm/polaris --create-namespace polaris
+  polaris helm/polaris
```

#### Persistent backend

@@ -101,13 +116,6 @@ helm upgrade --install --namespace polaris \
> [!WARNING]
> The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use.
For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance.
 
-Install the dependencies in fixtures directory. From Polaris repo root:
-```bash
-kubectl create namespace polaris
-kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
-kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
-```
-
 Install the chart with a persistent backend. From Polaris repo root:
 ```bash
 helm upgrade --install --namespace polaris \
@@ -160,14 +168,20 @@ helm plugin install https://github.com/helm-unittest/helm-unittest.git
 brew install chart-testing
 ```
 
-The integration tests also require some fixtures to be deployed. Follow the above commands to setup required resources.
+The integration tests also require some fixtures to be deployed. The `ci/fixtures` directory
+contains the required resources. To deploy them, run the following commands:
+```bash
+kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
+```
 
-The `helm/polaris/ci` contains a number of values files that will be used to install the chart with different configurations.
+The `helm/polaris/ci` directory contains a number of values files that will be used to install the chart with
+different configurations.
 
 ### Running the unit tests
 
-Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from the Polaris repo root:
-
+Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from
+the Polaris repo root:
 ```bash
 helm unittest helm/polaris
 ```
@@ -180,7 +194,8 @@ ct lint --charts helm/polaris
 
 ### Running the integration tests
 
-Integration tests require a Kubernetes cluster. 
See installation instructions above for setting up +a local cluster. Integration tests are run with the Chart Testing tool: ```bash diff --git a/helm/polaris/README.md.gotmpl b/helm/polaris/README.md.gotmpl index ad6635e5b2..c0a84befb7 100644 --- a/helm/polaris/README.md.gotmpl +++ b/helm/polaris/README.md.gotmpl @@ -64,14 +64,29 @@ minikube start eval $(minikube docker-env) ./gradlew \ - :polaris-server:assemble \ - :polaris-server:quarkusAppPartsBuild --rerun \ - :polaris-admin:assemble \ - :polaris-admin:quarkusAppPartsBuild --rerun \ - -Dquarkus.container-image.build=true + :polaris-server:assemble \ + :polaris-server:quarkusAppPartsBuild --rerun \ + :polaris-admin:assemble \ + :polaris-admin:quarkusAppPartsBuild --rerun \ + -Dquarkus.container-image.tag=postgres-latest \ + -Dquarkus.container-image.build=true ``` -#### Installing the Helm chart +### Installing the chart locally + +The below instructions assume a local Kubernetes cluster is running and Helm is installed. + +#### Common setup + +Create the target namespace: +```bash +kubectl create namespace polaris +``` + +Create all the required resources in the `polaris` namespace. This usually includes a Postgres +database and a Kubernetes Secret for Polaris service certification files. The Polaris chart does not create +these resources automatically, as they are not required for all Polaris deployments. The chart will +fail if these resources are not created beforehand. You can find a reference for these resources in the `Prerequisites` section within `Development & Testing`. Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend. @@ -80,13 +95,13 @@ Below are two sample deployment models for installing the chart: one with a non- > **These files are intended for testing purposes primarily, and may not be suitable for production use**. > For production deployments, create your own values files based on the provided examples. 
-##### Non-persistent backend +#### Non-persistent backend Install the chart with a non-persistent backend. From Polaris repo root: ```bash helm upgrade --install --namespace polaris \ --values helm/polaris/ci/simple-values.yaml \ - polaris helm/polaris --create-namespace polaris + polaris helm/polaris ``` Note: if you are running the tests on a Kind cluster started with the `run.sh` command explained @@ -95,7 +110,7 @@ above, then you need to run `helm upgrade` as follows: helm upgrade --install --namespace polaris \ --values helm/polaris/ci/simple-values.yaml \ --set=image.repository=localhost:5001/apache/polaris \ - polaris helm/polaris --create-namespace polaris + polaris helm/polaris ``` #### Persistent backend @@ -103,13 +118,6 @@ helm upgrade --install --namespace polaris \ > [!WARNING] > The Postgres deployment set up in the fixtures directory is intended for testing purposes only and is not suitable for production use. For production deployments, use a managed Postgres service or a properly configured and secured Postgres instance. -Install the dependencies in fixtures directory. From Polaris repo root: -```bash -kubectl create namespace polaris -kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/ -kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s -``` - Install the chart with a persistent backend. From Polaris repo root: ```bash helm upgrade --install --namespace polaris \ @@ -162,14 +170,20 @@ helm plugin install https://github.com/helm-unittest/helm-unittest.git brew install chart-testing ``` -The integration tests also require some fixtures to be deployed. Follow the above commands to setup required resources. +The integration tests also require some fixtures to be deployed. The `ci/fixtures` directory +contains the required resources. 
To deploy them, run the following commands:
+```bash
+kubectl apply --namespace polaris -f helm/polaris/ci/fixtures/
+kubectl wait --namespace polaris --for=condition=ready pod --selector=app.kubernetes.io/name=postgres --timeout=120s
+```
 
-The `helm/polaris/ci` contains a number of values files that will be used to install the chart with different configurations.
+The `helm/polaris/ci` directory contains a number of values files that will be used to install the chart with
+different configurations.
 
 ### Running the unit tests
 
-Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from the Polaris repo root:
-
+Helm unit tests do not require a Kubernetes cluster. To run the unit tests, execute Helm Unit from
+the Polaris repo root:
 ```bash
 helm unittest helm/polaris
 ```
@@ -182,7 +196,8 @@ ct lint --charts helm/polaris
 
 ### Running the integration tests
 
-Integration tests require a Kubernetes cluster. See installation instructions above for setting up a local cluster.
+Integration tests require a Kubernetes cluster. See installation instructions above for setting up
+a local cluster.
 
 Integration tests are run with the Chart Testing tool:
 ```bash

From fed7b3549f660cfbaa4551ae696e9069a76c52b1 Mon Sep 17 00:00:00 2001
From: Yong
Date: Mon, 7 Jul 2025 15:04:48 -0500
Subject: [PATCH 6/7] Fixes based on feedback

---
 helm/polaris/README.md        | 4 +++-
 helm/polaris/README.md.gotmpl | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/helm/polaris/README.md b/helm/polaris/README.md
index 72080b692d..f04ca30a52 100644
--- a/helm/polaris/README.md
+++ b/helm/polaris/README.md
@@ -49,7 +49,9 @@ Simply run the `run.sh` script from the Polaris repo root:
 ./run.sh
 ```
 
-This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker images and load them into the Kind cluster (It will also create an example Deployment and Service with in-memory storage.)
+This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker +images with support for Postgres and load them into the Kind cluster. (It will also create an +example Deployment and Service with in-memory storage.) ### Running locally with a Minikube cluster diff --git a/helm/polaris/README.md.gotmpl b/helm/polaris/README.md.gotmpl index c0a84befb7..b70d6a9265 100644 --- a/helm/polaris/README.md.gotmpl +++ b/helm/polaris/README.md.gotmpl @@ -51,7 +51,9 @@ Simply run the `run.sh` script from the Polaris repo root: ./run.sh ``` -This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker images and load them into the Kind cluster (It will also create an example Deployment and Service with in-memory storage.) +This script will create a Kind cluster, deploy a local Docker registry, build the Polaris Docker +images with support for Postgres and load them into the Kind cluster. (It will also create an +example Deployment and Service with in-memory storage.) ### Running locally with a Minikube cluster From aaf852b7335dcc39707f8d250104f8cd2f72fc9d Mon Sep 17 00:00:00 2001 From: Yong Date: Tue, 8 Jul 2025 09:22:17 -0500 Subject: [PATCH 7/7] Fixes based on feedback --- helm/polaris/README.md | 5 +++-- helm/polaris/README.md.gotmpl | 5 +++-- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/helm/polaris/README.md b/helm/polaris/README.md index f04ca30a52..ddf28f0b09 100644 --- a/helm/polaris/README.md +++ b/helm/polaris/README.md @@ -84,9 +84,10 @@ kubectl create namespace polaris ``` Create all the required resources in the `polaris` namespace. This usually includes a Postgres -database and a Kubernetes Secret for Polaris service certification files. The Polaris chart does not create +database, Kubernetes secrets, and service accounts. The Polaris chart does not create these resources automatically, as they are not required for all Polaris deployments. 
The chart will -fail if these resources are not created beforehand. You can find a reference for these resources in the `Prerequisites` section within `Development & Testing`. +fail if these resources are not created beforehand. You can find some examples in the +`helm/polaris/ci/fixtures` directory, but beware that these are primarily intended for tests. Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend. diff --git a/helm/polaris/README.md.gotmpl b/helm/polaris/README.md.gotmpl index b70d6a9265..2ce129dcd5 100644 --- a/helm/polaris/README.md.gotmpl +++ b/helm/polaris/README.md.gotmpl @@ -86,9 +86,10 @@ kubectl create namespace polaris ``` Create all the required resources in the `polaris` namespace. This usually includes a Postgres -database and a Kubernetes Secret for Polaris service certification files. The Polaris chart does not create +database, Kubernetes secrets, and service accounts. The Polaris chart does not create these resources automatically, as they are not required for all Polaris deployments. The chart will -fail if these resources are not created beforehand. You can find a reference for these resources in the `Prerequisites` section within `Development & Testing`. +fail if these resources are not created beforehand. You can find some examples in the +`helm/polaris/ci/fixtures` directory, but beware that these are primarily intended for tests. Below are two sample deployment models for installing the chart: one with a non-persistent backend and another with a persistent backend.