diff --git a/doc/user/content/self-managed/_index.md b/doc/user/content/self-managed/_index.md new file mode 100644 index 0000000000000..a88370dd5a427 --- /dev/null +++ b/doc/user/content/self-managed/_index.md @@ -0,0 +1,53 @@ +--- +title: "Self-managed Materialize" +description: "" +aliases: + - /self-hosted/ +robots: "noindex, nofollow" +--- + +With self-managed Materialize, you can deploy and operate Materialize in your +Kubernetes environment. For self-managed Materialize, Materialize offers: + +- A Kubernetes Operator that manages your Materialize running in your Kubernetes + environment. + +- Materialize packaged as a containerized application that can be deployed in a + Kubernetes cluster. + +## Recommended instance types + +Materialize has been tested to work on instances with the following properties: + +- ARM-based CPU +- 1:8 ratio of vCPU to GiB memory +- 1:16 ratio of vCPU to GiB local instance storage (if enabling spill-to-disk) + +When operating in AWS, we recommend: + +- Using the `r7gd` and `r6gd` families of instances (and `r8gd` once available) + when running with local disk + +- Using the `r8g`, `r7g`, and `r6g` families when running without local disk + +See also the [operational guidelines](/self-managed/operational-guidelines/). + + +## Installation + +For instructions on installing Materialize on your Kubernetes cluster, see: + +- [Install locally on kind](/self-managed/installation/install-on-local-kind/) + +- [Install locally on + minikube](/self-managed/installation/install-on-local-minikube/) + +- [Install on AWS](/self-managed/installation/install-on-aws/) +- [Install on GCP](/self-managed/installation/install-on-gcp/) + +## Related pages + + diff --git a/doc/user/content/self-managed/configuration.md b/doc/user/content/self-managed/configuration.md new file mode 100644 index 0000000000000..7d33455cbe30c --- /dev/null +++ b/doc/user/content/self-managed/configuration.md @@ -0,0 +1,68 @@ +--- +title: "Materialize Operator Configuration" +description: "" +aliases: + - /self-hosted/configuration/ +robots: "noindex, nofollow" +--- + +You can configure the Materialize operator chart. For example: + +- **RBAC** + + The chart creates a `ClusterRole` and `ClusterRoleBinding` by default. To use + an existing `ClusterRole`, set [`rbac.create=false`](/self-managed/configuration/#rbaccreate) and specify the name of + the existing `ClusterRole` using the + [`rbac.clusterRole`](/self-managed/configuration/#rbacclusterrole) parameter. + +- **Network Policies** + + Network policies can be enabled by setting + [`networkPolicies.enabled=true`](/self-managed/configuration/#networkpoliciesenabled). + By default, the chart uses native Kubernetes network policies. To use Cilium + network policies instead, set + `networkPolicies.useNativeKubernetesPolicy=false`. + +- **Observability** + + To enable observability features, set + [`observability.enabled=true`](/self-managed/configuration/#observabilityenabled). + This will create the necessary resources for monitoring the operator. If you + want to use Prometheus, also set + [`observability.prometheus.enabled=true`](/self-managed/configuration/#observabilityprometheusenabled). 
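Putting these together, the following is a minimal `values.yaml` sketch based on the parameters described above. The values shown are illustrative only, and the `ClusterRole` name is a hypothetical example:

```yaml
# Illustrative values.yaml sketch; adjust to your environment.
rbac:
  create: false                            # reuse an existing ClusterRole
  clusterRole: my-existing-cluster-role    # hypothetical name
networkPolicies:
  enabled: true
observability:
  enabled: true
  prometheus:
    enabled: true
```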
## Configure the Materialize operator chart

To configure the Materialize operator chart, you can:

- *Recommended:* Modify the provided `values.yaml` file (or create your own
  YAML file) that specifies the configuration values and then install the
  chart with the `-f` flag:

  ```shell
  helm install my-materialize-operator -f /path/to/values.yaml /path/to/materialize/helm-charts/operator
  ```

- Specify each parameter using the `--set key=value[,key=value]` argument to
  `helm install`. For example:

  ```shell
  helm install my-materialize-operator \
    --set operator.image.tag=v1.0.0 \
    /path/to/materialize/helm-charts/operator
  ```

{{% self-managed/materialize-operator-chart-parameters-table %}}

## Parameters

{{% self-managed/materialize-operator-chart-parameters %}}

## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Troubleshooting](/self-managed/troubleshooting/)
- [Installation](/self-managed/installation/)
- [Operational guidelines](/self-managed/operational-guidelines/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/installation/_index.md b/doc/user/content/self-managed/installation/_index.md
new file mode 100644
index 0000000000000..a8236385fc831
--- /dev/null
+++ b/doc/user/content/self-managed/installation/_index.md
@@ -0,0 +1,5 @@
---
title: "Installation"
description: "Installation guides for self-managed Materialize."
robots: "noindex, nofollow"
---

diff --git a/doc/user/content/self-managed/installation/install-on-aws.md b/doc/user/content/self-managed/installation/install-on-aws.md
new file mode 100644
index 0000000000000..3cd03f2c5502a
--- /dev/null
+++ b/doc/user/content/self-managed/installation/install-on-aws.md
@@ -0,0 +1,362 @@
---
title: "Install on AWS"
description: ""
robots: "noindex, nofollow"
---

The following tutorial deploys Materialize onto AWS.

{{< important >}}

This tutorial is for testing and evaluation purposes only. Do not use this
deployment for production workloads.

{{< /important >}}

## Prerequisites

### Required

#### AWS Kubernetes environment

Materialize provides a [Terraform
module](https://github.com/MaterializeInc/terraform-aws-materialize/blob/main/README.md)
to deploy a sample infrastructure on AWS with the following components:

- EKS component
- Networking component
- Storage component
- Database component for metadata storage

See the
[README](https://github.com/MaterializeInc/terraform-aws-materialize/blob/main/README.md)
for information on how to deploy the infrastructure.

#### `kubectl`

Install `kubectl` and configure cluster access. For details, see the [Amazon EKS
documentation](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html).

Configure `kubectl` to connect to your EKS cluster, replacing `<region>` with
the region of your EKS cluster:

```bash
aws eks update-kubeconfig --name materialize-cluster --region <region>
```

{{< note >}}

The exact authentication method may vary depending on your EKS configuration.

{{< /note >}}

To verify, run the following command:

```bash
kubectl get nodes
```

#### Helm 3.2.0+

If you don't have Helm version 3.2.0+ installed, refer to the [Helm
documentation](https://helm.sh/docs/intro/install/).
### Recommended but optional

#### OpenEBS

For optimal performance, Materialize requires fast, *locally-attached* NVMe
storage. Locally-attached storage allows Materialize to spill to disk when
operating on datasets larger than main memory, and to degrade gracefully
instead of crashing with out-of-memory errors. *Network-attached* storage (like
EBS volumes) can significantly degrade performance and is not supported.

To manage locally-attached NVMe volumes, we recommend OpenEBS with LVM Local
PV. While other storage solutions may work, we have tested and recommend
OpenEBS for optimal performance.

Install OpenEBS into your running Kubernetes cluster:

```bash
# Add OpenEBS to Helm
helm repo add openebs https://openebs.github.io/openebs
helm repo update

# Install only the Local PV Storage Engines
helm install openebs --namespace openebs openebs/openebs \
  --set engines.replicated.mayastor.enabled=false \
  --create-namespace
```

Verify the installation:

```bash
kubectl get pods -n openebs -l role=openebs-lvm
```
#### Logical Volume Manager (LVM) configuration

Logical Volume Manager (LVM) setup varies by environment. Below is our tested
and recommended configuration:

| | |
|----------------------------|----------------------------------------------|
| **Instance types** | **r6g**, **r7g** families<br>**Note:** LVM setup may work on other instance types with local storage (like i3.xlarge, i4i.xlarge, r5d.xlarge), but we have not extensively tested these configurations. |
| **AMI** | AWS Bottlerocket |
| **Instance store volumes** | Required |

To set up:

1. Use the Bottlerocket bootstrap container for LVM configuration.
1. Configure the volume group name as `instance-store-vg`.

{{< tip >}}

If you are using the recommended Bottlerocket AMI with the Terraform module,
the LVM configuration is automatically handled by the EKS module using the
provided user data script.

{{< /tip >}}

To verify the LVM setup, run the following, replacing `<node-name>` with the
name of a node:

```bash
kubectl debug -it node/<node-name> --image=amazonlinux:2
chroot /host
lvs
```

You should see a volume group named `instance-store-vg`.

## 1. Install the Materialize Operator

1. If installing for the first time, create a namespace. The default
   configuration uses the `materialize` namespace.

   ```bash
   kubectl create namespace materialize
   ```

1. Create a `my-AWS-values.yaml` configuration file for the Materialize
   operator. Update it with details from your AWS Kubernetes environment. For
   more information on cloud provider configuration, see the [Materialize
   Operator Configuration](/self-managed/configuration/#operator-parameters).

   ```yaml
   # my-AWS-values.yaml
   # Note: Updated with recent config changes in main branch and not v0.125.2 branch

   operator:
     args:
       startupLogFilter: INFO
     cloudProvider:
       providers:
         aws:
           accountID: ""
           enabled: true
           iam:
             roles:
               connection: null
               environment: null
       region: ""
       type: "aws"

   namespace:
     create: false
     name: "materialize"

   # Adjust network policies as needed
   networkPolicies:
     enabled: true
     egress:
       enabled: true
       cidrs: ["0.0.0.0/0"]
     ingress:
       enabled: true
       cidrs: ["0.0.0.0/0"]
     internal:
       enabled: true
   ```

   If you have [opted for locally-attached storage](#openebs), include the
   storage configuration in your `my-AWS-values.yaml` file:

   {{< tabs >}}
   {{< tab "OpenEBS" >}}

   If using OpenEBS, set up the storage class as follows:

   ```yaml
   storage:
     storageClass:
       create: true
       name: "openebs-lvm-instance-store-ext4"
       provisioner: "local.csi.openebs.io"
       parameters:
         storage: "lvm"
         fsType: "ext4"
         volgroup: "instance-store-vg"
   ```
   {{< /tab >}}
   {{< tab "Other Storage" >}}

   While OpenEBS is our recommended solution, you can use any storage
   provisioner that meets your performance requirements by overriding the
   provisioner and parameters values.

   For example, to use a different storage provider:

   ```yaml
   storage:
     storageClass:
       create: true
       name: "your-storage-class"
       provisioner: "your.storage.provisioner"
       parameters:
         # Parameters specific to your chosen storage provisioner
   ```
   {{< /tab >}}
   {{< /tabs >}}

1. Clone/download the [Materialize
   repo](https://github.com/MaterializeInc/materialize).

1. Go to the Materialize repo directory.

   ```bash
   cd materialize
   ```

1. Install the Materialize operator with the release name
   `my-materialize-operator`, specifying the path to your `my-AWS-values.yaml`
   file:

   ```shell
   helm install my-materialize-operator -f path/to/my-AWS-values.yaml materialize/misc/helm-charts/operator
   ```

1. Verify the installation:

   ```shell
   kubectl get all -n materialize
   ```
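Before installing Materialize itself, you can optionally confirm that the operator registered its custom resource definition. This is a sketch: the CRD name is inferred from the `materialize.cloud/v1alpha1` API group and `materializes` resource used below, and may differ in your release:

```bash
# Check that the Materialize CRD is installed (name inferred; may differ)
kubectl get crd materializes.materialize.cloud

# Confirm the operator pod is running
kubectl get pods -n materialize -l app.kubernetes.io/name=materialize-operator
```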
## 2. Install Materialize

To deploy Materialize:

1. Create a [Kubernetes
   Secret](https://kubernetes.io/docs/concepts/configuration/secret/) for your
   backend configuration information and save it in a file (e.g.,
   `materialize-backend-secret.yaml`).

   Replace `${terraform_output.metadata_backend_url}` and
   `${terraform_output.persist_backend_url}` with the actual values from the
   Terraform output.

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: materialize-backend
     namespace: materialize-environment
   stringData:
     metadata_backend_url: "${terraform_output.metadata_backend_url}"
     persist_backend_url: "${terraform_output.persist_backend_url}"
   ```

1. Create a YAML file (e.g., `my-materialize.yaml`) for your Materialize
   configuration.

   Replace `${var.service_account_name}` with the desired name for your
   Materialize. It should be a UUID (e.g.,
   `12345678-1234-1234-1234-123456789012`).

   ```yaml
   apiVersion: materialize.cloud/v1alpha1
   kind: Materialize
   metadata:
     name: "${var.service_account_name}"
     namespace: materialize-environment
   spec:
     environmentdImageRef: materialize/environmentd:latest
     environmentdResourceRequirements:
       limits:
         memory: 16Gi
       requests:
         cpu: "2"
         memory: 16Gi
     balancerdResourceRequirements:
       limits:
         memory: 256Mi
       requests:
         cpu: "100m"
         memory: 256Mi
     backendSecretName: materialize-backend
   ```

1. Create the `materialize-environment` namespace and apply the files to
   install Materialize:

   ```shell
   kubectl create namespace materialize-environment
   kubectl apply -f materialize-backend-secret.yaml
   kubectl apply -f my-materialize.yaml
   ```

1. Verify the installation:

   ```bash
   kubectl get materializes -n materialize-environment
   kubectl get pods -n materialize-environment
   ```

## Troubleshooting

If you encounter issues:

1. Check operator logs:
```bash
kubectl logs -l app.kubernetes.io/name=materialize-operator -n materialize
```

2. Check environment logs:
```bash
kubectl logs -l app.kubernetes.io/name=environmentd -n materialize-environment
```

3. Verify the storage configuration:
```bash
kubectl get sc
kubectl get pv
kubectl get pvc -A
```

## Cleanup

Delete the Materialize environment:
```bash
kubectl delete -f my-materialize.yaml
```

To uninstall the Materialize operator:
```bash
helm uninstall my-materialize-operator -n materialize
```

This will remove the operator but preserve any PVs and data. To completely clean
up:

```bash
kubectl delete namespace materialize
kubectl delete namespace materialize-environment
```

## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Materialize Operator Configuration](/self-managed/configuration/)
- [Troubleshooting](/self-managed/troubleshooting/)
- [Operational guidelines](/self-managed/operational-guidelines/)
- [Installation](/self-managed/installation/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/installation/install-on-gcp.md b/doc/user/content/self-managed/installation/install-on-gcp.md
new file mode 100644
index 0000000000000..63614f163301e
--- /dev/null
+++ b/doc/user/content/self-managed/installation/install-on-gcp.md
@@ -0,0 +1,234 @@
---
title: "Install on GCP"
description: ""
robots: "noindex, nofollow"
---

The following tutorial deploys Materialize onto GCP.

{{< important >}}

This tutorial is for testing and evaluation purposes only. Do not use this
deployment for production workloads.

{{< /important >}}

## Prerequisites

### Required

#### GCP Kubernetes environment

Materialize provides a [Terraform
module](https://github.com/MaterializeInc/terraform-google-materialize) to
deploy a sample infrastructure on GCP with the following:

- GKE component
- Storage component
- Database component for metadata storage

See the
[README](https://github.com/MaterializeInc/terraform-google-materialize/blob/main/README.md)
for information on how to deploy the infrastructure.

#### `kubectl`

Install `kubectl` and configure cluster access.
For details, see the [GCP documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl).
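For example, with the `gcloud` CLI you can generate a kubeconfig entry for your GKE cluster. This is a sketch; replace the placeholders with your cluster name and region:

```bash
gcloud container clusters get-credentials <cluster-name> --region <region>

# Verify access
kubectl get nodes
```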
#### Helm 3.2.0+

If you don't have Helm version 3.2.0+ installed, refer to the [Helm
documentation](https://helm.sh/docs/intro/install/).

## 1. Install the Materialize Operator

1. If installing for the first time, create a namespace. The default
   configuration uses the `materialize` namespace.

   ```bash
   kubectl create namespace materialize
   ```

1. Create a `my-GCP-values.yaml` configuration file for the Materialize
   operator. Update it with details from your GCP Kubernetes environment. For
   more information on cloud provider configuration, see the [Materialize
   Operator Configuration](/self-managed/configuration/#operator-parameters).

   ```yaml
   # my-GCP-values.yaml
   # Note: Updated with recent config changes in main branch and not v0.125.2 branch

   operator:
     args:
       startupLogFilter: INFO
     cloudProvider:
       providers:
         gcp:
           enabled: true
       region: ""
       type: "gcp"

   namespace:
     create: false
     name: "materialize"

   # Adjust network policies as needed
   networkPolicies:
     enabled: true
     egress:
       enabled: true
       cidrs: ["0.0.0.0/0"]
     ingress:
       enabled: true
       cidrs: ["0.0.0.0/0"]
     internal:
       enabled: true
   ```

1. Clone/download the [Materialize
   repo](https://github.com/MaterializeInc/materialize).

1. Go to the Materialize repo directory.

   ```bash
   cd materialize
   ```

1. Install the Materialize operator with the release name
   `my-materialize-operator`, specifying the path to your
   `my-GCP-values.yaml` file:

   ```shell
   helm install my-materialize-operator -f path/to/my-GCP-values.yaml materialize/misc/helm-charts/operator
   ```

1. Verify the installation:

   ```shell
   kubectl get all -n materialize
   ```

## 2. Install Materialize

To deploy Materialize:

1. Create a [Kubernetes
   Secret](https://kubernetes.io/docs/concepts/configuration/secret/) for your
   backend configuration information and save it in a file (e.g.,
   `materialize-backend-secret.yaml`).

   Replace `${terraform_output.metadata_backend_url}` and
   `${terraform_output.persist_backend_url}` with the actual values from the
   Terraform output.

   ```yaml
   apiVersion: v1
   kind: Secret
   metadata:
     name: materialize-backend
     namespace: materialize-environment
   stringData:
     metadata_backend_url: "${terraform_output.metadata_backend_url}"
     persist_backend_url: "${terraform_output.persist_backend_url}"
   ```

1. Create a YAML file (e.g., `my-materialize.yaml`) for your Materialize
   configuration.

   Replace `${var.service_account_name}` with the desired name for your
   Materialize. It should be a UUID (e.g.,
   `12345678-1234-1234-1234-123456789012`).
   ```yaml
   apiVersion: materialize.cloud/v1alpha1
   kind: Materialize
   metadata:
     name: "${var.service_account_name}"
     namespace: materialize-environment
   spec:
     environmentdImageRef: materialize/environmentd:latest
     environmentdResourceRequirements:
       limits:
         memory: 16Gi
       requests:
         cpu: "2"
         memory: 16Gi
     balancerdResourceRequirements:
       limits:
         memory: 256Mi
       requests:
         cpu: "100m"
         memory: 256Mi
     backendSecretName: materialize-backend
   ```

1. Create the `materialize-environment` namespace and apply the files to
   install Materialize:

   ```shell
   kubectl create namespace materialize-environment
   kubectl apply -f materialize-backend-secret.yaml
   kubectl apply -f my-materialize.yaml
   ```

1. Verify the installation:

   ```bash
   kubectl get materializes -n materialize-environment
   kubectl get pods -n materialize-environment
   ```

## Troubleshooting

If you encounter issues:

1. Check operator logs:
```bash
kubectl logs -l app.kubernetes.io/name=materialize-operator -n materialize
```

2. Check environment logs:
```bash
kubectl logs -l app.kubernetes.io/name=environmentd -n materialize-environment
```

3. Verify the storage configuration:
```bash
kubectl get sc
kubectl get pv
kubectl get pvc -A
```

## Cleanup

Delete the Materialize environment:
```bash
kubectl delete -f my-materialize.yaml
```

To uninstall the Materialize operator:
```bash
helm uninstall my-materialize-operator -n materialize
```

This will remove the operator but preserve any PVs and data. To completely clean
up:

```bash
kubectl delete namespace materialize
kubectl delete namespace materialize-environment
```

## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Materialize Operator Configuration](/self-managed/configuration/)
- [Troubleshooting](/self-managed/troubleshooting/)
- [Operational guidelines](/self-managed/operational-guidelines/)
- [Installation](/self-managed/installation/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/installation/install-on-local-kind.md b/doc/user/content/self-managed/installation/install-on-local-kind.md
new file mode 100644
index 0000000000000..d57c66ac7ce81
--- /dev/null
+++ b/doc/user/content/self-managed/installation/install-on-local-kind.md
@@ -0,0 +1,221 @@
---
title: "Install locally on kind"
description: ""
aliases:
  - /self-hosted/install-on-local-kind/
robots: "noindex, nofollow"
---

The following tutorial deploys Materialize onto a local
[`kind`](https://kind.sigs.k8s.io/) cluster. The tutorial deploys the following
components onto your local `kind` cluster:

- Materialize Operator using Helm into your local `kind` cluster.
- MinIO object storage as the blob storage for your Materialize.
- PostgreSQL database as the metadata database for your Materialize.
- Materialize as a containerized application into your local `kind` cluster.

{{< important >}}

This tutorial is for testing and evaluation purposes only. Do not use this
deployment for production workloads.

{{< /important >}}

## Prerequisites

### kind

Install [`kind`](https://kind.sigs.k8s.io/docs/user/quick-start/).

### Docker

Install [`Docker`](https://docs.docker.com/get-started/get-docker/).

### Helm 3.2.0+

If you don't have Helm version 3.2.0+ installed, refer to the [Helm
documentation](https://helm.sh/docs/intro/install/).

### Kubernetes

Materialize supports [Kubernetes 1.19+](https://kubernetes.io/docs/setup/).

### `kubectl`

This tutorial uses `kubectl`. To install, refer to the [`kubectl`
documentation](https://kubernetes.io/docs/tasks/tools/).

### Materialize repo

The following instructions assume that you are installing from the [Materialize
repo](https://github.com/MaterializeInc/materialize).

{{< important >}}

{{% self-managed/git-checkout-branch %}}

{{< /important >}}
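For example, assuming a standard Git setup (a sketch; the branch name follows the checkout note above):

```bash
git clone https://github.com/MaterializeInc/materialize.git
cd materialize
git checkout v0.125.2
```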
## Installation

1. Start Docker if it is not already running.

1. Open a Terminal window.

1. Create a kind cluster.

   ```shell
   kind create cluster
   ```

1. Create the `materialize` namespace.

   ```shell
   kubectl create namespace materialize
   ```

1. Install the Materialize Helm chart using the files provided in the
   Materialize repo.

   1. Go to the Materialize repo directory.

   1. Install the Materialize operator with the release name
      `my-materialize-operator`:

      ```shell
      helm install my-materialize-operator -f misc/helm-charts/operator/values.yaml misc/helm-charts/operator
      ```

   1. Verify the installation and check the status:

      ```shell
      kubectl get all -n materialize
      ```

      Wait for the components to be in the `Running` state:

      ```shell
      NAME                                           READY   STATUS              RESTARTS   AGE
      pod/my-materialize-operator-776b98455b-w9kkl   0/1     ContainerCreating   0          6s

      NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/my-materialize-operator   0/1     1            0           6s

      NAME                                                 DESIRED   CURRENT   READY   AGE
      replicaset.apps/my-materialize-operator-776b98455b   1         1         0       6s
      ```

   If you run into an error during deployment, refer to the
   [Troubleshooting](/self-managed/troubleshooting/) guide.

1. Install PostgreSQL and MinIO.

   1. Go to the Materialize repo directory.

   1. Use the provided `postgres.yaml` file to install PostgreSQL as the
      metadata database:

      ```shell
      kubectl apply -f misc/helm-charts/testing/postgres.yaml
      ```

   1. Use the provided `minio.yaml` file to install MinIO as the blob storage:

      ```shell
      kubectl apply -f misc/helm-charts/testing/minio.yaml
      ```

1. Optional. Install the metrics service, which is used for certain system
   metrics but is not required.

   ```shell
   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
   ```

1. Install Materialize into a new `materialize-environment` namespace:

   1. Go to the Materialize repo directory.

   1. Use the provided `environmentd.yaml` file to create the
      `materialize-environment` namespace and install Materialize:

      ```shell
      kubectl apply -f misc/helm-charts/testing/environmentd.yaml
      ```

   1. Verify the installation and check the status:

      ```shell
      kubectl get all -n materialize-environment
      ```

      Wait for the components to be in the `Running` state.

      ```shell
      NAME                                             READY   STATUS    RESTARTS   AGE
      pod/mzfhj38ptdjs-balancerd-6dd5bb645d-p7r2j      1/1     Running   0          3m21s
      pod/mzfhj38ptdjs-cluster-s1-replica-s1-gen-1-0   1/1     Running   0          3m25s
      pod/mzfhj38ptdjs-cluster-s2-replica-s2-gen-1-0   1/1     Running   0          3m25s
      pod/mzfhj38ptdjs-cluster-s3-replica-s3-gen-1-0   1/1     Running   0          3m25s
      pod/mzfhj38ptdjs-cluster-u1-replica-u1-gen-1-0   1/1     Running   0          3m25s
      pod/mzfhj38ptdjs-console-84cb5c98d6-9zlc4        1/1     Running   0          3m21s
      pod/mzfhj38ptdjs-console-84cb5c98d6-rjjcs        1/1     Running   0          3m21s
      pod/mzfhj38ptdjs-environmentd-1-0                1/1     Running   0          3m29s

      NAME                                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                     AGE
      service/mzfhj38ptdjs-balancerd                     NodePort    10.96.60.152    <none>        6876:32386/TCP,6875:31334/TCP                                                               3m21s
      service/mzfhj38ptdjs-cluster-s1-replica-s1-gen-1   ClusterIP   10.96.162.190   <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                3m25s
      service/mzfhj38ptdjs-cluster-s2-replica-s2-gen-1   ClusterIP   10.96.120.116   <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                3m25s
      service/mzfhj38ptdjs-cluster-s3-replica-s3-gen-1   ClusterIP   10.96.187.199   <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                3m25s
      service/mzfhj38ptdjs-cluster-u1-replica-u1-gen-1   ClusterIP   10.96.92.133    <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                3m25s
      service/mzfhj38ptdjs-console                       NodePort    10.96.97.5      <none>        9000:30847/TCP                                                                              3m21s
      service/mzfhj38ptdjs-environmentd                  NodePort    10.96.188.140   <none>        6875:30525/TCP,6876:31052/TCP,6877:31711/TCP,6878:31367/TCP,6880:30141/TCP,6881:30283/TCP   3m21s
      service/mzfhj38ptdjs-environmentd-1                NodePort    10.96.228.68    <none>        6875:32253/TCP,6876:31876/TCP,6877:31886/TCP,6878:31643/TCP,6880:32409/TCP,6881:30932/TCP   3m29s
      service/mzfhj38ptdjs-persist-pubsub-1              ClusterIP   None            <none>        6879/TCP                                                                                    3m29s

      NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/mzfhj38ptdjs-balancerd   1/1     1            1           3m21s
      deployment.apps/mzfhj38ptdjs-console     2/2     2            2           3m21s

      NAME                                                DESIRED   CURRENT   READY   AGE
      replicaset.apps/mzfhj38ptdjs-balancerd-6dd5bb645d   1         1         1       3m21s
      replicaset.apps/mzfhj38ptdjs-console-84cb5c98d6     2         2         2       3m21s

      NAME                                                        READY   AGE
      statefulset.apps/mzfhj38ptdjs-cluster-s1-replica-s1-gen-1   1/1     3m25s
      statefulset.apps/mzfhj38ptdjs-cluster-s2-replica-s2-gen-1   1/1     3m25s
      statefulset.apps/mzfhj38ptdjs-cluster-s3-replica-s3-gen-1   1/1     3m25s
      statefulset.apps/mzfhj38ptdjs-cluster-u1-replica-u1-gen-1   1/1     3m25s
      statefulset.apps/mzfhj38ptdjs-environmentd-1                1/1     3m29s
      ```

   If you run into an error during deployment, refer to the
   [Troubleshooting](/self-managed/troubleshooting/) guide.
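At this point you can also connect with a SQL client. The following is a sketch that port-forwards the SQL port (`6875`, per the service output above) and connects with `psql`; the `materialize` user and database names are assumptions and may differ in your deployment:

```bash
# Forward the SQL port of the environmentd service (name from the output above)
kubectl port-forward svc/mzfhj38ptdjs-environmentd 6875:6875 -n materialize-environment &

# Connect and check the server version (assumes default user/database names)
psql postgres://materialize@localhost:6875/materialize -c 'SELECT mz_version();'
```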
1. Open the Materialize console in your browser:

   1. From the `kubectl` output, find the Materialize console service.

      ```shell
      NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)
      service/mzfhj38ptdjs-console   NodePort   10.96.97.5   <none>        9000:30847/TCP
      ```

   1. Forward the Materialize console service to your local machine:

      ```shell
      kubectl port-forward svc/mzfhj38ptdjs-console 9000:9000 -n materialize-environment
      ```

   1. Open a browser and navigate to
      [http://localhost:9000](http://localhost:9000).

   ![Image of self-managed Materialize console running on local kind](/images/self-managed/self-managed-console-kind.png)

## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Materialize Operator Configuration](/self-managed/configuration/)
- [Troubleshooting](/self-managed/troubleshooting/)
- [Operational guidelines](/self-managed/operational-guidelines/)
- [Installation](/self-managed/installation/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/installation/install-on-local-minikube.md b/doc/user/content/self-managed/installation/install-on-local-minikube.md
new file mode 100644
index 0000000000000..aa1e9487493af
--- /dev/null
+++ b/doc/user/content/self-managed/installation/install-on-local-minikube.md
@@ -0,0 +1,235 @@
---
title: "Install locally on minikube"
description: ""
robots: "noindex, nofollow"
---

The following tutorial deploys Materialize onto a local
[`minikube`](https://minikube.sigs.k8s.io/docs/start/) cluster. The tutorial
deploys the following components onto your local `minikube` cluster:

- Materialize Operator using Helm into your local `minikube` cluster.
- MinIO object storage as the blob storage for your Materialize.
- PostgreSQL database as the metadata database for your Materialize.
- Materialize as a containerized application into your local `minikube` cluster.

{{< important >}}

This tutorial is for testing and evaluation purposes only. Do not use this
deployment for production workloads.

{{< /important >}}

## Prerequisites

### minikube

Install [`minikube`](https://minikube.sigs.k8s.io/docs/start/).

### Container or virtual machine manager

The following tutorial uses `Docker` as the container or virtual machine
manager.

Install [`Docker`](https://docs.docker.com/get-started/get-docker/).

To use another container or virtual machine manager as listed on the
[`minikube` documentation](https://minikube.sigs.k8s.io/docs/start/), refer to
the specific container/VM manager documentation.

### Helm 3.2.0+

If you don't have Helm version 3.2.0+ installed, refer to the [Helm
documentation](https://helm.sh/docs/intro/install/).

### Kubernetes

Materialize supports [Kubernetes 1.19+](https://kubernetes.io/docs/setup/).

### `kubectl`

This tutorial uses `kubectl`. To install, refer to the [`kubectl`
documentation](https://kubernetes.io/docs/tasks/tools/).

### Materialize repo

The following instructions assume that you are installing from the [Materialize
repo](https://github.com/MaterializeInc/materialize).

{{< important >}}

{{% self-managed/git-checkout-branch %}}

{{< /important >}}

## Installation

1. Start Docker if it is not already running.

1. Open a Terminal window.

1. Create a minikube cluster.

   ```shell
   minikube start
   ```

1. Create the `materialize` namespace.

   ```shell
   kubectl create namespace materialize
   ```

1. Install the Materialize Helm chart using the files provided in the
   Materialize repo.

   1. Go to the Materialize repo directory.

   1. Optional. Edit `misc/helm-charts/operator/values.yaml` to update (in
      the `operator.cloudProvider` section) the `region` value to `minikube`:

      ```yaml
      region: "minikube"
      ```

   1. Install the Materialize operator with the release name
      `my-materialize-operator`:

      ```shell
      helm install my-materialize-operator -f misc/helm-charts/operator/values.yaml misc/helm-charts/operator
      ```

   1. Verify the installation and check the status:

      ```shell
      kubectl get all -n materialize
      ```

      Wait for the components to be in the `Running` state:

      ```shell
      NAME                                          READY   STATUS    RESTARTS   AGE
      pod/my-materialize-operator-8fc75cd7d-vcvl8   1/1     Running   0          8s

      NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/my-materialize-operator   1/1     1            1           8s

      NAME                                                DESIRED   CURRENT   READY   AGE
      replicaset.apps/my-materialize-operator-8fc75cd7d   1         1         1       8s
      ```

   If you run into an error during deployment, refer to the
   [Troubleshooting](/self-managed/troubleshooting/) guide.

1. Install PostgreSQL and MinIO.

   1. Go to the Materialize repo directory.

   1. Use the provided `postgres.yaml` file to install PostgreSQL as the
      metadata database:

      ```shell
      kubectl apply -f misc/helm-charts/testing/postgres.yaml
      ```

   1. Use the provided `minio.yaml` file to install MinIO as the blob storage:

      ```shell
      kubectl apply -f misc/helm-charts/testing/minio.yaml
      ```

1. Optional. Install the metrics service, which is used for certain system
   metrics but is not required.

   ```shell
   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
   ```
1. Install Materialize into a new `materialize-environment` namespace:

   1. Go to the Materialize repo directory.

   1. Use the provided `environmentd.yaml` file to create the
      `materialize-environment` namespace and install Materialize:

      ```shell
      kubectl apply -f misc/helm-charts/testing/environmentd.yaml
      ```

   1. Verify the installation and check the status:

      ```shell
      kubectl get all -n materialize-environment
      ```

      Wait for the components to be in the `Running` state.

      ```shell
      NAME                                             READY   STATUS    RESTARTS   AGE
      pod/mz2f4guf58oj-balancerd-69b5486554-xdpzq      1/1     Running   0          4m38s
      pod/mz2f4guf58oj-cluster-s1-replica-s1-gen-1-0   1/1     Running   0          4m43s
      pod/mz2f4guf58oj-cluster-s2-replica-s2-gen-1-0   1/1     Running   0          4m43s
      pod/mz2f4guf58oj-cluster-s3-replica-s3-gen-1-0   1/1     Running   0          4m43s
      pod/mz2f4guf58oj-cluster-u1-replica-u1-gen-1-0   1/1     Running   0          4m43s
      pod/mz2f4guf58oj-console-557fdb88db-gb7zn        1/1     Running   0          4m38s
      pod/mz2f4guf58oj-console-557fdb88db-xfv8w        1/1     Running   0          4m38s
      pod/mz2f4guf58oj-environmentd-1-0                1/1     Running   0          4m55s

      NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                      AGE
      service/mz2f4guf58oj-balancerd                     NodePort    10.102.116.95    <none>        6876:32161/TCP,6875:31896/TCP                                                                4m38s
      service/mz2f4guf58oj-cluster-s1-replica-s1-gen-1   ClusterIP   10.105.24.34     <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                 4m43s
      service/mz2f4guf58oj-cluster-s2-replica-s2-gen-1   ClusterIP   10.97.165.188    <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                 4m43s
      service/mz2f4guf58oj-cluster-s3-replica-s3-gen-1   ClusterIP   10.107.119.66    <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                 4m43s
      service/mz2f4guf58oj-cluster-u1-replica-u1-gen-1   ClusterIP   10.103.70.133    <none>        2100/TCP,2103/TCP,2101/TCP,2102/TCP,6878/TCP                                                 4m43s
      service/mz2f4guf58oj-console                       NodePort    10.111.141.122   <none>        9000:30793/TCP                                                                               4m38s
      service/mz2f4guf58oj-environmentd                  NodePort    10.100.113.144   <none>        6875:30588/TCP,6876:31828/TCP,6877:31859/TCP,6878:30579/TCP,6880:31895/TCP,6881:30263/TCP    4m38s
      service/mz2f4guf58oj-environmentd-1                NodePort    10.96.37.132     <none>        6875:32689/TCP,6876:30816/TCP,6877:32014/TCP,6878:30266/TCP,6880:32366/TCP,6881:31536/TCP    4m55s
      service/mz2f4guf58oj-persist-pubsub-1              ClusterIP   None             <none>        6879/TCP                                                                                     4m55s

      NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/mz2f4guf58oj-balancerd   1/1     1            1           4m38s
      deployment.apps/mz2f4guf58oj-console     2/2     2            2           4m38s

      NAME                                                DESIRED   CURRENT   READY   AGE
      replicaset.apps/mz2f4guf58oj-balancerd-69b5486554   1         1         1       4m38s
      replicaset.apps/mz2f4guf58oj-console-557fdb88db     2         2         2       4m38s

      NAME                                                        READY   AGE
      statefulset.apps/mz2f4guf58oj-cluster-s1-replica-s1-gen-1   1/1     4m43s
      statefulset.apps/mz2f4guf58oj-cluster-s2-replica-s2-gen-1   1/1     4m43s
      statefulset.apps/mz2f4guf58oj-cluster-s3-replica-s3-gen-1   1/1     4m43s
      statefulset.apps/mz2f4guf58oj-cluster-u1-replica-u1-gen-1   1/1     4m43s
      statefulset.apps/mz2f4guf58oj-environmentd-1                1/1     4m55s
      ```

   If you run into an error during deployment, refer to the
   [Troubleshooting](/self-managed/troubleshooting/) guide.

1. Open the Materialize console in your browser:

   1. From the `kubectl` output, find the Materialize console service.

      ```shell
      NAME                           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
      service/mz2f4guf58oj-console   NodePort   10.111.141.122   <none>        9000:30793/TCP
      ```

   1. Forward the Materialize console service to your local machine:

      ```shell
      kubectl port-forward service/mz2f4guf58oj-console 9000:9000 -n materialize-environment
      ```

   1. Open a browser and navigate to
      [http://localhost:9000](http://localhost:9000).

   ![Image of self-managed Materialize console running on local minikube](/images/self-managed/self-managed-console-minkiube.png)
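On minikube, as an alternative to `kubectl port-forward`, you can let minikube open the console's NodePort service directly. This is a sketch using the service name from the output above:

```bash
# Opens a tunnel to the NodePort service and launches it in your browser
minikube service mz2f4guf58oj-console -n materialize-environment
```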
## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Materialize Operator Configuration](/self-managed/configuration/)
- [Troubleshooting](/self-managed/troubleshooting/)
- [Operational guidelines](/self-managed/operational-guidelines/)
- [Installation](/self-managed/installation/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/operational-guidelines.md b/doc/user/content/self-managed/operational-guidelines.md
new file mode 100644
index 0000000000000..9376cb894913c
--- /dev/null
+++ b/doc/user/content/self-managed/operational-guidelines.md
@@ -0,0 +1,38 @@
---
title: "Operational guidelines"
description: ""
robots: "noindex, nofollow"
---

## Recommended instance types

Materialize has been tested to work on instances with the following properties:

- ARM-based CPU
- 1:8 ratio of vCPU to GiB memory
- 1:16 ratio of vCPU to GiB local instance storage (if enabling spill-to-disk)

When operating in AWS, we recommend:

- Using the `r7gd` and `r6gd` families of instances (and `r8gd` once available)
  when running with local disk

- Using the `r8g`, `r7g`, and `r6g` families when running without local disk

## CPU affinity

It is strongly recommended to enable the Kubernetes `static` [CPU management policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy).
This ensures that each worker thread of Materialize is given exclusive access
to a vCPU. Our benchmarks have shown this to substantially improve the
performance of compute-bound workloads.
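For example, on nodes you manage directly, the policy is set in the kubelet configuration. This is a sketch; how you apply kubelet configuration depends on your cluster tooling (e.g., node group user data on EKS), and the reservation value is illustrative:

```yaml
# KubeletConfiguration sketch: enable the static CPU manager policy.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
# Reserve some CPU for system daemons so exclusive cores remain for workloads
systemReserved:
  cpu: "500m"
```

Note that pods only receive exclusive cores when they run in the Guaranteed QoS class with integer CPU requests.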
## Network policies

Enabling network policies restricts ingress, egress, and internal traffic for
your Materialize deployment. See the `networkPolicies.*` parameters in the
[Materialize Operator Configuration](/self-managed/configuration/) for the
available settings.

## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Configuration](/self-managed/configuration/)
- [Installation](/self-managed/installation/)
- [Troubleshooting](/self-managed/troubleshooting/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/troubleshooting.md b/doc/user/content/self-managed/troubleshooting.md
new file mode 100644
index 0000000000000..0ff0a3203106f
--- /dev/null
+++ b/doc/user/content/self-managed/troubleshooting.md
@@ -0,0 +1,27 @@
---
title: "Troubleshooting"
description: ""
aliases:
  - /self-hosted/troubleshooting/
robots: "noindex, nofollow"
---

If you encounter issues with the Materialize operator, check the operator logs:

```shell
kubectl logs -l app.kubernetes.io/name=materialize-operator -n materialize
```

To check the status of your Materialize deployment, run:

```shell
kubectl get all -n materialize
```

## See also

- [Materialize Kubernetes Operator Helm Chart](/self-managed/)
- [Configuration](/self-managed/configuration/)
- [Operational guidelines](/self-managed/operational-guidelines/)
- [Installation](/self-managed/installation/)
- [Upgrading](/self-managed/upgrading/)

diff --git a/doc/user/content/self-managed/upgrading.md b/doc/user/content/self-managed/upgrading.md
new file mode 100644
index 0000000000000..d678e51255997
--- /dev/null
+++ b/doc/user/content/self-managed/upgrading.md
@@ -0,0 +1,168 @@
---
title: "Upgrading"
description: "Upgrading Helm chart and Materialize."
robots: "noindex, nofollow"
---

The following provides steps for upgrading the Materialize operator and
Materialize instances. While the operator and instances can be upgraded
independently, you should ensure version compatibility between them.

When upgrading:

- Ensure version compatibility between the Materialize operator and Materialize
  instance. The operator can manage instances within a certain version range.

- Upgrade the operator first.

- Always upgrade your Materialize instances after upgrading the operator to
  ensure compatibility.

### Upgrading the Helm Chart

{{< important >}}

Upgrade the operator first.

{{< /important >}}

To upgrade the Materialize operator to a new version:

```shell
helm upgrade my-materialize-operator materialize/misc/helm-charts/operator
```

If you have custom values, make sure to include your values file:

```shell
helm upgrade my-materialize-operator materialize/misc/helm-charts/operator -f my-values.yaml
```

### Upgrading Materialize Instances

{{< important >}}

Always upgrade your Materialize instances after upgrading the operator to
ensure compatibility.

{{< /important >}}

To upgrade your Materialize instances, you'll need to update the Materialize custom resource and trigger a rollout.

By default, the operator performs rolling upgrades (`inPlaceRollout: false`), which minimize downtime but require additional Kubernetes cluster resources during the transition. However, keep in mind that rolling upgrades typically take longer to complete due to the sequential rollout process. For environments where downtime is acceptable, you can opt for in-place upgrades (`inPlaceRollout: true`).

#### Determining the Version

The compatible version for your Materialize instances is specified in the Helm chart's `appVersion`. For the installed chart version, you can run:

```shell
helm list -n materialize
```

Or check the `Chart.yaml` file in the `misc/helm-charts/operator` directory:

```yaml
apiVersion: v2
name: materialize-operator
# ...
version: 25.1.0-beta.1
appVersion: v0.125.2 # Use this version for your Materialize instances
```

Use the `appVersion` (`v0.125.2` in this case) when updating your Materialize instances to ensure compatibility.
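To script this, you can read the `app_version` field from Helm's JSON output. This is a sketch that assumes `jq` is installed and the release name used in this guide:

```bash
helm list -n materialize -o json \
  | jq -r '.[] | select(.name == "my-materialize-operator") | .app_version'
```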
#### Using `kubectl` patch

For standard upgrades such as image updates, replace `<materialize-name>` and
`<namespace>` with the name and namespace of your Materialize instance:

```shell
# For version updates, first update the image reference
kubectl patch materialize <materialize-name> \
  -n <namespace> \
  --type='merge' \
  -p "{\"spec\": {\"environmentdImageRef\": \"materialize/environmentd:v0.125.2\"}}"

# Then trigger the rollout with a new UUID
kubectl patch materialize <materialize-name> \
  -n <namespace> \
  --type='merge' \
  -p "{\"spec\": {\"requestRollout\": \"$(uuidgen)\"}}"
```

You can combine both operations in a single command if preferred:

```shell
kubectl patch materialize 12345678-1234-1234-1234-123456789012 \
  -n materialize-environment \
  --type='merge' \
  -p "{\"spec\": {\"environmentdImageRef\": \"materialize/environmentd:v0.125.2\", \"requestRollout\": \"$(uuidgen)\"}}"
```

#### Using YAML Definition

Alternatively, you can update your Materialize custom resource definition directly:

```yaml
apiVersion: materialize.cloud/v1alpha1
kind: Materialize
metadata:
  name: 12345678-1234-1234-1234-123456789012
  namespace: materialize-environment
spec:
  environmentdImageRef: materialize/environmentd:v0.125.2 # Update version as needed
  requestRollout: 22222222-2222-2222-2222-222222222222 # Generate new UUID
  forceRollout: 33333333-3333-3333-3333-333333333333 # Optional: for forced rollouts
  inPlaceRollout: false # When false, performs a rolling upgrade rather than in-place
  backendSecretName: materialize-backend
```

Apply the updated definition:

```shell
kubectl apply -f materialize.yaml
```

#### Forced Rollouts

If you need to force a rollout even when there are no changes to the instance:

```shell
kubectl patch materialize <materialize-name> \
  -n materialize-environment \
  --type='merge' \
  -p "{\"spec\": {\"requestRollout\": \"$(uuidgen)\", \"forceRollout\": \"$(uuidgen)\"}}"
```

The behavior of a forced rollout follows your `inPlaceRollout` setting:

- With `inPlaceRollout: false` (default): Creates new instances before terminating the old ones, temporarily requiring twice the resources during the transition.
- With `inPlaceRollout: true`: Directly replaces the instances, causing downtime but without requiring additional resources.

### Verifying the Upgrade

After initiating the rollout, you can monitor the status:

```shell
# Watch the status of your Materialize environment
kubectl get materialize -n materialize-environment -w

# Check the logs of the operator
kubectl logs -l app.kubernetes.io/name=materialize-operator -n materialize
```

### Notes on Rollouts

- `requestRollout` triggers a rollout only if there are actual changes to the instance (like image updates).
- `forceRollout` triggers a rollout regardless of whether there are changes, which can be useful for debugging or when you need to force a rollout for other reasons.
- Both fields expect UUID values, and each rollout requires a new, unique UUID value.
- `inPlaceRollout`:
  - When `false` (default): Performs a rolling upgrade by spawning new instances before terminating old ones. While this minimizes downtime, there may still be a brief interruption during the transition.
  - When `true`: Directly replaces existing instances, which will cause downtime.
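After a rollout completes, you can confirm the image that is actually running. This sketch reuses the `environmentd` label selector from the log commands earlier in these docs:

```bash
kubectl get pods -n materialize-environment \
  -l app.kubernetes.io/name=environmentd \
  -o jsonpath='{.items[*].spec.containers[*].image}'
```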
+ +## See also + +- [Materialize Kubernetes Operator Helm Chart](/self-managed/) +- [Configuration](/self-managed/configuration/) +- [Installation](/self-managed/installation/) +- [Troubleshooting](/self-managed/troubleshooting/) +- [Operational guidelines](/self-managed/operational-guidelines/) diff --git a/doc/user/data/materialize_operator_chart_parameter.yml b/doc/user/data/materialize_operator_chart_parameter.yml new file mode 100644 index 0000000000000..b202c698fdab7 --- /dev/null +++ b/doc/user/data/materialize_operator_chart_parameter.yml @@ -0,0 +1,587 @@ +parameters: + - parameter: clusterd.nodeSelector + description: "" + default: "{}" + + - parameter: environmentd.nodeSelector + description: "" + default: "{}" + + - parameter: namespace.create + description: "" + default: false + + - parameter: namespace.name + description: "" + default: "materialize" + + - parameter: networkPolicies.egress.cidrs[0] + description: "" + default: "0.0.0.0/0" + + - parameter: networkPolicies.egress.enabled + description: "" + default: false + + - parameter: networkPolicies.enabled + description: "" + default: false + + - parameter: networkPolicies.ingress.cidrs[0] + description: "" + default: "0.0.0.0/0" + + - parameter: networkPolicies.ingress.enabled + description: "" + default: false + + - parameter: networkPolicies.internal.enabled + description: "" + default: false + + - parameter: observability.enabled + description: "" + default: false + + - parameter: observability.prometheus.enabled + description: "" + default: false + + - parameter: operator.args.startupLogFilter + description: "" + default: "INFO,mz_orchestratord=TRACE" + + - parameter: operator.cloudProvider.providers.aws.accountID + description: "" + default: "" + + - parameter: operator.cloudProvider.providers.aws.enabled + description: "" + default: false + + - parameter: operator.cloudProvider.providers.aws.iam.roles.connection + description: "" + default: "" + + - parameter: operator.cloudProvider.providers.aws.iam.roles.environment + description: "" + default: "" + + - parameter: operator.cloudProvider.providers.gcp.enabled + description: "" + default: false + + - parameter: operator.cloudProvider.region + description: "" + default: "kind" + + - parameter: operator.cloudProvider.type + description: "" + default: "local" + + - parameter: operator.clusters.defaultSizes.analytics + description: "" + default: "25cc" + + - parameter: operator.clusters.defaultSizes.catalogServer + description: "" + default: "50cc" + + - parameter: operator.clusters.defaultSizes.default + description: "" + default: "25cc" + + - parameter: operator.clusters.defaultSizes.probe + description: "" + default: "mz_probe" + + - parameter: operator.clusters.defaultSizes.support + description: "" + default: "25cc" + + - parameter: operator.clusters.defaultSizes.system + description: "" + default: "25cc" + + - parameter: operator.clusters.sizes.100cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.100cc.cpu_limit + description: "" + default: 2 + + - parameter: operator.clusters.sizes.100cc.credits_per_hour + description: "" + default: "1" + + - parameter: operator.clusters.sizes.100cc.disk_limit + description: "" + default: "31050MiB" + + - parameter: operator.clusters.sizes.100cc.memory_limit + description: "" + default: "15525MiB" + + - parameter: operator.clusters.sizes.100cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.100cc.workers + description: "" + default: 2 + + - parameter: 
operator.clusters.sizes.1200cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.1200cc.cpu_limit + description: "" + default: 24 + + - parameter: operator.clusters.sizes.1200cc.credits_per_hour + description: "" + default: "12" + + - parameter: operator.clusters.sizes.1200cc.disk_limit + description: "" + default: "372603MiB" + + - parameter: operator.clusters.sizes.1200cc.memory_limit + description: "" + default: "186301MiB" + + - parameter: operator.clusters.sizes.1200cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.1200cc.workers + description: "" + default: 24 + + - parameter: operator.clusters.sizes.128C.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.128C.cpu_limit + description: "" + default: 62 + + - parameter: operator.clusters.sizes.128C.credits_per_hour + description: "" + default: "128" + + - parameter: operator.clusters.sizes.128C.disk_limit + description: "" + default: "962560MiB" + + - parameter: operator.clusters.sizes.128C.memory_limit + description: "" + default: "481280MiB" + + - parameter: operator.clusters.sizes.128C.scale + description: "" + default: 4 + + - parameter: operator.clusters.sizes.128C.workers + description: "" + default: 62 + + - parameter: operator.clusters.sizes.1600cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.1600cc.cpu_limit + description: "" + default: 31 + + - parameter: operator.clusters.sizes.1600cc.credits_per_hour + description: "" + default: "16" + + - parameter: operator.clusters.sizes.1600cc.disk_limit + description: "" + default: "481280MiB" + + - parameter: operator.clusters.sizes.1600cc.memory_limit + description: "" + default: "240640MiB" + + - parameter: operator.clusters.sizes.1600cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.1600cc.workers + description: "" + default: 31 + + - parameter: operator.clusters.sizes.200cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.200cc.cpu_limit + description: "" + default: 4 + + - parameter: operator.clusters.sizes.200cc.credits_per_hour + description: "" + default: "2" + + - parameter: operator.clusters.sizes.200cc.disk_limit + description: "" + default: "62100MiB" + + - parameter: operator.clusters.sizes.200cc.memory_limit + description: "" + default: "31050MiB" + + - parameter: operator.clusters.sizes.200cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.200cc.workers + description: "" + default: 4 + + - parameter: operator.clusters.sizes.256C.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.256C.cpu_limit + description: "" + default: 62 + + - parameter: operator.clusters.sizes.256C.credits_per_hour + description: "" + default: "256" + + - parameter: operator.clusters.sizes.256C.disk_limit + description: "" + default: "962560MiB" + + - parameter: operator.clusters.sizes.256C.memory_limit + description: "" + default: "481280MiB" + + - parameter: operator.clusters.sizes.256C.scale + description: "" + default: 8 + + - parameter: operator.clusters.sizes.256C.workers + description: "" + default: 62 + + - parameter: operator.clusters.sizes.25cc.cpu_exclusive + description: "" + default: false + + - parameter: operator.clusters.sizes.25cc.cpu_limit + description: "" + default: 0.5 + + - parameter: operator.clusters.sizes.25cc.credits_per_hour + description: "" + default: "0.25" + + - parameter: 
operator.clusters.sizes.25cc.disk_limit + description: "" + default: "7762MiB" + + - parameter: operator.clusters.sizes.25cc.memory_limit + description: "" + default: "3881MiB" + + - parameter: operator.clusters.sizes.25cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.25cc.workers + description: "" + default: 1 + + - parameter: operator.clusters.sizes.300cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.300cc.cpu_limit + description: "" + default: 6 + + - parameter: operator.clusters.sizes.300cc.credits_per_hour + description: "" + default: "3" + + - parameter: operator.clusters.sizes.300cc.disk_limit + description: "" + default: "93150MiB" + + - parameter: operator.clusters.sizes.300cc.memory_limit + description: "" + default: "46575MiB" + + - parameter: operator.clusters.sizes.300cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.300cc.workers + description: "" + default: 6 + + - parameter: operator.clusters.sizes.3200cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.3200cc.cpu_limit + description: "" + default: 62 + + - parameter: operator.clusters.sizes.3200cc.credits_per_hour + description: "" + default: "32" + + - parameter: operator.clusters.sizes.3200cc.disk_limit + description: "" + default: "962560MiB" + + - parameter: operator.clusters.sizes.3200cc.memory_limit + description: "" + default: "481280MiB" + + - parameter: operator.clusters.sizes.3200cc.scale + description: "" + default: 1 + + - parameter: operator.clusters.sizes.3200cc.workers + description: "" + default: 62 + + - parameter: operator.clusters.sizes.400cc.cpu_exclusive + description: "" + default: true + + - parameter: operator.clusters.sizes.400cc.cpu_limit + description: "" + default: 8 + - parameter: operator.clusters.sizes.400cc.credits_per_hour + description: "" + default: "4" + - parameter: operator.clusters.sizes.400cc.disk_limit + description: "" + default: 124201MiB + - parameter: operator.clusters.sizes.400cc.memory_limit + description: "" + default: 62100MiB + - parameter: operator.clusters.sizes.400cc.scale + description: "" + default: 1 + - parameter: operator.clusters.sizes.400cc.workers + description: "" + default: 8 + - parameter: operator.clusters.sizes.50cc.cpu_exclusive + description: "" + default: true + - parameter: operator.clusters.sizes.50cc.cpu_limit + description: "" + default: 1 + - parameter: operator.clusters.sizes.50cc.credits_per_hour + description: "" + default: "0.5" + - parameter: operator.clusters.sizes.50cc.disk_limit + description: "" + default: 15525MiB + - parameter: operator.clusters.sizes.50cc.memory_limit + description: "" + default: 7762MiB + - parameter: operator.clusters.sizes.50cc.scale + description: "" + default: 1 + - parameter: operator.clusters.sizes.50cc.workers + description: "" + default: 1 + + - parameter: operator.clusters.sizes.512C.cpu_exclusive + description: "" + default: true + - parameter: operator.clusters.sizes.512C.cpu_limit + description: "" + default: 62 + - parameter: operator.clusters.sizes.512C.credits_per_hour + description: "" + default: "512" + - parameter: operator.clusters.sizes.512C.disk_limit + description: "" + default: 962560MiB + - parameter: operator.clusters.sizes.512C.memory_limit + description: "" + default: 481280MiB + - parameter: operator.clusters.sizes.512C.scale + description: "" + default: 16 + - parameter: operator.clusters.sizes.512C.workers + description: "" + default: 62 + - 
parameter: operator.clusters.sizes.600cc.cpu_exclusive + description: "" + default: true + - parameter: operator.clusters.sizes.600cc.cpu_limit + description: "" + default: 12 + - parameter: operator.clusters.sizes.600cc.credits_per_hour + description: "" + default: "6" + - parameter: operator.clusters.sizes.600cc.disk_limit + description: "" + default: 186301MiB + - parameter: operator.clusters.sizes.600cc.memory_limit + description: "" + default: 93150MiB + - parameter: operator.clusters.sizes.600cc.scale + description: "" + default: 1 + - parameter: operator.clusters.sizes.600cc.workers + description: "" + default: 12 + - parameter: operator.clusters.sizes.6400cc.cpu_exclusive + description: "" + default: true + - parameter: operator.clusters.sizes.6400cc.cpu_limit + description: "" + default: 62 + - parameter: operator.clusters.sizes.6400cc.credits_per_hour + description: "" + default: "64" + - parameter: operator.clusters.sizes.6400cc.disk_limit + description: "" + default: 962560MiB + - parameter: operator.clusters.sizes.6400cc.memory_limit + description: "" + default: 481280MiB + - parameter: operator.clusters.sizes.6400cc.scale + description: "" + default: 2 + - parameter: operator.clusters.sizes.6400cc.workers + description: "" + default: 62 + - parameter: operator.clusters.sizes.800cc.cpu_exclusive + description: "" + default: true + - parameter: operator.clusters.sizes.800cc.cpu_limit + description: "" + default: 16 + - parameter: operator.clusters.sizes.800cc.credits_per_hour + description: "" + default: "8" + - parameter: operator.clusters.sizes.800cc.disk_limit + description: "" + default: 248402MiB + - parameter: operator.clusters.sizes.800cc.memory_limit + description: "" + default: 124201MiB + - parameter: operator.clusters.sizes.800cc.scale + description: "" + default: 1 + - parameter: operator.clusters.sizes.800cc.workers + description: "" + default: 16 + + - parameter: operator.clusters.sizes.mz_probe.cpu_exclusive + description: "" + default: false + - parameter: operator.clusters.sizes.mz_probe.cpu_limit + description: "" + default: 0.1 + - parameter: operator.clusters.sizes.mz_probe.credits_per_hour + description: "" + default: "0.00" + - parameter: operator.clusters.sizes.mz_probe.disk_limit + description: "" + default: 1552MiB + - parameter: operator.clusters.sizes.mz_probe.memory_limit + description: "" + default: 776MiB + - parameter: operator.clusters.sizes.mz_probe.scale + description: "" + default: 1 + - parameter: operator.clusters.sizes.mz_probe.workers + description: "" + default: 1 + - parameter: operator.features.consoleImageTagMapOverride + description: "" + default: "{}" + - parameter: operator.features.createBalancers + description: "" + default: true + - parameter: operator.features.createConsole + description: "" + default: true + - parameter: operator.image.pullPolicy + description: "" + default: IfNotPresent + - parameter: operator.image.repository + description: "" + default: materialize/orchestratord + + - parameter: operator.image.tag + description: "" + default: v0.125.2 + - parameter: operator.nodeSelector + description: "" + default: "{}" + - parameter: operator.resources.limits.memory + description: "" + default: 512Mi + - parameter: operator.resources.requests.cpu + description: "" + default: 100m + - parameter: operator.resources.requests.memory + description: "" + default: 512Mi + - parameter: rbac.create + description: "" + default: true + - parameter: rbac.clusterRole + description: "" + default: "" + - parameter: serviceAccount.create 
+ description: "" + default: true + - parameter: serviceAccount.name + description: "" + default: orchestratord + + - parameter: storage.storageClass.allowVolumeExpansion + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: false + - parameter: storage.storageClass.create + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: false + - parameter: storage.storageClass.name + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: "" + - parameter: storage.storageClass.parameters.fsType + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: ext4 + - parameter: storage.storageClass.parameters.storage + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: lvm + - parameter: storage.storageClass.parameters.volgroup + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: instance-store-vg + - parameter: storage.storageClass.provisioner + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: "" + - parameter: storage.storageClass.reclaimPolicy + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: Delete + - parameter: storage.storageClass.volumeBindingMode + description: | + See https://kubernetes.io/docs/concepts/storage/storage-classes/. + default: WaitForFirstConsumer diff --git a/doc/user/layouts/partials/head.html b/doc/user/layouts/partials/head.html index bfffb9f05809e..2841fee871169 100644 --- a/doc/user/layouts/partials/head.html +++ b/doc/user/layouts/partials/head.html @@ -5,6 +5,11 @@ + +{{ if .Params.robots }} + +{{ end }} + {{ $title }} diff --git a/doc/user/layouts/shortcodes/self-managed/git-checkout-branch.md b/doc/user/layouts/shortcodes/self-managed/git-checkout-branch.md new file mode 100644 index 0000000000000..dbe620a0317ee --- /dev/null +++ b/doc/user/layouts/shortcodes/self-managed/git-checkout-branch.md @@ -0,0 +1 @@ +Check out the `v0.125.2` branch. diff --git a/doc/user/layouts/shortcodes/self-managed/materialize-operator-chart-parameters-table.html b/doc/user/layouts/shortcodes/self-managed/materialize-operator-chart-parameters-table.html new file mode 100644 index 0000000000000..bb56060ccc32e --- /dev/null +++ b/doc/user/layouts/shortcodes/self-managed/materialize-operator-chart-parameters-table.html @@ -0,0 +1,18 @@ + + + + + + + + + + +{{ range $.Site.Data.materialize_operator_chart_parameter.parameters }} + + + + +{{ end }} + +
ParameterDefault
{{ .parameter }}{{ .default }}
diff --git a/doc/user/layouts/shortcodes/self-managed/materialize-operator-chart-parameters.html b/doc/user/layouts/shortcodes/self-managed/materialize-operator-chart-parameters.html new file mode 100644 index 0000000000000..9dade259e638f --- /dev/null +++ b/doc/user/layouts/shortcodes/self-managed/materialize-operator-chart-parameters.html @@ -0,0 +1,21 @@ + +{{- $headings := slice -}} +{{ range $.Site.Data.materialize_operator_chart_parameter.parameters }} + +{{ $parts := split .parameter "." }} +{{ $heading := index $parts 0 }} + +{{if not (in $headings $heading) }} + +{{- $headings = $headings | append $heading -}} + +### `{{ $heading | $.Page.RenderString }}` parameters + +{{ end }} + +#### {{ .parameter }} + +**Default**: {{ .default | $.Page.RenderString }} + +{{.description | $.Page.RenderString }} +{{ end }} diff --git a/doc/user/static/images/self-managed/self-managed-console-kind.png b/doc/user/static/images/self-managed/self-managed-console-kind.png new file mode 100644 index 0000000000000..90dd8ba9eccce Binary files /dev/null and b/doc/user/static/images/self-managed/self-managed-console-kind.png differ diff --git a/doc/user/static/images/self-managed/self-managed-console-minkiube.png b/doc/user/static/images/self-managed/self-managed-console-minkiube.png new file mode 100644 index 0000000000000..7d77f4bad8ce6 Binary files /dev/null and b/doc/user/static/images/self-managed/self-managed-console-minkiube.png differ