From 0031395f84eeac1bf41c199cf78b386fdeb09b7f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ma=C5=82gorzata=20=C5=9Awieca?= Date: Tue, 1 Oct 2024 16:49:09 +0200 Subject: [PATCH] docs: Create a new folder structure (#1905) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Create a new folder structure * Apply Chris's suggestions * Formatting fix * Revert "Formatting fix" This reverts commit 2f9aac7c3fd6ed7ce687608d04b19cf01f78b065. * remove whitespace --------- Co-authored-by: Christoph Schwägerl --- README.md | 71 +++------ docs/README.md | 30 ---- .../01-architecture.md} | 2 +- .../02-controllers.md} | 0 .../03-config-private-registry.md} | 0 .../04-local-test-setup.md} | 2 +- .../05-api-changelog.md} | 0 .../assets/kyma-module-template-structure.svg | 0 .../assets/kyma-operator-architecture.svg | 0 .../assets/lifecycle-manager-architecture.svg | 0 .../resources/01-kyma.md} | 10 +- .../resources/02-manifest.md} | 24 ++- .../resources/03-moduletemplate.md} | 12 +- docs/contributor/resources/04-watcher.md | 11 ++ .../api => contributor/resources}/README.md | 27 ++-- docs/developer-tutorials/README.md | 3 - .../prepare-gcr-registry.md | 89 ----------- .../provision-cluster-and-registry.md | 141 ------------------ .../starting-operator-with-webhooks.md | 19 --- docs/modularization.md | 45 ------ docs/technical-reference/running-modes.md | 31 ---- .../01-10-control-plane-quick-start.md | 94 ------------ ...nage-module-with-custom-resource-policy.md | 41 ----- docs/user/01-10-kyma-crd.md | 11 ++ docs/user/README.md | 3 + internal/declarative/README.md | 95 ------------ 26 files changed, 90 insertions(+), 671 deletions(-) delete mode 100644 docs/README.md rename docs/{technical-reference/architecture.md => contributor/01-architecture.md} (98%) rename docs/{technical-reference/controllers.md => contributor/02-controllers.md} (100%) rename docs/{developer-tutorials/config-private-registry.md => contributor/03-config-private-registry.md} (100%) rename docs/{developer-tutorials/local-test-setup.md => contributor/04-local-test-setup.md} (99%) rename docs/{technical-reference/api/changelog.md => contributor/05-api-changelog.md} (100%) rename docs/{ => contributor}/assets/kyma-module-template-structure.svg (100%) rename docs/{ => contributor}/assets/kyma-operator-architecture.svg (100%) rename docs/{ => contributor}/assets/lifecycle-manager-architecture.svg (100%) rename docs/{technical-reference/api/kyma-cr.md => contributor/resources/01-kyma.md} (96%) rename docs/{technical-reference/api/manifest-cr.md => contributor/resources/02-manifest.md} (85%) rename docs/{technical-reference/api/moduleTemplate-cr.md => contributor/resources/03-moduletemplate.md} (91%) create mode 100644 docs/contributor/resources/04-watcher.md rename docs/{technical-reference/api => contributor/resources}/README.md (72%) delete mode 100644 docs/developer-tutorials/README.md delete mode 100644 docs/developer-tutorials/prepare-gcr-registry.md delete mode 100644 docs/developer-tutorials/provision-cluster-and-registry.md delete mode 100644 docs/developer-tutorials/starting-operator-with-webhooks.md delete mode 100644 docs/modularization.md delete mode 100644 docs/technical-reference/running-modes.md delete mode 100644 docs/user-tutorials/01-10-control-plane-quick-start.md delete mode 100644 docs/user-tutorials/02-10-manage-module-with-custom-resource-policy.md create mode 100644 docs/user/01-10-kyma-crd.md create mode 100644 docs/user/README.md delete mode 100644 internal/declarative/README.md 
diff --git a/README.md b/README.md index 4a0c0fce4e..1fbb272dc1 100644 --- a/README.md +++ b/README.md @@ -1,66 +1,33 @@ - -[![REUSE status](https://api.reuse.software/badge/github.com/kyma-project/lifecycle-manager)](https://api.reuse.software/info/github.com/kyma-project/lifecycle-manager) # Lifecycle Manager -Kyma is an opinionated set of Kubernetes-based modular building blocks that includes the necessary capabilities to develop and run enterprise-grade cloud-native applications. Kyma's Lifecycle Manager is a tool that manages the lifecycle of these modules in your cluster. - -## Modularization - -Lifecycle Manager was introduced along with the concept of Kyma modularization. With Kyma's modular approach, you can install just the modules you need, giving you more flexibility and reducing the footprint of your Kyma cluster. Lifecycle Manager manages clusters using the [Kyma](api/v1beta2/kyma_types.go) custom resource (CR). The CR defines the desired state of modules in a cluster. With the CR you can enable and disable modules. Lifecycle Manager installs or uninstalls modules and updates their statuses. For more details, read about the [modularization concept in Kyma](https://github.com/kyma-project/community/tree/main/concepts/modularization). - -## Basic Concepts - -See the list of basic concepts relating to Lifecycle Manager to understand its workflow better. - -- Kyma custom resource (CR) - represents Kyma installation in a cluster. It contains the list of modules and their state. - -- ModuleTemplate CR - contains modules' metadata with links to their images and manifests. ModuleTemplate CR represents a module in a particular version. Based on this resource Lifecycle Manager enables or disables modules in your cluster. -- Manifest CR - represents resources that make up a module and are to be installed by Lifecycle Manager. The Manifest CR is a rendered module enabled on a particular cluster. -- Module CR, such as Keda CR - allows you to configure the behavior of a module. This is a per-module CR. - -For the worklow details, read the [Architecture](docs/technical-reference/architecture.md) document. - -## Quick Start - -Follow this quick start guide to set up the environment and use Lifecycle Manager to add modules. - -### Prerequisites - -To use Lifecycle Manager in a local setup, install the following: - -- [k3d](https://k3d.io/) -- [istioctl](https://istio.io/latest/docs/setup/install/istioctl/) -- [Kyma CLI](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/01-install-kyma-CLI) - -### Steps - -1. To set up the environment, provision a local k3d cluster and install Kyma. Run: + +[![REUSE status](https://api.reuse.software/badge/github.com/kyma-project/lifecycle-manager)](https://api.reuse.software/info/github.com/kyma-project/lifecycle-manager) - ```bash - k3d registry create kyma-registry --port 5001 - k3d cluster create kyma --kubeconfig-switch-context -p 80:80@loadbalancer -p 443:443@loadbalancer --registry-use kyma-registry - kubectl create ns kyma-system - kyma alpha deploy - ``` +## Overview -2. Apply a ModuleTemplate CR. Run the following kubectl command: +Lifecycle Manager is an operator based on the [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework. It extends the Kubernetes API by providing multiple Custom Resource Definitions, which allow you to manage resources through custom resources (CRs). For more information, see [Lifecycle Manager Resources](./docs/contributor/resources/README.md).
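As a quick sanity check, you can confirm these CRDs in a cluster where Lifecycle Manager is deployed. This is a minimal sketch; it assumes cluster access and uses the CRD names documented in the resource pages referenced above:

```bash
# List the Custom Resource Definitions installed by Lifecycle Manager
kubectl get crds \
  kymas.operator.kyma-project.io \
  manifests.operator.kyma-project.io \
  moduletemplates.operator.kyma-project.io \
  watchers.operator.kyma-project.io
```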
- ```bash - kubectl apply -f {MODULE_TEMPLATE.yaml} - ``` +Lifecycle Manager manages the lifecycle of [Kyma Modules](https://help.sap.com/docs/btp/sap-business-technology-platform/kyma-modules) in a cluster. It was introduced along with the concept of Kyma [modularization](https://github.com/kyma-project/community/tree/main/concepts/modularization). -**TIP:** You can use any deployment-ready ModuleTemplates, such as [cluster-ip](https://github.com/pbochynski/) or [keda](https://github.com/kyma-project/keda-manager). +For more information on Lifecycle Manager's workflow, see the [Architecture](docs/contributor/01-architecture.md) document. -3. Enable a module. Run: +## Usage - ```bash - kyma alpha add module {MODULE_NAME} - ``` +If you are a Kyma end user, see the [user documentation](./docs/user/README.md). -**TIP:** Check the [modular Kyma interactive tutorial](https://killercoda.com/kyma-project/scenario/modular-kyma) to play with enabling and disabling Kyma modules in both terminal and Busola. +## Development -## Read More +If you want to provide new features for Lifecycle Manager, develop a module, or are part of the SRE team, visit the [contributor](/docs/contributor/) folder. You will find the following documents: -Go to the [`Table of Contents`](/docs/README.md) in the `/docs` directory to find the complete list of documents on Lifecycle Manager. Read those to learn more about Lifecycle Manager and its functionalities. +* [Architecture](/docs/contributor/01-architecture.md) +* [Controllers](/docs/contributor/02-controllers.md) +* [Provide Credentials for Private OCI Registry Authentication](/docs/contributor/03-config-private-registry.md) +* [Local Test Setup in the Control Plane Mode Using k3d](/docs/contributor/04-local-test-setup.md) +* [Resources](/docs/contributor/resources/README.md) + * [Kyma](/docs/contributor/resources/01-kyma.md) + * [Manifest](/docs/contributor/resources/02-manifest.md) + * [ModuleTemplate](/docs/contributor/resources/03-moduletemplate.md) + * [Watcher](/docs/contributor/resources/04-watcher.md) ## Contributing diff --git a/docs/README.md b/docs/README.md deleted file mode 100644 index 9191d99436..0000000000 --- a/docs/README.md +++ /dev/null @@ -1,30 +0,0 @@ -# Documentation - -## Overview - -The `docs` folder contains documentation on the Lifecycle Manager project. - -## Table of Contents - -The table of contents lists all the documents in repository with their short description.
- -* [Developer tutorials](developer-tutorials/README.md) - a directory containing infrastructure-related guides for developers - * [Provide credential for private OCI registry authentication](developer-tutorials/config-private-registry.md) - * [Local test setup in the control-plane mode using k3d](developer-tutorials/local-test-setup.md) - * [Create a test environment on Google Container Registry (GCR)](developer-tutorials/prepare-gcr-registry.md) - * [Provision cluster and OCI registry](developer-tutorials/provision-cluster-and-registry.md) - * [Enable Webhooks in Lifecycle Manager](developer-tutorials/starting-operator-with-webhooks.md) -* Technical reference - a directory with techncial details on Lifecycle Manager, such as architecture, APIs, or running modes - * [API](technical-reference/api/README.md) - a directory with the description of Lifecycle Manager's custom resources (CRs) - * [Kyma CR](technical-reference/api/kyma-cr.md) - * [Manifest CR](technical-reference/api/manifest-cr.md) - * [ModuleTemplate CR](technical-reference/api/moduleTemplate-cr.md) - * [Architecture](technical-reference/architecture.md) - describes Lifecycle Manager's architecture - * [Controllers](technical-reference/controllers.md) - describes Kyma, Manifest and Watcher controllers - * [Running Modes](technical-reference/running-modes.md) - describes Lifecycle Manager's running modes - * [Declarative Reconciliation Library Reference Documentation](/internal/declarative/README.md) - * [Internal Manifest Reconciliation Library Extensions](/internal/manifest/README.md) -* User tutorials - * [Managing module enablement with the CustomResourcePolicy](user-tutorials/02-10-manage-module-with-custom-resource-policy.md) - * [Quick Start](user-tutorials/01-10-control-plane-quick-start.md) -* [Modularization](modularization.md) - describes the modularization concept and its building blocks in the context of Lifecycle Manager diff --git a/docs/technical-reference/architecture.md b/docs/contributor/01-architecture.md similarity index 98% rename from docs/technical-reference/architecture.md rename to docs/contributor/01-architecture.md index 261197d12f..fde0a9e24c 100644 --- a/docs/technical-reference/architecture.md +++ b/docs/contributor/01-architecture.md @@ -12,7 +12,7 @@ Lifecycle Manager: The diagram shows a sample deployment of KCP in interaction with a Kyma runtime. 
-![Lifecycle Manager Architecture](/docs/assets/lifecycle-manager-architecture.svg) +![Lifecycle Manager Architecture](./assets/lifecycle-manager-architecture.svg) To run, Lifecycle Manager uses the following workflow: diff --git a/docs/technical-reference/controllers.md b/docs/contributor/02-controllers.md similarity index 100% rename from docs/technical-reference/controllers.md rename to docs/contributor/02-controllers.md diff --git a/docs/developer-tutorials/config-private-registry.md b/docs/contributor/03-config-private-registry.md similarity index 100% rename from docs/developer-tutorials/config-private-registry.md rename to docs/contributor/03-config-private-registry.md diff --git a/docs/developer-tutorials/local-test-setup.md b/docs/contributor/04-local-test-setup.md similarity index 99% rename from docs/developer-tutorials/local-test-setup.md rename to docs/contributor/04-local-test-setup.md index f4ef4f6c6c..9029982996 100644 --- a/docs/developer-tutorials/local-test-setup.md +++ b/docs/contributor/04-local-test-setup.md @@ -1,4 +1,4 @@ -# Local test Setup in the control-plane Mode Using k3d +# Local Test Setup in the Control Plane Mode Using k3d > ### Supported Versions > * Golang: `v1.22.5` diff --git a/docs/technical-reference/api/changelog.md b/docs/contributor/05-api-changelog.md similarity index 100% rename from docs/technical-reference/api/changelog.md rename to docs/contributor/05-api-changelog.md diff --git a/docs/assets/kyma-module-template-structure.svg b/docs/contributor/assets/kyma-module-template-structure.svg similarity index 100% rename from docs/assets/kyma-module-template-structure.svg rename to docs/contributor/assets/kyma-module-template-structure.svg diff --git a/docs/assets/kyma-operator-architecture.svg b/docs/contributor/assets/kyma-operator-architecture.svg similarity index 100% rename from docs/assets/kyma-operator-architecture.svg rename to docs/contributor/assets/kyma-operator-architecture.svg diff --git a/docs/assets/lifecycle-manager-architecture.svg b/docs/contributor/assets/lifecycle-manager-architecture.svg similarity index 100% rename from docs/assets/lifecycle-manager-architecture.svg rename to docs/contributor/assets/lifecycle-manager-architecture.svg diff --git a/docs/technical-reference/api/kyma-cr.md b/docs/contributor/resources/01-kyma.md similarity index 96% rename from docs/technical-reference/api/kyma-cr.md rename to docs/contributor/resources/01-kyma.md index 713c04248e..201932ed8c 100644 --- a/docs/technical-reference/api/kyma-cr.md +++ b/docs/contributor/resources/01-kyma.md @@ -1,4 +1,12 @@ -# Kyma Custom Resource +# Kyma + +The `kymas.operator.kyma-project.io` Custom Resource Definition (CRD) defines the structure and format used to manage a cluster and its desired state. It contains the list of added modules and their state. + +To get the latest CRD in the YAML format, run the following command: + +```bash +kubectl get crd kymas.operator.kyma-project.io -o yaml +``` The [Kyma custom resource (CR)](../../../api/v1beta2/kyma_types.go) is used to declare the desired state of a cluster. **.spec.channel**, **.spec.modules[].channel**, and **.spec.modules** are the basic fields that are used together to define the cluster state. 
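To make the basic fields concrete, here is a minimal Kyma CR sketch. The name and module are illustrative placeholders; the apiVersion and kind follow the stability table in this repository, and `regular` is one of the release channels mentioned in these docs:

```bash
# Apply a minimal Kyma CR (illustrative values; adjust the channel and modules to your setup)
kubectl apply -f - <<EOF
apiVersion: operator.kyma-project.io/v1beta2
kind: Kyma
metadata:
  name: my-kyma
  namespace: kyma-system
spec:
  channel: regular
  modules:
    - name: my-module
EOF
```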
diff --git a/docs/technical-reference/api/manifest-cr.md b/docs/contributor/resources/02-manifest.md similarity index 85% rename from docs/technical-reference/api/manifest-cr.md rename to docs/contributor/resources/02-manifest.md index 7e823806f5..2000f83f24 100644 --- a/docs/technical-reference/api/manifest-cr.md +++ b/docs/contributor/resources/02-manifest.md @@ -1,6 +1,14 @@ -# Manifest Custom Resource +# Manifest -The [Manifest custom resource (CR)](../../../api/v1beta2/manifest_types.go) is our internal representation of what results from the resolution of a ModuleTemplate CR in the context of a single cluster represented by a Kyma CR. Thus, a lot of configuration elements are similar or entirely equivalent to the data layer found in a ModuleTemplate CR. +The `manifests.operator.kyma-project.io` Custom Resource Definition (CRD) defines the structure and format used to configure the Manifest resource. + +The Manifest custom resource (CR) represents the resources that make up a module and are to be installed by Lifecycle Manager. The Manifest CR is a rendered module added to a particular cluster. + +To get the latest CRD in the YAML format, run the following command: + +```bash +kubectl get crd manifests.operator.kyma-project.io -o yaml +``` ## Patching @@ -10,7 +18,7 @@ The [Runner](../../../pkg/module/sync/runner.go) is responsible for creating and 2. The Manifest CR channel differs from the Kyma CR's module status channel. 3. The Manifest CR state differs from the Kyma CR's module status state. ->[!NOTE] +>[!NOTE] >The module status is not present in the Kyma CR for mandatory modules, hence their Manifest CR is updated using SSA in every reconcile loop. ## Configuration @@ -102,12 +110,12 @@ The resource is the default data that should be initialized for the module and i ### **.status** -The Manifest CR status is set based on the following logic, managed by the manifest reconciler: +The Manifest CR status is set based on the following logic, managed by the manifest reconciler: -- If the module defined in the Manifest CR is successfully applied and the deployed module is up and running, the status of the Manifest CR is set to `Ready`. -- While the manifest is being applied and the Deployment is still starting, the status of the Manifest CR is set to `Processing`. -- If the Deployment cannot start (for example, due to an `ImagePullBackOff` error) or if the application of the manifest fails, the status of the Manifest CR is set to `Error`. -- If the Manifest CR is marked for deletion, the status of the Manifest CR is set to `Deleting`. +* If the module defined in the Manifest CR is successfully applied and the deployed module is up and running, the status of the Manifest CR is set to `Ready`. +* While the manifest is being applied and the Deployment is still starting, the status of the Manifest CR is set to `Processing`. +* If the Deployment cannot start (for example, due to an `ImagePullBackOff` error) or if the application of the manifest fails, the status of the Manifest CR is set to `Error`. +* If the Manifest CR is marked for deletion, the status of the Manifest CR is set to `Deleting`. This status provides a reliable way to track the state of the Manifest CR and the associated module. It offers insights into the deployment process and any potential issues while being decoupled from the module's business logic.
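To observe this status logic in practice, you can read the state straight from the Manifest CRs. A minimal sketch, assuming access to the cluster where the Manifest CRs live (the KCP cluster in the control-plane mode):

```bash
# Show each Manifest CR with its current state (Ready, Processing, Error, or Deleting)
kubectl get manifests.operator.kyma-project.io -A \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,STATE:.status.state
```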
diff --git a/docs/technical-reference/api/moduleTemplate-cr.md b/docs/contributor/resources/03-moduletemplate.md similarity index 91% rename from docs/technical-reference/api/moduleTemplate-cr.md rename to docs/contributor/resources/03-moduletemplate.md index 042b3363e4..9c43f6d559 100644 --- a/docs/technical-reference/api/moduleTemplate-cr.md +++ b/docs/contributor/resources/03-moduletemplate.md @@ -1,6 +1,14 @@ -# ModuleTemplate Custom Resource +# ModuleTemplate -The core of our modular discovery is the [ModuleTemplate custom resource (CR)](../../../api/v1beta2/moduletemplate_types.go). It is used to initialize and resolve modules. +The `moduletemplates.operator.kyma-project.io` Custom Resource Definition (CRD) defines the structure and format used to configure the ModuleTemplate resource. + +The ModuleTemplate custom resource (CR) defines a module, in a particular version, that can be added to or deleted from the module list in the Kyma CR. Each ModuleTemplate CR represents one module. + +To get the latest CRD in the YAML format, run the following command: + +```bash +kubectl get crd moduletemplates.operator.kyma-project.io -o yaml +``` ## Configuration diff --git a/docs/contributor/resources/04-watcher.md b/docs/contributor/resources/04-watcher.md new file mode 100644 index 0000000000..4bd1b00191 --- /dev/null +++ b/docs/contributor/resources/04-watcher.md @@ -0,0 +1,11 @@ +# Watcher + +The `watchers.operator.kyma-project.io` Custom Resource Definition (CRD) defines the structure and format used to configure the Watcher resource. + +The Watcher custom resource (CR) defines the callback functionality for synchronized Kyma runtime clusters, which allows lower latencies before the Kyma Control Plane cluster detects any changes. + +To get the latest CRD in the YAML format, run the following command: + +```bash +kubectl get crd watchers.operator.kyma-project.io -o yaml +``` diff --git a/docs/technical-reference/api/README.md b/docs/contributor/resources/README.md similarity index 72% rename from docs/technical-reference/api/README.md rename to docs/contributor/resources/README.md index bc330d1ffe..729c7f6134 100644 --- a/docs/technical-reference/api/README.md +++ b/docs/contributor/resources/README.md @@ -1,22 +1,11 @@ -# Lifecycle Manager API +# Lifecycle Manager Resources -## Overview +The API of Lifecycle Manager is based on Kubernetes Custom Resource Definitions (CRDs), which extend the Kubernetes API with custom additions. The CRDs allow Lifecycle Manager to manage clusters and modules. To inspect the specification of the Lifecycle Manager resources, see: -The Lifecycle Manager API types consist of three major custom resources (CRs). Each CR deals with a specific aspect of reconciling modules into their corresponding states. - -1. [Kyma CR](/api/v1beta2/kyma_types.go) that introduces a single entry point CustomResourceDefinition to control a cluster and it's desired state. -2. [Manifest CR](/api/v1beta2/manifest_types.go) that introduces a single entry point CustomResourceDefinition to control a module and it's desired state. -3. [ModuleTemplate CR](/api/v1beta2/moduletemplate_types.go) that contains all reference data for the modules to be installed correctly. It is a standardized desired state for a module available in a given release channel. - -Additionally, we maintain the [Watcher CR](/api/v1beta2/watcher_types.go) to define the callback functionality for synchronized remote clusters that allows lower latencies before the Control Plane detects any changes.
- -## Custom Resource Definitions - -Read more about the custom resource definitions (CRDs) in the respective documents: - -* [Kyma CR](kyma-cr.md) -* [Manifest CR](manifest-cr.md) -* [ModuleTemplate CR](moduleTemplate-cr.md) +* [Kyma CRD](01-kyma.md) +* [Manifest CRD](02-manifest.md) +* [ModuleTemplate CRD](03-moduletemplate.md) +* [Watcher CRD](04-watcher.md) ## Synchronization of Module Catalog with Remote Clusters @@ -36,7 +25,7 @@ By default, without any labels configured on Kyma and ModuleTemplate CRs, a Modu **NOTE:** Disabling synchronization for already synchronized ModuleTemplates CRs doesn't remove them from remote clusters. The CRs remain as they are, but any subsequent changes to these ModuleTemplate CRs in the Control Plane are not synchronized. -For details, read about [the Kyma CR synchronization labels](kyma-cr.md#operatorkyma-projectio-labels) and [the ModuleTemplate CR synchronization labels](moduleTemplate-cr.md#operatorkyma-projectio-labels). +For more information, see [the Kyma CR synchronization labels](01-kyma.md#operatorkyma-projectio-labels) and [the ModuleTemplate CR synchronization labels](03-moduletemplate.md#operatorkyma-projectio-labels). ## Stability @@ -48,3 +37,5 @@ See the list of CRs involved in Lifecycle Manager's workflow and their stability | v1beta2 | [ModuleTemplate](/api/v1beta2/moduletemplate_types.go) | Beta-Grade - no breaking changes without API incrementation. Use for automation and watch upstream as close as possible for deprecations or new versions. Alpha API is deprecated and converted via webhook. | | v1beta2 | [Manifest](/api/v1beta2/manifest_types.go) | Beta-Grade - no breaking changes without API incrementation. Use for automation and watch upstream as close as possible for deprecations or new versions. Alpha API is deprecated and converted via webhook. | | v1beta2 | [Watcher](/api/v1beta2/watcher_types.go) | Beta-Grade - no breaking changes without API incrementation. Use for automation and watch upstream as close as possible for deprecations or new versions. Alpha API is deprecated and converted via webhook. | + +For more information on changes introduced by an API version, see [API Changelog](../05-api-changelog.md). diff --git a/docs/developer-tutorials/README.md b/docs/developer-tutorials/README.md deleted file mode 100644 index 4554b3bde3..0000000000 --- a/docs/developer-tutorials/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Developer Tutorials - -This directory contains infrastructure-related developer tutorials around Lifecycle Manager. diff --git a/docs/developer-tutorials/prepare-gcr-registry.md b/docs/developer-tutorials/prepare-gcr-registry.md deleted file mode 100644 index 7101ebca2f..0000000000 --- a/docs/developer-tutorials/prepare-gcr-registry.md +++ /dev/null @@ -1,89 +0,0 @@ -# Create a test Environment on Google Container Registry (GCR) - -## Context - -If you use the GCP Artifact Registry, follow these instructions to create a test environment. - -## Prerequisites - -This tutorial assumes that you have a GCP project called `sap-kyma-jellyfish-dev`. - -## Procedure - -### Create your Repository - -1. Create an Artifact Registry repository. For tutorial purposes, call it `operator-test`. - - ```sh - gcloud artifacts repositories create operator-test \ - --repository-format=docker \ - --location europe-west3 - ``` - -2. 
To make it work with remote clusters such as in Gardener, specify the Read access to the repository, if possible anonymously: - - ```sh - gcloud artifacts repositories add-iam-policy-binding operator-test \ - --location=europe-west3 --member=allUsers --role=roles/artifactregistry.reader - ``` - -### Authenticate Locally and Create a Service Account in Google Cloud - -1. Under the assumption you're [creating and using a service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) called `operator-test-sa`, authenticate against your registry: - - ```sh - gcloud auth configure-docker \ - europe-west3-docker.pkg.dev - ``` - -2. For productive purposes, create a service account. For tutorial purposes, call it `operator-test-sa`. - - ```sh - gcloud iam service-accounts create operator-test-sa \ - --display-name="Operator Test Service Account" - -3. To get the necessary permissions, assign roles to your service account. - - > **TIP:** For details, read [Required roles](https://cloud.google.com/iam/docs/creating-managing-service-accounts#permissions). - - ```sh - gcloud projects add-iam-policy-binding sap-kyma-jellyfish-dev \ - --member='serviceAccount:operator-test-sa@sap-kyma-jellyfish-dev.iam.gserviceaccount.com' \ - --role='roles/artifactregistry.reader' \ - --role='roles/artifactregistry.writer' - ``` - -4. Impersonate the service account: - - ```sh - gcloud auth print-access-token --impersonate-service-account operator-test-sa@sap-kyma-jellyfish-dev.iam.gserviceaccount.com - ``` - -5. Verify your login: - - ```sh - gcloud auth print-access-token --impersonate-service-account operator-test-sa@sap-kyma-jellyfish-dev.iam.gserviceaccount.com | docker login -u oauth2accesstoken --password-stdin https://europe-west3-docker.pkg.dev/sap-kyma-jellyfish-dev/operator-test - ``` - - Export `GCR_DOCKER_PASSWORD` for the `docker-push` make command: - - ```sh - export GCR_DOCKER_PASSWORD=$(gcloud auth print-access-token --impersonate-service-account operator-test-sa@sap-kyma-jellyfish-dev.iam.gserviceaccount.com) - ``` - -6. Adjust the `docker-push` command in `Makefile`: - - ```makefile - .PHONY: docker-push - docker-push: ## Push docker image with the manager. - ifneq (,$(GCR_DOCKER_PASSWORD)) - docker login $(IMG_REGISTRY) -u oauth2accesstoken --password $(GCR_DOCKER_PASSWORD) - endif - docker push ${IMG} - ``` - -7. Use the following setup in conjunction with Kyma CLI: - - ```sh - kyma alpha create module --module-config-file ${module config file} -c oauth2accesstoken:$GCR_DOCKER_PASSWORD - ``` diff --git a/docs/developer-tutorials/provision-cluster-and-registry.md b/docs/developer-tutorials/provision-cluster-and-registry.md deleted file mode 100644 index 74a48efe92..0000000000 --- a/docs/developer-tutorials/provision-cluster-and-registry.md +++ /dev/null @@ -1,141 +0,0 @@ -# Provision a Cluster and OCI Registry - -## Context - -This tutorial shows how to set up a cluster in different environments. - -For the control-plane mode, with Kyma Control Plane (KCP) and Kyma runtime (SKR), create **two separate clusters** following the instructions below. - -## Procedure - -### Local Cluster Setup - -1. Create a `k3d` cluster: - - ```sh - k3d cluster create op-kcp --registry-create op-kcp-registry.localhost:8888 - - # also add for the in-cluster mode only - k3d cluster create op-skr --registry-create op-skr-registry.localhost:8888 - ``` - -2. Configure the local `k3d` registry. 
To reach the registries using `localhost`, add the following code to your `/etc/hosts` file: - - ```sh - # Added for Operator Registries - 127.0.0.1 op-kcp-registry.localhost - - # also add for the in-cluster mode only - 127.0.0.1 op-skr-registry.localhost - ``` - -3. Set the `IMG` environment variable for the `docker-build` and `docker-push` commands, to make sure images are accessible by local k3d clusters. - - * For the **single-cluster mode**: - - ```sh - # pointing to KCP registry in dual cluster mode - export IMG=op-kcp-registry.localhost:8888/unsigned/operator-images - ``` - - * For the **control-plane mode**: - - ```sh - # pointing to SKR registry in dual cluster mode - export IMG=op-skr-registry.localhost:8888/unsigned/operator-images - ``` - -4. Once you pushed your image, verify the content. For browsing through the content of the local container registry, use, for example, `http://op-kcp-registry.localhost:8888/v2/_catalog?n=100`. - -### Remote Cluster Setup - -Learn how to use a Gardener cluster for testing. - -1. Go to the [Gardener account](https://dashboard.garden.canary.k8s.ondemand.com/account) and download your `Access Kubeconfig`. - -2. Provision a compliant remote cluster: - - ```sh - # name - name of the cluster - # gardener_project - Gardener project name - # gcp_secret - Cloud provider secret name (e.g. GCP) - # gardener_account_kubeconfig - path to Access Kubeconfig from Step 1 - cat << EOF | kubectl apply --kubeconfig="${gardener_account_kubeconfig}" -f - - apiVersion: core.gardener.cloud/v1beta1 - kind: Shoot - metadata: - name: ${name} - spec: - secretBindingName: ${gcp_secret} - cloudProfileName: gcp - region: europe-west3 - purpose: evaluation - provider: - type: gcp - infrastructureConfig: - apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1 - kind: InfrastructureConfig - networks: - workers: 10.250.0.0/16 - controlPlaneConfig: - apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1 - kind: ControlPlaneConfig - zone: europe-west3-a - workers: - - name: cpu-worker - minimum: 1 - maximum: 3 - machine: - type: n1-standard-4 - volume: - type: pd-standard - size: 50Gi - zones: - - europe-west3-a - networking: - type: calico - pods: 100.96.0.0/11 - nodes: 10.250.0.0/16 - services: 100.64.0.0/13 - hibernation: - enabled: false - schedules: - - start: "00 14 * * ?" - location: "Europe/Berlin" - addons: - nginxIngress: - enabled: false - EOF - - echo "waiting fo cluster to be ready..." - kubectl wait --kubeconfig="${gardener_account_kubeconfig}" --for=condition=EveryNodeReady shoot/${name} --timeout=17m - - # create kubeconfig request, that creates a Kubeconfig, which is valid for one day - kubectl create -kubeconfig="${gardener_account_kubeconfig}" \ - -f <(printf '{"spec":{"expirationSeconds":86400}}') \ - --raw /apis/core.gardener.cloud/v1beta1/namespaces/garden-${gardener_project}/shoots/${name}/adminkubeconfig | \ - jq -r ".status.kubeconfig" | \ - base64 -d > ${name}_kubeconfig.yaml - - # merge with the existing kubeconfig settings - mkdir -p ~/.kube - KUBECONFIG="~/.kube/config:${name}_kubeconfig.yaml" kubectl config view --merge > merged_kubeconfig.yaml - mv merged_kubeconfig.yaml ~/.kube/config - ``` - -3. Create an external registry. - - When using an external registry, make sure that the Gardener cluster (`op-kcpskr`) can reach your registry. - - You can follow the guide to [set up a GCP-hosted artifact registry (GCR)](prepare-gcr-registry.md). 
- - > **CAUTION:** For private registries, you may have to configure additional settings not covered in this tutorial. - -4. Set the `IMG` environment variable for the `docker-build` and `docker-push` commands. - - ```sh - # this an example - # sap-kyma-jellyfish-dev is the GCP project - # operator-test is the artifact registry - export IMG=europe-west3-docker.pkg.dev/sap-kyma-jellyfish-dev/operator-test - ``` diff --git a/docs/developer-tutorials/starting-operator-with-webhooks.md b/docs/developer-tutorials/starting-operator-with-webhooks.md deleted file mode 100644 index 99b7c63178..0000000000 --- a/docs/developer-tutorials/starting-operator-with-webhooks.md +++ /dev/null @@ -1,19 +0,0 @@ -# Enable Webhooks in Lifecycle Manager - -## Context - -To make local testing easier, webhooks are disabled by default. To enable webhooks running with the operator, you must change some `kustomization.yaml` files as well as introduce a flag that will enable the webhook server. - -For further information, read the [kubebuilder tutorial](https://kubebuilder.io/cronjob-tutorial/running-webhook.html). - -## Procedure - -1. Go to [`config/crd/kustomization.yaml`](https://github.com/kyma-project/lifecycle-manager/blob/main/config/crd/kustomization.yaml). Follow the instructions from the file to uncomment sections referring to [WEBHOOK] and [CERT_MANAGER]. - -2. Go to [`config/default/kustomization.yaml`](https://github.com/kyma-project/lifecycle-manager/blob/main/config/default/kustomization.yaml). Follow the instruction in the file and uncomment all sections referring to [WEBHOOK], [CERT-MANAGER], and [PROMETHEUS]. - -3. Enable the webhooks by setting the `enable-webhooks` flag. Run: - - ```bash - go run ./main.go ./flags.go --enable-webhooks - ``` diff --git a/docs/modularization.md b/docs/modularization.md deleted file mode 100644 index f332e10eca..0000000000 --- a/docs/modularization.md +++ /dev/null @@ -1,45 +0,0 @@ -# Modularization - -Modules are the next generation of components in Kyma that are available for local and cluster installation. - -Modules are no longer represented by a single helm-chart, but instead are bundled and released within channels through a [ModuleTemplate custom resource (CR)](/api/v1beta2/moduletemplate_types.go), a unique link of a module, and its desired state of resources and configuration, and a channel. - -Lifecycle Manager manages clusters using the [Kyma CR](/api/v1beta2/kyma_types.go). The CR defines the desired state of modules in a cluster. With the CR you can add and delete modules with domain-specific functionality with additional configuration. - -The modules themselves are built and distributed as OCI artifacts. The internal structure of the artifact conforms to the [Open Component Model](https://ocm.software/) scheme version 3. Modules contain an immutable layer set of a module operator deployment description and its configuration. - -![Kyma Module Structure](/docs/assets/kyma-module-template-structure.svg) - -If you use Kyma [CLI](https://github.com/kyma-project/cli), you can create a Kyma module by running `kyma alpha create module`. This command packages all the contents on the provided path as an OCI artifact and pushes the artifact to the provided OCI registry. Use the `kyma alpha create module --help` command to learn more about the module structure and how it is created. You can also use the CLI's auto-detection of [Kubebuilder](https://kubebuilder.io) projects to easily bundle your module with little effort. 
- -The modules are installed and controlled by Lifecycle Manager. We use [Open Component Model](https://ocm.software) to describe all of our modules descriptively. -Based on the [ModuleTemplate CR](/api/v1beta2/moduletemplate_types.go), the module is resolved from its layers and version and is used as a template for the [Manifest CR](/api/v1beta2/manifest_types.go). -Whenever a module is accepted by Lifecycle Manager the ModuleTemplate CR gets translated into a Manifest CR, which describes the actual desired state of the module operator. - -The Lifecycle Manager then updates the [Kyma CR](/api/v1beta2/kyma_types.go) of the cluster based on the observed status changes in the module CR (similar to a native Kubernetes deployment tracking availability). - -Module operators only have to watch their custom resources and reconcile modules in the target clusters to the desired state. - -## Example - -A sample Kyma CR could look like this: - -```bash -apiVersion: operator.kyma-project.io/v1beta2 -kind: Kyma -metadata: - name: my-kyma -spec: - modules: - - name: my-module -``` - -The creation of the Kyma CR triggers a reconciliation that: - -1. Looks for a ModuleTemplate CR based on the search criteria, for example, the OCM Component Name of the module or simply the name of the ModuleTemplate CR. -2. Creates a Manifest CR for `my-module` based on a ModuleTemplate CR found in the cluster by resolving all relevant image layers for the installation. -3. Installs the content of the modules operator by applying it to the cluster, and observing its state. -4. Reports back all states observed in the Manifest CR which then get propagated to the Kyma CR in the cluster. - Lifecycle Manager then uses the observed states to aggregate and combine the readiness condition of the cluster and determine the installation state or trigger more reconciliation loops as needed. - -As mentioned above, when each module operator completes their installation, it reports its resource status. However, to accurately report the state, we read out the `.status.state` field to accumulate status reporting for the entire cluster. diff --git a/docs/technical-reference/running-modes.md b/docs/technical-reference/running-modes.md deleted file mode 100644 index 75514aa6ed..0000000000 --- a/docs/technical-reference/running-modes.md +++ /dev/null @@ -1,31 +0,0 @@ -# Running Modes - -Lifecycle Manager can run in two modes: - -* **single-cluster:** Deployment mode in which Lifecycle Manager is running on the same cluster in which you deploy Kyma. This mode doesn't require [synchronization](api/README.md#synchronization-of-module-catalog-with-remote-clusters) of Kyma CRs or ModuleTemplate CRs. -* **control-plane:** Deployment mode in which Lifecycle Manager is running on the central Kubernetes cluster that manages multiple remote clusters which are targets for Kyma installations. In this mode, Kyma and ModuleTemplate CRs are synchronized between the central cluster and remote ones. Access to remote clusters is enabled using centrally-managed K8s Secrets with the required connection configuration. - -To configure the running mode for Lifecycle Manager, use the `in-kcp-mode` command-line flag. By default, the flag is set to `false`. If set to `true`, Lifecycle Manager runs in the control-plane mode. - -> Tip: Use the single-cluster mode for local development and testing. For E2E testing, testing of scalability and remote reconciliation, we recommend to use a separate Control Plane cluster. 
- -## Release Lifecycles for Modules - -Teams providing module operators should work (and release) independently of Lifecycle Manager. In other words, Lifecycle Manager should not have hard-coded dependencies to any module operator. -As such, all module interactions are abstracted through the [ModuleTemplate](/api/v1beta2/moduletemplate_types.go). - -This abstraction of a module template is used for generically deploying instances of a module within a Kyma Runtime at a specific Release Group we call `Channel` (for more information, visit the respective Chapter in the [Concept of Modularization](https://github.com/kyma-project/community/tree/main/concepts/modularization#release-channels)). It not only contains a specification of a Module with its underlying components through [OCM Component Descriptors](https://github.com/gardener/component-spec/blob/master/doc/proposal/02-component-descriptor.md), but also talks in detail about the schemas, labels, and other essential resources. - -These serve as small-scale BoM's for all contents included in a module and can be interpreted by Lifecycle Manager and [Module Manager](https://github.com/kyma-project/module-manager/) -to correctly install a module (for more information, please have a look at the respective chapter in the [Kyma Modularization Concept](https://github.com/kyma-project/community/tree/main/concepts/modularization#component-descriptor)). - -## Versioning and Releasing - -Kyma up to Version 2.x was always a single release. However, the vision of Lifecycle Manager is to fully encapsulate individual modules, with each providing a (possibly fully independent) release cycle. -By design, the KCP deliveries are continuously shipped and improved. We aim to support versioned module deliveries, so the Lifecycle Manager and its adjacent infrastructure will be maintained as well as delivered continuously, and it is recommended to track upstream as close as possible. - -## Comparison to the Old Reconciler - -Traditionally, Kyma was installed with the [Kyma Reconciler](https://github.com/kyma-incubator/reconciler), a Control-Plane implementation of our architecture based on polling and a SQL Store for tracking reconciliations. -While this worked great for smaller and medium scale deliveries, we had trouble to scale and maintain it when put under significant load. -We chose to replace this with operator-focused reconciliation - for details on the reasoning, read [Concept for Operator Reconciliation](https://github.com/kyma-project/community/tree/main/concepts/operator-reconciliation). diff --git a/docs/user-tutorials/01-10-control-plane-quick-start.md b/docs/user-tutorials/01-10-control-plane-quick-start.md deleted file mode 100644 index 8cc8354d40..0000000000 --- a/docs/user-tutorials/01-10-control-plane-quick-start.md +++ /dev/null @@ -1,94 +0,0 @@ -# Control Plane Quick Start - -## Context - -This quick start guide shows how to: - -* provision a Kyma Control Plane (KCP) cluster and deploy Lifecycle Manager using Kyma CLI -* deploy a ModuleTemplate CR -* manage modules using Kyma CLI - -## Prerequisites - -To use Lifecycle Manager in a local setup, you need the following prerequisites: - -* [k3d](https://k3d.io/) -* [istioctl](https://istio.io/latest/docs/setup/install/istioctl/) -* [Kyma CLI](https://kyma-project.io/#/04-operation-guides/operations/01-install-kyma-CLI) - -## Procedure - -### Provision a KCP Cluster - -1. Provision a local k3d cluster as KCP. By default, a cluster named `k3d-kyma` is created. 
Run: - - ```bash - k3d registry create kyma-registry --port 5001 - k3d cluster create k3d-kyma --kubeconfig-switch-context -p 80:80@loadbalancer -p 443:443@loadbalancer --registry-use kyma-registry - ``` - -2. Because the services deployed in KCP are managed by Istio, you need to install Istio on the local k3d cluster. Run: - - ```bash - istioctl install -y - ``` - -3. Lifecycle Manager exposes metrics that are collected by Prometheus Operator in KCP to provide better observability. To simplify the local setup, you only need to deploy the ServiceMonitor CRD using the following command: - - ```bash - kubectl apply -f https://raw.githubusercontent.com/prometheus-community/helm-charts/main/charts/kube-prometheus-stack/charts/crds/crds/crd-servicemonitors.yaml - ``` - -You can also follow the -official [Prometheus Operator quick start](https://prometheus-operator.dev/docs/getting-started/) guide to deploy a full -set of Prometheus Operator features into your cluster. - -### Deploy Lifecycle Manager - -We recommend deploying Lifecycle Manager with the KCP kustomize profile. You must create the `kcp-system` and `kyma-system` Namespaces before the actual deployment. Run: - - ```bash - kubectl create ns kcp-system - kubectl create ns kyma-system - kyma alpha deploy -k https://github.com/kyma-project/lifecycle-manager/config/control-plane - ``` - - > [!NOTE] - > The link to the `kustomization.yaml` file works fine with the command but returns 404 when called directly. - -If the deployment was successful, you should see all the required resources. For example: - -* The `klm-controller-manager` Pod in the `kcp-system` Namespace -* A Kyma CR that uses the `regular` channel but without any module configured, sync disabled, named `default-kyma` under `kyma-system` Namespace - -### Manage Modules in the Control-Plane Mode - -To manage Kyma modules in the control-plane mode, Lifecycle Manager requires: - -1. Deployment with the control-plane kustomize profile. -2. A Kubernetes Secret resource that contains remote cluster kubeconfig access data deployed on KCP cluster. - -In order to manage remote cluster modules, Lifecycle Manager needs to know the authentication credentials. Just like with any other native Kubernetes tool, the natural way to communicate with Kubernetes API server is through a kubeconfig file. - -That brings us the design idea to rely on the Secret resource to provide the credentials. Each Secret, has the `operator.kyma-project.io/kyma-name` label. The user must configure the label values with the same name and Namespace as the Kyma CR so that Lifecycle Manager can knows which authentication credentials to use. - -1. Create a Secret yaml file named `default-kyma` (the same as the Kyma CR name) in the `kyma-system` Namespace (the same as the Kyma CR Namespace), which contains the remote cluster kubeconfig as `data.config`. Run: - - ```bash - export KUBECONFIG=[path to your remote cluster kubeconfig yaml file] - ./hack/k3d-secret-gen.sh default-kyma kyma-system - ``` - -2. Deploy the Secret on the local KCP cluster: - - ```bash - kubectl config use-context k3d-kyma - kubectl apply -f default-kyma-secret.yaml - ``` - -After the successful deployment of the access Secrete, you can start to use Kyma CLI to manage modules on remote clusters. 
- -## Next Steps - -* To learn how to publish ModuleTemplate CRs in a private OCI registry, refer to the [Provide credentials for private OCI registry authentication](../developer-tutorials/config-private-registry.md) tutorial -* To learn how to manage module enablement with the provided strategies, refer to the [Manage module enablement with CustomResourcePolicy](02-10-manage-module-with-custom-resource-policy.md) tutorial diff --git a/docs/user-tutorials/02-10-manage-module-with-custom-resource-policy.md b/docs/user-tutorials/02-10-manage-module-with-custom-resource-policy.md deleted file mode 100644 index e8e7f33635..0000000000 --- a/docs/user-tutorials/02-10-manage-module-with-custom-resource-policy.md +++ /dev/null @@ -1,41 +0,0 @@ -# Manage Module Enablement with CustomResourcePolicy - -## Context - -During the Module CR enablement, the default behavior of Lifecycle Manager is to: - -1. Apply the configuration from the ModuleTemplate CR to the module CR, -2. Reset any direct changes to the module CR during reconciliation. - -This can be inconvenient for some use cases that require more flexibility and external control over the module CR enablement. - -To address this issue, we propose a CustomResourcePolicy feature that allows users to specify how Lifecycle Manager should handle the configuration of the module CR during enablement and reconciliation. - -## Procedure - -With Kyma CLI, enable a module with the `kyma alpha enable` command. Using the CLI, you can manage the CustomResourcePolicy for each module individually. - -By default, the CustomResourcePolicy of the enabled module is `CreateAndDelete`. With the default, you let the Lifecycle Manager take full control over the module enablement. - -For example, to enable the Keda module with the default policy for the `default-kyma` Kyma CR, run: - -```bash -kyma alpha enable module keda -n kyma-system -k default-kyma -``` - -This will result in the `default-kyma` Kyma CR spec like this: - -```bash -spec: - channel: alpha - modules: - - customResourcePolicy: CreateAndDelete - name: keda -``` - -Lifecycle Manager will create a corresponding Keda CR in your target cluster and propagate all the values from the ModuleTemplate `spec.data.spec` to the `spec.resource` of the related Manifest CR. This way, you can configure and manage your Keda resources using Lifecycle Manager. - -To skip this enablement process, you can set the `--policy` flag to `Ignore` when you enable the module. This will result in no Keda CR created in your target cluster. It will also prevent Lifecycle Manager from adding any `spec.resource` to the related Manifest CR. - -> [!WARNING] -> Setting up the flag to `Ignore` also means that Lifecycle Manager does not monitor or manage any Keda CR's readiness status. Therefore, you should exercise caution and discretion when using the `Ignore` policy for your module CR. diff --git a/docs/user/01-10-kyma-crd.md b/docs/user/01-10-kyma-crd.md new file mode 100644 index 0000000000..d1e893f6b5 --- /dev/null +++ b/docs/user/01-10-kyma-crd.md @@ -0,0 +1,11 @@ +# Kyma + + + +The `kymas.operator.kyma-project.io` Custom Resource Definition (CRD) is a comprehensive specification that defines the structure and format used to manage a cluster and its desired state. It contains the list of added modules and their state. 
+ +To get the latest CRD in the YAML format, run the following command: + +```bash +kubectl get crd kymas.operator.kyma-project.io -o yaml +``` diff --git a/docs/user/README.md b/docs/user/README.md new file mode 100644 index 0000000000..8a05d305bd --- /dev/null +++ b/docs/user/README.md @@ -0,0 +1,3 @@ +# Lifecycle Manager + + diff --git a/internal/declarative/README.md b/internal/declarative/README.md deleted file mode 100644 index 26bc5d7c52..0000000000 --- a/internal/declarative/README.md +++ /dev/null @@ -1,95 +0,0 @@ -# Declarative Reconciliation Library - -This library uses declarative reconciliation to perform resource synchronization in clusters. - -The easiest way to explain the difference between declarative and imperative code, would be that imperative code focuses on writing an explicit sequence of commands to describe how you want the computer to do things, and declarative code focuses on specifying the result of what you want. - -Thus, in the declarative library, instead of writing "how" the reconciler behaves, you instead describe "what" it's behavior should be like. Of course, that does not mean that the behavior has to be programmed, but instead that the declarative reconciler is built on a set of [declarative options](v2/options.go) that can be applied and result in changes to the reconciliation behavior of the actual underlying [reconciler](v2/reconciler.go). - -The declarative reconciliation is strongly inspired by the [kubebuilder declarative pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern), however it brings it's own spin to it as it contains it's own form of applying and tracking changes after using a `rendering` engine, with different implementations: - -## Core - -The core of the reconciliation is compromised of two main components: -1. [A Client interface](v2/client.go), its [factory implementation](v2/factory.go) that is optimized for [caching Resource Mappings](v2/client_cache.go) of the API-Server and [Dynamic Lookup of Resources](v2/client_proxy.go) to avoid heavy reload of schemata -2. A generic [Reconciler](v2/reconciler.go) which delegates various parts of the resource synchronization based on the [object specification](v2/spec.go) and the [options for reconciliation](v2/options.go). It owns the central conditions maintained and reported in the [object status](v2/object.go). - -While the client is the main subsystem of the `Reconciler` implementation, the `Reconciler` also redirects to other subsystems to achieve its goal: -- A `Converter` which converts the rendered resources into internal objects for synchronization -- A `ReadyCheck` which introduces more detailed status checks rather than `Exists/NotFound` to the synchronization process, allowing more fine-grained error reporting. -- A `Status` which is embedded in the reconciled object and which returns the state of the synchronization - -For more details on the subsystems used within the main `Reconciler`, check out the sections below. - -## Configuration of the Library via declarative Options - -The declarative Reconciliation is especially interesting because it is configured with a set of pre-defined [options for reconciliation](v2/options.go). 
- -An example configuration can look like this: - -```golang -func ManifestReconciler( - mgr manager.Manager, codec *v1beta1.Codec, - checkInterval time.Duration, -) *declarative.Reconciler { - return declarative.NewFromManager( - mgr, &v1beta1.Manifest{}, - declarative.WithSpecResolver( - internalv1beta1.NewManifestSpecResolver(codec), - ), - declarative.WithCustomReadyCheck(internalv1beta1.NewManifestCustomResourceReadyCheck()), - declarative.WithRemoteTargetCluster( - (&internalv1beta1.RemoteClusterLookup{KCP: &declarative.ClusterInfo{ - Client: mgr.GetClient(), - Config: mgr.GetConfig(), - }}).ConfigResolver, - ), - declarative.WithClientCacheKeyFromLabelOrResource(v1beta1.KymaName), - declarative.WithPostRun{internalv1beta1.PostRunCreateCR}, - declarative.WithPreDelete{internalv1beta1.PreDeleteDeleteCR}, - declarative.WithPeriodicConsistencyCheck(checkInterval), - ) -} -``` - -These options include but are not limited to: -- Configuration of custom Readiness Checks that verify the integrity of the installation -- Configuration of the Cluster where the installation should be located in the end -- A resolver to translate from a custom API-Version (such as our `Manifest`) into the internal [object specification](v2/spec.go) -- Post/Pre Run Hooks to inject additional logic and side-effects before and after specific installation steps -- Consistency Check configurations determining frequency of reconciliation efforts. - -## Resource Conversion - -Every resource is converted to a generic [resource](v2/resource_converter.go), which translates all objects into a [k8s cli-runtime compliant resource represenation](https://pkg.go.dev/k8s.io/cli-runtime/pkg/resource#Info), which contains information about the object, its API Version and Mappings towards a specific API Resource of the API server. - -## Resource Synchronization - -All [create/update](v2/ssa.go) and [delete](../pkg/resources/cleanup.go) cluster interactions of the library are done by -leveraging highly concurrent [ServerSideApply](https://kubernetes.io/docs/reference/using-api/server-side-apply/) -implementations that are written to: -1. Always apply the latest available version of resources in the schema by using inbuilt schema conversions -2. Delegate as much compute to the api-server to reduce overall load of the controller even with several hundred concurrent reconciliations. -3. Use a highly concurrent process that is rather focusing on retrying an apply and failing early and often instead of introducing dependencies between different resources. The library will always attempt to reconcile all resources in parallel and will simply ask for a retry in case it is determined there is a missing interdependency (e.g. a missing CustomResourceDefinition that is applied in parallel, only leading to sucessful reconciliations in subsequent reconciliations). - -## Resource Tracking - -Every resource rendered is tracked through a set of fields in the [declarative status in the object](v2/object.go). This can be embedded in objects through implementing the [Object interface](v2/object.go), a superset of the [`client.Object` from controller-runtime](https://github.com/kubernetes-sigs/controller-runtime/blob/main/pkg/client/object.go). - -The library will use this status to report important events and status changes by levaring the `SetStatus` method. 
One can choose to either directly embed the `Status` and to implement `GetStatus()` and `SetStatus()` on the object, or to use a conversion instead that translates the declarative status to a versioned API status. - -Important Parts of the `Status` are: -- A `State` representing the overall state of the installation -- Various `Conditions` compliant with [KEP-1623: Standardize Conditions](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1623-standardize-conditions) -- `Synced`, a list of resources with Name/Namespace as well as a [GroupVersionKind from the kubernetes apimachinery](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionKind), which is used to track individual resources resulting from the `renderer` -- `LastOperation`, a combination of a message / timestamp that is always updated whenever the library reconciles the [object specification](v2/spec.go) and issues more details than the current state (e.g. detailed error messages or success details of a step during the reconciliation) - -While all synchronized resources are tracked in the `Synced` list, they are regularly checked against and pruned or created newly based on the reconciliation interval provided through the [options for reconciliation](v2/options.go). - -## Resource Readiness - -While the deletion and creation of resources is quite straight-forward, oftentimes the readiness of a given resource cannot be determined purely by its `existence` but also by specific reporting states derived from the status of an object, for example a Deployment: just because the deployment exists, it does not mean the image could be pulled and the container started. - -To combat this problem, we introduce the [`StateCheck` interface](v2/state_check.go), a simple interface that can provide custom Readiness Evaluations after Resource Synchronization. By default, we use the inbuilt [readiness implementations from `HELM`](https://github.com/helm/helm/blob/main/pkg/kube/ready.go) as it contains many widely accepted standards for Readiness checks. However, we are planning to eventually switch to a more generic and better solution once it becomes available. - -Once synchronized all resources are passed to the readiness checker, and if determined as not ready, the `LastOperation` and the appropriate `State` and Conditions will evaluate to a `Processing` or `Error` state.