diff --git a/docs/for-devs/console/builds.md b/docs/for-devs/console/builds.md
index 7a1786809..262c55708 100644
--- a/docs/for-devs/console/builds.md
+++ b/docs/for-devs/console/builds.md
@@ -12,21 +12,21 @@ A Build in Otomi is a self-service feature for building OCI compliant images bas
Ask your platform administrator to activate Harbor to use this feature.
:::

-:::info
-The Otomi Builds feature can only be used with private repo's in the local Gitea. Images will always be pushed to a registry in the local Harbor.
-:::
-
## Builds (all)

-All known Builds of the team are listed here.
+All Builds of the team are listed here.

-| Property | Description |
-| ------------- | ------------------------------------------------------ |
-| Name | The name of the build |
-| Type | Type of the build. `buildpacks` or `docker` |
-| Webhook url | The webhook URL if a trigger is configured for the build |
+![Team builds](../../img/team-builds.png)
+
+| Property | Description |
+| ------------- | --------------------------------------------------------------- |
+| Name | The name of the build |
+| Type | Type of the build. `buildpacks` or `docker` |
+| Webhook url | The `copy to clipboard` webhook URL if a trigger is configured for the build |
| Tekton | Link to the `PipelineRun` of the build in the Tekton dashboard |
-| Repository | The repository of the image |
+| Repository | The `copy to clipboard` repository name of the image |
+| Tag | The tag of the image |
+| Status | The status of the Build. If the Build has failed, click on the Tekton link to see more details |

## Create a build

@@ -45,21 +45,43 @@ Now choose the type of the build:

### Docker

-1. Add the URL of the Gitea repository that contains the application source code
-2. (optional) Change the path of the `Dockerfile`
-3. (optional) Change the revision. This can be a commit, a tag, or a branch
-4. (optional) Select to create an event listener to trigger the build based on a Gitea webhook.
+1. Add the URL of the repository that contains the application source code.
+2. (optional) Change the path of the `Dockerfile`. Default is `./Dockerfile`. To use a Dockerfile in a specific folder, use `./folder/Dockerfile`.
+3. (optional) Change the revision. This can be a commit, a tag, or a branch.
+4. (optional) Select `External Repo` if the repository used for the Build is not a public or a private Git repo in the local Gitea. When selected, fill in the name of the secret that contains the required SSH credentials (see the example secret after these steps). Read more [here](https://tekton.dev/docs/how-to-guides/clone-repository/#git-authentication) about how to set up SSH authentication with your Git provider.
+5. (optional) Select to create an event listener to trigger the build based on a Gitea webhook.

### Buildpacks

1. Add the URL of the Git repository that contains the application source code
2. (optional) Add the path. This is a subpath within the repo where the source to build is located
3. (optional) Change the revision. This can be a commit, a tag, or a branch
-4. (optional) Select to create an event listener to trigger the build based on a Gitea webhook.
+4. (optional) Add Environment variables to set during build-time
+5. (optional) Select `External Repo` if the repository used for the Build is not a public or a private Git repo in the local Gitea. When selected, fill in the name of the secret that contains the required SSH credentials (see the example secret after these steps). Read more [here](https://tekton.dev/docs/how-to-guides/clone-repository/#git-authentication) about how to set up SSH authentication with your Git provider.
+6. (optional) Select to create an event listener to trigger the build based on a Gitea webhook.
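+
+As a rough illustration of the SSH credentials secret referenced in the steps above: the linked Tekton git-authentication guide uses a `kubernetes.io/ssh-auth` secret annotated with the Git host. This is a sketch only; the secret name, host and key material are placeholders, and Otomi may expect additional conventions:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: git-ssh-credentials           # placeholder: use this name in the External Repo field
+  annotations:
+    tekton.dev/git-0: github.com      # placeholder: the host of your Git provider
+type: kubernetes.io/ssh-auth
+stringData:
+  ssh-privatekey: |
+    -----BEGIN OPENSSH PRIVATE KEY-----
+    ...                               # placeholder: your private key
+    -----END OPENSSH PRIVATE KEY-----
+```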
+
+### Build status details
+
+To see more status details of the build, click on the `PipelineRun` link of the build in the list of builds. If a trigger is configured, the link will show all PipelineRuns.
+
+### Configure a webhook for the Git repo in Gitea
+
+1. In Otomi Console, click on `apps` in the left menu and then open `Gitea`
+2. In the top menu of Gitea, click on `Explore` and then on the `green` repo
+3. Go to `Settings` (top right) and then to `Webhooks`
+4. Click `Add Webhook` and select `Gitea`
+5. In the `Target URL`, paste the webhook URL from your clipboard.
+6. Click `Add Webhook`
+
+### Expose the trigger listener publicly

-### Build status
+When using an external (private) Git repository, the trigger event listener that is created by Otomi can also be exposed publicly. To expose the event listener publicly:

-To see the status of the build, click on the `PipelineRun` link of the build in the list of builds. If a trigger is configured, the link will show all PipelineRuns.
+1. Go to `Services`
+2. Click `Create Service`
+3. Select the `el-gitea-webhook-` internal service
+4. Under `Exposure`, select `External`
+5. Click `Submit` and then `Deploy Changes`

### Restart a build

diff --git a/docs/for-devs/console/catalog.md b/docs/for-devs/console/catalog.md
index b1e0a8894..af92ee541 100644
--- a/docs/for-devs/console/catalog.md
+++ b/docs/for-devs/console/catalog.md
@@ -18,16 +18,24 @@ The `otomi-quickstart-k8s-deployment` Helm chart can be used to create a Kuberne

### k8s-deployment-otel

-The `otomi-quickstart-k8s-deployment-otel` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service`, a `ServiceAccount`, an `OpenTelemetryCollector` and an `Instrumentation`. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` can be created.
+The `otomi-quickstart-k8s-deployment-otel` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service`, a `ServiceAccount` and an `Instrumentation` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` can be created.

### k8s-deployments-canary

-The `otomi-quickstart-k8s-deployments-canary` Helm chart can be used to create 2 Kubernetes `Deployments` (to deploy 2 versions of an image), a `Service` and a `ServiceAccount`. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` (for each version) can be created.
+The `otomi-quickstart-k8s-deployments-canary` Helm chart can be used to create 2 Kubernetes `Deployments` (to deploy 2 versions of an image), a `Service` and a `ServiceAccount` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` (for each version) can be created.

### knative-service

The `otomi-quickstart-knative-service` Helm chart can be used to create a Knative `Service` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a Prometheus `ServiceMonitor` can be created.

+### Otomi quick start for creating a PostgreSQL cluster
+
+The `otomi-quickstart-postgresql` Helm chart can be used to create a cloudnativepg PostgreSQL `Cluster`. Optionally a Prometheus `PodMonitor` and a `Configmap` (for adding a postgresql dashboard to Grafana) can be created. A sketch of the kind of `Cluster` resource this quick start renders is shown below.
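+
+For orientation, the resource this quick start renders is a CloudNativePG `Cluster`. A minimal sketch (the name and sizes are placeholders, and the chart's actual output may differ):
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+  name: my-db                # placeholder name
+spec:
+  instances: 1               # single-instance cluster, for illustration only
+  storage:
+    size: 1Gi
+  monitoring:
+    enablePodMonitor: true   # corresponds to the optional Prometheus PodMonitor
+```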
+
+### Otomi quick start for creating a Redis master-replica cluster
+
+The `otomi-quickstart-redis` Helm chart can be used to create a Redis master-replica cluster.
+

## Using the Catalog

@@ -35,7 +43,7 @@ The `otomi-quickstart-knative-service` Helm chart can be used to create a Knativ
2. You will now see all the templates that are available to use

-![catalog](../../img/catalog-1.png)
+![catalog](../../img/catalog.png)

3. Click on the `k8s-deployment` template

diff --git a/docs/for-devs/console/dashboard.md b/docs/for-devs/console/dashboard.md
new file mode 100644
index 000000000..e2b0f189f
--- /dev/null
+++ b/docs/for-devs/console/dashboard.md
@@ -0,0 +1,47 @@
+---
+slug: dashboard
+title: Team Dashboard
+sidebar_label: Dashboard
+---
+
+The team dashboard gives a global overview of the information most relevant to the team.
+
+## Prerequisites
+
+The Team dashboard uses the Team's Grafana instance to get its information from. Make sure Grafana is enabled for the team. To enable Grafana:
+
+- Go to `Settings`
+- Managed monitoring
+- Enable Grafana
+
+## Dashboard elements
+
+The dashboard has 5 elements:
+
+- [Inventory](#inventory)
+- [Resource Status](#resource-status)
+- [Resource Utilization](#resource-utilization)
+- [Vulnerabilities](#vulnerabilities)
+- [Compliance](#compliance)
+
+![Team dashboard](../../img/team-dashboard.png)
+
+### Inventory
+
+The inventory shows the Otomi resources within the team. Click on an inventory item to go directly to the full list.
+
+### Resource Status
+
+The Resource Status panels show if there are any issues with Pods deployed by the team.
+
+### Resource Utilization
+
+The Resource Utilization panels show the total amount of CPU and Memory consumed by the team.
+
+### Vulnerabilities
+
+The Vulnerabilities panels show the total number of LOW, MEDIUM, HIGH and CRITICAL vulnerabilities in running containers deployed by the Team.
+
+### Compliance
+
+The Compliance panel shows the total number of policy violations in workloads deployed by the Team.
diff --git a/docs/for-devs/console/projects.md b/docs/for-devs/console/projects.md
index 63236e331..c9486e40f 100644
--- a/docs/for-devs/console/projects.md
+++ b/docs/for-devs/console/projects.md
@@ -13,9 +13,11 @@ A Project in Otomi is a collection of a Build, a Workload and a Service in ONE f

Note: The name of the project will be used for all created otomi resources (build, workload and service).

-1. Select `Create build form source` or `Use an existing image`
-2. If `Create build from source` is selected: follow the [instruction](builds.md) for creating a Build
-3. If `Use an existing image` is selected: follow the [instruction](workloads.md) for creating a Workload
-4. Follow the [instruction](services.md) for creating a Service to expose the workload
+3. Select `Create build from source` or `Use an existing image`
+4. If `Create build from source` is selected: follow the [instructions](builds.md) for creating a Build
+5. If `Use an existing image` is selected: follow the [instructions](workloads.md) for creating a Workload

-5. Click `Submit` and then `Deploy changes`
+Note: The `image.repository` and `image.tag` parameters in the values of the workload are automatically set when `Create build from source` is used. If `Use an existing image` is selected, the `image.repository` and `image.tag` parameters need to be set manually (see the example below).
+
+6. Follow the [instructions](services.md) for creating a Service to expose the workload
+7. Click `Submit` and then `Deploy changes`
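+
+To illustrate the note above: when `Use an existing image` is selected, the workload values would contain something like the following (a sketch; the registry path and tag are placeholders):
+
+```yaml
+image:
+  repository: harbor.my-domain.com/team-demo/blue   # placeholder registry/repository
+  tag: latest                                        # placeholder tag
+```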
diff --git a/docs/for-devs/console/services.md b/docs/for-devs/console/services.md
index bd1016d7f..dccae6748 100644
--- a/docs/for-devs/console/services.md
+++ b/docs/for-devs/console/services.md
@@ -14,13 +14,16 @@ A service in Otomi is a self-service feature for:

## Services `(team )`

-All known Services of the team are listed here. Services can be sorted based on:
+All Services of the team are listed here.
+
+![Team services](../../img/team-services.png)

| Property | Description |
| ------------ | ------------------------------------------------------ |
| Service Name | The name of the service |
-| Ingress class | The ingress class configured. This is the ingress controller that exposes the service |
+| Ingress class | The ingress class configured. This is the ingress controller that exposes the service |
| URL | The URL of the service if the service is configured for external exposure |
+| Status | The status of the service |

## Create a Service

diff --git a/docs/for-devs/console/settings.md b/docs/for-devs/console/settings.md
index d1e3d56c7..170004da6 100644
--- a/docs/for-devs/console/settings.md
+++ b/docs/for-devs/console/settings.md
@@ -8,13 +8,39 @@ Based on self-service options allowed by the platfrom administrator, team member

## Configure OIDC group mapping

-Change the OIDC group-mapping to allow access to the team based on group membership.
+:::note
+The OIDC group mapping will only be visible when Otomi is configured with an external Identity Provider (IdP).
+:::
+
+Change the OIDC group-mapping to allow access based on group membership.
+
+## Managed monitoring
+
+Activate a (platform) managed Grafana, Prometheus or Alertmanager instance for the team. The installed Grafana, Prometheus and Alertmanager will be monitored by the Platform administrator.
+
+### Grafana
+
+Enable to install a Grafana instance for the team.
+
+**Dependencies:**
+
+- The Grafana instance is automatically configured with a datasource for the Team's Prometheus.
+- If Loki (for logs) is enabled on the Platform, Grafana needs to be enabled here.
+- Grafana is provisioned with multiple dashboards that rely on the Platform Prometheus. If Prometheus on the Platform is not enabled, these dashboards will not work!
+
+### Prometheus
+
+Enable to install a Prometheus instance for the team. The Prometheus instance is configured to only scrape metrics from `PodMonitors` and `ServiceMonitors` that have the label `prometheus: team-` (a sketch of a matching `ServiceMonitor` is shown below, after the Azure Monitor section).
+
+### Alertmanager
+
+Enable to install an Alertmanager instance for the team. The Alertmanager instance will only show alerts based on `Rules` from the Team's Prometheus.
+
## Configure alert settings

:::note
-
-Alerts settings will only be active when Alertmanager is active.
+Alert settings will only be active when Alertmanager is enabled for the team.
:::

Change the alert settings and preferred notification receivers.
@@ -40,7 +66,7 @@ There is no validation as there is no schema published. Add/change resource quot

:::note

-Configuring Azure Monitor settings will only be active when `cluster.provider=azure`) and when multi-tenancy is enabled.
+Configuring Azure Monitor settings will only be active when `cluster.provider=azure`.
:::

Azure Monitor is the platform service that provides a single source for monitoring Azure resources.
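+
+As referenced in the Prometheus section above, a `ServiceMonitor` that the Team's Prometheus would pick up might look roughly like this (a sketch; the team name, app label and port are placeholders):
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: my-app
+  labels:
+    prometheus: team-demo      # placeholder team name; required for the Team's Prometheus to scrape it
+spec:
+  selector:
+    matchLabels:
+      app: my-app              # placeholder: labels of the Service to scrape
+  endpoints:
+    - port: http               # placeholder port name
+      interval: 30s
+      path: /metrics
+```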
@@ -61,3 +87,24 @@ Azure Monitor is the platform service that provides a single source for monitori ## Team self service flags The self-service flags (what is a team allowed to) can only be configured by an admin user. + +### Service + +| Option | Permission | +| ---------------- | -------------------------------------------------------------------------------------- | +| Ingress | The Team is allowed to configure External Exposure for a Service | +| Network policy | The team is allowed to configure network (ingress and egress) for a Service | + +### Team + +| Option | Permission | +| ---------------------- | -------------------------------------------------------------------------------------- | +| Alerts | The Team is allowed to configure Alert settings for the team | +| Billing alert quotas | The team is allowed to configure Billing alert quotas for the team | +| OIDC | The team is allowed to configure the OIDC group mapping for the team | +| Resource quotas | The team is allowed to configure resource quotas for the team | +| Download kube config | The team is allowed to download the Kube Config | +| Download docker config | The team is allowed to download the Docker Config | +| Network policy | The team is allowed to the Network policy configuration for the team | + + diff --git a/docs/for-devs/console/shell.md b/docs/for-devs/console/shell.md index 8960d781d..6c885fd9a 100644 --- a/docs/for-devs/console/shell.md +++ b/docs/for-devs/console/shell.md @@ -8,7 +8,7 @@ The Shell feature allows to start a web based shell in Console with Kube API acc - [Kubectl](https://kubernetes.io/docs/reference/kubectl/) - [K9s](https://k9scli.io/) -- Tekton CLI +- [Tekton CLI](https://tekton.dev/docs/cli/) When running the shell as a member of a team, the shell will allow only provide acccess to resources in the team namespace. @@ -18,41 +18,39 @@ The Shell provides an easy and efficient way to access and manage Kubernetes res - **Identity-Based Access**: Leverage your group membership from an Identity Provider, such as Azure AD, for secure access to your Kubernetes namespace. - **Efficient Interface**: Utilize essential Kubernetes management tools and perform tasks seamlessly. -## Getting Started +## Using the Shell -### Logging In +1. Log in into the Otomi Console +2. Click on the "Shell" option in the left menu. +3. You'll be connected to the TTY Console interface, granting direct access to the Kubernetes namespace of the Team. -1. Log in to your Otomi account. -2. Upon successful login, you'll be directed to the platform dashboard. +### Basic Commands and Shortcuts -### Accessing Your Kubernetes Namespace +- Utilize the `kubectl` command to interact with your Kubernetes cluster +- Benefit from the convenient `k` shortcut for `kubectl` with bash-completion -1. Locate and click on the "Shell" option on the left-hand side of the dashboard. -2. You'll be connected to the TTY Console interface, granting direct access to your Kubernetes namespace. -## Using the Shell -### Basic Commands and Shortcuts +### Integrated CLI tools + +The Shell comes with a set of integrated CLI tools: -- Utilize the `kubectl` command to interact with your Kubernetes cluster. -- Benefit from the convenient `k` shortcut for `kubectl` with bash-completion. -- Explore various tools, all available within the console: - - **k9s**: Gain insights into your Kubernetes resources with an intuitive UI. - - **Velero Cli**: Manage cluster backups effortlessly. - - **Tekton Cli**: Monitor Project pipelines efficiently. 
- - **Other Tools**: Tools like `jq`, `yq`, and `curl` are at your disposal for enhanced functionality. +- **k9s**: Gain insights into your Kubernetes resources with an intuitive UI +- **Tekton ClI**: Monitor Project pipelines efficiently +- **Other Tools**: Tools like `jq`, `yq`, and `curl` are at your disposal for enhanced functionality ### Working with Tmux -- If you're a Tmux enthusiast, enjoy the ability to create multiple windows and panes for multitasking. -- This feature enhances your productivity by allowing you to organize your workspace effectively. +- If you're a Tmux enthusiast, enjoy the ability to create multiple windows and panes for multitasking +- This feature enhances your productivity by allowing you to organize your workspace effectively ## Session Management + ### Browser Crash Resilience -- The TTY Console is designed to be resilient in the face of browser crashes. -- If your browser unexpectedly crashes, your session remains intact. -- You can simply reopen the browser and resume your Kubernetes management tasks. +- The TTY Console is designed to be resilient in the face of browser crashes +- If your browser unexpectedly crashes, your session remains intact +- You can simply reopen the browser and resume your Kubernetes management tasks ### Ending Sessions -- When you're finished with your Kubernetes management tasks, remember to properly end your session by clicking the recycle bin button on the top right of the TTY window. This will delete your session. -- Logging out of your session will also have the same effect +- When you're finished with your Kubernetes management tasks, remember to properly end your session by clicking the recycle bin button on the top right of the TTY window. This will delete your session +- Logging out of your session will have the same effect diff --git a/docs/for-devs/console/workloads.md b/docs/for-devs/console/workloads.md index b25f9d776..a0afb27c5 100644 --- a/docs/for-devs/console/workloads.md +++ b/docs/for-devs/console/workloads.md @@ -6,20 +6,20 @@ sidebar_label: Workloads -A Workload in Otomi is a self-service feature for creating Kubernetes resources using Helm charts form the Otomi Developer Catalog. - -:::info -Ask your platform administrator to activate Argo CD to be able to use this feature. -::: +A Workload in Otomi is a self-service feature for creating Kubernetes resources using Helm charts from the Otomi Developer Catalog. ## Workloads (all) -All known Workloads of the team are listed here. +All Workloads of the team are listed here. + +![Team workloads](../../img/team-workloads.png) | Property | Description | | -------- | ------------------------------------------------- | | Name | The name of the workload | | Argocd | Link to the Argo CD application in the Argo CD UI | +| Image update strategy | The configured update strategy for the workload | +| Status | The status of the workload. Click on the Argo CD application link to see more status details | ## Create a Workload diff --git a/docs/for-devs/get-started/lab-1.md b/docs/for-devs/get-started/lab-1.md index b3c5ff0e5..8e3f24cbc 100644 --- a/docs/for-devs/get-started/lab-1.md +++ b/docs/for-devs/get-started/lab-1.md @@ -15,7 +15,6 @@ We assume you (or the platform administrator) have: 2. 
Activated the following applications: - Harbor -- ArgoCD - Prometheus - Loki - Grafana @@ -24,14 +23,14 @@ We assume you (or the platform administrator) have: For the [Use OpenTelemery](lab-27.md) Lab, the Tempo app needs to be enabled together with tracing in the `Istio` and `Nginx Ingress` apps. -3. Created a team called `demo` with `Managed prometheus and alert manager` enabled -4. [Created an account](/docs/apps/keycloak#create-a-user-in-keycloak) and added your user to the team group in Keycloak +3. Created a team called `labs` with `Grafana`, `Prometheus` and `Alertmanager` activated +4. [Created an account](/docs/apps/keycloak#create-a-user-in-keycloak) and added your account to the `labs` team group in Keycloak. In the labs we'll be using the user `labs-user` 5. Provided you with the following information: - The URL to access the Otomi web UI (Otomi Console) - Your login credentials -When you have received the URL of the web UI and have a username/password, then it's time to sign-in +When you have received the URL of the web UI and have a username/password, then it's time to sign-in. ## Sign in to the Console diff --git a/docs/for-devs/get-started/lab-10.md b/docs/for-devs/get-started/lab-10.md index e14a5be0c..119455154 100644 --- a/docs/for-devs/get-started/lab-10.md +++ b/docs/for-devs/get-started/lab-10.md @@ -36,6 +36,9 @@ apiVersion: apps/v1 kind: Deployment metadata: name: nginx + labels: + otomi.io/app: nginx + app: nginx spec: replicas: 1 selector: diff --git a/docs/for-devs/get-started/lab-11.md b/docs/for-devs/get-started/lab-11.md index 85efdda54..310d50308 100644 --- a/docs/for-devs/get-started/lab-11.md +++ b/docs/for-devs/get-started/lab-11.md @@ -17,11 +17,8 @@ Before creating a workload from the developer catalog, we'll need the `repositor You can now create a workload from the developer catalog: 1. Go to `Workloads` in the left menu and click on `New Workload` - 2. Add the Name `green` for the workload - 3. Select `otomi-quickstart-k8s-deployment` from the catalog - 4. Set the `Auto image updater` to `Digest` and fill in the `ImageRepository` from the clipboard. `Digest` is the update strategy and will update the image to the most recent pushed version of a given tag. diff --git a/docs/for-devs/get-started/lab-12.md b/docs/for-devs/get-started/lab-12.md index e680786e2..827fddcd3 100644 --- a/docs/for-devs/get-started/lab-12.md +++ b/docs/for-devs/get-started/lab-12.md @@ -15,7 +15,6 @@ In this Lab you're going to create a workload in Otomi to install your're own He ## Creating a Workload to install your Helm chart - Click on `Workloads` in the left menu. - - Click on `Create Workload` ![kubecfg](../../img/create-workload.png) @@ -39,9 +38,7 @@ In this Lab you're going to create a workload in Otomi to install your're own He ![kubecfg](../../img/byo-chart-workload-2.png) - Click `Next` - - We are going to use the default chart values, so there is no need to fill in any values here - - Click `Submit` The values of a workload can be changed at any time. Changes will automatically be deployed. \ No newline at end of file diff --git a/docs/for-devs/get-started/lab-13.md b/docs/for-devs/get-started/lab-13.md index 77f104bc2..cd15f6f6e 100644 --- a/docs/for-devs/get-started/lab-13.md +++ b/docs/for-devs/get-started/lab-13.md @@ -13,13 +13,9 @@ Before creating a workload from the Catalog, we'll need the `repository` and `ta You can create a workload from the developer catalog: 1. Go to `Catalog` in the left menu and click on the `k8s-deployment`template - 2. 
Click on `Values` - 3. Add the Name `blue` - 4. Leave the `Auto image updater` to `Disabled` - 5. In the workload `values`, change the following parameters: ```yaml diff --git a/docs/for-devs/get-started/lab-14.md b/docs/for-devs/get-started/lab-14.md index 399ba965e..e61f84a80 100644 --- a/docs/for-devs/get-started/lab-14.md +++ b/docs/for-devs/get-started/lab-14.md @@ -23,7 +23,6 @@ In this lab we're going to create a workload in Otomi to create a Knative servic You can create a workload to deploy your own Helm chart, or you can use one of the `otomi-charts` Helm charts. In this case we'll use the deployment chart in the `otomi-charts` repository. 1. Go to `Workloads` in the left menu and click on `New Workload` - 2. Choose `Function as a Service` ![kubecfg](../../img/ksvc-app.png) @@ -53,7 +52,6 @@ Note: When creating a Function as a Service workload, the Min Instances will by ::: 6. Click `Next` - 7. Review the values. Here you can add more values supported by the [otomi-charts](https://github.com/redkubes/otomi-charts) ![kubecfg](../../img/ksvc-app-3.png) @@ -69,10 +67,7 @@ The values of a workload can be changed at any time. Changes will automatically ## (optionally) Publicly expose the service - In the left menu panel under click `Services` then click on `Create Service` - - Select the name of the (existing) knative service: `hello-ksvc` - - Under `Exposure Ingress`, select `Ingress` and use the default configuration - - Click on `Submit` - Click on `Deploy Changes` (the Deploy Changes button in the left panel will light-up after you click on submit). diff --git a/docs/for-devs/get-started/lab-15.md b/docs/for-devs/get-started/lab-15.md index c30df1296..aaff8706a 100644 --- a/docs/for-devs/get-started/lab-15.md +++ b/docs/for-devs/get-started/lab-15.md @@ -12,11 +12,8 @@ When the platform administrator has enabled Gatekeeper and configured policies, ## View policy violations - Open the Grafana app - - Click on `Dashboards` / `Browse` - - In the list of dashboards you will see a dashboard called `Policy Violations`. Click on it - - Now you will see the following dashboard, showing all detected policy violations within your team workloads diff --git a/docs/for-devs/get-started/lab-16.md b/docs/for-devs/get-started/lab-16.md index a72fe3f06..6cab05a29 100644 --- a/docs/for-devs/get-started/lab-16.md +++ b/docs/for-devs/get-started/lab-16.md @@ -12,11 +12,8 @@ When the platform administrator has enabled Falco, you might like to check and s ## View detected threads - Open the Grafana app - - Click on `Dashboards` / `Browse` - - In the list of dashboards you will see a dashboard called `Detected Threads`. Click on it - - Now you will see the following dashboard, showing all the detected threads in your team workloads diff --git a/docs/for-devs/get-started/lab-18.md b/docs/for-devs/get-started/lab-18.md index 45c2c1feb..ee1b1f455 100644 --- a/docs/for-devs/get-started/lab-18.md +++ b/docs/for-devs/get-started/lab-18.md @@ -4,21 +4,21 @@ title: Publicly expose your application sidebar_label: Expose services --- -When you have deployed your app, you will probably like to expose it publicly. Maybe you noticed that in the previous labs, we created a Kubernetes service of type `ClusterIP` and not `LoadBalancer` and also that the Pod(s) created by the deployment have an Istio sidecar. All Pods created in your team will automatically be added to the service mesh. In this part we'll create a Service in Otomi to expose your app publicly. 
When you create a Service, Otomi will then create the Istio virtual service and configure ingress for your application.
+When you have deployed your application using the Workloads feature, you will probably want to expose it publicly. In this lab we'll create a Service in Otomi to expose your application publicly. When you create a Service, Otomi will create the Istio virtual service and configure ingress for your application.

## Create a Service

- In the left menu panel under click `Services` then click on `Create Service`

-![harbor-projects](../../img/create-svc.png)
+![expose services](../../img/create-svc.png)

-- Select a service that you already deployed:
+- Select the `blue` service of the Workload we created in the previous lab:

-![harbor-projects](../../img/create-svc-2.png)
+![expose services](../../img/create-svc-2.png)

- Under `Exposure Ingress`, select `Ingress` and use the default configuration

-![harbor-projects](../../img/create-svc-3.png)
+![expose services](../../img/create-svc-3.png)

- Click `Submit`
- Click `Deploy Changes` (the Deploy Changes button in the left panel will light-up after you click on submit).
diff --git a/docs/for-devs/get-started/lab-19.md b/docs/for-devs/get-started/lab-19.md
index 47a520e8f..041870950 100644
--- a/docs/for-devs/get-started/lab-19.md
+++ b/docs/for-devs/get-started/lab-19.md
@@ -9,48 +9,152 @@ In some cases you want to explicitly allow access to your application. This can

- Policies for ingress traffic inside the cluster
- Policies for egress traffic to go outside of the cluster (to access external FQDNs)

-## Prerequisites
-
-Before you can configure network policies, first make sure to add the `otomi.io/app: ` label to all pods belonging to the service.
-
-## configuring network policies for internal ingress
+## About network policies for internal ingress

The internal ingress network policies allow you to:

-- Deny all traffic to your application
-- Allow selected applications running on the cluster to access your application
+- Deny all traffic to the Pods of a Workload
+- Allow selected Workload Pods running on the cluster to access your Workload's Pods

`Deny all` and `Allow all` we don't need to explain right?

-To allow other applications running on the cluster to access your application, do the following:
+:::info
+The Ingress Network Policies in Otomi rely on the `otomi.io/app` label. All Workloads in Otomi need to use this label. When you're using an Otomi quick start template from the Catalog, this label is always added. A sketch of the required labels is shown below.
+:::
+
+To allow other Workloads on the cluster to access your Workload's Pods, do the following:
+
+**If the `ClusterIP` service of your workload has the same name as the `otomi.io/app` label value:**
+
+- Register the Kubernetes ClusterIP service of the Workload as a Service in Otomi. If no public ingress is required, then just use the `Private` Exposure option
+- In the `Network policies` section leave the `PodSelector` field blank
+- In the `Ingress traffic inside the cluster` select `Allow selected`
+- Add the team name (without `team-`) and `otomi.io/app` label value of the Workload Pods that are allowed access
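+
+As referenced in the info note above, the `otomi.io/app` label needs to be present on the workload's Pods. A minimal sketch of a Deployment carrying the label (the `blue` name and image are placeholders; setting the label on the pod template as well is an assumption, to make sure the Pods themselves carry it):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: blue
+  labels:
+    otomi.io/app: blue       # value the ingress network policy matches on
+    app: blue
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: blue
+  template:
+    metadata:
+      labels:
+        otomi.io/app: blue   # assumption: also set on the pod template so the Pods carry the label
+        app: blue
+    spec:
+      containers:
+        - name: blue
+          image: nginxinc/nginx-unprivileged:stable   # placeholder image used elsewhere in these labs
+          ports:
+            - containerPort: 8080
+```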
+
+**If the `ClusterIP` service of your workload does NOT have the same name as the `otomi.io/app` label value:**
+
+This is sometimes the case when a Workload has multiple `ClusterIP` services. In this scenario you will only need to configure the network policies in one of the Workload services.
+
+- Register the Kubernetes ClusterIP service of the Workload as a Service in Otomi. If no public ingress is required, then just use the `Private` Exposure option
+- In the `Network policies` section, add the `PodSelector`. Use a custom value for the `otomi.io/app:` label
+- In the `Ingress traffic inside the cluster` select `Allow selected`
+- Add the team name (without `team-`) and `otomi.io/app` label value of the Workload Pods that are allowed access
+
+## Configure network policies for the Example Voting App
+
+### Building the images
+
+Build the `Vote`, `Worker` and `Result` images from this [repo](https://github.com/redkubes/example-voting-app).
+
+Use the Build feature in Otomi to build the images using the `Docker` mode. Set the `path` to `./vote/Dockerfile` for the Vote image (and `./worker/Dockerfile` for the Worker and `./result/Dockerfile` for Result).
+
+### Create a Redis cluster and a PostgreSQL database
+
+Use the `postgresql` and the `redis` charts from the Catalog to create a Redis master-replica cluster and a PostgreSQL database. For this lab, Redis authentication needs to be turned off by setting `auth.enabled=false`.
+
+### Deploy the Vote app
+
+Use the `k8s-deployment` chart to deploy the vote app. Use the following values:

-- Register the Kubernetes ClusterIP service of your app as a Service in Otomi. If no public ingress is required, then just use the `Cluster` ingress option
Name: `vote`

-- In the `Ingress traffic inside the cluster` block in the `Network policies` section of the Service, select `Allow selected`
-- Add the team name and the service name (a service also registered in Otomi)
+```yaml
containerPorts:
  - name: http
    containerPort: 80
    protocol: TCP
env:
  - name: REDIS_HOST
    value: -master
```

-In the example below, you are part of the team backend and you would like to allow the service frontend running in team frontend to be able to access your service:
+### Deploy the Worker app

-![harbor-projects](../../img/create-netpols.png)
+Use the `k8s-deployment` chart to deploy the worker app. Use the following values:

-- Click `Submit` and then `Deploy Changes`
+Name: `worker`

-## Configuring network policies for external egress
+```yaml
+containerPorts:
+  - name: http
+    containerPort: 80
+    protocol: TCP
+env:
+  - name: DATABASE_USER
+    valueFrom:
+      secretKeyRef:
+        name: -superuser
+        key: username
+  - name: DATABASE_PASSWORD
+    valueFrom:
+      secretKeyRef:
+        name: -superuser
+        key: password
+  - name: REDIS_HOST
+    value: -master
+  - name: DATABASE_HOST
+    value: -rw
+```

-The external egress policies allow you to:

-- Allow your application to access resources outside of the cluster

-By default this is not allowed.
+Name: `result` -To allow your application to access resources outside of the cluster, do the following: +```yaml +containerPorts: + - name: http + containerPort: 80 + protocol: TCP +env: + - name: DATABASE_USER + valueFrom: + secretKeyRef: + name: -superuser + key: username + - name: DATABASE_PASSWORD + valueFrom: + secretKeyRef: + name: -superuser + key: password + - name: DATABASE_HOST + value: -rw +``` -- In the `External egress filtering` block in the `Network policies` section of the Service, click on `Add item` -- Add the Fully Qualified Domain Name (FQDN) or the IP address of the resource your application needs to access -- Add the port number -- Select the protocol +### Register the services for Exposure and configure network policies -![harbor-projects](../../img/create-netpols-2.png) +#### Postgres database -- Click `Submit` and then `Deploy Changes` +- Register the `-rw` Postgresql service +- Set exposure to `Private` (default) +- In `Network policies` add the Pod Selector `` +- Select `Allow selected` +- Add From team name `` and From label value `` +- Add From team name `` and From label value `` +- Add From team name `` and From label value `` + +#### Redis + +- Register the `-master` Redis service +- Set exposure to `Private` (default) +- In `Network policies` add the Pod Selector `` +- Select `Allow selected` +- Add From team name `` and From label value `` +- Add From team name `` and From label value `` +- Add From team name `` and From label value `` + +#### Vote + +- Register the `vote` service +- Set exposure to `External` + +#### Result + +- Register the `` service +- Set exposure to `External` + +### Test the app + +Go to the external URL of the `vote` application. Click on `Cats` or `Dogs`. Now go to the external URL of the `result` application. You should see the result of your vote. diff --git a/docs/for-devs/get-started/lab-20.md b/docs/for-devs/get-started/lab-20.md index 7136c0a99..fa88e9284 100644 --- a/docs/for-devs/get-started/lab-20.md +++ b/docs/for-devs/get-started/lab-20.md @@ -31,15 +31,10 @@ Select the label `app` and then select `blue`. You will now see all the `blue` c When you created a custom query that you would like to use more often, or would like to share with the team, you can create a shortcut in Otomi. - Copy the absolute path of your query - - In the apps section, click on the `Settings` icon of the Loki app - - Click on the `Shortcuts` tab - - Click `edit` - - Click on `Edd item` - - Fill in the `Title`, `Description` and the `Path` for the shortcut ![kubecfg](../../img/new-loki-shortcut.png) diff --git a/docs/for-devs/get-started/lab-21.md b/docs/for-devs/get-started/lab-21.md index dcb6b949b..4360e6ab6 100644 --- a/docs/for-devs/get-started/lab-21.md +++ b/docs/for-devs/get-started/lab-21.md @@ -5,50 +5,37 @@ sidebar_label: View container metrics --- :::info -Prometheus and Grafana need to be activated for this lab. +Prometheus and Grafana for the Team need to be activated for this lab. ::: When your application is deployed, you would of course like to be able to see container metrics for debugging purposes. Prometheus is used in Otomi for metrics. When Prometheus is enabled, you'll see the Prometheus app in your apps. :::info -When Otomi is configured in multi-tenant mode, each team will get a dedicated Prometheus and Grafana instance. Container metrics are provided by the platform Prometheus and you can use the dedicated team Prometheus to collect custom application metrics. 
+
+When Grafana, Prometheus and Alertmanager are enabled for the Team, the team will get its own instance of Grafana, Prometheus and/or Alertmanager. Container metrics are provided by the platform Prometheus and you can use the Team's Prometheus to collect custom application metrics.
:::

-## View container metrics (no multi-tenancy)
+## View dashboards

- Open the Grafana app in your team apps

![kubecfg](../../img/grafana-teams.png)

-- Grafana will open the default Welcome to Grafana page. On the right, click on `Dashboards`
+- Grafana will open the Dashboards page:

![kubecfg](../../img/grafana-dashboards.png)

+The dashboards are dynamically added based on the enabled platform capabilities:

-Here you will see a long list of dashboards that are added by Otomi.
+| Dashboard | When added |
+| --------- | ---------- |
+| Kubernetes / Deployment | When Prometheus on platform level is enabled |
+| Kubernetes / Pods | When Prometheus on platform level is enabled |
+| Team status | When Prometheus on platform level is enabled |
+| Container scan results | When Trivy on platform level is enabled |
+| Policy violations | When Gatekeeper on platform level is enabled |
+| Detected threads in containers | When Falco on platform level is enabled |

-- Select the `Kubernetes / Compute Resources / Namespace (Pods)` dashboard
+## View container metrics

-![kubecfg](../../img/dashboard-1.png)
-
-- Select your team namespace
-
-![kubecfg](../../img/dashboard-2.png)
-
-
-## View container metrics (in multi-tenancy mode)
-
-When Otomi runs in multi-tenant mode, using Grafana for Prometheus is a little different. If you go to the dashboards, you'll only see 2 dashboards:
-
-1. Kubernetes / deployment
-2. Kubernetes / Pods
-
-- Click on the Kubernetes / Pods dashboard.
-
-Note that you will not see any data. This is because the dedicated team Prometheus is used as a datasource, but the team Prometheus instance does not collect container metrics.
-
-- Select the `Prometheus-platform` data source
-
-![kubecfg](../../img/prometheus-platform.png)
-
-Now you will see metrics of containers running in your team namespace.
\ No newline at end of file
+- Click on the `Kubernetes / Pods` dashboard
+- Select the required Pod and Container
diff --git a/docs/for-devs/get-started/lab-22.md b/docs/for-devs/get-started/lab-22.md
index 646de3b53..d96dc9ad5 100644
--- a/docs/for-devs/get-started/lab-22.md
+++ b/docs/for-devs/get-started/lab-22.md
@@ -56,11 +56,8 @@ serviceMonitor:
Check if the ServiceMonitor has been picked up by Prometheus:

1. In the left menu go to `Apps`
-
2. Click on the `Prometheus` app
-
3. In Prometheus, click on `Status` in the top menu and then click `Targets`
-
4. You will now see that the ServiceMonitor has the `State` UP:

![metrics](../../img/custom-metrics.png)
@@ -82,11 +79,8 @@ for i in {1..1000}; do curl https://custom-metrics-labs./hello; sle
To see the metrics:

1. Open the `Prometheus` app
-
2. In Prometheus, fill in the following Expression: `application_greetings_total`
-
3. Click on `Graph`
-
4. You should now see the following:

![metrics](../../img/custom-metrics-1.png)
diff --git a/docs/for-devs/get-started/lab-23.md b/docs/for-devs/get-started/lab-23.md
index b55df2014..eb33ef474 100644
--- a/docs/for-devs/get-started/lab-23.md
+++ b/docs/for-devs/get-started/lab-23.md
@@ -8,20 +8,23 @@ When your application is deployed, you would of course like to get an alert when

## Monitor your application for availability

-- [Create a Service](lab-7.md) for your app in Otomi. 
The service can have an Exposure ingress of type `Cluster` or `Ingress`
+1. [Create a Service](lab-7.md) for your app in Otomi. The service can have an Exposure ingress of type `Cluster` or `Ingress`

-- Open the Prometheus
+2. Open Prometheus

![kubecfg](../../img/prometheus-teams.png)

-- In Prometheus, Go to `Status` and click on `Targets`
+3. In Prometheus, go to `Status` and click on `Targets`

-You will see the `prope-service-` endpoint. First in an `UNKNOWN` state:
+![kubecfg](../../img/targets-up.png)

-![kubecfg](../../img/target-unknown.png)
+In the list of targets you will see:

-But after a couple of minutes the state will be `UP`:
+- The `PodMonitor` endpoints of the `istio sidecars` of the Team Workloads
+- The `Probes` of all the Team services that are exposed

-![kubecfg](../../img/target-up.png)
+4. In Prometheus, go to `Alerts`

-When alertmanager is enabled, and an alert notification receiver is configured, you will automatically receive an alert when your service is down.
\ No newline at end of file
+![kubecfg](../../img/prometheus-alerts.png)
+
+In the alerts you will see an (inactive) alert for `ProbeFailing`. If the `State` of a Service Probe is `Down`, the Prometheus `Rule` for this alert will fire. When Alertmanager is enabled, and an alert notification receiver is configured, you will automatically receive an alert when your exposed Service is down.
\ No newline at end of file
diff --git a/docs/for-devs/get-started/lab-24.md b/docs/for-devs/get-started/lab-24.md
index 81dfb1502..7dcaa16c2 100644
--- a/docs/for-devs/get-started/lab-24.md
+++ b/docs/for-devs/get-started/lab-24.md
@@ -4,46 +4,37 @@ title: Create a PostgreSQL database
sidebar_label: Create a database
---

-Otomi by default installs the Cloudnative POstgreSQL database operator. Teams can use the operator to create their own PostgreSQL databases.
+Otomi by default installs the Cloudnative PostgreSQL database operator. Teams can use the operator and the `postgresql` quick start to create their own PostgreSQL databases.

## Create a database

-1. In the apps section in Otomi console, click on Gitea. In the list of repo's you'll now see a new repo called `otomi/team-demo-argocd`.
-2. Create a new file in the repo called `my-db.yaml`
+You can create a PostgreSQL database from the developer catalog:

-```yaml
-apiVersion: postgresql.cnpg.io/v1
-kind: Cluster
-metadata:
-  name: my-db
-spec:
-  description: "Postgres Database used in Otomi labs"
-  imageName: ghcr.io/cloudnative-pg/postgresql:15.3
-  instances: 1
-  primaryUpdateStrategy: unsupervised
-  storage:
-    size: 1Gi
-  monitoring:
-    enablePodMonitor: false
-```
-Note that we do not enable the pod monitor.
-
-3. Save the file and commit the changes
+1. Go to `Catalog` in the left menu and click on the `postgresql` template
+2. Click on `Values`
+3. Fill in a name for the database
+4. Change other parameter values as required
+5. Click `Submit` and then `Deploy Changes`

-The operator will now create the database and add a secret to the team's namespace called `my-db-superuser`. This secret contains the username and password for the database with the keys `username` and `password`.
+The operator will now create the database and add a secret to the team's namespace called `-superuser`. This secret contains the username and password for the database with the keys `username` and `password`.
-If your application requires to use different keys, create the following `secretKeyRef` variables: +You can now provide the username and password to a container as environment variables using a `secretKeyRef`: ```yaml env: - name: DB_PASSWORD valueFrom: secretKeyRef: - name: my-db-superuser + name: -superuser key: password - name: SECRET_KEY valueFrom: secretKeyRef: - name: my-db-superuser + name: -superuser key: username ``` + +## Monitoring + +The `postgresql` quick start template includes two parameters that can be used to create a `PodMonitor` and a Grafana Dashboard. Set the `monitoring` parameter to `true` to create a PodMonitor and set the `dashboard` parameter to `true` to add a cloudnativepg dashboard to the Team's Grafana. Note that this dashboard can be used to monitor multiple databases so you'll just need to create it once. + diff --git a/docs/for-devs/get-started/lab-25.md b/docs/for-devs/get-started/lab-25.md index 7a1d5a13c..b97c6b827 100644 --- a/docs/for-devs/get-started/lab-25.md +++ b/docs/for-devs/get-started/lab-25.md @@ -9,7 +9,6 @@ If you previously created a database, you'll noticed that we did not let the ope ## Create a PodMonitor 1. In the apps section in Otomi console, click on Gitea. In the list of repo's you'll now see a new repo called `otomi/team--argocd`. - 2. Create a new file called `my-db-pod-monitor.yaml` ```yaml @@ -30,13 +29,13 @@ spec: matchLabels: cnpg.io/cluster: my-db ``` -2. Save the file and commit the changes. +3. Save the file and commit the changes. The pod monitor will be picked-up by the team's own Prometheus. You can now add a dashboard to the team's Grafana instance. ## Add a custom dashboard to the team's Grafana -1. Create a new file called `my-db-dashboard.yaml` +4. Create a new file called `my-db-dashboard.yaml` ```yaml apiVersion: v1 diff --git a/docs/for-devs/get-started/lab-26.md b/docs/for-devs/get-started/lab-26.md index 360fcac02..420fc6307 100644 --- a/docs/for-devs/get-started/lab-26.md +++ b/docs/for-devs/get-started/lab-26.md @@ -19,6 +19,10 @@ In the previous lab we created a build in Otomi using the `blue` repo in Gitea. Before we can configure the webhook for the `green` repo in Gitea, we will need the webhook URL. You can find this webhook URL for your build in the list of Builds. Add the webhook URL to your clipboard. +![trigger build](../../img/trigger-builds.png) + +Also notice that the status of the Build shows an exclamation mark. This is because Otomi created the Pipeline, but the PipelineRun is not yet created because it was not triggered yet. + ## Create a Webhook 1. In Otomi Console, click on `apps` the left menu and then open `Gitea` @@ -41,17 +45,16 @@ You can now trigger the build by doing a commit in the `green` repo, or by testi The build should now have started. Based on the webhook, Tekton has now created a `PipelineRun`. Let's check the status of the PipelineRun: 1. In Otomi Console, click on `Builds` -2. In the list of Builds, click on the `PipelineRun` link of the `green` build -3. Tekton Dashboard will open and show a list of all the PipelineRuns -4. Click on the PipelineRun with the name `docker-trigger-build-green-*` -5. You can now see the status of the build -## Find your image in Harbor +Because the Build was triggered, a PipelineRun is now running and the status of the Build will show `in progress`: -The build succeeded. Now it is time to see artifacts +![trigger build](../../img/trigger-builds-2.png) -1. In Otomi Console, got to Apps and click on `Harbor` -2. 
Click `LOGIN VIA OIDC PROVIDER` -3. Navigate to the `team-demo` project -4. In the `Repositories` tab, click `team-demo/blue` link -5. Observe artifacts +When the Build is completed, the status will show `healthy`: + +![trigger build](../../img/trigger-builds-3.png) + +2. In the list of Builds, click on the `PipelineRun` link of the `green` build +3. Tekton Dashboard will open and show a list of all the PipelineRuns. It will show all PipelineRuns because when using a Trigger, the PipelineRun resource is created based on a template and Otomi will not know the exact name of the PipelineRun because the name is automatically generated. +4. Click on the PipelineRun with the name `docker-trigger-build-green-*` +5. You can now see the the full log of the build diff --git a/docs/for-devs/get-started/lab-27.md b/docs/for-devs/get-started/lab-27.md index f6569ae19..b5344d4ea 100644 --- a/docs/for-devs/get-started/lab-27.md +++ b/docs/for-devs/get-started/lab-27.md @@ -22,7 +22,6 @@ Using a Gitea repository is not required. You can also build using public reposi ::: 1. Create a new repo called `petclinic` - 2. Clone the Spring PetClinic Sample Application: ```bash @@ -37,13 +36,9 @@ git push --mirror https://gitea.//petclinic.git ``` 4. Go to `Builds` in the left menu and click `Create Build` - 5. Fill in the Build name `petclinic` - 6. Choose `Buildpacks` - 7. Fill in the `Repo URL` with the `petclinic` Gitea repo you created - 8. Click `Submit` ## Create a workload from the developer catalog @@ -51,13 +46,9 @@ git push --mirror https://gitea.//petclinic.git Go to the list of Builds and add the repository of the `petclinc` build to your clipboard. Remember that the tag is latest. 1. Go to `Workloads` in the left menu and click on `New Workload` - 2. Add the Name `petclinic` for the workload - 3. Select `otomi-quickstart-k8s-deployment-otel` from the catalog - 4. Leave the `Auto image updater` to `Disabled` - 5. In the workload `values`, change the following parameters: ```yaml @@ -85,13 +76,9 @@ Now click on `Deploy Changes` ## Expose the service - In the left menu panel under click `Services` then click on `Create Service` - - Select the `petclinic` service - - Under `Exposure Ingress`, select `Ingress` and use the default configuration - - Click `Submit` - - Click `Deploy Changes` ## See traces @@ -100,7 +87,11 @@ To be able to see traces, we'll first need to generate some requests. Click on t To see traces, you'll first need to find a `TraceID` of a trace. Go to `Apps` in the left menu and then click op `Loki`. Select the label `App` and select value `petclinic`. -Click on a log entry of a request. Note that the requests are logged by the Istio Envoy proxy. You will now see a link to Tempo. Click on it. +Click on a log entry of a request. Note that the requests are logged by the Istio Envoy proxy. You will now see a link to the full trace in Grafana Tempo. Click on it. + +:::note +If you don't see any traces, check and see if the pod runs the `ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.26.0` container. Sometimes the pod starts before the `Instrumentation` resource has been created. 
If this is the case, restart the Pod +::: ![Team apps](../../img/traces-loki.png) diff --git a/docs/for-devs/get-started/lab-28.md b/docs/for-devs/get-started/lab-28.md index df52919ba..4389645e2 100644 --- a/docs/for-devs/get-started/lab-28.md +++ b/docs/for-devs/get-started/lab-28.md @@ -20,17 +20,12 @@ For this lab we need the 2 images (`blue` and `green`) we already created in the Go to the list of Builds and add the repository of the `green` build to your clipboard. 1. Go to `Workloads` in the left menu and click on `New Workload` - 2. Add the Name `canary` for the workload - 3. Select `otomi-quickstart-k8s-deployment-canary` from the catalog - 4. Set the `Auto image updater` to `Digest` and fill in: - imageRepository = paste from the clipboard - - imageParameter = `versionTwo.image.repository` - - tagParameter = `versionTwo.image.tag` 5. In the workload `values`, change the following parameters: @@ -57,15 +52,10 @@ We now created 2 deployments. One for `blue` and one for `green`. The `green` im ## Expose the service - In the left menu panel under click `Services` then click on `Create Service` - - Select the `canary` service - - Under `Traffic Control` click `enabled` (and use the default weights for v1 and v2) - - Under `Exposure Ingress`, select `Ingress` and use the default configuration - - Click `Submit` - - Click `Deploy Changes` ## See the results diff --git a/docs/for-devs/get-started/lab-29.md b/docs/for-devs/get-started/lab-29.md index 727c25768..a59347454 100644 --- a/docs/for-devs/get-started/lab-29.md +++ b/docs/for-devs/get-started/lab-29.md @@ -26,13 +26,20 @@ The `otomi-quickstart-k8s-deployments-canary` Helm chart can be used to create 2 The `otomi-quickstart-knative-service` Helm chart can be used to create a Knative `Service` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a Prometheus `ServiceMonitor` can be created. +### Otomi quick start for creating a PostgreSQL cluster + +The `otomi-quickstart-postgresql` Helm chart can be used to create a cloudnativepg PostgreSQL `Cluster`. Optionally a Prometheus `PodMonitor` and a `Configmap` (for adding a postgresql dashboard to Grafana) can be created. + +### Otomi quick start for creating a Redis master-replica cluster + +The `otomi-quickstart-redis` Helm chart can be used to create a Redis master-replica cluster. + ## Using the Catalog 1. Click on `Catalog` in the left menu - 2. You will now see all the templates that are available to use -![catalog](../../img/catalog-1.png) +![catalog](../../img/catalog.png) 3. Click on the `k8s-deployment` template @@ -40,7 +47,7 @@ The `otomi-quickstart-knative-service` Helm chart can be used to create a Knativ In the Info tab you'll see some information about the Chart like the version and additional instructions. -3. Click on the `Values` tab +4. Click on the `Values` tab ![catalog](../../img/catalog-3.png) diff --git a/docs/for-devs/get-started/lab-3.md b/docs/for-devs/get-started/lab-3.md index 472b5175c..cedaf0b46 100644 --- a/docs/for-devs/get-started/lab-3.md +++ b/docs/for-devs/get-started/lab-3.md @@ -18,7 +18,7 @@ The `otomi-admin` account is unable to login with OpenID, this account needs to In these labs we'll be using a Team called `labs` and a user called `labs-user`. -## Create a private repository +## Create the private repository In the apps section in Otomi console, you'll see an app called Gitea. Click on it. 
@@ -34,29 +34,21 @@ Now follow these steps: ![kubecfg](../../img/new-gitea-repo.png) -- Fill in a Repository Name +- Add the name `blue` for the repository - Optional: Enable `Initialize Repository` -- Optional: Make Repository Private +- Make Repository Private - Click on `Create Repository` Your repo is now ready to be used! ![kubecfg](../../img/new-gitea-repo-ready.png) -## Create 2 repositories for the labs - -For the next labs we're going to need two repo's. Create the following 2 repo's: - -- `blue` -- `green` - -And add the following 2 files to each repo. Make sure to change `blue` to `green` in the `green` repo: +Add the following 2 files to the repository: Add `Dockerfile`: ```Dockerfile FROM nginxinc/nginx-unprivileged:stable -# change to green.html in the green repo! COPY blue.html /usr/share/nginx/html/index.html EXPOSE 8080 ``` @@ -72,17 +64,15 @@ Add `blue.html`: