feat: docs update for latest version (#40)
srodenhuis authored Jan 4, 2024
1 parent ff1e464 commit 447c5dc
Showing 89 changed files with 617 additions and 366 deletions.
58 changes: 40 additions & 18 deletions docs/for-devs/console/builds.md
@@ -12,21 +12,21 @@ A Build in Otomi is a self-service feature for building OCI compliant images bas
Ask your platform administrator to activate Harbor to use this feature.
:::

:::info
The Otomi Builds feature can only be used with private repos in the local Gitea. Images will always be pushed to a registry in the local Harbor.
:::

## Builds (all)

All Builds of the team are listed here.

![Team builds](../../img/team-builds.png)

| Property | Description |
| ------------- | --------------------------------------------------------------- |
| Name | The name of the build |
| Type | Type of the build: `buildpacks` or `docker` |
| Webhook url | The `copy to clipboard` webhook URL if a trigger is configured for the build |
| Tekton | Link to the `PipelineRun` of the build in the Tekton dashboard |
| Repository | The `copy to clipboard` repository name of the image |
| Tag | The tag of the image |
| Status | The status of the Build. If the Build has failed, click on the Tekton link to see more details |

## Create a build

@@ -45,21 +45,43 @@ Now choose the type of the build:

### Docker

1. Add the URL of the repository that contains the application source code.
2. (optional) Change the path of the `Dockerfile`. Default is `./Dockerfile`. To use a Dockerfile in a specific folder, use `./folder/Dockerfile`.
3. (optional) Change the revision. This can be a commit, a tag, or a branch.
4. (optional) Select `External Repo` if the repository used for the Build is not a public or a private Git repo in the local Gitea. When selected, fill in the name of the secret that contains the required SSH credentials. Read more [here](https://tekton.dev/docs/how-to-guides/clone-repository/#git-authentication) about how to set up SSH authentication with your Git provider.
5. (optional) Select to create an event listener to trigger the build based on a Gitea webhook.
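The secret referenced by the `External Repo` option follows Tekton's Git SSH authentication convention. A minimal sketch, assuming an SSH deploy key (the secret name, host, and key are placeholders, not values prescribed by Otomi):

```yaml
# Illustrative Tekton git-auth secret; the name must match what you enter in the Build form
apiVersion: v1
kind: Secret
metadata:
  name: git-ssh-credentials          # example name
  annotations:
    tekton.dev/git-0: github.com     # Git host this key is valid for
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <your private key>
    -----END OPENSSH PRIVATE KEY-----
```

The namespace the secret must live in depends on how your platform administrator has set up the team; check with them if the Build cannot find it.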

### Buildpacks

1. Add the URL of the Git repository that contains the application source code
2. (optional) Add the path. This is a subpath within the repo where the source to build is located
3. (optional) Change the revision. This can be a commit, a tag, or a branch
4. (optional) Add Environment variables to set during build-time.
5. (optional) Select `External Repo` if the repository used for the Build is not a public or a private Git repo in the local Gitea. When selected, fill in the name of the secret that contains the required SSH credentials. Read more [here](https://tekton.dev/docs/how-to-guides/clone-repository/#git-authentication) about how to set up SSH authentication with your Git provider.
6. (optional) Select to create an event listener to trigger the build based on a Gitea webhook.
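Which build-time environment variables are meaningful depends on the buildpacks in your builder; for example, Paketo buildpacks read `BP_*` variables. A purely illustrative sketch of such a pair:

```yaml
# Illustrative only — variable names depend on the buildpacks your builder uses
env:
  - name: BP_JVM_VERSION
    value: "17"
  - name: BP_MAVEN_BUILD_ARGUMENTS
    value: "-DskipTests package"
```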

### Build status details

To see more status details of the build, click on the `PipelineRun` link of the build in the list of builds. If a trigger is configured, the link will show all PipelineRuns.

### Configure a webhook for the Git repo in Gitea

1. In Otomi Console, click on `apps` in the left menu and then open `Gitea`
2. In the top menu of Gitea, click on `Explore` and then on the `green` repo
3. Go to `Settings` (top right) and then to `Webhooks`
4. Click `Add Webhook` and select `Gitea`
5. In the `Target URL`, paste the webhook URL from your clipboard.
6. Click `Add Webhook`

### Expose the trigger listener publicly

When using an external (private) Git repository, the trigger event listener that is created by Otomi can also be exposed publicly. To expose the event listener publicly:

1. Go to Services
2. Click `Create new service`
3. Select the `el-gitea-webhook-<build-name>` internal service
4. Under `Exposure`, select `External`
5. Click `Submit` and then `Deploy Changes`

### Restart a build

14 changes: 11 additions & 3 deletions docs/for-devs/console/catalog.md
@@ -18,24 +18,32 @@ The `otomi-quickstart-k8s-deployment` Helm chart can be used to create a Kuberne

### k8s-deployment-otel

The `otomi-quickstart-k8s-deployment-otel` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service`, a `ServiceAccount` and an `Instrumentation` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` can be created.
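The `Instrumentation` resource comes from the OpenTelemetry Operator. A minimal sketch of what such a resource looks like (the name and collector endpoint are illustrative, not the chart's actual defaults):

```yaml
# Illustrative OpenTelemetry Operator Instrumentation resource
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-app                              # illustrative name
spec:
  exporter:
    endpoint: http://otel-collector:4317    # illustrative collector address
  propagators:
    - tracecontext
    - baggage
```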

### k8s-deployments-canary

The `otomi-quickstart-k8s-deployments-canary` Helm chart can be used to create 2 Kubernetes `Deployments` (to deploy 2 versions of an image), a `Service` and a `ServiceAccount` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` (for each version) can be created.

### knative-service

The `otomi-quickstart-knative-service` Helm chart can be used to create a Knative `Service` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a Prometheus `ServiceMonitor` can be created.

### Otomi quick start for creating a PostgreSQL cluster

The `otomi-quickstart-postgresql` Helm chart can be used to create a CloudNativePG PostgreSQL `Cluster`. Optionally a Prometheus `PodMonitor` and a `Configmap` (for adding a PostgreSQL dashboard to Grafana) can be created.
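The `Cluster` resource is CloudNativePG's CRD. A minimal sketch of the kind of manifest such a chart produces (the name and sizes are illustrative, not the chart's defaults):

```yaml
# Illustrative CloudNativePG Cluster
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-postgres    # illustrative name
spec:
  instances: 3         # one primary, two replicas
  storage:
    size: 1Gi
```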

### Otomi quick start for creating a Redis master-replica cluster

The `otomi-quickstart-redis` Helm chart can be used to create a Redis master-replica cluster.


## Using the Catalog

1. Click on `Catalog` in the left menu

2. You will now see all the templates that are available to use

![catalog](../../img/catalog.png)

3. Click on the `k8s-deployment` template

47 changes: 47 additions & 0 deletions docs/for-devs/console/dashboard.md
@@ -0,0 +1,47 @@
---
slug: dashboard
title: Team Dashboard
sidebar_label: Dashboard
---

The team dashboard gives a global overview of information most relevant to the team.

## Prerequisites

The Team dashboard gets its information from the Team's Grafana instance. Make sure Grafana is enabled for the team. To enable Grafana:

- Go to `Settings`
- Open `Managed monitoring`
- Enable `Grafana`

## Dashboard elements

The dashboard has 5 elements:

- [Inventory](#inventory)
- [Resource Status](#resource-status)
- [Resource Utilization](#resource-utilization)
- [Vulnerabilities](#vulnerabilities)
- [Compliance](#compliance)

![Team dashboard](../../img/team-dashboard.png)

### Inventory

The inventory shows the Otomi resources within the team. Click on an inventory item to go directly to the full list.

### Resource Status

The Resource Status panels show if there are any issues with Pods deployed by the team.

### Resource Utilization

The Resource Utilization panels show the total amount of CPU and Memory consumed by the team.

### Vulnerabilities

The Vulnerabilities panels show the total number of LOW, MEDIUM, HIGH and CRITICAL vulnerabilities in running containers deployed by the Team.

### Compliance

The Compliance panel shows the total number of policy violations in workloads deployed by the Team.
12 changes: 7 additions & 5 deletions docs/for-devs/console/projects.md
@@ -13,9 +13,11 @@ A Project in Otomi is a collection of a Build, a Workload and a Service in ONE f

Note: The name of the project will be used for all created Otomi resources (build, workload and service).

3. Select `Create build from source` or `Use an existing image`
4. If `Create build from source` is selected: follow the [instructions](builds.md) for creating a Build
5. If `Use an existing image` is selected: follow the [instructions](workloads.md) for creating a Workload

Note: The `image.repository` and `image.tag` parameters in the values of the workload are automatically set when `Create build from source` is used. If `Use an existing image` is selected, the `image.repository` and `image.tag` parameters need to be set manually.

6. Follow the [instructions](services.md) for creating a Service to expose the workload
7. Click `Submit` and then `Deploy changes`
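When setting the two parameters manually, a workload values fragment might look like this (the registry path and tag are placeholders, not values Otomi prescribes):

```yaml
# Illustrative workload values fragment
image:
  repository: harbor.example.com/team-demo/my-app   # placeholder registry path
  tag: "1.0.0"                                      # placeholder tag
```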
7 changes: 5 additions & 2 deletions docs/for-devs/console/services.md
@@ -14,13 +14,16 @@ A service in Otomi is a self-service feature for:

## Services `(team <team-name>)`

All Services of the team are listed here.

![Team services](../../img/team-services.png)

| Property | Description |
| ------------ | ------------------------------------------------------ |
| Service Name | The name of the service |
| Ingress class | The ingress class configured. This is the ingress controller that exposes the service |
| URL | The URL of the service if the service is configured for external exposure |
| Status | The status of the service |

## Create a Service

55 changes: 51 additions & 4 deletions docs/for-devs/console/settings.md
@@ -8,13 +8,39 @@ Based on self-service options allowed by the platform administrator, team member

## Configure OIDC group mapping

:::note
The OIDC group mapping will only be visible when Otomi is configured with an external Identity Provider (IdP).
:::

Change the OIDC group-mapping to allow access based on group membership.

## Managed monitoring

Activate a (platform) managed Grafana, Prometheus or Alertmanager instance for the team. The installed Grafana, Prometheus and Alertmanager will be monitored by the Platform administrator.

### Grafana

Enable to install a Grafana instance for the team.

**Dependencies:**

- The Grafana instance is automatically configured with a datasource for the Team's Prometheus.
- If Loki (for logs) is enabled on the Platform, Grafana needs to be enabled here.
- Grafana is provisioned with multiple dashboards that rely on the Platform Prometheus. If Prometheus on the Platform is not enabled, these dashboards will not work!

### Prometheus

Enable to install a Prometheus instance for the team. The Prometheus instance is configured to only scrape metrics from `PodMonitors` and `ServiceMonitors` that have the label `prometheus: team-<team-name>`.
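For example, a `ServiceMonitor` that the Team's Prometheus would pick up needs that label (the names, team, and port below are illustrative):

```yaml
# Illustrative ServiceMonitor scraped by the Team's Prometheus
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    prometheus: team-demo    # replace "demo" with your team name
spec:
  selector:
    matchLabels:
      app: my-app            # matches the Service to scrape
  endpoints:
    - port: metrics          # named port on the Service
```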

### Alertmanager

Enable to install an Alertmanager instance for the team. The Alertmanager instance will only show alerts based on `Rules` from the Team's Prometheus.


## Configure alert settings

:::note

Alert settings will only be active when Alertmanager is enabled for the team.
:::

Change the alert settings and preferred notification receivers.
@@ -40,7 +66,7 @@ There is no validation as there is no schema published. Add/change resource quot

:::note

Configuring Azure Monitor settings will only be active when `cluster.provider=azure`.
:::

Azure Monitor is the platform service that provides a single source for monitoring Azure resources.
Expand All @@ -61,3 +87,24 @@ Azure Monitor is the platform service that provides a single source for monitori
## Team self service flags

The self-service flags (what is a team allowed to) can only be configured by an admin user.

### Service

| Option | Permission |
| ---------------- | -------------------------------------------------------------------------------------- |
| Ingress | The Team is allowed to configure External Exposure for a Service |
| Network policy | The team is allowed to configure network policies (ingress and egress) for a Service |

### Team

| Option | Permission |
| ---------------------- | -------------------------------------------------------------------------------------- |
| Alerts | The Team is allowed to configure Alert settings for the team |
| Billing alert quotas | The team is allowed to configure Billing alert quotas for the team |
| OIDC | The team is allowed to configure the OIDC group mapping for the team |
| Resource quotas | The team is allowed to configure resource quotas for the team |
| Download kube config | The team is allowed to download the Kube Config |
| Download docker config | The team is allowed to download the Docker Config |
| Network policy | The team is allowed to change the Network policy configuration for the team |


46 changes: 22 additions & 24 deletions docs/for-devs/console/shell.md
@@ -8,7 +8,7 @@ The Shell feature allows you to start a web-based shell in Console with Kube API acc

- [Kubectl](https://kubernetes.io/docs/reference/kubectl/)
- [K9s](https://k9scli.io/)
- [Tekton CLI](https://tekton.dev/docs/cli/)

When running the shell as a member of a team, the shell will only provide access to resources in the team namespace.

@@ -18,41 +18,39 @@ The Shell provides an easy and efficient way to access and manage Kubernetes res
- **Identity-Based Access**: Leverage your group membership from an Identity Provider, such as Azure AD, for secure access to your Kubernetes namespace.
- **Efficient Interface**: Utilize essential Kubernetes management tools and perform tasks seamlessly.

## Using the Shell

1. Log in to the Otomi Console
2. Click on the "Shell" option in the left menu
3. You'll be connected to the TTY Console interface, granting direct access to the Kubernetes namespace of the Team

### Basic Commands and Shortcuts

- Utilize the `kubectl` command to interact with your Kubernetes cluster
- Benefit from the convenient `k` shortcut for `kubectl` with bash-completion

### Integrated CLI tools

The Shell comes with a set of integrated CLI tools:

- **k9s**: Gain insights into your Kubernetes resources with an intuitive UI
- **Tekton CLI**: Monitor Project pipelines efficiently
- **Other Tools**: Tools like `jq`, `yq`, and `curl` are at your disposal for enhanced functionality

### Working with Tmux

- If you're a Tmux enthusiast, enjoy the ability to create multiple windows and panes for multitasking
- This feature enhances your productivity by allowing you to organize your workspace effectively

## Session Management

### Browser Crash Resilience

- The TTY Console is designed to be resilient in the face of browser crashes
- If your browser unexpectedly crashes, your session remains intact
- You can simply reopen the browser and resume your Kubernetes management tasks

### Ending Sessions

- When you're finished with your Kubernetes management tasks, remember to properly end your session by clicking the recycle bin button on the top right of the TTY window. This will delete your session
- Logging out of your session will have the same effect
12 changes: 6 additions & 6 deletions docs/for-devs/console/workloads.md
@@ -6,20 +6,20 @@ sidebar_label: Workloads

<!-- ![Console: new service](img/team-services.png) -->

A Workload in Otomi is a self-service feature for creating Kubernetes resources using Helm charts from the Otomi Developer Catalog.

## Workloads (all)

All Workloads of the team are listed here.

![Team workloads](../../img/team-workloads.png)

| Property | Description |
| -------- | ------------------------------------------------- |
| Name | The name of the workload |
| Argocd | Link to the Argo CD application in the Argo CD UI |
| Image update strategy | The configured update strategy for the workload |
| Status | The status of the workload. Click on the Argo CD application link to see more status details |

## Create a Workload
