
adding kubernetes manifests to deploy and test kernelci-pipeline on minikube #285

Closed

Conversation

sbdtu5498
Contributor

@sbdtu5498 sbdtu5498 commented Jul 5, 2023

Fixes kernelci/kernelci-api#273
@gctucker @JenySadadia
Will be adding README.md soon.

…inikube

Signed-off-by: Sanskar Bhushan <sbdtu5498@gmail.com>
Collaborator

@JenySadadia JenySadadia left a comment


It would be nice to have the same directory structure for the manifest files as the API.

@JenySadadia
Collaborator

@sbdtu5498
After running the API, I created an admin user and an API token.
I added the pipeline secrets and configmap.

Then I tried to run the notifier, but it is not able to connect to the API.

$ kubectl logs kernelci-pipeline-notifier-7fb67bcb7b-6c2gg

raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8001): Max retries exceeded with url: /latest/subscribe/node (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8aff8fcf40>: Failed to establish a new connection: [Errno 111] Connection refused'))

I think I'm missing some configuration here.

@gctucker
Contributor

gctucker commented Aug 3, 2023

Maybe just try with curl to see if you can access the endpoints? That would be enough to get this PR merged.

@JenySadadia
Collaborator

JenySadadia commented Aug 3, 2023

Maybe just try with curl to see if you can access the endpoints? That would be enough to get this PR merged.

From the ssh pod:

# curl -X GET 'http://kernelci-api.default.svc.cluster.local:8001/latest'
curl: (28) Failed to connect to kernelci-api.default.svc.cluster.local port 8001: Connection timed out

I tried to access the API server from the SSH service to check the connection issue.
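A timeout like the one above can come from failed name resolution, a wrong Service port, or a selector mismatch. Below is a small probe, not part of this PR, that separates DNS from TCP reachability; it uses python3 for the connect so it also works in containers without curl, and the host/port are just the values from this thread:

```shell
# Distinguish "name does not resolve" from "resolves but the port is
# closed or filtered". Host and port arguments are illustrative.
tcp_probe() {
    python3 - "$1" "$2" <<'EOF'
import socket, sys

host, port = sys.argv[1], int(sys.argv[2])
try:
    socket.getaddrinfo(host, port)
except OSError:
    print(f"DNS lookup failed for {host}")
    sys.exit(1)
s = socket.socket()
s.settimeout(5)
try:
    s.connect((host, port))
    print(f"reachable: {host}:{port}")
except OSError as exc:
    print(f"resolves but not reachable: {host}:{port} ({exc})")
finally:
    s.close()
EOF
}

# Example, from the ssh pod:
#   tcp_probe kernelci-api.default.svc.cluster.local 8001
```

If the name resolves but the connect fails, the next things to check are the Service's `ports` section and its pod selector.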

@JenySadadia
Collaborator

@sbdtu5498
After you have the API deployment up and running, you can follow the steps below to set up the pipeline:

  • Get into the API pod:
kubectl exec -it api-pod -- /bin/bash
  • Run the command to create an admin user
    (this will prompt for the admin user's password)
python3 -m api.admin --mongo mongodb://db.default.svc.cluster.local:27017 --email test@kernelci.org

Exit from the pod now.

  • Get a token: run the command from your local system, not inside the k8s cluster
    (replace PASSWORD with the password you set in the step above)
curl -X 'POST' 'http://localhost:8001/latest/token' -H 'accept: application/json' -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=&username=admin&password=PASSWORD&scope=admin users'
  • Add kci_api_token to the pipeline secrets
    (replace TOKEN with the access_token obtained in the step above)
kubectl create secret generic kernelci-pipeline-secrets --from-literal=kci-api-token=TOKEN
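The token and secret steps above can be glued together in a small helper. The sketch below assumes the token endpoint returns the usual OAuth2-style JSON with an `access_token` field (which the curl command in this thread suggests), and it only talks to the cluster when `RUN_AGAINST_CLUSTER` is set, so it is safe to source without a cluster around:

```shell
# Pull "access_token" out of the token endpoint's JSON response.
extract_token() {
    python3 -c 'import json, sys; print(json.load(sys.stdin)["access_token"])'
}

# Guarded: only hit the API and kubectl when explicitly asked to.
if [ -n "${RUN_AGAINST_CLUSTER:-}" ]; then
    TOKEN=$(curl -s -X POST 'http://localhost:8001/latest/token' \
        -H 'Content-Type: application/x-www-form-urlencoded' \
        -d "grant_type=&username=admin&password=${PASSWORD}&scope=admin users" \
        | extract_token)
    kubectl create secret generic kernelci-pipeline-secrets \
        --from-literal=kci-api-token="$TOKEN"
fi
```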

@JenySadadia
Collaborator

After I changed the API service's ports.port to 8001 and updated config/pipeline.yaml with the following:

api_configs:
  docker-host:
    url: http://kernelci-api:8001

The notifier successfully connects to the API.

$ kubectl logs kernelci-pipeline-notifier-7569c76ffd-c4tst
08/04/2023 05:34:41 AM UTC [INFO] Listening for events... 
08/04/2023 05:34:41 AM UTC [INFO] Press Ctrl-C to stop.
08/04/2023 05:34:41 AM UTC [INFO] Time                        Commit        Node Id                   State      Result    Name

@JenySadadia
Collaborator

Hi @sbdtu5498,
I have created a configmap for the settings variable. It seems it does not work with the pipeline service for the command
--settings=${SETTINGS:-/home/kernelci/config/kernelci.toml}.
Can you please take a look?
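One plausible cause (an assumption on my part, not confirmed in this thread): `${SETTINGS:-...}` is POSIX shell parameter expansion, and Kubernetes does not run container args through a shell; it only substitutes its own `$(VAR)` form. The default-value expansion therefore only happens if the command is wrapped in `sh -c`:

```shell
# Demonstrates the ${VAR:-default} expansion the pipeline command relies
# on. In a k8s manifest, this line must run under `sh -c`; otherwise the
# literal string "${SETTINGS:-...}" is passed to the program unexpanded.
settings_arg() {
    sh -c 'echo "--settings=${SETTINGS:-/home/kernelci/config/kernelci.toml}"'
}
```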

@JenySadadia
Collaborator

Testing with minikube locally after tweaking the deployment files for SETTINGS:

Trigger:

$ kubectl logs kernelci-pipeline-trigger-8449cdf49f-mcrsh
08/07/2023 12:01:00 PM UTC [INFO] Delay: 3s
08/07/2023 12:01:06 PM UTC [INFO] Resubmitting existing revision   kernelci_staging-mainline        056b1b13baf1e7a2b5a7880d7f628c53444c55b3
08/07/2023 12:01:07 PM UTC [INFO] Resubmitting existing revision   kernelci_staging-next            5279231637b1ebbc49a75b1aa03967fc917de336
08/07/2023 12:01:10 PM UTC [INFO] Resubmitting existing revision   kernelci_staging-stable          8269e4121fbee4d3f0c8e40e2de7c501990443d1
08/07/2023 12:01:11 PM UTC [INFO] Resubmitting existing revision   mainline                         52a93d39b17dc7eb98b6aa3edb93943248e03b2f
08/07/2023 12:01:11 PM UTC [INFO] Not polling.

Notifier:

$ kubectl logs kernelci-pipeline-notifier-7569c76ffd-vzzcs
08/07/2023 11:59:53 AM UTC [INFO] Listening for events... 
08/07/2023 11:59:53 AM UTC [INFO] Press Ctrl-C to stop.
08/07/2023 11:59:53 AM UTC [INFO] Time                        Commit        Node Id                   State      Result    Name
08/07/2023 12:01:06 PM UTC [INFO] 2023-08-07 12:01:06.487099  056b1b13baf1  64d0dd025b10f7d416118a27  Running    -         checkout
08/07/2023 12:01:07 PM UTC [INFO] 2023-08-07 12:01:07.536413  5279231637b1  64d0dd035b10f7d416118a28  Running    -         checkout
08/07/2023 12:01:10 PM UTC [INFO] 2023-08-07 12:01:10.271740  8269e4121fbe  64d0dd065b10f7d416118a29  Running    -         checkout
08/07/2023 12:01:11 PM UTC [INFO] 2023-08-07 12:01:11.324116  52a93d39b17d  64d0dd075b10f7d416118a2a  Running    -         checkout

Runner:

$ kubectl logs kernelci-pipeline-runner-7b4dff49fb-sgwlm
08/07/2023 11:59:50 AM UTC [INFO] Listening for available checkout events
08/07/2023 11:59:50 AM UTC [INFO] Press Ctrl-C to stop.

Tarball:

$ kubectl logs kernelci-pipeline-tarball-6597c9554-dcpqp
08/07/2023 11:59:04 AM UTC [INFO] Listening for new trigger events
08/07/2023 11:59:04 AM UTC [INFO] Press Ctrl-C to stop.
08/07/2023 11:59:28 AM UTC [INFO] Updating repo for kernelci_staging-mainline
Cloning into '/home/kernelci/data/src/linux'...

The tarball service crashes after a while. I'm not able to see the logs as it creates another pod per the restart policy.
Although I have uploaded the SSH key to the minikube source, it seems there's an issue finding the SSH key.
I will take another look.

@gctucker
Contributor

gctucker commented Aug 7, 2023

Not able to see the logs as it creates another pod as per the restart policy.

That's a good point, we once mentioned that it would be nice to be able to collect all the Kubernetes logs somewhere.

@broonie How would you suggest we do that? I believe you've already looked into this kind of thing before.

@broonie
Member

broonie commented Aug 7, 2023

fluentd and fluentbit are the standard tools here: they deploy a monitor process onto each node which harvests the logs of each pod as they go past, with configurable pipeline-based rules for deciding what to do with them. I haven't actually got as far as deploying them in anger yet, though I do have a half-configured fluentbit deployed in my cluster (and will probably get to it soonish since I'm doing some other logging work).

Structure k8s manifest files as below:
kube/
└── minikube
    ├── deployments
    │   ├── notifier-deployment.yaml
    │   ├── regression-tracker-deployment.yaml
    │   ├── runner-deployment.yaml
    │   ├── runner-docker-deployment.yaml
    │   ├── runner-k8s-deployment.yaml
    │   ├── runner-lava-deployment.yaml
    │   ├── tarball-deployment.yaml
    │   ├── test-report-deployment.yaml
    │   ├── timeout-closing-deployment.yaml
    │   ├── timeout-deployment.yaml
    │   ├── timeout-holdoff-deployment.yaml
    │   └── trigger-deployment.yaml
    └── init
        └── init-pod.yaml

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
@JenySadadia
Collaborator

Thanks for the pointer @broonie
In the case of Minikube, I am able to access the logs of the crashed pod with the command below:

$ kubectl logs kernelci-pipeline-tarball-6597c9554-dcpqp --previous

Use settings file value from the ConfigMap
variable i.e. `SETTINGS`.

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
@sbdtu5498
Contributor Author

Hi @sbdtu5498,
I have created a configmap for the settings variable. It seems it does not work with the pipeline service for the command
--settings=${SETTINGS:-/home/kernelci/config/kernelci.toml}.
Can you please take a look?

@JenySadadia, can you please give me the link to the settings file?

@gctucker
Contributor

gctucker commented Aug 8, 2023

Thanks for the pointer @broonie. In the case of Minikube, I am able to access the logs of the crashed pod with the command below:

$ kubectl logs kernelci-pipeline-tarball-6597c9554-dcpqp --previous

OK great. I guess fluentd and fluentbit will become much more relevant when we actually deploy the services in the cloud, e.g. Azure, rather than MiniKube. Maybe it's still worth giving them a try with MiniKube as preparation work, to get some common config set up. But we can look into that once everything is merged for the actual deployment, I think.

@JenySadadia
Collaborator

Hi @sbdtu5498,
I have created a configmap for the settings variable. It seems it does not work with the pipeline service for the command
--settings=${SETTINGS:-/home/kernelci/config/kernelci.toml}.
Can you please take a look?

@JenySadadia, can you please give me the link to the settings file?

This is the settings file: https://github.com/kernelci/kernelci-pipeline/blob/main/config/kernelci.toml.
Btw, I have fixed the issue by using a ConfigMap variable for it. I have also restructured the manifest files.

@JenySadadia
Collaborator

@sbdtu5498 We need to fix an indentation issue in runner-docker-deployment.yaml:

Error from server (BadRequest): error when creating "kube/minikube/deployments/runner-docker-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.resources"
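The strict-decoding error says `resources` sits directly under `spec.template.spec`, where no such field exists; it belongs on the container entry. A minimal illustration of the correct placement (container name, image, and values are placeholders, not the actual manifest):

```yaml
spec:
  template:
    spec:
      containers:
        - name: runner-docker        # placeholder container name
          image: kernelci/pipeline   # placeholder image
          resources:                 # valid here, on the container entry...
            requests:
              cpu: "500m"
              memory: "512Mi"
      # resources:                   # ...but NOT here, at pod-spec level
```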

@sbdtu5498
Contributor Author

Hi @sbdtu5498,
I have created a configmap for the settings variable. It seems it does not work with the pipeline service for the command
--settings=${SETTINGS:-/home/kernelci/config/kernelci.toml}.
Can you please take a look?

@JenySadadia, can you please give me the link to the settings file?

This is the settings file: https://github.com/kernelci/kernelci-pipeline/blob/main/config/kernelci.toml. Btw, I have fixed the issue by using a ConfigMap variable for it. I have also restructured the manifest files.

Great! Please let me know if the pipeline is working fine. I will add the README and deployment scripts. Also, there were a few things left in the pipeline, such as removing DinD; I will take a look at them as well.

@JenySadadia
Collaborator

The tarball service was not able to create the Linux kernel source on the pod.
I fixed the issue by adding volume mounts to tarball-deployment. I added an SSH key to the pipeline source tree on the minikube node, and also updated the user permissions on kernelci-pipeline/data to enable the tarball service to create data/src/linux on the pod.

$ kubectl logs kernelci-pipeline-tarball-b977954b8-c2xh4
08/09/2023 05:16:27 AM UTC [INFO] Listening for new trigger events
08/09/2023 05:16:27 AM UTC [INFO] Press Ctrl-C to stop.
08/09/2023 05:16:27 AM UTC [INFO] Updating repo for mainline
fatal: could not create work tree dir '/home/kernelci/data/src/linux': Permission denied
08/09/2023 05:16:27 AM UTC [ERROR] Traceback (most recent call last):
  File "/home/kernelci/pipeline/base.py", line 69, in run
    status = self._run(context)
  File "/home/kernelci/pipeline/tarball.py", line 136, in _run
    self._update_repo(build_config)
  File "/home/kernelci/pipeline/tarball.py", line 59, in _update_repo
    kernelci.build.update_repo(config, self._kdir)
  File "/usr/local/lib/python3.10/site-packages/kernelci-1.1-py3.10.egg/kernelci/build.py", line 154, in update_repo
    shell_cmd("git clone {ref} -o {remote} {url} {path}".format(
  File "/usr/local/lib/python3.10/site-packages/kernelci-1.1-py3.10.egg/kernelci/__init__.py", line 29, in shell_cmd
    return subprocess.check_output(cmd, shell=True).decode()
  File "/usr/local/lib/python3.10/subprocess.py", line 420, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/local/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'git clone  -o mainline https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git /home/kernelci/data/src/linux' returned non-zero exit status 128.
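The `git clone` failure above is a plain EACCES on the mounted directory, so before restarting the service it is worth checking writability from inside the pod. A small helper for that (the directory path is whatever you mount; this is a diagnostic sketch, not part of the PR):

```shell
# Verify the service user can write the clone target before the
# tarball service tries to. Run inside the pod, for example:
#   check_writable /home/kernelci/data/src
check_writable() {
    dir="$1"
    if [ -w "$dir" ]; then
        echo "writable: $dir"
    else
        echo "NOT writable: $dir (fix ownership/permissions on the node)"
        return 1
    fi
}
```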

Jeny Sadadia added 3 commits August 9, 2023 14:33
Add `k8s-host` to `api-configs` and `storage-configs`.
These are the configuration URLs for the API and SSH services
running in k8s. Added the storage credential filename
to `kernelci.toml` for `k8s-host`.

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
Add k8s configuration options to deployment commands.

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
Enable tarball service to upload source tarball via SSH.
This needs an SSH key from `data/ssh` directory.
Add volume mounts for all sub-directories of `kernelci-pipeline/data`.

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
@JenySadadia
Collaborator

In addition to the commits I have added, the following changes are required to run the pipeline:

  • SSH to Minikube and change the ownership of the kernelci-api and kernelci-pipeline repositories
    • chown -R docker:docker kernelci-api
    • chown -R docker:docker kernelci-pipeline
  • Add SSH public key to kernelci-api/docker/ssh/user-data/authorized-keys
  • Add SSH private key to kernelci-pipeline/data/ssh

Note: the folder ownership changes are required to resolve a Permission Denied error when fetching and creating the kernel tarball (kernelci-pipeline/data/src/linux) from the pipeline and uploading it to the API storage (kernelci-api/docker/api/storage/data).

Tested OK with Minikube:

Trigger

$ kubectl logs kernelci-pipeline-trigger-d78cc4d47-jllch
08/09/2023 10:24:34 AM UTC [INFO] Delay: 3s
08/09/2023 10:24:38 AM UTC [INFO] Resubmitting existing revision   kernelci_staging-mainline        5da2f04be0b11a80d44b6a3992d4a428e832da87
08/09/2023 10:24:41 AM UTC [INFO] Resubmitting existing revision   kernelci_staging-next            ec04f3b0bffae33592116668b9fd982f94336ac8
08/09/2023 10:24:42 AM UTC [INFO] Resubmitting existing revision   kernelci_staging-stable          d9ed6d27cafbce7fe05d1b1078883dbe08966ae5
08/09/2023 10:24:44 AM UTC [INFO] Resubmitting existing revision   mainline                         13b9372068660fe4f7023f43081067376582ef3c
08/09/2023 10:24:44 AM UTC [INFO] Not polling.

Tarball

$ kubectl logs kernelci-pipeline-tarball-b977954b8-kqzwd
08/09/2023 10:21:43 AM UTC [INFO] Listening for new trigger events
08/09/2023 10:21:43 AM UTC [INFO] Press Ctrl-C to stop.
08/09/2023 10:23:00 AM UTC [INFO] Updating repo for kernelci_staging-mainline
From https://github.com/kernelci/linux
 * branch                      HEAD       -> FETCH_HEAD
From git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux
 * branch                      HEAD       -> FETCH_HEAD
HEAD is now at 5da2f04be0b1 staging-mainline-20230809.0
08/09/2023 10:23:11 AM UTC [INFO] Repo updated
08/09/2023 10:23:11 AM UTC [INFO] Making tarball linux-kernelci-staging-mainline-staging-mainline-20230809.0.tar.gz
08/09/2023 10:23:11 AM UTC [INFO] set -e
cd /home/kernelci/data/src/linux
git archive --format=tar --prefix=linux-kernelci-staging-mainline-staging-mainline-20230809.0/ HEAD | gzip > ../../output/linux-kernelci-staging-mainline-staging-mainline-20230809.0.tar.gz

08/09/2023 10:23:43 AM UTC [INFO] Tarball created
08/09/2023 10:23:43 AM UTC [INFO] Uploading /home/kernelci/data/output/linux-kernelci-staging-mainline-staging-mainline-20230809.0.tar.gz
08/09/2023 10:23:43 AM UTC [INFO] Connected (version 2.0, client OpenSSH_8.4p1)
08/09/2023 10:23:43 AM UTC [INFO] Authentication (publickey) successful!
08/09/2023 10:23:44 AM UTC [INFO] Upload complete: http://kernelci-api-storage:8002/linux-kernelci-staging-mainline-staging-mainline-20230809.0.tar.gz
08/09/2023 10:23:45 AM UTC [INFO] Updating repo for kernelci_staging-next
From https://github.com/kernelci/linux
 * branch                      HEAD       -> FETCH_HEAD
From git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux
 * branch                      HEAD       -> FETCH_HEAD
Previous HEAD position was 5da2f04be0b1 staging-mainline-20230809.0
HEAD is now at ec04f3b0bffa staging-next-20230808.0
08/09/2023 10:23:59 AM UTC [INFO] Repo updated
08/09/2023 10:24:05 AM UTC [INFO] Making tarball linux-kernelci-staging-next-staging-next-20230808.0.tar.gz
08/09/2023 10:24:05 AM UTC [INFO] set -e
cd /home/kernelci/data/src/linux
git archive --format=tar --prefix=linux-kernelci-staging-next-staging-next-20230808.0/ HEAD | gzip > ../../output/linux-kernelci-staging-next-staging-next-20230808.0.tar.gz

08/09/2023 10:24:35 AM UTC [INFO] Tarball created
08/09/2023 10:24:35 AM UTC [INFO] Uploading /home/kernelci/data/output/linux-kernelci-staging-next-staging-next-20230808.0.tar.gz
08/09/2023 10:24:36 AM UTC [INFO] Upload complete: http://kernelci-api-storage:8002/linux-kernelci-staging-next-staging-next-20230808.0.tar.gz

Runner

$ kubectl logs kernelci-pipeline-runner-664fbf59bd-ztskj
08/09/2023 10:24:52 AM UTC [INFO] Listening for available checkout events
08/09/2023 10:24:52 AM UTC [INFO] Press Ctrl-C to stop.
08/09/2023 10:25:18 AM UTC [INFO] 64d3698e00fe2051db2648d8 shell shell kver 7
Getting kernel source tree...
Source directory: /tmp/kci/linux-kernelci-staging-stable-staging-stable-20230809.0
Running job...
Checking kernel revision
Result: fail
08/09/2023 10:26:03 AM UTC [INFO] 64d369bb00fe2051db2648d9 shell shell kver 8
Getting kernel source tree...
Source directory: /tmp/kci/linux-mainline-master-v6.5-rc5-53-g13b937206866
Running job...
Checking kernel revision
Result: pass

Test Report

$ kubectl logs kernelci-pipeline-test-report-b857dd848-66nwv
08/09/2023 10:25:28 AM UTC [INFO] No SMTP settings provided, not sending email
[STAGING] mainline/master None: 0 runs 0 failures
Summary
=======
Tree:     mainline
Branch:   master
Describe: None
URL:      https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
SHA1:     13b9372068660fe4f7023f43081067376582ef3c
08/09/2023 10:25:28 AM UTC [INFO] No SMTP settings provided, not sending email

Notifier

$ kubectl logs kernelci-pipeline-notifier-c688c89c8-5df6q
08/09/2023 10:21:30 AM UTC [INFO] Listening for events... 
08/09/2023 10:21:30 AM UTC [INFO] Press Ctrl-C to stop.
08/09/2023 10:21:30 AM UTC [INFO] Time                        Commit        Node Id                   State      Result    Name
08/09/2023 10:24:44 AM UTC [INFO] 2023-08-09 10:24:44.082918  13b937206866  64d3696c00fe2051db2648d7  Running    -         checkout
08/09/2023 10:25:18 AM UTC [INFO] 2023-08-09 10:25:18.752969  d9ed6d27cafb  64d3690800fe2051db2648d2  Available  -         checkout
08/09/2023 10:25:18 AM UTC [INFO] 2023-08-09 10:25:18.797514  d9ed6d27cafb  64d3698e00fe2051db2648d8  Running    -         kver
08/09/2023 10:25:31 AM UTC [INFO] 2023-08-09 10:25:31.139251  d9ed6d27cafb  64d3698e00fe2051db2648d8  Done       Fail      kver
08/09/2023 10:26:03 AM UTC [INFO] 2023-08-09 10:26:03.098413  13b937206866  64d3690900fe2051db2648d3  Available  -         checkout
08/09/2023 10:26:03 AM UTC [INFO] 2023-08-09 10:26:03.127539  13b937206866  64d369bb00fe2051db2648d9  Running    -         kver
08/09/2023 10:26:15 AM UTC [INFO] 2023-08-09 10:26:15.600796  13b937206866  64d369bb00fe2051db2648d9  Done       Pass      kver

Timeout

$ kubectl logs kernelci-pipeline-timeout-9f98b674b-kb55j
08/09/2023 10:21:27 AM UTC [INFO] Looking for nodes with lapsed timeout...
08/09/2023 10:21:27 AM UTC [INFO] Press Ctrl-C to stop.
08/09/2023 10:25:28 AM UTC [DEBUG] 64d35b5f807425bd1e0e8d56 TIMEOUT
08/09/2023 10:25:28 AM UTC [DEBUG] 64d35b61807425bd1e0e8d57 TIMEOUT
08/09/2023 10:25:28 AM UTC [DEBUG] 64d35b63807425bd1e0e8d58 TIMEOUT

@JenySadadia
Collaborator

JenySadadia commented Aug 9, 2023

Requesting review from @gctucker as I have authored a few commits.

@gctucker
Contributor

gctucker commented Aug 9, 2023

SSH to Minikube and change ownership of kernelci-api and kernelci-pipeline repositories

  • chown -R docker:docker kernelci-api
  • chown -R docker:docker kernelci-pipeline

Why does it require the docker user rather than kernelci, and could we add a step in the deployment commands to do this automatically?

@JenySadadia
Collaborator

SSH to Minikube and change ownership of kernelci-api and kernelci-pipeline repositories

  • chown -R docker:docker kernelci-api
  • chown -R docker:docker kernelci-pipeline

Why does it require the docker user rather than kernelci, and could we add a step in the deployment commands to do this automatically?

The username is based on the init script; not sure why it is docker. Maybe @sbdtu5498 can answer that.
Yes, these steps should be added to the script as well. Could you please add them, @sbdtu5498?

Add `pipeline-configmap.yaml` to generate a `ConfigMap`.
Add `settings` variable to the pipeline configmap
named `kernelci-pipeline-config`.

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
@JenySadadia
Collaborator

Added a manifest file to generate a ConfigMap for the settings variable.

Add script to automate pipeline deployment
in k8s.
Make sure to set up an admin user from the API deployment
and get a user token. Export the token as an environment
variable named `KCI_API_TOKEN`.
The deployment script will use the env variable to generate
the kubernetes secret.

Signed-off-by: Jeny Sadadia <jeny.sadadia@collabora.com>
@JenySadadia
Collaborator

Added scripts/deploy.sh to automate the pipeline deployment. The script is based on the apply-all.sh script written by @sbdtu5498 for the API deployment.
Before running the pipeline deployment, we need to set up an admin user from the API deployment (scripts/setup_admin_user.sh from the API) and generate a user token.
That token should be exported as an environment variable named KCI_API_TOKEN.
It will be used to generate the pipeline secrets.
I have added a check to the deploy.sh script to verify that the secret has been generated with the token.
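A token check like the one described can be as simple as refusing to run when the variable is empty. The sketch below is an illustration of that guard, not the actual deploy.sh; `${KUBECTL:-kubectl}` is a hypothetical indirection that makes a dry run easy (e.g. `KUBECTL=echo`):

```shell
# Fail early if KCI_API_TOKEN is not exported, then create the secret.
create_pipeline_secret() {
    if [ -z "${KCI_API_TOKEN:-}" ]; then
        echo "error: KCI_API_TOKEN is not set" >&2
        return 1
    fi
    ${KUBECTL:-kubectl} create secret generic kernelci-pipeline-secrets \
        --from-literal=kci-api-token="$KCI_API_TOKEN"
}
```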

@JenySadadia
Collaborator

@sbdtu5498 Could you please update your fork of kernelci-pipeline?
I need to rebase this PR.

@gctucker
Contributor

Replaced with #322

@gctucker gctucker closed this Sep 13, 2023
Development

Successfully merging this pull request may close these issues.

Pipeline deployment tested with MiniKube
4 participants