Merge pull request #55 from bbkz/dev
Release 0.2.1

* fixes #54 Database migration fails
* fix celery redis password
* update development setup
bbkz authored Sep 17, 2024
2 parents 1cb7ac0 + 796ba85 commit e06c70a
Showing 5 changed files with 99 additions and 143 deletions.
114 changes: 24 additions & 90 deletions DEVEL.md
@@ -4,7 +4,9 @@ The following is done on a Fedora Desktop to run a minikube rootless setup. For

## Prepare the system

First install the required network component `slirp4netns`
The network namespace of the Node components has to have a non-loopback interface, which can be configured with, for example, slirp4netns, VPNKit, or lxc-user-nic(1).

Let's install the network component `slirp4netns`

```bash
sudo dnf install slirp4netns
@@ -27,6 +29,8 @@ So this was the only part where root privileges are needed.

Now install and set up minikube with the calico network driver. This assumes you have `~/bin` in your `$PATH` environment variable.

Check your `~/.kube` folder for an old minikube config and (re)move it.
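A hedged sketch of that cleanup step, assuming the default config path `~/.kube/config` (adjust if yours differs):

```bash
# Back up an old kubeconfig before the fresh minikube setup.
KUBECONFIG_FILE="${HOME}/.kube/config"
if [ -f "$KUBECONFIG_FILE" ]; then
    mv "$KUBECONFIG_FILE" "${KUBECONFIG_FILE}.bak"
    echo "moved old config to ${KUBECONFIG_FILE}.bak"
fi
```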

```bash
wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 -O ~/bin/minikube
chmod 755 ~/bin/minikube
@@ -38,132 +42,62 @@ minikube config set container-runtime containerd
minikube start --cni calico
```

Now you have a running cluster on your machine.

Minikube comes with an integrated `kubectl` command, so you can run `kubectl` commands without a downloaded `kubectl` binary:

```bash
minikube kubectl -- get pods -A
```

But to use `helm`, and for our convenience, we install `kubectl` alongside `minikube`:
Download `kubectl` into `~/bin`:

```bash
wget "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" -O ~/bin/kubectl
```

Finally we install `helm` into `~/bin`:
Download `helm` into `~/bin`:

```bash
export HELM_INSTALL_DIR=~/bin; export USE_SUDO=false; curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

## Setup wger
In your `~/.kube/config`, set the IP of your host instead of `127.0.0.1` to make helm work (may not be necessary).
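Purely as an illustration of that edit (the sample server line, port, and `HOST_IP` are assumptions; check your real `~/.kube/config` for its actual server entry before applying the same `sed` to it):

```bash
# Demo on a throwaway copy: swap the loopback address for the host IP.
printf 'server: https://127.0.0.1:8443\n' > /tmp/kubeconfig.sample
HOST_IP="192.168.1.50"   # assumption: substitute your machine's IP
sed -i "s|https://127\.0\.0\.1:|https://${HOST_IP}:|" /tmp/kubeconfig.sample
cat /tmp/kubeconfig.sample
```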

You can install wger without any changes to the `values.yaml`; this will run wger in development mode.
## Setup wger

First clone the `wger-helm-charts` repository and optionally create `your_values.yaml` file:
First clone the `wger-helm-charts` repository and optionally create your own values file based on `example/devel.yaml`:

```bash
git clone https://github.com/wger-project/helm-charts.git
cd helm-charts
vi your_values.yaml
```

The following is an example of `your_values.yaml`:

```yaml
app:
environment:
# x-real-ip - remote ip - x-forward-for -
- name: GUNICORN_CMD_ARGS
value: "--timeout 240 --workers=2 --access-logformat '%({x-real-ip}i)s %(l)s %(h)s %(l)s %({x-forwarded-for}i)s %(l)s %(t)s \"%(r)s\" %(s)s %(b)s \"%(f)s\" \"%(a)s\"' --access-logfile - --error-logfile -"
nginx:
enabled: true
axes:
enabled: true
celery:
enabled: true
flower:
enabled: true
```

Deploy the helm chart from the cloned git repo. Omit `-f ../../your_values.yaml` when you don't have the file:
Deploy the helm chart from your local files:

```bash
cd helm-charts/charts/wger
cd charts/wger
helm dependency update
helm upgrade --install wger . -n wger --create-namespace -f ../../your_values.yaml
helm upgrade --install wger . -n wger --create-namespace -f ../../example/devel.yaml
```

To access the webinterface, you can port forward `8000` from the wger app to a port on your machine; be aware you need a high port number, which doesn't require root privileges.
To access the webinterface, you can port forward `8080` from the wger app to a port on your machine; be aware you need a high port number, which doesn't require root privileges.

Also note you need to connect to the container port directly, not the service port.

```bash
export POD=$(kubectl get pods -n wger -l "app.kubernetes.io/name=wger-app" -o jsonpath="{.items[0].metadata.name}")
echo "wger runs on: http://localhost:10001"; kubectl -n wger port-forward ${POD} 10001:8000
echo "wger runs on: http://localhost:10001"; kubectl -n wger port-forward ${POD} 10001:8080
```

Go to http://localhost:10001 and log in as `admin` / `adminadmin` ;-)

## Advanced Setup

When you activate `nginx`, persistent storage will be enabled automatically as a requirement. You can see the volumes (pv) and their claims (pvc):

```bash
kubectl get pv
kubectl get pvc -n wger
```

**@todo sorry, but mounting with rootless podman and minikube doesn't work yet**

There is a special claim `code` which will not be created automatically but will override the wger Django code; this can be used to mount your local development code into the setup.

First check out the code; in this example I use `$HOME/test/wger`.

As minikube is running in a VM, we first need to mount the local files into the minikube VM to make them available to the Kubernetes cluster. You can log in to the minikube VM with `minikube ssh`.

Now mount the folder into the minikube system; I use `/wger-code` here.

```bash
minikube stop
minikube start --cni calico --mount-string="$HOME/test/wger:/wger-code"
# or
minikube mount $HOME/test/wger:/wger-code
```

Add the following to `your_values.yaml`:

```yaml
app:
persistence:
existingClaim:
code: wger-code
```

Manually create a volume and claim for your local wger code. For this, add a new file `wger-code-volume.yaml` and apply it to the cluster:

```yaml
TBD
```

```bash
kubectl create ns wger
kubectl apply -n wger -f ../../wger-code-volume.yaml
```
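The manifest itself is still marked TBD above; purely as an illustrative sketch (capacity, `storageClassName`, and access modes are assumptions — only the claim name `wger-code` and the `/wger-code` mount path come from this guide), a hostPath-backed pair could look like:

```yaml
# Hypothetical wger-code-volume.yaml - a sketch, not the official manifest
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wger-code
spec:
  capacity:
    storage: 1Gi            # assumption: size is arbitrary for hostPath
  accessModes:
    - ReadWriteOnce
  storageClassName: manual  # assumption: keep it out of dynamic provisioning
  hostPath:
    path: /wger-code        # the folder mounted into the minikube VM above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wger-code           # must match app.persistence.existingClaim.code
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```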
## Uninstall wger

Activate the new values with the `wger-code` volume in the containers:
To uninstall:

```bash
helm upgrade --install wger . -n wger --create-namespace -f ../../your_values.yaml
helm -n wger uninstall wger
kubectl delete ns wger
```

## Uninstall wger
## Stop and remove minikube setup

To uninstall:
You can delete the whole cluster, including your config settings:

```bash
helm -n wger uninstall wger
kubectl -n wger delete -f ../../wger-code-volume.yaml
kubectl delete ns wger
minikube delete
```

52 changes: 1 addition & 51 deletions README.md
@@ -443,57 +443,7 @@ Generally persistent volumes need to be configured depending on your setup.

## Developing locally

The following are basic instructions; for a more in-depth manual please have a look at [DEVEL.md](DEVEL.md). It also covers mounting the wger Django code into the container.

In order to develop locally, you will need [minikube](https://minikube.sigs.k8s.io/docs/) installed.
It sets up a local Kubernetes cluster that you can use for testing the Helm chart.

If this is your first time developing a Helm chart, you may want to try the following:

```bash
# start minikube
$ minikube start

# deploy the helm chart from the cloned git repo
$ cd charts/wger
$ helm dependency update
$ helm upgrade --install wger . -n wger --create-namespace -f ../../your_values.yaml

# observe that the pods start correctly
$ watch kubectl -n wger get pods
NAME READY STATUS RESTARTS AGE
wger-app-86c65dcbb9-9ftr6 5/5 Running 0 12h
wger-postgres-0 1/1 Running 0 39h
wger-redis-65b686bf87-cphzm 1/1 Running 0 39h

# read the logs from the init container (postgres & redis check)
$ kubectl -n wger logs -f -l app.kubernetes.io/name=wger-app -c init-container

# read the logs from the wger django app
$ kubectl -n wger logs -f -l app.kubernetes.io/name=wger-app -c wger
PostgreSQL started :)
*** Database does not exist, creating one now
Operations to perform:
Apply all migrations: auth, authtoken, config, contenttypes, core, easy_thumbnails, exercises, gallery, gym, mailer, manager, measurements, nutrition, sessions, sites, weight
Running migrations:
Applying contenttypes.0001_initial... OK
.....

# if you need to debug something in the pods, you can start a shell
$ export POD=$(kubectl get pods -n wger -l "app.kubernetes.io/name=wger-app" -o jsonpath="{.items[0].metadata.name}")
$ kubectl -n wger exec -it $POD -c wger -- bash
wger@wger-app-86c65dcbb9-9ftr6:~/src$

# start a port forwarding to access the webinterface
$ echo "wger runs on: http://localhost:10001"
$ kubectl -n wger port-forward ${POD} 10001:8000

# when you are finished with the testing, stop minikube
$ minikube stop

# if you'd like to start clean, you can delete the whole cluster
$ minikube delete
```
Please have a look at [DEVEL.md](DEVEL.md).


## Contact
2 changes: 1 addition & 1 deletion charts/wger/Chart.yaml
@@ -1,6 +1,6 @@
---
apiVersion: v2
version: 0.2.1-rc.1
version: 0.2.1
appVersion: latest
name: wger
description: A Helm chart for Wger installation on Kubernetes
2 changes: 1 addition & 1 deletion charts/wger/templates/configmap.yaml
@@ -79,4 +79,4 @@ metadata:
name: wger-pg-init
data:
40-grantSuperuser.sql: |
ALTER USER {{ .Values.postgres.userDatabase.name }} WITH SUPERUSER;
ALTER USER {{ .Values.postgres.userDatabase.user.value }} WITH SUPERUSER;
72 changes: 72 additions & 0 deletions example/devel.yaml
@@ -0,0 +1,72 @@
---
#
# Development Setup
# see -> https://github.com/wger-project/helm-charts/blob/master/DEVEL.md
#
# Have a look at the packaged values.yaml for defaults and more settings:
# * https://github.com/wger-project/helm-charts/blob/master/charts/wger/values.yaml
#
# App settings
app:
global:
replicas: 1
# image:
# PullPolicy: IfNotPresent
nginx:
enabled: true
axes:
enabled: true
failureLimit: 10
# in minutes
cooloffTime: 30
# number of reverse proxies involved
ipwareProxyCount: 1
# order of magnitude from last proxy for the real ip
ipwareMetaPrecedenceOrder: "X_FORWARDED_FOR,REMOTE_ADDR"
persistence:
enabled: true
environment:
- name: CSRF_TRUSTED_ORIGINS
value: "http://localhost:10001,http://127.0.0.1:10001"
- name: GUNICORN_CMD_ARGS
value: "--timeout 240 --workers 2 --worker-class gthread --threads 3 --forwarded-allow-ips * --proxy-protocol True --access-logformat='%(h)s %(l)s %({client-ip}i)s %(l)s %({x-real-ip}i)s %(l)s %({x-forwarded-for}i)s %(l)s %(t)s \"%(r)s\" %(s)s %(b)s \"%(f)s\" \"%(a)s\"' --access-logfile - --error-logfile -"

celery:
enabled: true

ingress:
enabled: false

postgres:
enabled: true
settings:
superuser:
value: postgres
superuserPassword:
value: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
userDatabase:
name:
value: wger
user:
value: wger
password:
value: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
service:
port: 5432

redis:
enabled: true
auth:
enabled: true
# Additional environment variables (Redis server and Sentinel)
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis
key: redis-password
# Arguments for the container entrypoint process (Redis server)
args:
- "--requirepass $(REDIS_PASSWORD)"
service:
serverPort: 6379
