Merged
4 changes: 4 additions & 0 deletions .github/workflows/BuildImage.yml
@@ -7,6 +7,7 @@ env:
ENDPOINT: "linuxserver/mods" #don't modify
BASEIMAGE: "swag" #replace
MODNAME: "auto-uptime-kuma" #replace
MULTI_ARCH: "false" #set to false if not needed

jobs:
  set-vars:
@@ -19,6 +20,7 @@ jobs:
echo "ENDPOINT=${{ env.ENDPOINT }}" >> $GITHUB_OUTPUT
echo "BASEIMAGE=${{ env.BASEIMAGE }}" >> $GITHUB_OUTPUT
echo "MODNAME=${{ env.MODNAME }}" >> $GITHUB_OUTPUT
echo "MULTI_ARCH=${{ env.MULTI_ARCH }}" >> $GITHUB_OUTPUT
# **** If the mod needs to be versioned, set the versioning logic below. Otherwise leave as is. ****
MOD_VERSION=""
echo "MOD_VERSION=${MOD_VERSION}" >> $GITHUB_OUTPUT
@@ -27,6 +29,7 @@
      ENDPOINT: ${{ steps.outputs.outputs.ENDPOINT }}
      BASEIMAGE: ${{ steps.outputs.outputs.BASEIMAGE }}
      MODNAME: ${{ steps.outputs.outputs.MODNAME }}
      MULTI_ARCH: ${{ steps.outputs.outputs.MULTI_ARCH }}
      MOD_VERSION: ${{ steps.outputs.outputs.MOD_VERSION }}

  build:
@@ -42,4 +45,5 @@
      ENDPOINT: ${{ needs.set-vars.outputs.ENDPOINT }}
      BASEIMAGE: ${{ needs.set-vars.outputs.BASEIMAGE }}
      MODNAME: ${{ needs.set-vars.outputs.MODNAME }}
      MULTI_ARCH: ${{ needs.set-vars.outputs.MULTI_ARCH }}
      MOD_VERSION: ${{ needs.set-vars.outputs.MOD_VERSION }}
2 changes: 2 additions & 0 deletions .gitignore
@@ -41,3 +41,5 @@ $RECYCLE.BIN/
Network Trash Folder
Temporary Items
.apdisk

__pycache__
62 changes: 58 additions & 4 deletions README.md
@@ -4,11 +4,13 @@ This mod gives SWAG the ability to automatically add Uptime Kuma "Monitors" for

## Requirements

Running [Uptime Kuma](https://github.com/louislam/uptime-kuma) instance with `username` and `password` configured. The container should be in the same [user defined bridge network](https://docs.linuxserver.io/general/swag#docker-networking) as SWAG.
- This mod needs the [universal-docker mod](https://github.com/linuxserver/docker-mods/tree/universal-docker) installed and set up either by mapping `docker.sock` or by setting the environment variable `DOCKER_HOST=remoteaddress`.
- Other containers to be auto-detected and reverse proxied should be in the same [user defined bridge network](https://docs.linuxserver.io/general/swag#docker-networking) as SWAG.
- A running [Uptime Kuma](https://github.com/louislam/uptime-kuma) instance (at least version `1.21.3`) with `username` and `password` configured. It should also be in the same network mentioned above.

## Installation

In SWAG docker arguments, set an environment variable `DOCKER_MODS=linuxserver/mods:swag-auto-uptime-kuma`.
In SWAG docker arguments, set an environment variable `DOCKER_MODS=linuxserver/mods:universal-docker|linuxserver/mods:swag-auto-uptime-kuma`.

Add additional environment variables to the SWAG docker image:

@@ -20,6 +22,8 @@ Add additional environment variables to the SWAG docker image:

Unfortunately, Uptime Kuma does not provide API keys for its Socket.io API at the moment, so username and password have to be used.

This mod additionally reads the `URL` environment variable, which is part of the SWAG configuration itself.

Finally, add at minimum the `swag.uptime-kuma.enabled=true` label to each of your containers that you wish to monitor. More label types are listed in the next section.
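
For reference, a minimal `docker-compose` sketch of such a setup might look like the following. This is only an illustration, not part of the mod itself: the image tags, credentials, network name and the `radarr` service are placeholders, and the usual SWAG settings (PUID/PGID, TZ, validation, ports) are omitted for brevity.

```
services:
  swag:
    image: lscr.io/linuxserver/swag
    environment:
      - URL=domain.com
      - DOCKER_MODS=linuxserver/mods:universal-docker|linuxserver/mods:swag-auto-uptime-kuma
      - UPTIME_KUMA_URL=http://uptime-kuma:3001
      - UPTIME_KUMA_USERNAME=admin
      - UPTIME_KUMA_PASSWORD=changeme
    volumes:
      # needed by the universal-docker mod (alternatively set DOCKER_HOST=remoteaddress)
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy

  uptime-kuma:
    image: louislam/uptime-kuma:1
    networks:
      - proxy

  radarr:
    image: lscr.io/linuxserver/radarr
    labels:
      # minimum label required for this mod to create a Monitor
      swag.uptime-kuma.enabled: true
    networks:
      - proxy

networks:
  proxy:
    driver: bridge
```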

## Labels
@@ -33,11 +37,59 @@ This mod is utilizing the wonderful [Uptime Kuma API](https://github.com/lucashe
| `swag.uptime-kuma.monitor.url` | `https://{containerName}.{domainName}` | `https://radarr.domain.com/` <br> `https://pihole.domain.com/admin/` | By default the URL of each container is built based on the actual container name (`{containerName}`) defined in docker and the value of the `URL` environment variable (`{domainName}`) defined in SWAG (as required by SWAG itself). |
| `swag.uptime-kuma.monitor.type` | http | http | While technically possible to override the monitor type, the purpose of this mod is to monitor HTTP endpoints. |
| `swag.uptime-kuma.monitor.description` | Automatically generated by SWAG auto-uptime-kuma | My own description | The description is only for informational purposes and can be freely changed. |
| `swag.uptime-kuma.monitor.parent` | | `"Media Servers"`, `"Tools"`, `"2137"` | A "special" label that can be used to create Monitor Groups. The value can be the name of a group, which will then be dynamically created if it does not exist. A group name has to be unique (different from any of your container names). Alternatively, an ID of the group can be used (it can be found in the URL when editing the group in Uptime Kuma). Please note that this mod can only set the name of a group. In case you want to edit additional parameters of the group, it is best to create it manually and use its ID as the value here. |
| `swag.uptime-kuma.monitor.*` | | `swag.uptime-kuma.monitor.maxretries=5` <br> `swag.uptime-kuma.monitor.accepted_statuscodes=200-299,404,501` | There are many more properties to configure. The fact that anything can be changed does not mean that it should be. Some properties or combinations may not work and should be changed only if you know what you are doing. Please check the [Uptime Kuma API](https://uptime-kuma-api.readthedocs.io/en/latest/api.html#uptime_kuma_api.UptimeKumaApi.add_monitor) for more examples. Properties that are expected to be lists should be separated by a comma `,`. |
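
To make the table above more concrete, here is a short, hypothetical label set for a single container, combining a URL override, a group and a list-type property (the values are examples only):

```
services:
  pihole:
    labels:
      swag.uptime-kuma.enabled: true
      # override the generated URL, e.g. when the web UI lives under a sub-path
      swag.uptime-kuma.monitor.url: https://pihole.domain.com/admin/
      # attach the Monitor to a group; it is created dynamically if it does not exist
      swag.uptime-kuma.monitor.parent: Tools
      # list-type properties are passed as comma-separated values
      swag.uptime-kuma.monitor.accepted_statuscodes: 200-299,404
```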

### Setting default values for all containers

This mod does not have the ability to set global default values for your Monitors. In case you would like to set some label value to be the same for all of the monitored containers, you have a few options:

- In case you are using docker-compose, there are many ways of setting defaults, such as [Extensions](https://docs.docker.com/compose/multiple-compose-files/extends/), [Fragments](https://docs.docker.com/compose/compose-file/10-fragments/) or [Extends](https://docs.docker.com/compose/multiple-compose-files/extends/) (a fragment-based sketch is shown after this list).

Here is how I am using `extends` myself:

`docker-compose.template.yml`
```
services:
  monitored:
    labels:
      swag.uptime-kuma.enabled: true
      swag.uptime-kuma.monitor.interval: 69
      swag.uptime-kuma.monitor.retryInterval: 300
      swag.uptime-kuma.monitor.maxretries: 10
```
`docker-compose.yml`
```
services:
  bitwarden:
    extends:
      file: docker-compose.template.yml
      service: monitored
    # ... some other stuff
    labels:
      swag: enable
      whatever.else: hello
      swag.uptime-kuma.monitor.interval: 123 # label specific to this container
```
If you define it as above, the labels will be merged and/or overridden and result in:
```
...
    labels:
      swag: enable
      whatever.else: hello
      swag.uptime-kuma.enabled: true
      swag.uptime-kuma.monitor.interval: 123 # overridden
      swag.uptime-kuma.monitor.retryInterval: 300
      swag.uptime-kuma.monitor.maxretries: 10
```

- In case you are using the docker CLI, you could either define your labels with a common variable or use a common label file for the monitored containers ([more info here](https://docs.docker.com/reference/cli/docker/container/run/#label)).

- In case you are using any other way to deploy your containers, please look into the documentation of your tool for any templating features.
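
As referenced in the docker-compose bullet above, a fragment/anchor-based sketch could look like this. The `x-uptime-kuma-labels` extension field and the label values are hypothetical; adjust them to your needs:

```
x-uptime-kuma-labels: &uptime-kuma-labels
  swag.uptime-kuma.enabled: true
  swag.uptime-kuma.monitor.retryInterval: 300
  swag.uptime-kuma.monitor.maxretries: 10

services:
  radarr:
    labels:
      # merge the shared labels, then override per container as needed
      <<: *uptime-kuma-labels
      swag.uptime-kuma.monitor.interval: 123 # specific to this container
```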

## Notifications

While ultimately this mod makes it easier to setup notifications for your docker containers it does not configure more than Uptime Kuma Monitors. In order to receive Notifications you should configure them manually and then either enable one type to be default for all your Monitors or specify the Notifications by using the `swag.uptime-kuma.monitor.notificationIDList` label.
While ultimately this mod makes it easier to set up notifications for your docker containers, it does not configure anything more than Uptime Kuma Monitors. In order to receive Notifications, you should configure them manually and then either enable one type to be the default for all your Monitors or specify the Notifications by using the `swag.uptime-kuma.monitor.notificationIDList` label. Please note that if you define one or more notifications in Uptime Kuma as default (enabled by default for new monitors), then even if you specify a custom `notificationIDList` via labels, the default notifications will always be appended to the list.
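
As a brief, hypothetical illustration (the IDs are placeholders for notifications you have already created in Uptime Kuma; list values are comma-separated as noted in the labels table):

```
services:
  radarr:
    labels:
      swag.uptime-kuma.enabled: true
      # comma-separated IDs of notifications configured in Uptime Kuma
      swag.uptime-kuma.monitor.notificationIDList: 1,2
```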

## Known Limitations

@@ -47,6 +99,8 @@ While ultimately this mod makes it easier to setup notifications for your docker

- Due to limitations of the Uptime Kuma API, whenever you make changes to a container or labels that already have a Monitor set up, the **Update** action will be performed by running **Delete** followed by **Add**. This means that every change results in a new Monitor for the same container, which loses the heartbeat history and any manual changes, and gets a new 'id' number.

## Purge data
## Command Line mode

For the purpose of development, or simply if you want to purge all the Monitors and files created by this mod, you can run the following command via `ssh`: `docker exec swag python3 /app/auto-uptime-kuma.py -purge` (where `swag` is the container name of your SWAG instance).

It is also possible to fetch and print the raw API data of a Monitor from the Uptime Kuma API via `ssh`: `docker exec swag python3 /app/auto-uptime-kuma.py -monitor container_name` (where `container_name` is the name of the container that the Monitor belongs to).
195 changes: 131 additions & 64 deletions root/app/auto-uptime-kuma.py
@@ -1,68 +1,135 @@
from swagDocker import SwagDocker
from swagUptimeKuma import SwagUptimeKuma
import sys
import argparse
import os


def parseCommandLine():
    """
    Different application behavior if executed from CLI
    """
    parser = argparse.ArgumentParser()
    parser.add_argument('-purge', action='store_true')
    args = parser.parse_args()

    if (args.purge == True):
        swagUptimeKuma.purgeData()
        swagUptimeKuma.disconnect()
        sys.exit(0)


def addOrUpdateMonitors(domainName, swagContainers):
    for swagContainer in swagContainers:
        containerConfig = swagDocker.parseContainerLabels(
            swagContainer.labels, ".monitor.")
        containerName = swagContainer.name
        monitorData = swagUptimeKuma.parseMonitorData(
            containerName, domainName, containerConfig)

        if (not swagUptimeKuma.monitorExists(containerName)):
            swagUptimeKuma.addMonitor(containerName, domainName, monitorData)
from auto_uptime_kuma.config_service import ConfigService
from auto_uptime_kuma.uptime_kuma_service import UptimeKumaService
from auto_uptime_kuma.docker_service import DockerService
from auto_uptime_kuma.log import Log
import sys, os


def add_or_update_monitors(
    docker_service: DockerService,
    config_service: ConfigService,
    uptime_kuma_service: UptimeKumaService,
):
    for container in docker_service.get_swag_containers():
        container_config = docker_service.parse_container_labels(
            container.labels, ".monitor."
        )
        container_name = container.name
        monitor_data = uptime_kuma_service.build_monitor_data(
            container_name, container_config
        )

        if not uptime_kuma_service.monitor_exists(container_name):
            uptime_kuma_service.create_monitor(container_name, container_config)
        else:
            swagUptimeKuma.updateMonitor(
                containerName, domainName, monitorData)


def getMonitorsToBeRemoved(swagContainers, apiMonitors):
    # Monitors to be removed are those that no longer have an existing container
    # Monitor <-> Container link is done by comparing the container name with the monitor swag tag value
    existingMonitorNames = [swagUptimeKuma.getMonitorSwagTagValue(
        monitor) for monitor in apiMonitors]
    existingContainerNames = [container.name for container in swagContainers]

    monitorsToBeRemoved = [
        containerName for containerName in existingMonitorNames if containerName not in existingContainerNames]
    return monitorsToBeRemoved
            if not config_service.config_exists(container_name):
                Log.info(
                    f"Monitor '{monitor_data['name']}' for container '{container_name}'"
                    " exists but no preset config found, generating from scratch"
                )
                config_service.create_config(container_name, monitor_data)
            uptime_kuma_service.edit_monitor(container_name, monitor_data)


def delete_removed_monitors(
    docker_service: DockerService, uptime_kuma_service: UptimeKumaService
):
    Log.info("Searching for Monitors that should be deleted")
    # Monitors to be deleted are those that no longer have an existing container
    # Monitor <-> Container link is done by comparing the container name
    # with the monitor swag tag value
    existing_monitor_names = [
        uptime_kuma_service.get_monitor_swag_tag_value(monitor)
        for monitor in uptime_kuma_service.monitors
    ]
    existing_container_names = [
        container.name for container in docker_service.get_swag_containers()
    ]

    monitors_to_be_deleted = [
        containerName
        for containerName in existing_monitor_names
        if containerName not in existing_container_names
    ]

    monitors_to_be_deleted = list(filter(None, monitors_to_be_deleted))

    uptime_kuma_service.delete_monitors(monitors_to_be_deleted)


def delete_removed_groups(uptime_kuma_service: UptimeKumaService):
    Log.info("Searching for Groups that should be deleted")
    # Groups to be deleted are those that no longer have any child Monitors
    existing_monitor_group_ids = [
        monitor["parent"] for monitor in uptime_kuma_service.monitors
    ]

    # remove empty values
    existing_monitor_group_ids = list(filter(None, existing_monitor_group_ids))
    # get unique values
    existing_monitor_group_ids = list(set(existing_monitor_group_ids))

    groups_to_be_deleted = []

    for group in uptime_kuma_service.groups:
        if group["id"] not in existing_monitor_group_ids:
            groups_to_be_deleted.append(group["name"])

    uptime_kuma_service.delete_groups(groups_to_be_deleted)


def execute_cli_mode(
    config_service: ConfigService, uptime_kuma_service: UptimeKumaService
):
    Log.info("Mod was executed from CLI. Running manual tasks.")
    args = config_service.get_cli_args()
    if args.purge:
        uptime_kuma_service.purge_data()

        config_service.purge_data()
    if args.monitor:
        Log.info(f"Requesting data for Monitor '{args.monitor}'")
        print(uptime_kuma_service.get_monitor(args.monitor))

    uptime_kuma_service.disconnect()


if __name__ == "__main__":
    url = os.environ['UPTIME_KUMA_URL']
    username = os.environ['UPTIME_KUMA_USERNAME']
    password = os.environ['UPTIME_KUMA_PASSWORD']
    domainName = os.environ['URL']

    swagDocker = SwagDocker("swag.uptime-kuma")
    swagUptimeKuma = SwagUptimeKuma(url, username, password)

    parseCommandLine()

    swagContainers = swagDocker.getSwagContainers()

    addOrUpdateMonitors(domainName, swagContainers)

    monitorsToBeRemoved = getMonitorsToBeRemoved(
        swagContainers, swagUptimeKuma.apiMonitors)
    swagUptimeKuma.deleteMonitors(monitorsToBeRemoved)

    swagUptimeKuma.disconnect()
    Log.init("mod-auto-uptime-kuma")

    url = os.environ["UPTIME_KUMA_URL"]
    username = os.environ["UPTIME_KUMA_USERNAME"]
    password = os.environ["UPTIME_KUMA_PASSWORD"]
    domainName = os.environ["URL"]

    configService = ConfigService(domainName)
    uptimeKumaService = UptimeKumaService(configService)
    dockerService = DockerService("swag.uptime-kuma")
    is_connected = uptimeKumaService.connect(url, username, password)

    if not is_connected:
        sys.exit()

    uptimeKumaService.load_data()
    if uptimeKumaService.default_notifications:
        notification_names = [
            f"{notification['id']}:{notification['name']}"
            for notification in uptimeKumaService.default_notifications
        ]
        Log.info(
            f"The following notifications are enabled by default: {notification_names}"
        )

    if configService.is_cli_mode():
        execute_cli_mode(configService, uptimeKumaService)
        sys.exit()

    add_or_update_monitors(dockerService, configService, uptimeKumaService)

    # reload data after the sync above
    uptimeKumaService.load_data()
    # cleanup
    delete_removed_monitors(dockerService, uptimeKumaService)
    delete_removed_groups(uptimeKumaService)

    uptimeKumaService.disconnect()