Uptime Kuma v2 support #99

Open · BigBoot opened this issue Nov 14, 2024 · 7 comments

@BigBoot
Owner

BigBoot commented Nov 14, 2024

This issue is meant to keep track of support for the upcoming version 2.0 of Uptime Kuma, currently available as a beta.

Since the v1 and v2 APIs are incompatible, there is a separate build of AutoKuma for v2. It is published as Docker tags with the prefix `uptime-kuma-v2-`; e.g. to get the latest dev version with v2 support, use

docker pull ghcr.io/bigboot/autokuma:uptime-kuma-v2-master

to pin a specific commit:

docker pull ghcr.io/bigboot/autokuma:uptime-kuma-v2-sha-23287bc

For source builds, v2 support can be enabled via a feature flag:

cargo install --git https://github.com/BigBoot/AutoKuma.git --features uptime-kuma-v2 kuma-cli
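
For reference, here's a minimal Docker Compose sketch (mine, not part of the original instructions) for running the v2-enabled image; the service name, credentials, and Docker socket mount are assumptions to adapt to your own setup:

```yaml
# Sketch only: a minimal AutoKuma service pinned to the v2-enabled tag.
services:
  autokuma:
    image: ghcr.io/bigboot/autokuma:uptime-kuma-v2-master
    restart: unless-stopped
    environment:
      AUTOKUMA__KUMA__URL: http://uptime-kuma:3001
      AUTOKUMA__KUMA__USERNAME: ${KUMA_USER}
      AUTOKUMA__KUMA__PASSWORD: ${KUMA_PASSWORD}
    volumes:
      # AutoKuma reads container labels from Docker; a read-only socket mount
      # is one option, a socket proxy (as used later in this thread) is another.
      - /var/run/docker.sock:/var/run/docker.sock:ro
```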
| Feature | Status | Notes |
| --- | --- | --- |
| Monitor | | |
| Docker Host | | |
| Notification | ⚠️ | Same issues as v1 |
| Status Page | | Untested |
| Maintenance | | Untested |
| Tags | | There currently seem to be some problems with assigning tags/values to monitors in v2; this also happens when using the UI |
BigBoot pinned this issue Nov 14, 2024
BigBoot changed the title from UptimeKuma v2 support to Uptime Kuma v2 support Nov 15, 2024
@barcar

barcar commented Nov 15, 2024

This works perfectly. Thank you for implementing it so quickly.

@undaunt

undaunt commented Nov 15, 2024

One thing I noticed with my current config: when I did a fresh `compose up -d` on Uptime Kuma itself, AutoKuma created a second copy of the monitors I already had instead of detecting them. Both sets had been created by AutoKuma. Is this related to the tagging issue?


I've also noted a lot of this:


WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided

Compose:

networks:
  web-proxy:
    name: ${PROXY_NETWORK}
    external: true
  app-bridge:
    name: ${APP_NETWORK}
    external: true
  socket-proxy:
    name: ${SOCKET_NETWORK}
    external: true
  gluetun-bridge:
    name: ${GLUETUN_NETWORK}
    external: true

services:
  uptime-kuma:
    image: louislam/uptime-kuma:beta-slim
    container_name: uptime-kuma
    restart: unless-stopped
    profiles: ["all","kuma"]
    networks:
      - ${PROXY_NETWORK}
      - ${APP_NETWORK}
      - ${SOCKET_NETWORK}
      - ${GLUETUN_NETWORK}
    depends_on:
      - uptime-kuma-db
    deploy:
      resources:
        limits:
          memory: 512M
    volumes:
      - ${APPDATA_DIR}/uptime-kuma/data:/app/data
    environment:
      PUID: ${PUID}
      PGID: ${PGID}
    labels:
      logging.promtail: true
      traefik.enable: true
      traefik.external.cname: true
      traefik.docker.network: ${PROXY_NETWORK}
      traefik.http.routers.uptime-kuma.entrypoints: https
      traefik.http.routers.uptime-kuma.rule: Host(`${SUBDOMAIN_UPTIME_KUMA}.${DOMAINNAME}`)
      traefik.http.routers.uptime-kuma.middlewares: chain-private@file
      #kuma.__app: '{ "name": "Uptime-Kuma", "type": "web-group", "url": "https://${SUBDOMAIN_UPTIME_KUMA}.${DOMAINNAME}", "internal_port": "3001" }'

  uptime-kuma-db:
    image: lscr.io/linuxserver/mariadb:latest
    container_name: uptime-kuma-db
    restart: always
    profiles: ["all","kuma"]
    networks:
      - ${APP_NETWORK}
    volumes:
      - ${APPDATA_DIR}/uptime-kuma/db:/config
    environment:
      TZ: ${TZ}
      PUID: ${PUID}
      PGID: ${PGID}
      MYSQL_ROOT_PASSWORD: ${UPTIME_KUMA_MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${UPTIME_KUMA_MYSQL_DB}
      MYSQL_USER: ${UPTIME_KUMA_MYSQL_USER}
      MYSQL_PASSWORD: ${UPTIME_KUMA_MYSQL_PASSWORD}
    labels:
      logging.promtail: true
      #kuma.__app: '{ "name": "Uptime-Kuma MySQL", "type": "mysql", "service": "Uptime-Kuma", "db_url": "mysql://${UPTIME_KUMA_MYSQL_USER}:${UPTIME_KUMA_MYSQL_PASSWORD}@uptime-kuma-db:3306" }'

  autokuma:
    image: ghcr.io/bigboot/autokuma:master
    container_name: autokuma
    restart: unless-stopped
    profiles: ["all","kuma"]
    networks:
      - ${SOCKET_NETWORK}
    depends_on:
      - uptime-kuma
    environment:
      AUTOKUMA__KUMA__URL: http://uptime-kuma:3001
      AUTOKUMA__KUMA__USERNAME: ${KUMA_USER}
      AUTOKUMA__KUMA__PASSWORD: ${KUMA_PASSWORD}
      AUTOKUMA__TAG_NAME: AutoKuma
      AUTOKUMA__DEFAULT_SETTINGS: |- 
        *.notification_id_list: { "1": true }
      AUTOKUMA__ON_DELETE: delete
      AUTOKUMA__SNIPPETS__APP: |-
        {# Assign the first snippet arg for readability #}
        {% set args = args[0] %}

        {# Generate IDs with slugify #}
        {% set id = args.name | slugify %}
        {% if args.service %}
          {% set service_id = args.service | slugify %}
        {% endif %}

        {# Define the top level services/app naming conventions #}
        {% if args.type == "web" %}
          {{ id }}-group.group.name: {{ args.name }}
        {% elif args.type == "web-group" %}
          {{ id }}-group.group.name: {{ args.name }}
          {{ id }}-svc-group.group.parent_name: {{ id }}-group
          {{ id }}-svc-group.group.name: {{ args.name }} App
        {% elif service_id is defined and args.type in ["redis", "mysql", "postgres", "web-support"] %}
          {{ id }}-svc-group.group.parent_name: {{ service_id }}-group
          {{ id }}-svc-group.group.name: {{ args.name }}{% if args.type == "web-support" %} App{% endif %}
        {% endif %}

        {# Web containers get http & https checks #}
        {% if args.type in ["web-group", "web", "web-support"] %}
          {% if args.type == "web" %}
            {% set parent = id ~ "-group" %}
          {% else %}
            {% set parent = id ~ "-svc-group" %}
          {% endif %}
          {{ id }}-https.http.parent_name: {{ parent }}
          {{ id }}-https.http.name: {{ args.name }} (Web)
          {{ id }}-https.http.url: {{ args.url }}
          {{ id }}-http.http.parent_name: {{ parent }}
          {{ id }}-http.http.name: {{ args.name }} (Internal)
          {% if args.network and args.network == "host" %}
            {{ id }}-http.http.url: http://10.0.20.15:{{ args.internal_port }}
          {% elif args.network and args.network == "vpn" %}
            {{ id }}-http.http.url: http://{{ container_name }}-vpn:{{ args.internal_port }}
          {% else %}
            {{ id }}-http.http.url: http://{{ container_name }}:{{ args.internal_port }}
          {% endif %}
          {# Check for authentication and set basic auth details #}
          {% if args.auth and args.auth == "basic" %}
            {{ id }}-http.http.authMethod: {{ args.auth }}
            {{ id }}-http.http.basic_auth_user: {{ args.auth_user }}
            {{ id }}-http.http.basic_auth_pass: {{ args.auth_pass }}
            {{ id }}-https.http.authMethod: {{ args.auth }}
            {{ id }}-https.http.basic_auth_user: {{ args.auth_user }}
            {{ id }}-https.http.basic_auth_pass: {{ args.auth_pass }}
          {% endif %}
        {% endif %}

        {# Database containers get db specific checks #}
        {% if args.type in ["redis", "mysql", "postgres"] %}
          {{ id }}-db.{{ args.type }}.name: {{ args.name }} (DB)
          {{ id }}-db.{{ args.type }}.parent_name: {{ id }}-svc-group
          {{ id }}-db.{{ args.type }}.database_connection_string: {{ args.db_url }}
        {% endif %}

        {# All containers get a container check #}
        {% if args.type == "web" %}
          {% set parent_name = id ~ "-group" %}
          {{ id }}-container.docker.parent_name: {{ parent_name }}
        {% elif args.type not in ["solo", "support"] %}
          {% set parent_name = id ~ "-svc-group" %}
          {{ id }}-container.docker.parent_name: {{ parent_name }}
        {% endif %}
        {% if args.type == "support" %}
          {{ id }}-container.docker.parent_name: {{ service_id }}-group
        {% endif %}
        {% if args.type in ["solo", "support"] %}
          {{ id }}-container.docker.name: {{ args.name }}
        {% else %}
          {{ id }}-container.docker.name: {{ args.name }} (Container)
        {% endif %}
        {{ id }}-container.docker.docker_container: {{ container_name }}
        {{ id }}-container.docker.docker_host: 1
      DOCKER_HOST: http://socket-proxy:2375
    labels:
      logging.promtail: true
      #kuma.__app: '{ "name": "AutoKuma", "type": "support", "service": "Uptime-Kuma" }'

@BigBoot
Owner Author

BigBoot commented Nov 16, 2024

  1. I haven't tested an upgrade yet; maybe Uptime Kuma recreates its database tables during the 2.0 migration? That would result in a change of IDs and therefore AutoKuma losing its associations.
  2. Yep, it seems like the rate limiting got hardened in 2.0. I may try going back to a long-lived connection instead of reconnecting for every sync; I initially switched to this approach because the SocketIO library I use wasn't too reliable at reconnecting, but that seems to have improved in the meantime. As a short-term fix, increasing the sync interval should work, something like AUTOKUMA__SYNC_INTERVAL="30.0" (see the sketch below).
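
A minimal sketch (mine, not from the comment above) of where that workaround would go, reusing the autokuma service from the compose file posted earlier; the interval value appears to be in seconds and is something to tune for your setup:

```yaml
# Sketch only: add the suggested sync interval to the existing autokuma
# service's environment. Image tag and credentials match the thread; the
# 30-second value is the short-term workaround suggested above.
services:
  autokuma:
    image: ghcr.io/bigboot/autokuma:uptime-kuma-v2-master
    environment:
      AUTOKUMA__KUMA__URL: http://uptime-kuma:3001
      AUTOKUMA__KUMA__USERNAME: ${KUMA_USER}
      AUTOKUMA__KUMA__PASSWORD: ${KUMA_PASSWORD}
      AUTOKUMA__SYNC_INTERVAL: "30.0"
```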

@undaunt

undaunt commented Nov 18, 2024

Thanks, I just bumped up the sync interval.

Re: the first point, this is a net-new test setup with only those containers; I just brought the stacks up, down, then up again. There isn't an easy migration path from v1 to v2 (if going from SQLite to MariaDB) that keeps current historical data. They're basically saying that for bandwidth reasons they won't officially support it, but others have posted info on how to create a MySQL database and populate it with converted SQLite data via an export.

@bnctth

bnctth commented Dec 22, 2024

Hi!
The latest update seems to have broken AutoKuma for Uptime Kuma v2. I recreated my Uptime Kuma server, but AutoKuma can't create any monitors. It throws the following logs repeatedly:

autokuma-1  | WARN [kuma_client::util] [backups-nasty.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-other_devices.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-vivo.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-self.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-asus_viki.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-db.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [autokuma::entity] Cannot create monitor uptime-kuma because referenced monitor with services is not found
autokuma-1  | WARN [kuma_client::util] Error while parsing uptime-kuma-auto-kuma: data did not match any variant of untagged enum EntityWrapper!
autokuma-1  | WARN [autokuma::sync] Encountered error during sync: Error while trying to parse labels: data did not match any variant of untagged enum EntityWrapper

The referenced groups are supposed to be created by AutoKuma too. For the enum problem I haven't had time to actually debug the code, but the latest Uptime Kuma update did have a PR merged for monitor tags: louislam/uptime-kuma#5298.
I didn't change any of the already-working monitor definitions, and I'm running the images louislam/uptime-kuma:beta (sha256:752118f891ea991180124e3fc7edbc1865a58cb03e15e612ecbc68065b1d4b9f) and ghcr.io/bigboot/autokuma:uptime-kuma-v2-master (sha256:74bccf145554cce2acf63676d4b98fafdf1e710e60150733fcac8b5b1c364301).

Thanks for the help and all the good work you do!

@BigBoot
Owner Author

BigBoot commented Dec 22, 2024

Hi @bnctth, I don't think there's any breaking change; this looks more like a problem with your labels. Let's break it down.

autokuma-1  | WARN [kuma_client::util] [backups-nasty.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-other_devices.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-vivo.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-self.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-asus_viki.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-db.toml] No monitor named backups-nasty could be found

These are "expected" when creating nested setups, autokuma knows about these dependencies but they have not been created yet, as a result they are skipped till later.

autokuma-1  | WARN [autokuma::entity] Cannot create monitor uptime-kuma because referenced monitor with services is not found

This one says that you have a monitor referencing a parent monitor with the AutoKuma ID "services", but no such monitor definition seems to exist.

autokuma-1  | WARN [kuma_client::util] Error while parsing uptime-kuma-auto-kuma: data did not match any variant of untagged enum EntityWrapper!
autokuma-1  | WARN [autokuma::sync] Encountered error during sync: Error while trying to parse labels: data did not match any variant of untagged enum EntityWrapper

This error unfortunately isn't as clear, but it basically means you have a definition (uptime-kuma-auto-kuma) with a missing or invalid "type".
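
To illustrate (my example, not the commenter's actual labels), using the kuma.<id>.<type>.<setting> label scheme shown earlier in this thread; the container, IDs, and values here are assumptions:

```yaml
# Hypothetical labels showing both points above: a parent group with the
# AutoKuma ID "services", and a monitor that references it with an explicit,
# valid <type> segment. Omitting or misspelling the type is what produces the
# "data did not match any variant of untagged enum EntityWrapper" warning.
services:
  some-app:
    image: nginx:alpine   # placeholder container
    labels:
      # Parent group so that monitors referencing "services" can resolve it:
      kuma.services.group.name: Services
      # Typed monitor definition attached to that group:
      kuma.uptime-kuma-auto-kuma.http.name: Uptime Kuma
      kuma.uptime-kuma-auto-kuma.http.url: http://uptime-kuma:3001
      kuma.uptime-kuma-auto-kuma.http.parent_name: services
```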

@bnctth

bnctth commented Dec 24, 2024

@BigBoot thanks for your reply! Turns out I had a pretty trivial problem mixed with some red-herring error messages: I had a typo in a snippet (`notificationIDList` instead of `notificationIdList`, lowercase d in Id), so the parent definitions were actually correct, but because of the error it never got to them.
