
Slack notification configuration issue #91

Open · tschlaepfer opened this issue Nov 5, 2024 · 4 comments

tschlaepfer commented Nov 5, 2024

Thanks for the tool, works very well. I'm using it on EKS with your new Kubernetes integration 💪

However, configuring a Slack notification does not work properly. I configure the Slack notification via the static monitor configuration with the following JSON file:

slack_notification.json: |
    {
      "type": "notification",
      "active": true,
      "isDefault": false,
      "name": "Slack",
      "config": {
        "slackwebhookURL": "https://hooks.slack.com/services/YYYYYYYYY/XXXXXXXXXXXXXXXXXXXXXXX",
        "slackchannel": "shared-services-preprod",
        "slackchannelnotify": false,
        "type": "slack"
      }
    }

In Uptime Kuma I see that the Slack notification is created; however, I'm not able to activate it on a monitor by assigning it via its name. I create the monitor using the following Kubernetes CRD:

apiVersion: "autokuma.bigboot.dev/v1"
kind: KumaEntity
metadata:
  name: website-monitor
spec:
  config: 
    name: Website Monitor
    type: http
    url: https://4data.ch
    parent_name: group-example
    expiry_notification: false
    interval: 30
    notification_names: |
      ["Slack"]
    tag_names: |
      [{"name": "tag-example"}] 

I also notice that AutoKuma constantly updates the notification configuration; below is a snippet from the AutoKuma logs:

INFO [autokuma::sync] Creating new Notification: ..2024_11_05_08_15_49.3384930419/slack_notification
INFO [autokuma::sync] Creating new Monitor: group-example
INFO [autokuma::sync] Creating new Tag: tag-example
INFO [autokuma::sync] Updating Notification: ..2024_11_05_08_15_49.3384930419/slack_notification
INFO [autokuma::sync] Updating Notification: ..2024_11_05_08_15_49.3384930419/slack_notification
INFO [autokuma::sync] Updating Notification: ..2024_11_05_08_15_49.3384930419/slack_notification
INFO [autokuma::sync] Updating Notification: ..2024_11_05_08_15_49.3384930419/slack_notification

Reviewing the notification configuration in Uptime Kuma using kuma-cli, I find that the notification created by AutoKuma looks different from a manually configured Slack notification.
[Screenshot 2024-11-05 at 09:05:47]

Is there an issue in AutoKuma with regard to creating the monitor, or am I missing something in my notification configuration that causes this?

BigBoot (Owner) commented Nov 5, 2024

Hi there, it looks like there are multiple things going wrong here:

First of all, the value you need to use for the ..._name fields is generally the "AutoKuma ID"; how this gets assigned depends on the provider. For the Kubernetes provider it is the metadata.name of the resource, for the file provider it is the filename (without extension).
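
To illustrate with the names from your own snippets (a sketch only, the exact IDs depend on how the entities actually get picked up):

# file provider: a file named slack_notification.json -> AutoKuma ID "slack_notification"
notification_names: |
  ["slack_notification"]

# Kubernetes provider: a KumaEntity with metadata.name "group-example" -> AutoKuma ID "group-example"
parent_name: group-example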

You say you're using the Kubernetes integration, but it also looks like you're using the file provider; the Kubernetes provider uses CRDs instead of mounted files.
While the file approach should basically work, Kubernetes does some weird things with files mounted into a container: it creates the file under a timestamped name and then symlinks the intended filename to that path.
The file provider currently ignores the symlinks (it only looks for actual files) and picks up the timestamped files instead, so you end up with entity names such as ..2024_11_05_08_15_49.3384930419/slack_notification instead of slack_notification.

I'd suggest switching to the Kubernetes provider by deploying the CRDs and enabling it by setting AUTOKUMA__KUBERNETES__ENABLED=true.
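
For example, something along these lines in the AutoKuma container spec (a minimal, untested sketch; the Docker provider can be disabled the same way if you don't use it):

env:
  - name: AUTOKUMA__KUBERNETES__ENABLED
    value: "true"
  - name: AUTOKUMA__DOCKER__ENABLED
    value: "false"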

tschlaepfer (Author) commented

@BigBoot Many thanks for the quick response and the insight.

One can work around this symlink issue by specifying the target subPath in the volumeMount. Please have a look at my deployment below; it also shows how to mount multiple keys from a single Secret.

kind: Deployment
apiVersion: apps/v1
metadata: 
  labels:
    app.kubernetes.io/name: autokuma
  name: autokuma
spec:
  replicas: 1
  selector: 
    matchLabels:
      app.kubernetes.io/name: autokuma
  strategy:
    type: Recreate
  template: 
    metadata:
      labels:
        app.kubernetes.io/name: autokuma
      name: autokuma
    spec:
      serviceAccountName: autokuma-sa
      containers:
      - name: autokuma
        env:
          - name: AUTOKUMA__KUMA__URL
            value: "http://uptime:3001"
          - name: AUTOKUMA__DOCKER__ENABLED
            value: "false"
          - name: AUTOKUMA__KUBERNETES__ENABLED
            value: "true"
          - name: AUTOKUMA__STATIC_MONITORS
            value: "/config"
          - name: AUTOKUMA__ON_DELETE
            value: delete
            # Seems not to work on monitors configured via CRDs
          - name: AUTOKUMA__DEFAULT_SETTINGS
            value: |-
              http.max_redirects: 5
              *.max_retries: 3
        image: ghcr.io/bigboot/autokuma:master
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 25m
            memory: 64Mi
          limits:
            cpu: 50m
            memory: 128Mi
        volumeMounts:
          - name: storage
            mountPath: /data
          - name: slack-notification
            mountPath: /config/slack-notification.json
            subPath: slack-notification.json
          - name: pagerduty-notification
            mountPath: /config/pagerduty-notification.json
            subPath: pagerduty-notification.json
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: autokuma-storage
        - name: slack-notification
          secret:
            secretName: uptime-kuma-notification
            items:
              - key: slack-notification.json
                path: slack-notification.json
        - name: pagerduty-notification
          secret:
            secretName: uptime-kuma-notification
            items:
              - key: pagerduty-notification.json
                path: pagerduty-notification.json

I prefer to configure the notifications via files (Secret mounts), as these configurations contain secrets that I do not want to commit to the Git repository (we use ArgoCD to deploy everything on Kubernetes). Hence, a combination of file and CRD configurations would be optimal in our case.
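
For completeness, the Secret referenced in the Deployment above would look roughly like this (webhook URL and channel are placeholders; the PagerDuty key is defined analogously):

apiVersion: v1
kind: Secret
metadata:
  name: uptime-kuma-notification
type: Opaque
stringData:
  slack-notification.json: |
    {
      "type": "notification",
      "active": true,
      "isDefault": false,
      "name": "Slack",
      "config": {
        "slackwebhookURL": "https://hooks.slack.com/services/YYYYYYYYY/XXXXXXXXXXXXXXXXXXXXXXX",
        "slackchannel": "shared-services-preprod",
        "slackchannelnotify": false,
        "type": "slack"
      }
    }
  # pagerduty-notification.json: | ... (analogous PagerDuty notification config)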

When I configure AutoKuma as shown above, I get the following logs:

INFO [autokuma::sync] Creating new Monitor: vault-active-monitor
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification

The "vault-active-monitor" is now configured with the following CRD

apiVersion: "autokuma.bigboot.dev/v1"
kind: KumaEntity
metadata:
  name: vault-active-monitor
spec:
  config: 
    name: Active Vault Monitor
    type: http
    url: http://vault-active.vault.svc.cluster.local:8200/v1/sys/health
    parent_name: vault-group
    expiry_notification: false
    interval: 30
    tag_names: |
      [{"name": "vault-tag"}]
    notification_names: |
      ["slack-notification"]

While the monitor is created in Uptime Kuma, the Slack alert is still not activated. Any idea what I'm doing wrong?

If it is not possible to use the CRDs and the static monitor configuration in combination, we will have to solely use the static monitor configuration.

FYI: The group and tag are created with CRDs, which works as expected.
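
For reference, the vault-group entity is roughly the following KumaEntity; a group in Uptime Kuma is just a monitor of type group (the display name here is illustrative):

apiVersion: "autokuma.bigboot.dev/v1"
kind: KumaEntity
metadata:
  name: vault-group
spec:
  config:
    name: Vault
    type: group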

BigBoot (Owner) commented Nov 5, 2024

INFO [autokuma::sync] Creating new Monitor: vault-active-monitor
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification
INFO [autokuma::sync] Updating Notification: pagerduty-notification
INFO [autokuma::sync] Updating Notification: slack-notification

This is currently a known issue, as stated in #81. Uptime Kuma duplicates a lot of information inside the config into different places, so you'd need to match exactly the config you get from kuma-cli; otherwise AutoKuma will always consider the definition out of date. Unfortunately, I don't think I can do a lot there short of creating a huge white/blacklist for each notification provider, which would be hell to maintain. I think I will just disable the update message for notifications.

I'll need to look into activating notifications; this might need some additional work.

tschlaepfer (Author) commented

@BigBoot Thanks for looking into activating the notifications; let me know if there is something I can test for you.
