Slack notification configuration issue #91
Hi there, it looks like there are multiple things going wrong here: First off, the value you need to use for the … You say you use the Kubernetes implementation, but it also looks like you're using the file provider. The Kubernetes provider uses CRDs instead of mounted files. I'd suggest switching to the Kubernetes provider by deploying the CRDs and enabling the Kubernetes provider by setting …
@BigBoot Many thanks for the quick response and the insight. One can work around this symlink issue by specifying the target subPath in the volumeMount. Please have a look at my deployment; there is also an example if you want to mount multiple keys from a single secret.
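As a sketch of that workaround (all names below are placeholders, not the actual deployment from this thread): mounting individual Secret keys via `subPath` materializes each key as a regular file instead of going through the `..data` symlink directory that a normal Secret volume mount creates.

```yaml
# Hypothetical sketch -- resource names, image, and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autokuma
spec:
  selector:
    matchLabels:
      app: autokuma
  template:
    metadata:
      labels:
        app: autokuma
    spec:
      containers:
        - name: autokuma
          image: ghcr.io/bigboot/autokuma:latest
          volumeMounts:
            # Each subPath mount places the key as a plain file, bypassing
            # the symlinked ..data indirection of a full Secret mount.
            - name: notification-secrets
              mountPath: /static/slack.json
              subPath: slack.json
            # Multiple keys from the same Secret: repeat the mount with a
            # different subPath/mountPath pair.
            - name: notification-secrets
              mountPath: /static/email.json
              subPath: email.json
      volumes:
        - name: notification-secrets
          secret:
            secretName: autokuma-notifications
```

One caveat of this approach: files mounted via `subPath` do not receive live updates when the underlying Secret changes, so the pod must be restarted to pick up rotated credentials.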
I prefer to configure the notification via files (secret mount) as these configurations contain secrets that I would not want to commit to the Git repository (we use ArgoCD to deploy all our stuff on Kubernetes), hence using a combination of file and CRD configurations would be optimal in our case. When I configure AutoKuma as displayed above I get the following logs:
The "vault-active-monitor" is now configured with the following CRD
While the monitor is created in Uptime Kuma, the Slack alert is still not activated. Any idea what I'm doing wrong? If it is not possible to use the CRDs and the static monitor configuration in combination, we will have to rely solely on the static monitor configuration. FYI: the group and tag are created with CRDs, which works as expected.
This is a known issue currently, as stated in #81. Uptime Kuma duplicates a lot of information inside the config into different places; you'd need to match exactly the config you get from kuma-cli, otherwise AutoKuma will always consider the definition out of date. Unfortunately, I don't think I can do a lot there short of creating a huge white/blacklist for each notification provider, which would be hell to maintain. I think I will just disable the update message for notifications. I'll need to look into activating notifications; this might need some additional work.
@BigBoot Thanks for looking into activating the notifications, let me know if there is something I can test for you.
Thanks for the tool, works very well. I'm using it on EKS with your new Kubernetes integration 💪
However, configuring a Slack notification does not properly work. I configure the Slack notification via the static monitor configuration with the following JSON file
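For reference, a static Slack notification definition might look roughly like the following. This is a hypothetical sketch: the exact file location and top-level field names depend on your AutoKuma setup, and the `config` payload mirrors Uptime Kuma's internal Slack notification fields, so double-check the key names against what kuma-cli reports for a manually created notification.

```json
{
  "name": "slack-alerts",
  "type": "slack",
  "isDefault": false,
  "config": {
    "slackwebhookURL": "https://hooks.slack.com/services/CHANGE-ME",
    "slackchannel": "#alerts-CHANGE-ME",
    "slackusername": "uptime-kuma"
  }
}
```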
In Uptime Kuma I see that the Slack notification is created; however, I'm not able to activate it on a monitor by assigning it via its name. I create the monitor using the following K8S CRD:
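A CRD-based monitor referencing a notification by name might be sketched as follows. The `apiVersion`, `kind`, and field names here are assumptions from memory, not copied from this thread; verify them against the CRD manifests shipped with your AutoKuma version.

```yaml
# Hypothetical sketch -- apiVersion/kind and spec fields are assumptions;
# check AutoKuma's own CRD definitions before using.
apiVersion: autokuma.bigboot.dev/v1
kind: KumaEntity
metadata:
  name: vault-active-monitor
spec:
  config:
    type: http
    name: vault-active-monitor
    url: https://vault.example.com/v1/sys/health
    # Reference the notification by the name given in the static JSON file;
    # the name must match exactly for AutoKuma to resolve it.
    notification_names:
      - slack-alerts
```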
I notice that in AutoKuma the notification configuration is constantly updated, below you see a snippet from the AutoKuma logs.
Reviewing the notification configuration in Uptime Kuma using the kuma-cli I find that the notification configuration from AutoKuma looks different, compared with a manually configured Slack notification.
Is there an issue in AutoKuma with regards to creating the monitor, or am I missing something in my notification configuration that causes this issue?