Allow alerting on external endpoints that do not receive a push within a configurable time frame #741
Thank you for creating the issue. Now that external endpoints have been implemented (#722, #724), this should probably be the next feature related to external endpoints that gets implemented, as the two go hand in hand, especially for those using external endpoints to test connectivity: if there's no connectivity, Gatus' API won't be reachable, which means that Gatus wouldn't be able to trigger an alert without this feature.

The feature in question should allow the user to configure a duration within which an update is expected to be received. Should that duration elapse with no new status update, a status should be created to indicate a failure to receive an update within the expected time frame. This should in turn cause `handleAlertsToTrigger` to be called (lines 13 to 22 in 2833968); because the new result indicating the missed update has its `Success` field set to `false`, this increments `NumberOfFailuresInARow` (lines 24 to 27 in 2833968).
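A rough sketch of that mechanism, using hypothetical types and field names rather than Gatus' actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// Result is a minimal stand-in for the result Gatus stores per status update;
// only the Success field mentioned above is modeled here.
type Result struct {
	Success   bool
	Timestamp time.Time
	Errors    []string
}

// ExternalEndpoint is a hypothetical view of an external endpoint with a
// configurable maximum duration between pushes.
type ExternalEndpoint struct {
	Name                   string
	MaxSilence             time.Duration // e.g. 1h: an update is expected at least this often
	LastPushReceived       time.Time
	NumberOfFailuresInARow int
}

// checkForMissedPush synthesizes a failing Result if no push has been received
// within MaxSilence. Such a result would then flow through the same path as a
// failed health check, eventually triggering the endpoint's alerts.
func checkForMissedPush(ep *ExternalEndpoint, now time.Time) *Result {
	if now.Sub(ep.LastPushReceived) <= ep.MaxSilence {
		return nil // a push arrived in time, nothing to do
	}
	ep.NumberOfFailuresInARow++ // Success == false increments the failure counter
	return &Result{
		Success:   false,
		Timestamp: now,
		Errors:    []string{fmt.Sprintf("no update received within %s", ep.MaxSilence)},
	}
}

func main() {
	ep := &ExternalEndpoint{
		Name:             "ext-ep-test",
		MaxSilence:       time.Hour,
		LastPushReceived: time.Now().Add(-2 * time.Hour),
	}
	if r := checkForMissedPush(ep, time.Now()); r != nil {
		fmt.Printf("synthesized failure for %s: %v\n", ep.Name, r.Errors)
	}
}
```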
The only proper name I can think of for this feature is "dead man's switch", but as silly as it may sound, I don't like how that'd look in the configuration:

```yaml
external-endpoints:
  - name: ...
    dead-man-switch:
      blackout-duration-until-automatic-failure: 1h
    alerts:
      - type: slack
        send-on-resolved: true
```

Another consideration is the interaction between this feature and maintenance. While the maintenance period should prevent alerts from being triggered, should the failure status be pushed anyways? Perhaps this should be an additional parameter on the maintenance configuration (e.g. ...).

Some food for thought.
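One possible behavior for that interaction, sketched in Go with a hypothetical "record failures during maintenance" parameter (purely illustrative, not an existing option):

```go
package main

import (
	"fmt"
	"time"
)

// MaintenanceConfig sketches a hypothetical extra parameter controlling whether
// missed-push failures are still recorded while maintenance is ongoing.
type MaintenanceConfig struct {
	Start                           time.Time
	End                             time.Time
	RecordFailuresDuringMaintenance bool // hypothetical parameter, does not exist in Gatus
}

func (m MaintenanceConfig) isUnderMaintenance(now time.Time) bool {
	return !now.Before(m.Start) && now.Before(m.End)
}

// handleMissedPush shows one possible policy: alerts are always suppressed
// during maintenance, while recording the failure status is configurable.
func handleMissedPush(m MaintenanceConfig, now time.Time) (recordFailure, triggerAlerts bool) {
	if !m.isUnderMaintenance(now) {
		return true, true
	}
	return m.RecordFailuresDuringMaintenance, false
}

func main() {
	m := MaintenanceConfig{
		Start:                           time.Now().Add(-10 * time.Minute),
		End:                             time.Now().Add(50 * time.Minute),
		RecordFailuresDuringMaintenance: true,
	}
	record, alert := handleMissedPush(m, time.Now())
	fmt.Printf("record failure: %v, trigger alerts: %v\n", record, alert)
}
```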
I've seen other services call this a "heartbeat":

```yaml
external-endpoints:
  - name: ...
    heartbeat:
      interval: 5m
      grace-period: 5m
    alerts:
      - type: slack
        send-on-resolved: true
```
I would love to see a feature like this feature request, similar to what healthchecks.io implements. It would allow providing a cron schedule and then monitoring cron jobs, alerting when they do not run or take too long to complete. An example configuration might look like this:
If you look at betterstack.com or Healthchecks.io, they both implement similar settings, although betterstack does not really go into much detail. They offer a separate maintenance setting per heartbeat, but this is already covered. It seems like the settings that make the most sense to me would be:

```yaml
external-endpoints:
  - name: ...
    heartbeat:
      # Define the scheduling method to use: "interval" or "cron".
      # "interval" runs based on a fixed duration between heartbeats.
      # "cron" runs at specific times defined by a cron expression.
      method: 'cron' # Choose either "cron" or "interval"

      # Use this field if the method is "interval".
      # Defines the duration between heartbeats. Valid units are:
      # "ns" (nanoseconds), "us"/"µs" (microseconds), "ms" (milliseconds),
      # "s" (seconds), "m" (minutes), "h" (hours).
      # -> https://pkg.go.dev/time#ParseDuration
      interval: '24h'

      # Use these fields if the method is "cron".
      # "cron" defines the schedule using a cron expression.
      # For example, "0 0 * * *" means midnight every day.
      cron: '0 0 * * *'

      # Specifies the timezone to use when evaluating the cron expression.
      # This is required only for "cron" and is ignored for "interval".
      timezone: 'Europe/Berlin'

      # Specifies the grace period to tolerate delayed heartbeats.
      # This applies to both "interval" and "cron".
      grace-period: '1h'
```

This would be awesome, as I am currently running healthchecks and Gatus but only need healthchecks for 2 pings from a backup every night. Not really feeling the need to run Django for this...
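For illustration, a minimal Go sketch of how such a deadline could be evaluated, using github.com/robfig/cron/v3 for the cron case. The field names mirror the hypothetical configuration above; none of this is existing Gatus code:

```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

// Heartbeat mirrors the hypothetical configuration fields discussed above.
type Heartbeat struct {
	Method      string        // "interval" or "cron"
	Interval    time.Duration // used when Method == "interval"
	Cron        string        // used when Method == "cron", e.g. "0 0 * * *"
	GracePeriod time.Duration // tolerated delay, applies to both methods
}

// deadline returns the time by which the next heartbeat must arrive,
// based on when the last heartbeat was received.
func (h Heartbeat) deadline(lastBeat time.Time) (time.Time, error) {
	if h.Method == "cron" {
		// A real implementation would also honor the configured timezone,
		// e.g. via a "CRON_TZ=Europe/Berlin" prefix on the expression.
		schedule, err := cron.ParseStandard(h.Cron)
		if err != nil {
			return time.Time{}, err
		}
		return schedule.Next(lastBeat).Add(h.GracePeriod), nil
	}
	return lastBeat.Add(h.Interval).Add(h.GracePeriod), nil
}

// isOverdue reports whether the heartbeat has missed its deadline.
func (h Heartbeat) isOverdue(lastBeat, now time.Time) (bool, error) {
	deadline, err := h.deadline(lastBeat)
	if err != nil {
		return false, err
	}
	return now.After(deadline), nil
}

func main() {
	h := Heartbeat{Method: "cron", Cron: "0 0 * * *", GracePeriod: time.Hour}
	lastBeat := time.Now().Add(-30 * time.Hour) // last backup ping was 30 hours ago
	overdue, err := h.isOverdue(lastBeat, time.Now())
	if err != nil {
		panic(err)
	}
	// The midnight run expected after the last ping is now more than 1h late.
	fmt.Println("heartbeat overdue:", overdue)
}
```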
Originally posted by @r3mi in #722 (comment)