The "Received alerts by status" panel in the grafana dashboard for alertmanager does not show information #291

Open
drencrom opened this issue Sep 13, 2024 · 0 comments

Bug Description

The "Received alerts by status" panel in the Grafana dashboard only shows alerts with the firing status. Its default expression is something like:

sum by(status) (increase(alertmanager_alerts_received_total{instance=~"cos_36a28f08-3411-4265-8d9f-e89a1447f635_alertmanager",juju_application=~".*",juju_model=~".*",juju_model_uuid=~".*",juju_unit=~".*"}[$__interval]))

I've found that updating it, for example, to:

sum (increase(alertmanager_alerts_received_total{juju_application=~".*",juju_model=~".*",juju_model_uuid=~".*",juju_unit=~".*"}[$__rate_interval])) by(status)

actually works and also shows alerts with the resolved status.
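
I'm not sure whether the fix comes from dropping the instance matcher or from switching $__interval to $__rate_interval (Grafana guarantees $__rate_interval to be at least four times the scrape interval, so increase() is less likely to come back empty). A variant that keeps the original instance filter and only changes the interval variable might help narrow it down (just a sketch, not something I've verified):

sum by(status) (increase(alertmanager_alerts_received_total{instance=~"cos_36a28f08-3411-4265-8d9f-e89a1447f635_alertmanager",juju_application=~".*",juju_model=~".*",juju_model_uuid=~".*",juju_unit=~".*"}[$__rate_interval]))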

To Reproduce

Just deploy COS and use it to observe some application that generates alerts.
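
For reference, a minimal deployment sketch (an assumption on my part rather than the exact commands I ran; it assumes MicroK8s is installed and a Juju controller is bootstrapped on it):

# add a model for the observability stack and deploy the COS Lite bundle
juju add-model cos
juju deploy cos-lite --trust

The exact bundle from my environment is exported below: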

cos :: ~ » juju export-bundle
bundle: kubernetes
saas:
  remote-92af8c083c114879831c3a87bb5720f7: {}
applications:
  alertmanager:
    charm: alertmanager-k8s
    channel: latest/edge
    revision: 135
    resources:
      alertmanager-image: 98
    scale: 1
    constraints: arch=amd64
    storage:
      data: kubernetes,1,1024M
    trust: true
  catalogue:
    charm: catalogue-k8s
    channel: latest/stable
    revision: 59
    resources:
      catalogue-image: 33
    scale: 1
    options:
      description: "Canonical Observability Stack Lite, or COS Lite, is a light-weight,
        highly-integrated, \nJuju-based observability suite running on Kubernetes.\n"
      tagline: Model-driven Observability Stack deployed with a single command.
      title: Canonical Observability Stack
    constraints: arch=amd64
    trust: true
  grafana:
    charm: grafana-k8s
    channel: latest/stable
    revision: 117
    resources:
      grafana-image: 69
      litestream-image: 44
    scale: 1
    constraints: arch=amd64
    storage:
      database: kubernetes,1,1024M
    trust: true
  loki:
    charm: loki-k8s
    channel: latest/edge
    revision: 168
    resources:
      loki-image: 100
      node-exporter-image: 3
    scale: 1
    constraints: arch=amd64
    storage:
      active-index-directory: kubernetes,1,1024M
      loki-chunks: kubernetes,1,1024M
    trust: true
  prometheus:
    charm: prometheus-k8s
    channel: latest/stable
    revision: 209
    resources:
      prometheus-image: 148
    scale: 1
    constraints: arch=amd64
    storage:
      database: kubernetes,1,1024M
    trust: true
  traefik:
    charm: traefik-k8s
    channel: latest/stable
    revision: 194
    resources:
      traefik-image: 160
    scale: 1
    constraints: arch=amd64
    storage:
      configurations: kubernetes,1,1024M
    trust: true
relations:
- - traefik:ingress-per-unit
  - prometheus:ingress
- - traefik:ingress-per-unit
  - loki:ingress
- - traefik:traefik-route
  - grafana:ingress
- - traefik:ingress
  - alertmanager:ingress
- - prometheus:alertmanager
  - alertmanager:alerting
- - grafana:grafana-source
  - prometheus:grafana-source
- - grafana:grafana-source
  - loki:grafana-source
- - grafana:grafana-source
  - alertmanager:grafana-source
- - loki:alertmanager
  - alertmanager:alerting
- - prometheus:metrics-endpoint
  - traefik:metrics-endpoint
- - prometheus:metrics-endpoint
  - alertmanager:self-metrics-endpoint
- - prometheus:metrics-endpoint
  - loki:metrics-endpoint
- - prometheus:metrics-endpoint
  - grafana:metrics-endpoint
- - grafana:grafana-dashboard
  - loki:grafana-dashboard
- - grafana:grafana-dashboard
  - prometheus:grafana-dashboard
- - grafana:grafana-dashboard
  - alertmanager:grafana-dashboard
- - catalogue:ingress
  - traefik:ingress
- - catalogue:catalogue
  - grafana:catalogue
- - catalogue:catalogue
  - prometheus:catalogue
- - catalogue:catalogue
  - alertmanager:catalogue
- - prometheus:receive-remote-write
  - remote-92af8c083c114879831c3a87bb5720f7:send-remote-write
- - grafana:grafana-dashboard
  - remote-92af8c083c114879831c3a87bb5720f7:grafana-dashboards-provider
- - loki:logging
  - remote-92af8c083c114879831c3a87bb5720f7:logging-consumer
--- # overlay.yaml
applications:
  grafana:
    offers:
      grafana:
        endpoints:
        - grafana-dashboard
        acl:
          admin: admin
  loki:
    offers:
      loki:
        endpoints:
        - logging
        acl:
          admin: admin
  prometheus:
    offers:
      prometheus:
        endpoints:
        - metrics-endpoint
        - receive-remote-write
        acl:
          admin: admin

Environment

Running on an LXD VM using MicroK8s 1.30.4 and Juju 3.5.3. The observed model is running in LXD containers on the same VM.

Relevant log output

No relevant log

Additional context

No response
