# OCPBUGS-14057: Add runbooks for prometheus alerts which the ingress operator provides (#166)

29 changes: 29 additions & 0 deletions alerts/cluster-ingress-operator/HAProxyDown.md
# HAProxyDown

## Meaning

This alert fires when metrics report that HAProxy is down.

## Impact

Access to routes will fail. It may cause a severe outage for critical applications.

## Diagnosis

- Enable access logging by editing the IngressController:

  ```sh
  oc edit -n openshift-ingress-operator ingresscontrollers/default
  ```

  and setting `spec.logging.access.destination.type` to `Container`:

  ```yaml
  spec:
    logging:
      access:
        destination:
          type: Container
  ```

  To turn access logging off later, set `spec.logging.access: null`.

- Check the router logs:
```sh
oc logs <router pod> -n openshift-ingress
```
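
Before pulling logs, it can help to confirm which router pods exist and
whether they are running (pod names vary by cluster):

```sh
oc get pods -n openshift-ingress -o wide
```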

- Check the events:
```sh
oc get events -n openshift-ingress
```

- Check the load on the system where the routers are hosted.
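
For a quick view of load, `oc adm top` can help, assuming cluster metrics
are available:

```sh
oc adm top nodes
oc adm top pods -n openshift-ingress
```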

- Use the console to issue Prometheus queries. In a healthy state, container
  thread and process counts should stay fairly consistent, with some
  fluctuation based on load:

  ```
  avg(container_threads{namespace='openshift-ingress', container='router'}) by (instance)
  avg(container_processes{namespace='openshift-ingress', container='router'}) by (instance)
  ```

## Mitigation

Based on the diagnosis, determine the root cause. If the issue is
configuration related, fix the HAProxy configuration via the ingress
controller spec. If it is load related, address it at the infrastructure
level.
26 changes: 26 additions & 0 deletions alerts/cluster-ingress-operator/HAProxyReloadFail.md
# HAProxyReloadFail

## Meaning

This alert fires when HAProxy fails to reload its configuration, which will
result in the router not picking up recently created or modified routes.

## Impact

Warning only.

The router won't pick up recently created or modified routes. This may cause
an outage for critical applications.

## Diagnosis

Check the router logs:
```sh
oc logs <router pod> -n openshift-ingress
```

Check if any recently added configuration in the haproxy config via the
ingress controller spec caused the issue.
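
To inspect the rendered HAProxy configuration directly, you can read it out
of a router pod (a sketch; the pod name varies by cluster, and the file
typically lives at `/var/lib/haproxy/conf/haproxy.config`):

```sh
oc exec -n openshift-ingress <router pod> -- cat /var/lib/haproxy/conf/haproxy.config
```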

## Mitigation

Try to fix the configuration of haproxy by editing the ingress controller
spec, based on the errors in the logs.
42 changes: 42 additions & 0 deletions alerts/cluster-ingress-operator/IngressControllerDegraded.md
# IngressControllerDegraded

## Meaning

This alert fires when the IngressController status is degraded.

## Impact

The routers won't be running in the cluster. This will cause an outage while
accessing the applications.

## Diagnosis

The Ingress Controller may be degraded for one or more reasons.

- Check the status of all cluster operators, looking for error messages:
  ```sh
  oc get co
  ```
- Check the ingress operator logs using the following command:
```sh
oc logs <ingress operator pod> -n openshift-ingress-operator
```
- Check the router logs using the following commands:
```sh
oc logs <router pod> -n openshift-ingress
```
- Check the yaml of the ingress controller and operator to see the reason
  for failure. Look for `status.conditions`:
```sh
oc get ingresscontroller <ingresscontroller name> -n openshift-ingress-operator -o yaml
```
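
To pull out just the status conditions, a jsonpath query like the following
can help (a sketch assuming the default IngressController; adjust the name
for your cluster):

```sh
oc get ingresscontroller default -n openshift-ingress-operator \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'
```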

```sh
oc get deployment -n openshift-ingress-operator -o yaml
```

```sh
oc get events -n openshift-ingress-operator
```

## Mitigation

Try to fix the issue based on the status conditions in the yaml output and
the errors in the logs from the commands above.
41 changes: 41 additions & 0 deletions alerts/cluster-ingress-operator/IngressControllerUnavailable.md
# IngressControllerUnavailable

## Meaning

This alert fires when the IngressController is not available.

## Impact

This will cause an outage to the environment as the access to the
applications won't be available.

## Diagnosis

The Ingress Controller may be unavailable for one or more reasons.

- Check the ingress operator logs using the following command:
```sh
oc logs <ingress operator pod> -n openshift-ingress-operator
```
- Check the router logs using the following commands:
```sh
oc logs <router pod> -n openshift-ingress
```
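
It can also help to confirm that the router deployment has available
replicas (a sketch assuming the default IngressController, whose deployment
is named `router-default`):

```sh
oc get deployment router-default -n openshift-ingress
```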
- Check the yaml of the ingress controller and operator to see the reason
  for failure. Look for `status.conditions`:
```sh
oc get ingresscontroller <ingresscontroller name> -n openshift-ingress-operator -o yaml
```

```sh
oc get deployment -n openshift-ingress-operator -o yaml
```

```sh
oc get events -n openshift-ingress-operator
```

## Mitigation

Try to fix the issue based on the status conditions in the yaml output and
the errors in the logs from the commands above.
33 changes: 33 additions & 0 deletions alerts/cluster-ingress-operator/IngressWithoutClassName.md
# IngressWithoutClassName

## Meaning

This alert fires when there is an Ingress with an unset IngressClassName
for longer than one day.

## Impact

Warning only.

If this is a valid Ingress resource, it needs an `ingressClassName` to stop
the alert; `ingressClassName` is the name of an IngressClass cluster
resource. Otherwise, delete the misconfigured Ingress.

It is possible that a user created an Ingress with some nonempty value for
`spec.ingressClassName` that did not match an OpenShift IngressClass
object, and nevertheless intended for OpenShift to expose this Ingress.

## Diagnosis

- Inspect the Ingress object and check whether `spec.ingressClassName` is set.
- Inspect the Route objects created for the Ingress and check their status.
- Check the logs of `cluster-openshift-controller-manager-operator`.

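To find Ingresses with no class name set, a query like the following can
help (a sketch assuming `jq` is available):

```sh
oc get ingress -A -o json \
  | jq -r '.items[] | select(.spec.ingressClassName == null)
          | "\(.metadata.namespace)/\(.metadata.name)"'
```
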
## Mitigation

Determine why the Ingress has no `ingressClassName`. If the Ingress is
still needed, set `spec.ingressClassName` to the name of an IngressClass
cluster resource; otherwise, delete the Ingress and any Route it created.
31 changes: 31 additions & 0 deletions alerts/cluster-ingress-operator/UnmanagedRoutes.md
# UnmanagedRoutes

## Meaning

This alert fires when there is a Route owned by an unmanaged Ingress.

## Impact

Warning only.

The ingress-to-route controller does not remove Routes that earlier versions
of OpenShift created for Ingresses that specify `spec.ingressClassName`.
Thus, these Routes will continue to be in effect.
OpenShift does not update such Routes and does not recreate
them if the user deletes them.

If any Routes exist in this state, this alert lets the administrator know
that the Routes need to be deleted, or the Ingress modified to specify an
appropriate IngressClass, so that OpenShift once again reconciles the
Routes.

## Diagnosis

- Inspect the Ingress object and check whether its `spec.ingressClassName`
  matches an existing IngressClass.
- Inspect the Route object and check its `metadata.ownerReferences` to
  confirm which Ingress owns it.
- Check the logs of `cluster-openshift-controller-manager-operator`.
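
To list Routes owned by an Ingress, a query like the following can help
(a sketch assuming `jq` is available):

```sh
oc get routes -A -o json \
  | jq -r '.items[] | select(.metadata.ownerReferences[]?.kind == "Ingress")
          | "\(.metadata.namespace)/\(.metadata.name)"'
```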

## Mitigation

Specify an appropriate IngressClass in the Ingress object so that OpenShift
once again reconciles the Routes, or delete the unmanaged Routes if they
are no longer needed.