Onboard Synthetics Monitor Status rule type with FAAD #186214
Conversation
/ci
Pinging @elastic/response-ops (Team:ResponseOps)
Pinging @elastic/obs-ux-infra_services-team (Team:obs-ux-infra_services)
Code LGTM, but I was unable to test locally. I got stopped trying to add an agent, which wanted me to install a Fleet Server, which in turn requires Elastic Agent to be installed - I gave up after that.
LMK if there's an easy way to test locally, or maybe it's possible to point to a Fleet Server in the cloud or something?
Update: I was holding Kibana wrong; Alexi showed me the correct way, and I was able to verify with the instructions above. Hint - go to Synthetics from the main left nav, not from within the o11y app, as the latter led me astray into setting up Fleet before I could do anything else.
Code LGTM.
💚 Build Succeeded
Metrics [docs] · History
To update your PR or re-run it, just comment with:
Towards: elastic#169867

This PR onboards the Synthetics Monitor Status rule type with FAAD.

### To verify

I can't get the rule to alert, so I modified the status check to report the monitor as down. If you know of an easier way, please let me know 🙂

1. Create a [monitor](http://localhost:5601/app/synthetics/monitors). By default, creating a monitor also creates a rule.
2. Click on the monitor and grab the `id` and `locationId` from the URL.
3. Go to [the status check code](https://github.com/elastic/kibana/blob/main/x-pack/plugins/observability_solution/synthetics/server/queries/query_monitor_status.ts#L208) and replace the object that is returned with the following, using the `id` and `locationId` you got from the monitor:

   ```
   {
     up: 0,
     down: 1,
     pending: 0,
     upConfigs: {},
     pendingConfigs: {},
     downConfigs: {
       '${id}-${locationId}': {
         configId: '${id}',
         monitorQueryId: '${id}',
         status: 'down',
         locationId: '${locationId}',
         ping: {
           '@timestamp': new Date().toISOString(),
           state: { id: 'test-state' },
           monitor: { name: 'test-monitor' },
           observer: { name: 'test-monitor' },
         } as any,
         timestamp: new Date().toISOString(),
       },
     },
     enabledMonitorQueryIds: ['${id}'],
   };
   ```

4. Your rule should create an alert and save it in `.internal.alerts-observability.uptime.alerts-default-000001`. Example:

   ```
   GET .internal.alerts-*/_search
   ```

5. To recover, repeat step 3 using:

   ```
   {
     up: 1,
     down: 0,
     pending: 0,
     downConfigs: {},
     pendingConfigs: {},
     upConfigs: {
       '${id}-${locationId}': {
         configId: '${id}',
         monitorQueryId: '${id}',
         status: 'up',
         locationId: '${locationId}',
         ping: {
           '@timestamp': new Date().toISOString(),
           state: { id: 'test-state' },
           monitor: { name: 'test-monitor' },
           observer: { name: 'test-monitor' },
         } as any,
         timestamp: new Date().toISOString(),
       },
     },
     enabledMonitorQueryIds: ['${id}'],
   };
   ```

6. The alert should be recovered, and the AAD in the above index should be updated to `kibana.alert.status: recovered`.
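The two payloads in the verification steps differ only in which bucket (`upConfigs` vs `downConfigs`) the config lands in and in the `up`/`down` counts. A small helper sketch (hypothetical, for illustration only; `buildFakeStatus` is not part of the PR) makes that symmetry explicit:

```typescript
// Hypothetical helper mirroring the fake status payloads used in the
// verification steps. `id` and `locationId` come from the monitor URL.
function buildFakeStatus(id: string, locationId: string, isUp: boolean) {
  const config = {
    configId: id,
    monitorQueryId: id,
    status: isUp ? 'up' : 'down',
    locationId,
    ping: {
      '@timestamp': new Date().toISOString(),
      state: { id: 'test-state' },
      monitor: { name: 'test-monitor' },
      observer: { name: 'test-monitor' },
    } as any,
    timestamp: new Date().toISOString(),
  };
  return {
    up: isUp ? 1 : 0,
    down: isUp ? 0 : 1,
    pending: 0,
    // The config keys are `${id}-${locationId}`, matching the real query output.
    upConfigs: isUp ? { [`${id}-${locationId}`]: config } : {},
    downConfigs: isUp ? {} : { [`${id}-${locationId}`]: config },
    pendingConfigs: {},
    enabledMonitorQueryIds: [id],
  };
}
```

With this, step 3 would paste `buildFakeStatus(id, locationId, false)` to trigger the alert and `buildFakeStatus(id, locationId, true)` to recover it.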