Regularly benchmarking and stress-testing the alerting framework and rule types #119845
Comments
Pinging @elastic/kibana-alerting-services (Team:Alerting Services)
Dropping this in here, but if we aren't already talking to the rally team, we may be able to use the dataset from these upcoming tracks: elastic/rally-tracks#222, elastic/apm-server#6731
I will remove this issue (and assignees) from our iteration plan for now, as we would like @EricDavisX to pick this up in the coming weeks, building on the research done so far.
I'm researching this and hoping to finish evaluating the usage the ResponseOps and Security side teams have done in the next few days. With that done, I'll be able to come up with a list of requirements and a modest plan for what I'll do next here.
Still researching the kbn-alert-load tool - thanks all for the help. Also finishing a first draft of a requirements document that QA will assess (with Engineering too) - then we'll form a plan and adjust the bullet points above.
The MLR-QA team is wrapping up a prototype Jenkins job to run the kbn-alert-load tool (while the Security team has a prototype done in Buildkite, FYI!) - I'll post details in Slack for the ResponseOps team.
I can update where we are. We did a proof of concept in Jenkins and have decided to continue iterating on it from the machine-learning-qa-infra Jenkins server:
We've enhanced the Jenkins run to always delete the ecctl deployments. We'll continue updating this periodically with progress.
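For illustration, a minimal TypeScript sketch of the "always delete" cleanup pattern, assuming ecctl is on the PATH and that `ecctl deployment shutdown` is the delete command; `createDeployment` and `runBenchmark` are hypothetical helpers standing in for the real Jenkins steps:

```ts
import { execFileSync } from 'child_process';

// Ensure the Elastic Cloud deployment created for a test run is always
// deleted, even when the benchmark itself fails or times out.
async function runWithCleanup(
  createDeployment: () => Promise<string>, // resolves to a deployment id (hypothetical helper)
  runBenchmark: (deploymentId: string) => Promise<void> // hypothetical helper
): Promise<void> {
  const deploymentId = await createDeployment();
  try {
    await runBenchmark(deploymentId);
  } finally {
    // Mirrors the Jenkins cleanup step: delete the deployment no matter how
    // the run ended. `--force` skips the interactive confirmation prompt.
    execFileSync('ecctl', ['deployment', 'shutdown', deploymentId, '--force'], {
      stdio: 'inherit',
    });
  }
}
```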
We have achieved an MVP that includes the checked metrics above. It runs nightly against several versions via cloud (CFT region) and reports pass/fail into our Slack channel. I'm going to focus on other work, though I may help drive QA's implementation of a few small remaining low-hanging-fruit items.
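As a sketch of how a nightly job can report pass/fail per version into a Slack channel via an incoming webhook (the webhook URL and the result shape are assumptions, not the actual job's code):

```ts
interface RunResult {
  version: string; // stack version under test, e.g. from the cloud CFT region
  passed: boolean;
}

// Post a one-line-per-version pass/fail summary to a Slack incoming webhook.
// Requires Node 18+ for the global fetch API.
async function reportToSlack(webhookUrl: string, results: RunResult[]): Promise<void> {
  const lines = results.map(
    (r) => `${r.passed ? ':white_check_mark:' : ':x:'} ${r.version}`
  );
  const response = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `kbn-alert-load nightly results:\n${lines.join('\n')}` }),
  });
  if (!response.ok) {
    throw new Error(`Slack webhook failed: ${response.status}`);
  }
}
```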
The alerting system must be regularly benchmarked and stress-tested before every production release, preferably mirroring known complex customer environments. Benchmarking and comparing key health metrics across releases ensures we do not introduce regressions.
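To make "comparing key health metrics" concrete, here is a minimal TypeScript sketch of one way regression detection could work: each run's metrics are compared against a stored baseline, and any metric that degrades past a tolerance fails the run. The metric names and the 10% tolerance are illustrative assumptions, not the actual implementation.

```ts
type Metrics = Record<string, number>;

// Return the names of metrics that degraded beyond the allowed tolerance.
// Assumes lower is better for every metric (e.g. latency, task drift).
function findRegressions(baseline: Metrics, current: Metrics, tolerance = 0.1): string[] {
  return Object.keys(baseline).filter((name) => {
    const allowed = baseline[name] * (1 + tolerance);
    return current[name] > allowed;
  });
}

// Example with illustrative numbers (milliseconds):
const regressions = findRegressions(
  { ruleExecutionMs: 250, taskManagerDriftMs: 3000 }, // baseline release
  { ruleExecutionMs: 310, taskManagerDriftMs: 2900 }  // candidate release
);
// regressions === ['ruleExecutionMs'] - execution time grew more than 10%.
```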
There are various ongoing performance-testing and framework/tool-creation efforts that relate to Kibana. Some research has been done to weigh the pros, cons, and applicability of each, so we can invest where we see the best value proposition balanced against the quickest ROI. As research continues, it seems clear we'll extend one or more existing tools or frameworks into a solution. So while we may start with one tool as an incremental first step or starting point, we are foremost developing this against a set of requirements.
Front-runner for the starting-point tool/library: the Kibana Alerting team's / ResponseOps' kbn-alert-load alert/rule testing tool.
... see below for options that were declined for now.
Here are some of the WIP Requirements we are evaluating and building out:
Stretch / next goals:
FYI: Frameworks/Tools that have been researched and ruled out for immediate purposes:
The Kibana-QA team created an API load-testing tool, kibana-load-testing. It was researched by Patrick M in 2020, and the Alert/Rules team did not end up collaborating on it; it drives the Kibana HTTP API, so it isn't best suited to assess the (background-process) Task Manager at the moment.
The Kibana working group's upcoming tool (including folks like Spencer A / Tyler S / Daniel M / Liza K): they are discussing and working on a performance-testing tool and CI integration for Kibana's needs.