Create functional test suite for performance benchmarking #54626
Pinging @elastic/kibana-app-arch (Team:AppArch)
cc/ @wayneseymour
@stacey-gammon perhaps both
@stacey-gammon would it be possible to get access to, or a clone of, your GCP instance, the one you're running the tests on? Sounds handy.
Are we starting with HTTP requests?
@wayneseymour I think this is a case where we just capture the duration of a set of functional UI tests from the log. For example, this test passed and logged that it took 11.5s.
So I think we just need a script that greps the log for the passing tests, extracts the duration and test name, and sends that to Elasticsearch. (I think it would just ignore any failing tests, since timeouts on failures take longer.) We don't want to send the results from every PR, but maybe from a daily build, perhaps from a Jenkins job we already have so we're not adding to the Jenkins load. There's no impact on the tests themselves, since the parsing of the log happens after the tests are completed. But I'm a bit concerned that running multiple test threads (multiple Elasticsearch and Kibana processes) in parallel might make for inconsistent results. We could try it? Or maybe the…
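As a rough sketch, such a script might look like the following. The pass-line regex, the `ftr-benchmarks` index name, and the field names are all assumptions for illustration, not the actual FTR log format:

```ts
// Hypothetical sketch: grep an FTR log for passing tests and bulk-index
// their durations into Elasticsearch. The log-line shape is an assumption.
import { readFileSync } from 'fs';
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Assumed pass-line shape, e.g. "pass (11.5s) discover app - ..."
const PASS_LINE = /pass\s+\((\d+(?:\.\d+)?)s\)\s+(.+)/;

async function indexDurations(logPath: string) {
  const docs: Array<Record<string, unknown>> = [];
  for (const line of readFileSync(logPath, 'utf8').split('\n')) {
    const match = line.match(PASS_LINE);
    // Skip failures and other noise; failure timings are skewed by timeouts.
    if (!match) continue;
    docs.push({
      testName: match[2].trim(),
      durationSec: Number(match[1]),
      '@timestamp': new Date().toISOString(),
    });
  }
  // One doc per passing test, sent with the pre-8.x client bulk API shape.
  const body = docs.flatMap((doc) => [{ index: { _index: 'ftr-benchmarks' } }, doc]);
  if (body.length > 0) await client.bulk({ body });
}

indexDurations(process.argv[2]).catch((err) => {
  console.error(err);
  process.exit(1);
});
```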
> …another effort will also be started to select a tool for load testing Kibana.

I expect these would be new tests using some tool other than FTR.
Methinks the CI job is best, at least that's what I think now, at the outset.
This is being worked on in https://github.com/elastic/kibana-team/issues/501
Create a new functional test suite that is intentional about running tests specifically for performance benchmarking. All you have to do is give them a name like "performance benchmarking" and I can track only those tests. This would be ideal, since running the entire functional test suite takes about 5 hours on my GCP instance.
Ideally this would do things like:
If we can name these tests something specific, that will be sufficient for the rudimentary benchmarking system. Each visualization just tracks a set of tests by name, like this:
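The original screenshot of that visualization isn't preserved here; as a hypothetical stand-in, each visualization could be driven by a query along these lines, reusing the `ftr-benchmarks` index and `testName` field assumed in the earlier sketch:

```ts
// Hypothetical query backing such a visualization: average duration per day
// for a named set of tests. Index and field names match the earlier sketch
// and assume testName is mapped with a .keyword subfield.
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function durationsOverTime(testNames: string[]) {
  const { body } = await client.search({
    index: 'ftr-benchmarks',
    body: {
      size: 0,
      query: { terms: { 'testName.keyword': testNames } },
      aggs: {
        perDay: {
          date_histogram: { field: '@timestamp', calendar_interval: 'day' },
          aggs: { avgDuration: { avg: { field: 'durationSec' } } },
        },
      },
    },
  });
  return body.aggregations.perDay.buckets;
}
```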
And the command to run and track the test times uses `--grep "discover app"` in order to only run a subset of functional tests. It could also be one of the ciGroups, and then we could actually opt not to run these as part of CI if they were going to be slow. This really is something that should be run daily, not as part of every single PR.

If we wanted to isolate some of our services as well, we could think of using functional tests in an example app, though dashboard might be a more realistic end-to-end test.
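For a daily job, a small wrapper could run just the named subset and feed the resulting log to the parser sketched above. This is only a sketch; it assumes the runner is invoked as `scripts/functional_test_runner` with the `--grep` flag mentioned above, and that the required Elasticsearch and Kibana servers are already running:

```ts
// Hypothetical daily-job wrapper: run only the "discover app" functional
// tests and capture the runner's output for the log parser sketched earlier.
import { execFileSync } from 'child_process';
import { writeFileSync } from 'fs';

// Throws if the runner exits non-zero, which is fine for a daily job:
// we only index timings from clean runs anyway.
const output = execFileSync(
  'node',
  ['scripts/functional_test_runner', '--grep', 'discover app'],
  { encoding: 'utf8' }
);

writeFileSync('ftr.log', output);
// ...then feed ftr.log to the earlier parsing sketch.
```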