[KIC-2.0] performance test cases and data collection #1488
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##             next    #1488      +/-   ##
==========================================
+ Coverage   51.40%   51.53%   +0.12%
==========================================
  Files          91       91
  Lines        6316     6316
==========================================
+ Hits         3247     3255       +8
+ Misses       2774     2769       -5
+ Partials      295      292       -3
```
Flags with carried forward coverage won't be shown.
Ideally I was hoping any performance tooling we add would be something we could trigger with a CLI, or perhaps a Docker container, etc. This mechanism will lock performance testing tightly into our testing framework, which does not seem to align well with some of the stated goals in #1197.

Particularly, the main goal stated there:

> We want to create a performance testing tool that can be applied generically with any particular Kubernetes cluster backend.
Building the performance testing into our KIC tests will specifically not be generic, as it will mean:

- anyone wishing to run the performance tests needs to be educated on our git repo
- anyone wishing to run the performance tests needs to understand `go test`

We also currently only support `kind` on the GitHub Actions infrastructure, which not only doesn't seem like a strong testing environment (I believe this environment is highly contended for compute resources, being on shared infrastructure) but also doesn't allow for any other cluster options, as per the "any particular Kubernetes cluster backend" bit.
My recommendation would be for us to transplant the functionality that is currently in `railgun/test/integration` into a discrete library, which could then be consumed by the integration tests for basic validation and regression testing, but could also be consumed more generally by a CLI or any other caller who can bring whatever Kubernetes cluster they want to the table (via `*rest.Config`).
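To make the shape of that concrete, here is a minimal sketch of what such a library's entry point could look like; the package name `perf`, the `Result` type, and `MeasureIngressReadiness` are hypothetical illustrations, not code from this repo:

```go
// Package perf sketches a standalone performance-testing library.
// Any caller (integration tests, a CLI, etc.) supplies its own cluster
// connection via *rest.Config, keeping the tooling cluster-agnostic.
package perf

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Result holds a single timing measurement (fields are illustrative).
type Result struct {
	Name    string
	Elapsed time.Duration
}

// MeasureIngressReadiness runs the Ingress-readiness benchmark against
// whatever cluster the provided *rest.Config points at.
func MeasureIngressReadiness(ctx context.Context, cfg *rest.Config, namespace string) ([]Result, error) {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	// ... create Ingress resources with client and time how long each
	// takes to report a ready status ...
	_ = client
	return nil, nil
}
```

Because the only cluster-specific input is the `*rest.Config`, the same code path would work for kind, GKE, or any other backend.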
Co-authored-by: Shane Utt <shaneutt@linux.com>
Licenses differ between commit 78e5c43215900106af20cfc46de0de8311df3fa2 and base.
…troller into performance_test
Co-authored-by: Shane Utt <shaneutt@linux.com>
I think, given the context that we're still in the experimentation stage with performance testing and have several follow-up issues, plus the fact that this code is behind its own build tag, this is a good starting point: we can merge what we have now and iterate on it as we decide as a team what new features we want.
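For readers unfamiliar with build-tag gating, a minimal sketch of the pattern follows; the tag name `performance_tests` and the package name are assumptions, not necessarily what this PR uses:

```go
//go:build performance_tests
// +build performance_tests

// The build tag above keeps this file out of ordinary `go test` runs;
// it compiles only when the tag is requested explicitly, e.g.:
//
//	go test -tags performance_tests ./test/performance/...
package performance
```

This way the performance suite can live in the repo without slowing down or destabilizing the default CI test runs.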
What this PR does / why we need it:
Implements ingress performance test cases and collects test data.

Which issue this PR fixes: part of #1197
The performance test cases measure, for each Ingress, the elapsed time from creation until its status is reported as ready.
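As a rough sketch of that measurement (a compiling illustration assuming client-go; the function name and the load-balancer-address readiness check are stand-ins for whatever the actual test cases do):

```go
package performance

import (
	"context"
	"time"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// measureIngressProvisioning creates an Ingress and returns how long it
// took to become ready (here: to be assigned a load-balancer address).
func measureIngressProvisioning(ctx context.Context, c kubernetes.Interface, ing *netv1.Ingress) (time.Duration, error) {
	start := time.Now()
	created, err := c.NetworkingV1().Ingresses(ing.Namespace).Create(ctx, ing, metav1.CreateOptions{})
	if err != nil {
		return 0, err
	}
	// Poll once per second until the Ingress status carries an address.
	err = wait.PollImmediateUntil(time.Second, func() (bool, error) {
		cur, err := c.NetworkingV1().Ingresses(created.Namespace).Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return len(cur.Status.LoadBalancer.Ingress) > 0, nil
	}, ctx.Done())
	return time.Since(start), err
}
```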
Special notes for your reviewer:
Ingress performance test cases
PR Readiness Checklist:
Complete these before marking the PR as ready to review:
Performance test cases are able to execute and collect data.