Update container/pod benchmarking procedures. #894
Conversation
Force-pushed from 59e8753 to 0153cf3 (compare).
This patch leverages the new benchmarking features added in cri-tools in [this PR](kubernetes-sigs/cri-tools#894) to add GitHub workflows for automatically running the benchmarks on Azure-based VMs for both Linux and Windows, as well as adding a Python script which generates plot graphs for the results. Signed-off-by: Nashwan Azhari <nazhari@cloudbasesolutions.com>
@feiskyer are you able to take a look here?
Thanks for adding those customizations for benchmarking.
/lgtm
/approve
@aznashwan could you fix the build failures reported on https://github.com/kubernetes-sigs/cri-tools/runs/5203016894?check_suite_focus=true?
@feiskyer thanks a lot for taking a look!
I had noticed that issue but was unable to replicate it outside of GitHub runners. Either way, I have created this gomega PR which should address the root cause of the issue, and will re-vendor it if/when it gets fixed.
Minor update: added a function to defer the check on the pod container timeout here.
@feiskyer I'm glad to say my
Force-pushed from e0b9a56 to d3df361 (compare).
This patch defines new types and mechanisms for managing benchmark results using a channel-based approach, as the previous gmeasure.Stopwatch-based approach did not provide a mechanism for associating operations which are part of a larger benchmarked lifecycle (e.g. container CRUD operations). Signed-off-by: Nashwan Azhari <nazhari@cloudbasesolutions.com>
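For illustration only, here is a minimal Go sketch of what a channel-based result sink along these lines could look like; the `Result` type, its field names, and the operation labels are hypothetical, not the actual cri-tools types:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Result ties a single timed operation to the larger lifecycle it belongs to,
// which a plain gmeasure.Stopwatch cannot express on its own.
type Result struct {
	Lifecycle string        // e.g. "container-42"
	Operation string        // e.g. "create", "start", "stop", "remove"
	Duration  time.Duration
}

func main() {
	results := make(chan Result)
	done := make(chan struct{})

	// Consumer: group incoming results by lifecycle so that related
	// operations (e.g. all CRUD steps of one container) stay associated.
	grouped := map[string][]Result{}
	go func() {
		for r := range results {
			grouped[r.Lifecycle] = append(grouped[r.Lifecycle], r)
		}
		close(done)
	}()

	// Producers: each benchmarked lifecycle reports its per-operation timings.
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			lifecycle := fmt.Sprintf("container-%d", id)
			for _, op := range []string{"create", "start", "stop", "remove"} {
				start := time.Now()
				time.Sleep(time.Millisecond) // stand-in for the real CRI call
				results <- Result{Lifecycle: lifecycle, Operation: op, Duration: time.Since(start)}
			}
		}(i)
	}
	wg.Wait()
	close(results)
	<-done

	for lc, ops := range grouped {
		fmt.Println(lc, len(ops), "operations recorded")
	}
}
```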
Bring the image-related benchmarks in line with the container and pod benchmarks by parametrizing the benchmark settings and switching to `gmeasure.Experiment` for running the benchmarks. Signed-off-by: Nashwan Azhari <nazhari@cloudbasesolutions.com>
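For reference, this is a minimal sketch of the general `gmeasure.Experiment`/`SamplingConfig` pattern inside a Ginkgo v2 spec; the spec layout, the `samples`/`parallel` parameter names, and the sleep standing in for the real CRI call are placeholders, not the actual image benchmark code:

```go
package benchmark_test

import (
	"testing"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"github.com/onsi/gomega/gmeasure"
)

func TestBenchmarks(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Benchmark Suite")
}

var _ = Describe("Image benchmarks", func() {
	// In the real suite these knobs would come from the external config.
	const (
		samples  = 10 // hypothetical "number of benchmarks" setting
		parallel = 2  // hypothetical parallelism setting
	)

	It("benchmarks image pulling", func() {
		experiment := gmeasure.NewExperiment("image pulling")
		AddReportEntry(experiment.Name, experiment)

		// Run the timed body `samples` times, `parallel` at a time.
		experiment.Sample(func(idx int) {
			experiment.MeasureDuration("pull", func() {
				time.Sleep(time.Millisecond) // placeholder for the actual image pull
			})
		}, gmeasure.SamplingConfig{N: samples, NumParallel: parallel})
	})
})
```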
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED.
This pull-request has been approved by: aznashwan, feiskyer, saschagrunert.
The full list of commands accepted by this bot can be found here. The pull request process is described here.
What type of PR is this?
/kind documentation
What this PR does / why we need it:
This PR updates the benchmarking procedures for containers and pods.
The number of container/pod benchmarks as well as how many can be run in parallel are now configurable via an external YAML file, and the results of the benchmarks are output as JSON files within a provided directory.
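As an illustration of that flow only, here is a small Go sketch that loads benchmark parameters from a YAML file and writes results as JSON into an output directory; the config file name, field names, and result schema below are hypothetical, not the actual cri-tools ones:

```go
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"

	"gopkg.in/yaml.v2"
)

// Config mirrors the kind of knobs described above; field names are illustrative.
type Config struct {
	ContainersNumber         int `yaml:"containersNumber"`
	ContainersNumberParallel int `yaml:"containersNumberParallel"`
	PodsNumber               int `yaml:"podsNumber"`
	PodsNumberParallel       int `yaml:"podsNumberParallel"`
}

// Result is a stand-in for one benchmark's measurements.
type Result struct {
	Name      string  `json:"name"`
	Durations []int64 `json:"durations_ns"`
}

func main() {
	// Load the benchmark parameters from an external YAML file.
	raw, err := os.ReadFile("benchmark-config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var cfg Config
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	log.Printf("running %d container benchmarks, %d in parallel",
		cfg.ContainersNumber, cfg.ContainersNumberParallel)

	// ... run the benchmarks here ...

	// Write each benchmark's results as a JSON file under the output directory.
	outDir := "results"
	if err := os.MkdirAll(outDir, 0o755); err != nil {
		log.Fatal(err)
	}
	res := Result{Name: "container_lifecycle", Durations: []int64{1200000, 980000}}
	data, err := json.MarshalIndent(res, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(outDir, res.Name+".json"), data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```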
The main purpose behind parametrizing the benchmarks is to allow for analysis of larger sample sizes on the same host to spot any performance degradations within the runtime.
These changes are also intended to be integrated within GitHub workflows on the main containerd repo to monitor performance across supported OSes/runtimes for any new RC/full release.
Which issue(s) this PR fixes:
None.
Special notes for your reviewer:
Does this PR introduce a user-facing change?