perf: Add full program benchmark for kustomize build #5425
base: master
Conversation
Hi @chlunde. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
One important aspect that I'd like to point out here is that the benchmarks should cover the following cases:

- a single invocation of kustomize processing one large resource tree, where rendering cost dominates;
- many invocations against small kustomizations, as argocd/fluxcd do in practice, where per-process startup overhead dominates.
These cases are very different in terms of the performance impact that specific parts of the code create, yet each of them is quite valid as far as practical usage is concerned.
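For illustration, the two cases could map onto Go sub-benchmarks along these lines (a sketch only; `buildLargeTree` and the paths are hypothetical, not from this PR):

```go
package build_test

import (
	"os/exec"
	"testing"
)

func BenchmarkBuildScenarios(b *testing.B) {
	// Case 1: a single in-process build of one large resource tree;
	// rendering cost dominates and startup cost is amortized away.
	b.Run("big-tree", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			buildLargeTree(b) // hypothetical in-process helper
		}
	})
	// Case 2: repeated invocations of the real binary on a small
	// kustomization, the way argocd/fluxcd use kustomize; here the
	// per-process startup overhead dominates.
	b.Run("many-small-trees", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			cmd := exec.Command("kustomize", "build", "testdata/small") // paths assumed
			if err := cmd.Run(); err != nil {
				b.Fatal(err)
			}
		}
	})
}
```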
/ok-to-test
In hindsight, it makes no sense to merge this with the current performance. I was a bit surprised at how slow the baseline was, as I had quite a few performance patches applied locally at the time I wrote the benchmark. @shapirus yeah, we should have some benchmarks emulating argo/flux too. Do you think this affects how this PR is structured?
I agree that it would be beneficial to limit the duration and scope of these performance tests. I really like the suggestion from @shapirus. Ideally, we can run meaningful benchmark tests as a PR check. This would help us understand if a specific commit significantly changed the performance.
@chlunde As far as I can see, it is basically a separate Go program that imports kustomize as a library and then runs tests in its own context. This will only allow benchmarking the "single invocation, big resource tree" scenario, as it makes the startup overhead take place only once (see #5422 (comment)). To emulate argocd/fluxcd we need to, well, do what they do: invoke an actual kustomize binary (which, in the CI flow, should already be built by the time the benchmarks run, if I understand correctly) a certain number of times and measure the total run time. My set of benchmarks at https://github.com/shapirus/kustomize-benchmark-suite (the Dockerfile serves as a readme there) does exactly that. Probably some of the core code could be reused for the kustomize CI. Yes, it's somewhat crude, but also small enough to be understood and reused. Unfortunately I lack the knowledge required to convert them into a proper CI workflow step for kustomize, but it should be easy, if not trivial, for those who don't.
There is really no need to do this, I think. Can't tell about fluxcd, but argocd simply runs a standalone kustomize executable binary to build the resulting manifest for each Application that it has been requested for. This is easily simulated with a simple shell script. Let's try to avoid overengineering :).
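In the same spirit as the shell-script approach, a minimal standalone Go sketch of such a driver (the binary path, target directory, and run count are all assumptions):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const runs = 100 // arbitrary; pick whatever approximates an argocd sync cycle
	start := time.Now()
	for i := 0; i < runs; i++ {
		// Invoke the already-built binary so that every iteration pays the
		// full process startup cost, just like an argocd repo-server does.
		out, err := exec.Command("./kustomize", "build", "./testdata/app").CombinedOutput()
		if err != nil {
			log.Fatalf("run %d: %v\n%s", i, err, out)
		}
	}
	fmt.Printf("%d invocations in %v\n", runs, time.Since(start))
}
```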
In the ideal world, yes, but in practice it doesn't make any discernible difference, unless the underlying block device is very slow and/or high-latency. On the other hand, it adds extra complications. To reduce the effects of the disk access overhead sufficiently for our specific practical use case, we can simply run the benchmark several times and discard the first, cold-cache run.

My bigger concern is not the file system, but the shared CPU on the machine where the benchmark runs. If there are other CPU-intensive jobs running on the same machine, they can affect the benchmark execution times quite significantly. There are possible workarounds, however (one is sketched below).
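One cheap mitigation, sketched here (not part of this PR), is to repeat each measurement and keep the minimum, which is the estimate least distorted by competing jobs:

```go
package bench

import "time"

// minDuration runs fn the given number of times and returns the fastest
// observed wall-clock time. On a shared machine, the minimum is far less
// sensitive to CPU contention from neighboring jobs than the mean.
func minDuration(runs int, fn func()) time.Duration {
	var best time.Duration
	for i := 0; i < runs; i++ {
		start := time.Now()
		fn()
		if d := time.Since(start); i == 0 || d < best {
			best = d
		}
	}
	return best
}
```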
In poking around at this benchmark, I did notice some behavior that surprised me. When I adjusted the value of genconfig[1].resources, the jump in run time between 1 and 2 stood out dramatically. @chlunde (or others), do you see the same behavior? Admittedly, I haven't taken the time to understand the test case well.
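To isolate that jump, a table-driven benchmark over the resource count might help (a sketch; the `defaultGenConfigs` and `runBuild` helpers are hypothetical, and the field wiring is assumed from the comment above):

```go
package build_test

import (
	"fmt"
	"testing"
)

func BenchmarkBuildByResourceCount(b *testing.B) {
	for _, n := range []int{1, 2, 4, 8} {
		b.Run(fmt.Sprintf("resources=%d", n), func(b *testing.B) {
			cfg := defaultGenConfigs() // hypothetical helper
			cfg[1].resources = n       // the field varied in the comment above
			for i := 0; i < b.N; i++ {
				runBuild(b, cfg) // hypothetical helper
			}
		})
	}
}
```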
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
This change introduces a benchmarking test that constructs a complete kustomization tree using various features of Kustomize. This update aims to address several objectives:

* Demonstrating current performance challenges in Kustomize in a reproducible manner.
* Evaluating the effects of performance enhancements.
* Guarding against potential performance setbacks and inadvertent quadratic behavior in the future.
* Considering the possibility of incorporating profile-guided optimization (PGO) in future iterations.

Usage:

```
$ make run-benchmarks
go test -run=x -bench=BenchmarkBuild ./kustomize/commands/build
pkg: sigs.k8s.io/kustomize/kustomize/v5/commands/build
BenchmarkBuild-8    1    48035946042 ns/op
PASS
ok      sigs.k8s.io/kustomize/kustomize/v5/commands/build    48.357s
```

*Currently*, this benchmark requires 48 seconds to run on my machine.

Updates kubernetes-sigs#5084

Notes on PGO: Real-life profiles would be better, but creating one based on a benchmark should not hurt: https://go.dev/doc/pgo#collecting-profiles

> Will PGO with an unrepresentative profile make my program slower than no PGO?
> It should not. While a profile that is not representative of production behavior will result in optimizations in cold parts of the application, it should not make hot parts of the application slower. If you encounter a program where PGO results in worse performance than disabling PGO, please file an issue at https://go.dev/issue/new.

Collecting a profile:

```
go test -cpuprofile cpu1.pprof -run=^$ -bench ^BenchmarkBuild$ sigs.k8s.io/kustomize/kustomize/v5/commands/build
go build -pgo=./cpu1.pprof -o kust-pgo ./kustomize
go build -o kust-nopgo ./kustomize
```

Compare PGO and non-PGO builds:

```
./kust-pgo build -o /dev/null testdata/     21.88s user 2.00s system 176% cpu 13.505 total
./kust-nopgo build -o /dev/null testdata/   22.76s user 1.98s system 174% cpu 14.170 total
```
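For readers unfamiliar with the shape of such a benchmark, a minimal sketch using the public kustomize API (the tree-generator helper and the `/app` path are hypothetical; the actual PR builds a much richer tree):

```go
package build_test

import (
	"testing"

	"sigs.k8s.io/kustomize/api/krusty"
	"sigs.k8s.io/kustomize/kyaml/filesys"
)

func BenchmarkBuild(b *testing.B) {
	// Generate the kustomization tree once, into an in-memory filesystem,
	// so disk access does not dominate the measurement.
	fSys := filesys.MakeFsInMemory()
	writeBenchmarkTree(b, fSys) // hypothetical: writes the generated tree

	k := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := k.Run(fSys, "/app"); err != nil {
			b.Fatal(err)
		}
	}
}
```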
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: chlunde The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
I've added a few more code comments as requested, rebased on master and updated the code to use labels instead of commonLabels. I've also reduced the dataset here to run in less than one minute on my machine (48s).
```go
fn := fmt.Sprintf("res%d", res)
fmt.Fprintf(&buf, " - %v\n", fn)
```
This is meant to be a list of resources to be rendered in the kustomization, is that correct? Could this be missing a `.yaml` extension? I'm also curious if you need the `%v` formatter here -- since these are all strings, I imagine `%s` is enough.
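If both suggestions apply, the snippet would become something like (untested):

```go
fn := fmt.Sprintf("res%d.yaml", res)
fmt.Fprintf(&buf, " - %s\n", fn)
```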
```go
// as an index into the given configs slice.
//
// The function is recursive and will call itself for config as long as resources > 0.
func makeKustomization(configs []GenConfig, fSys filesys.FileSystem, path, id string, depth int) error {
```
A couple of questions on this function:

- Is this being built with string concatenation to avoid the overhead of YAML serialization?
- Is there a reason why passing the `*testing.B` parameter here and marking it as a helper with `b.Helper()` would be undesirable?
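For reference, the suggestion would look roughly like this (a sketch; reporting failures via `b.Fatal` instead of a returned error is one possible design, not necessarily what the PR should do):

```go
func makeKustomization(b *testing.B, configs []GenConfig, fSys filesys.FileSystem, path, id string, depth int) {
	b.Helper() // attributes failures to the calling benchmark line
	// ... unchanged body, with `return err` replaced by b.Fatal(err) ...
}
```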
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Hi again, @chlunde! Are you still interested in contributing these changes? I had a few open questions in my previous review that are still awaiting an answer. /remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
This change introduces a benchmarking test that constructs a complete kustomization tree using various features of Kustomize.
This update aims to address several objectives:

* Demonstrating current performance challenges in Kustomize in a reproducible manner.
* Evaluating the effects of performance enhancements.
* Guarding against potential performance setbacks and inadvertent quadratic behavior in the future.
* Considering the possibility of incorporating profile-guided optimization (PGO) in future iterations.

Usage:

```
$ make run-benchmarks
go test -run=x -bench=BenchmarkBuild ./kustomize/commands/build
```
Currently, this benchmark requires 3000 seconds to run on my machine. In order to run it on master today, you need to add `-timeout=30m` to the `go test` command. The dataset size was chosen because I believe it represents a real workload for which we could get a runtime of less than 10 seconds.
Updates #5084
Notes on PGO:
Real-life profiles would be better, but creating one based on a benchmark should not hurt:
https://go.dev/doc/pgo#collecting-profiles
Collecting a profile:

```
go test -cpuprofile cpu1.pprof -run=^$ -bench ^BenchmarkBuild$ sigs.k8s.io/kustomize/kustomize/v5/commands/build
go build -pgo=./cpu1.pprof -o kust-pgo ./kustomize
go build -o kust-nopgo ./kustomize
```
Compare PGO and non-PGO builds:

```
./kust-pgo build -o /dev/null testdata/
./kust-nopgo build -o /dev/null testdata/
```