Performance degradation when using a large ConfigMap with many ImageTagTransformers #2869
Comments
Wow, thanks. If you have the time and inclination, some benchmark tests in the high-level krusty package (a new file dedicated to the above) would be appreciated. I've not measured this, but my assumption is that - because we are in mid-refactor from apimachinery to kyaml - we're spending too much time in https://github.com/kubernetes-sigs/kustomize/blob/master/kyaml/filtersutil/filtersutil.go#L21. This function converts old data types to RNodes and back again, and it's called constantly. Once we're completely switched to kyaml, which remains a goal for Kubernetes 1.20, calls to this function can be dropped, as RNode will be the underlying data structure. More background in #2886 and the DepProvider mentioned therein.
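To make that round trip concrete, here is a minimal Go sketch of the conversion pattern described above, written as a self-contained benchmark. The package name, benchmark name, and object contents are illustrative assumptions, not the project's actual test code:

```go
package krusty_test // hypothetical placement; a real benchmark would live in the krusty package

import (
	"encoding/json"
	"testing"

	"sigs.k8s.io/kustomize/kyaml/yaml"
)

// BenchmarkRNodeRoundTrip mimics the hot path described above: a typed
// object is marshalled to JSON, re-parsed into a kyaml RNode, serialized
// back to JSON, and unmarshalled into the typed object again; roughly the
// work performed around every filter invocation.
func BenchmarkRNodeRoundTrip(b *testing.B) {
	obj := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata":   map[string]interface{}{"name": "big-map"},
		"data":       map[string]interface{}{"key0": "value0"},
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		data, err := json.Marshal(obj) // old data type -> JSON
		if err != nil {
			b.Fatal(err)
		}
		rn, err := yaml.Parse(string(data)) // JSON -> RNode (JSON is valid YAML)
		if err != nil {
			b.Fatal(err)
		}
		// A real transformer would mutate rn here; this sketch only round-trips.
		out, err := rn.MarshalJSON() // RNode -> JSON
		if err != nil {
			b.Fatal(err)
		}
		if err := json.Unmarshal(out, &obj); err != nil { // JSON -> old data type
			b.Fatal(err)
		}
	}
}
```

Run with `go test -bench=RNodeRoundTrip`. The per-call cost grows with object size, so if each of many transformers triggers this conversion on one large ConfigMap, the slowdown reported in this issue would follow.
|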
Sorry to say, I haven't had time to come back around here and provide clean tests. I did want to note that my daily runs showed a slowdown in the test case for #2808 at some point between 76a8f03 and fc06283. Given the ongoing refactor effort, I didn't think it made sense to open a new issue. The Oct 26 7:20am Eastern time run showed no meaningful variance in the execution time; at that point, master was 76a8f03. The Oct 27 7:20am Eastern time run tripped on the execution time change; at that point, master was fc06283:
Later in the log:
This is from a run I did today, comparing v3.8.5 to v3.8.6:
|
As an update, the current master branch, 60c8f4c, is significantly slower than v3.8.8 (which, in turn, is significantly slower than v3.7.0). Admittedly, my master branch build may not be entirely trustworthy - I'm not certain I have my pin/unpin process set up correctly.
|
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale This is still an issue (though it's not as bad as it has been). v3.7.0 runs in 0.7 seconds; v4.0.5 runs in 4.0 seconds. So v4.0.5 is an improvement over v3.8.8, but it's still far behind v3.7.0.
|
/remove-lifecycle stale
…On Mon, Mar 15, 2021 at 9:55 AM fejta-bot wrote:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually
close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community
<https://github.com/kubernetes/community>.
/lifecycle stale
|
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale There's been improvement over time, but the execution time is still meaningfully slower than v3.7.0. For example, v4.2.0's run time is more than 2.5 times v3.7.0's. If that performance difference is acceptable to the devs, I'm comfortable with this ticket being closed - but I'd rather that closure be an explicit decision by a person, not an automated closure from a bot. Thanks.
|
Agreed. We are forced to upgrade because of deprecated Kubernetes APIs, and performance is an issue for us. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close |
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Starting with ef924a5, the conversion of the image transformer to kyaml, we started seeing significantly slower performance in our large kustomization runs.
I have created a test case that demonstrates the performance problem. Given a count, the test case generates a file housing a single ConfigMap with count entries, a file housing count image tag transformers, and a kustomization.yaml that includes the two files. The default count is 999.
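For orientation, the generated files take roughly this shape (a sketch; the resource names, keys, and tag values are illustrative, not the exact generated content):

```yaml
# transformers file: one ImageTagTransformer per ConfigMap entry
apiVersion: builtin
kind: ImageTagTransformer
metadata:
  name: image-tag-transformer-0
imageTag:
  name: image0
  newTag: v1.0.0
---
# ...repeated count times
```

with a kustomization.yaml wiring the two files together:

```yaml
resources:
- configmap.yaml
transformers:
- transformers.yaml
```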
Using files generated with a count of 999, ef924a5 (the commit with kyaml) runs roughly six times slower than b7f7536 (the commit prior to kyaml). The output from the two commits is identical, so the difference is solely in run time.
FYI @monopole
Test setup
Create make-test.sh with the following content:
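The script body itself was not captured in this thread; the following is a hypothetical reconstruction consistent with the description above (the file names and the test-dir output directory are assumptions):

```bash
#!/bin/bash
# Generate a ConfigMap with COUNT entries, COUNT ImageTagTransformers,
# and a kustomization.yaml that ties them together.
COUNT="${1:-999}"

mkdir -p test-dir
cd test-dir || exit 1

# One ConfigMap with COUNT data entries.
{
  echo "apiVersion: v1"
  echo "kind: ConfigMap"
  echo "metadata:"
  echo "  name: big-map"
  echo "data:"
  for ((i = 0; i < COUNT; i++)); do
    echo "  key${i}: value${i}"
  done
} > configmap.yaml

# COUNT ImageTagTransformers in a single multi-document file.
: > transformers.yaml
for ((i = 0; i < COUNT; i++)); do
  cat >> transformers.yaml <<EOF
---
apiVersion: builtin
kind: ImageTagTransformer
metadata:
  name: image-tag-transformer-${i}
imageTag:
  name: image${i}
  newTag: v1.0.0
EOF
done

# The kustomization that includes both files.
cat > kustomization.yaml <<EOF
resources:
- configmap.yaml
transformers:
- transformers.yaml
EOF
```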
Then generate the files:
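Presumably along these lines (the exact invocation was not captured; the binary names are placeholders for builds of the two commits):

```bash
./make-test.sh                           # defaults to a count of 999
time ./kustomize-b7f7536 build test-dir  # commit prior to kyaml
time ./kustomize-ef924a5 build test-dir  # commit with kyaml
```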
Expected behavior
Here is the run time for b7f7536, the commit prior to kyaml:
Actual behavior
Here is the run time for ef924a5, the commit with kyaml: