Use machine type n1-standard-2 to avoid OOM killing #17743
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: bart0sh
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What does that look like in the logs? I'm looking at https://testgrid.k8s.io/sig-node-kubelet#node-kubelet-benchmark which uses this config.
Here is what I could find in the latest logs:
This is an extreme case, as the test process itself was killed. Sometimes it's less obvious: the OOM killer kills runc and even seemingly unrelated processes.
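For reference, these kernel OOM kills can be confirmed in the kernel log on the test node; a minimal sketch, assuming standard dmesg/journalctl are available on the instance:

```sh
# Scan the kernel log for OOM events and the processes that were killed.
dmesg | grep -iE 'out of memory|oom-kill|killed process'

# Or, on a systemd-based image such as COS, via journald:
journalctl -k | grep -iE 'oom|killed process'
```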
/hold
/cc @karan
For example, I might be ok with this as a temporary / unblocking fix if there is a commitment to get back under the threshold. But I don't think we should just bump resources and never look back.
/cc @bsdnet
@spiffxp: GitHub didn't allow me to request PR reviews from the following users: bsdnet. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
+1 to what @spiffxp said. What jobs are scheduled on the node? What is their resource consumption? Can we instead tune them rather than double the machine size itself?
For this issue, we need to explore more: why was 105 picked, and has system memory usage (systemd, containerd, runc) increased, or is there a memory leak somewhere? If there are steps to run this test specifically, I can help debug in the background.
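For anyone who wants to reproduce this, a rough sketch of running only that benchmark through the node e2e framework; the make variables follow the node e2e docs and may differ, and the GCE project/zone are taken from your gcloud config:

```sh
# From the kubernetes/kubernetes repo root: run the 105-pod density benchmark
# remotely on a GCE instance created from the given COS image.
make test-e2e-node REMOTE=true \
  IMAGES="cos-81-12871-119-0" IMAGE_PROJECT="cos-cloud" \
  FOCUS="create 105 pods with 0s? interval \[Benchmark\]"
```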
Sure, I'll try to investigate this further. I just want to point out that the n1-standard-2 machine type is not something new here; it started to be used in this config 2.5 years ago.
Does anybody know what the reason for that was?
I do not know. But when I read the code, I came across the following comment:
I'm wondering if these need to run at all anymore. Tracing through history, the original proto-KEP / pre-KEP is kubernetes/enhancements#83. The tests are in this file: https://github.com/kubernetes/kubernetes/blame/master/test/e2e_node/density_test.go#L118-L156. It looks like these results fill in http://node-perf-dash.k8s.io/#/builds. It also looks like maxpods was last updated to 110 in https://github.com/kubernetes/kubernetes/pull/21361/files a long time ago.
@MHBauer This is good info. Unfortunately, when I asked around it was hard to find out why those numbers are there today. OOM kills are normal when the system is under memory pressure; my concern is whether runc should be the one being picked.
@bsdnet I've investigated it a bit further. One test (--focus="create 105 pods with 0s? interval [Benchmark]") runs more or less OK on cos-69-10895-385-0 and fails on cos-81-12871-119-0. I was running this test on n1-standard instances with cos-69 and cos-81 and watching the free memory. On cos-69 the minimum of free memory was 938Mi:
On cos-81 it was 112Mi, and after that the instance hung, so I couldn't type anything.
After some time the instance was available again, and it turned out that the kernel OOM killer had killed the cadvisor and e2e_node.test processes:
I used the master branch for this test. The issue is reproducible almost 100% of the time. Any suggestions on how to continue? I can find out the minimum number of pods that triggers this issue on cos-81 if that helps.
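A minimal way to track the free-memory low-water mark during such a run; this is a sketch assuming a recent procps free on the node:

```sh
# Print free/available memory every 5 seconds while the benchmark runs;
# the minimum can be read off the log afterwards.
while sleep 5; do
  echo "$(date +%T) $(free -m | awk '/^Mem:/ {print $4 " MiB free, " $7 " MiB available"}')"
done | tee /tmp/mem.log
```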
Lists of the most memory-consuming processes from both instances: cos-81:
cos-69:
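A list like that can be generated with something along these lines (a sketch; the columns are PID, RSS in KiB, and command name):

```sh
# Top 15 processes by resident set size.
ps -eo pid,rss,comm --sort=-rss | head -n 15
```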
I've tested this with different COS images. It looks like the test starts failing on cos-dev-73-11636-0-0. Here is the list of images I've tested:
Release notes for cos-dev-73-11636-0-0 (taken from Container-Optimized OS - Release Notes):
I don't know if it's the root cause, but the containerd shim has gotten a little bit fatter over time, maybe just enough to throw it over the edge. I think we need to take a step back and look at the contents and users of this file a bit more deeply. I see duplication now that the image references are all updated to the most recent ones. I also think we could probably modify the caller to reduce the duplication. I'm not sure whether whoever relies on these outputs is paying attention. @lorqor
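One way to check the "shim got fatter" theory would be to compare the aggregate shim memory between images; a sketch, assuming the shim processes are named containerd-shim:

```sh
# Sum the RSS of all containerd-shim processes (ps reports RSS in KiB).
ps -eo rss,comm | awk '/containerd-shim/ {sum += $1; n++} END {printf "%d shims, %.1f MiB total RSS\n", n, sum / 1024}'
```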
Thanks @bart0sh
What can we do about it? In my opinion we still have two short-term choices: switch these jobs to a bigger machine type (n1-standard-2), or decrease the amount of pods created in the test.
Any other ideas?
I think, for now, we need to "decrease the amount of pods created in the test".
It didn't work with 100 pods. I thought that a 10% lower number would give us enough memory and a safety buffer. I can check whether it works with 95 if that matters.
@bsdnet I've tried increasing the number of pods to 95. It triggered the OOM killer, which killed cadvisor and e2e_node.test. Here is a picture of memory consumption around the peak, just before the OOM killer starts its job. Note that 98.7% of memory had been consumed.
I agree with you regarding containerd being a culprit here. With 90 pods the peak memory consumption is around 95%. That makes it possible to avoid triggering the OOM killer, but it's still quite high in my opinion.
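For comparing runs, the headline used-memory percentage can be computed the same way each time; a sketch, counting "available" memory as headroom and again assuming a recent procps free:

```sh
# Percent of memory in use, treating MemAvailable as reclaimable headroom.
free | awk '/^Mem:/ {printf "%.1f%% used\n", ($2 - $7) / $2 * 100}'
```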
Thanks @bart0sh for the detailed info.
@vpickard Closing as suggested. I'll submit another PR to change the job yaml.
Reopening, as decreasing the number of pods to 90 is not an option because 100 is the official maximum. Note that machine types can be changed after #17853 is fixed. However, we shouldn't wait for that; we need to fix the broken tests.
Jobs that create 105 pods on COS are regularly triggering the kernel OOM killer, which causes job failures. Used the n1-standard-2 instance type with 7.5 GB RAM to give the test processes more memory.
Closing, as kubernetes/kubernetes#91813 has been merged. Since we decreased the number of pods, there is no need to use n1-standard-2 instances.