
Pods per node? Any recommendation? #6287

Closed
roldancer opened this issue Dec 13, 2015 · 12 comments

Comments

@roldancer

Hi all, I would like to know if there is any recommendation about the limit of pods per node. I was reading some performance reports on the Kubernetes site, and all the tests were using 30 pods per node. I just want to know if someone has experience running nodes with 100 pods or even more.

Many thanks for any advice.

@danmcp

danmcp commented Dec 14, 2015

@jeremyeder Thoughts?

@jeremyeder
Contributor

Depends on what the pods are doing. If one pod can saturate your physical resources, then all you can run is one pod. If the pods are just sitting there running 'sleep', then you can obviously go a lot higher. We've got tests running with active storage and network I/O in the 100-200 pods-per-node range, with good results. Kube and OpenShift both default to a maximum of 40 pods because some of the communication between Kube and Docker is still undergoing optimization.

Back to my original point, though: any pod limit assumes your hardware can actually support the work being done by those pods while still remaining within your business/SLA rules.
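
As a rough way to see where a given node stands against such a limit, one can compare the node's pod capacity with what is currently scheduled on it. A minimal sketch, assuming a recent oc client and a hypothetical node name node1:

# show the node's pod capacity (e.g. "pods: 40" under Capacity/Allocatable)
oc describe node node1 | grep -i pods
# rough count of pods currently placed on node1
oc get pods --all-namespaces -o wide | grep -c node1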

@roldancer
Author

Hi @jeremyeder, I'm asking because we are experiencing deadlocks in the Docker daemon on nodes whose pod limit is 100, so we think there are some performance limitations that are not well documented; we see a lot of REST requests to Docker from the Kubernetes node.
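
One hedged way to observe that REST traffic: if the Docker daemon is started with debug logging (-D), it logs each API call it serves, so its journal can be sampled for request volume. A sketch, assuming a systemd-managed Docker on RHEL 7; the "Calling GET" pattern is an assumption about the debug log format:

# count Docker API GET calls seen in the last minute
journalctl -u docker --since "1 min ago" | grep -c "Calling GET"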

@jeremyeder
Contributor

@roldancer ah, ok. Do you have any more information about that? Yes, the docker/kube REST traffic is what I was referring to, though we really haven't seen the issue at 100 pods. What kind of hardware do you have, what version of Origin are you running, and how many nodes?

@jeremyeder
Contributor

@roldancer what exact kernel version are you using?

@roldancer
Author

Hi @jeremyeder, here is the information about our node; by the way, we are using OSE 3.1.

$ more /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)

$ uname -a
Linux XXXXXXXXXX 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

The node has 16 CPUs and 128 GB of memory.

$ more /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel Xeon E312xx (Sandy Bridge)
stepping : 1
microcode : 0x1
cpu MHz : 2593.992
cache size : 4096 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bogomips : 5187.98
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

@jeremyeder
Contributor

The deadlock issue is currently being worked on in https://bugzilla.redhat.com/show_bug.cgi?id=1292481

If you'd like, you can subscribe to the bugzilla, but I'll make sure to post a note here as well once we have verified a fix; we're currently testing it. We should know more next week; until then, please use the latest RHEL 7.1.z kernel.
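
A sketch of moving a node to the latest errata kernel on RHEL 7, assuming yum access to the update channels:

# check the currently booted kernel, then install the latest errata kernel and boot into it
uname -r
yum update kernel
reboot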

@lestrade84

Hi @jeremyeder. Right now @roldancer can't access the BZ you posted. It would be much appreciated if you could update this issue as soon as there is any news on BZ 1292481.

@thincal

thincal commented Jan 27, 2016

@jeremyeder what's the way to configure the max pods per node? Thanks.

Update:
I just googled and found something on Stack Overflow:

kubeletArguments:
  max-pods:
  - "100"

but I failed to find any official documentation about it. Is that the right way anyway?

@jeremyeder
Contributor

That's the right way to do it, yep. If you're using openshift-ansible, we also support setting kubeletArguments during the install phase.

openshift_node_kubelet_args={'max-pods': ['100']}
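
For an already-installed node, a minimal sketch of applying the same setting by hand, assuming the default OSE 3.x config path and node service name:

# add the kubeletArguments stanza shown above to /etc/origin/node/node-config.yaml,
# then restart the node service so the kubelet picks up the new limit
systemctl restart atomic-openshift-node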

@jeremyeder
Contributor

This fix is in kernel 327.10 or higher.
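
To check whether a node is already on a fixed kernel, a quick sketch (the example version string is illustrative, not an exact errata release):

# the booted kernel should be 3.10.0-327.10 or newer
uname -r    # e.g. 3.10.0-327.10.1.el7.x86_64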

@smarterclayton
Contributor

Closing due to age.
