docs(user/rootless.md) Mention possible pids_limit issues with rootless podman #3687
Conversation
Welcome @netguino!
Hi @netguino. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
thanks!
```ini
[containers]
pids_limit = 0
```
nit: should have a blank line before the next heading
@@ -52,6 +52,12 @@ Also, depending on the host configuration, the following steps might be needed:
iptable_nat
```

- If using podman, be aware that by default there is a [limit](https://docs.podman.io/en/v4.3/markdown/options/pids-limit.html#pids-limit-limit) to the number of pids that can be created. This can cause problems like nginx workers inside a container not spawning correctly.
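The bullet above refers to podman's global default. Since the section is about rootless podman, the same `pids_limit` key can also be raised per user rather than system-wide; a minimal sketch, assuming the standard per-user config path `~/.config/containers/containers.conf` and an arbitrary example value:

```ini
# ~/.config/containers/containers.conf -- per-user override for rootless podman (assumed path)
[containers]
# Raise the pids limit for containers started by this user.
# A value of 0 would remove the limit entirely, at the risk of pid exhaustion on the host.
pids_limit = 4096
```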
We should probably mention the potential downsides of disabling the limit as well? Users could also set a higher limit instead of no limit.
What about something like...

"If you want to raise this limit, edit your `containers.conf` file (generally located in `/etc/containers/`). Note that setting this to `0` will disable the limit, potentially allowing things like pid exhaustion to happen on the host machine."

```ini
[containers]
pids_limit=10000
```

I'm not 100% sure about doing it this way because most people will likely copy the arbitrary number I chose for the block.
Alternatively, I can just go with the current way, but rewrite it like this:

"If you want to disable this limit, edit your `containers.conf` file (generally located in `/etc/containers/`). Note that this could cause things like pid exhaustion to happen on the host machine. If you prefer, change `0` to your desired maximum number for the new limit."

```ini
[containers]
pids_limit=0
```
It seems reasonable to me to show an example using 0, along with that warning about pid exhaustion being a possibility.
But I could also see the inverse: having `10000` as the example at least does set some sort of limit. Then there could just be a note stating it is possible to have no limit with 0.
Either approach seems fine to me. Your latest update looks good.
I agree.
I think we just need to at least mention the risk of exhaustion and the possibility of setting a specific higher limit instead, with the alternate downside that it may not be high enough and/or may still allow exhaustion.
I think your sample with 0 is sufficient otherwise. Thank you!
When running rootless podman, there can be issues with processes not being able to create new pids. This is caused by podman's default limit being too low for scenarios like running nginx and spawning workers. This simply adds a notice to the rootless section and suggests a way to disable said limit if desired.
330daa5 to 09680d2
/lgtm
/ok-to-test
/lgtm
/approve
thanks!!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BenTheElder, netguino

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
When running rootless podman, there can be issues with processes not being able to create new pids. This is caused by podman's default limit being too low for scenarios like running nginx and spawning workers.
Following up on this discussion: #3451
This simply adds a notice to the rootless section and suggests a way to disable said limit if desired.
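Based on the review discussion, a middle-ground variant of the documented snippet (a sketch only, not necessarily the exact text merged in this PR) keeps an explicit ceiling rather than removing the limit:

```ini
# /etc/containers/containers.conf
[containers]
# Explicit higher ceiling instead of 0: enough headroom for e.g. nginx worker
# spawning, while still guarding the host against pid exhaustion.
pids_limit = 10000
```

The snippet shown in the diff above uses `pids_limit = 0` instead, which removes the limit entirely at the cost of that protection.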