
nixos/sysctl: dynamic inotify watches or higher defaults #126777

Closed

Conversation

blaggacao (Contributor)

Nowadays most applications require a good number of inotify watches,
so raise our default to what other distros do. If the kernel supports
it, enable the dynamic setting.
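
For context, this is the kind of override many users currently add by hand, and which this PR turns into a distribution default (a minimal sketch; the literal values are illustrative, not necessarily the ones the PR picks):

# Sketch of a manual user-side override in configuration.nix.
# 524288 is a value commonly used on other distros; 1024 is illustrative.
boot.kernel.sysctl = {
  "fs.inotify.max_user_watches" = 524288;
  "fs.inotify.max_user_instances" = 1024;
};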

@github-actions github-actions bot added 6.topic: nixos Issues or PRs affecting NixOS modules, or package usability issues specific to NixOS 8.has: module (update) This PR changes an existing module in `nixos/` labels Jun 13, 2021
@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch 3 times, most recently from f40a877 to cbe2715 Compare June 13, 2021 18:58
nixos/modules/config/sysctl.nix — review thread (resolved)
nixos/modules/config/sysctl.nix — review thread (outdated, resolved)
@ofborg ofborg bot added 10.rebuild-darwin: 0 This PR does not cause any packages to rebuild on Darwin 10.rebuild-linux: 1-10 labels Jun 13, 2021
@nh2 (Contributor) left a comment

Great find, change looks sensible to me.

@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch 2 times, most recently from c366d5e to 654173b Compare June 14, 2021 21:29
@blaggacao blaggacao requested review from alyssais and jonringer June 14, 2021 21:41
@blaggacao (Contributor, Author)

ping: this is supposed to be ready 🤷

nixos/modules/config/sysctl.nix — review thread (outdated, resolved)
@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch from 654173b to de892e5 Compare June 15, 2021 18:57
@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch 4 times, most recently from c2dcc7b to 8c967c1 Compare June 16, 2021 16:21
@blaggacao blaggacao requested a review from alyssais June 16, 2021 17:33
@blaggacao (Contributor, Author)

@GrahamcOfBorg test bittorrent
@GrahamcOfBorg test boot.biosCdrom
@GrahamcOfBorg test boot.biosNetboot
@GrahamcOfBorg test boot.biosUsb
@GrahamcOfBorg test boot.uefiCdrom
@GrahamcOfBorg test boot.uefiNetboot
@GrahamcOfBorg test boot.uefiUsb
@GrahamcOfBorg test installer.bcache
@GrahamcOfBorg test installer.btrfsSimple
@GrahamcOfBorg test installer.btrfsSubvolDefault
@GrahamcOfBorg test installer.btrfsSubvols
@GrahamcOfBorg test installer.encryptedFSWithKeyfile
@GrahamcOfBorg test installer.grub1
@GrahamcOfBorg test installer.luksroot
@GrahamcOfBorg test installer.lvm
@GrahamcOfBorg test installer.separateBoot
@GrahamcOfBorg test installer.simple
@GrahamcOfBorg test installer.swraid
@GrahamcOfBorg test installer.fsroot

@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch from 8c967c1 to 0e8af3e Compare June 20, 2021 20:02
@blaggacao (Contributor, Author)

@alyssais Can we merge this? All feedback seems to have been duly attended to.

@fzakaria (Contributor)

This change LGTM.

@fzakaria (Contributor) left a review comment

I think it's great.
A small nit would be making it a let variable, or a reusable variable for INT_MAX (probably reusable elsewhere).

# https://github.com/torvalds/linux/commit/ac7b79fd190b02e7151bc7d2b9da692f537657f3
boot.kernel.sysctl."fs.inotify.max_user_instances" =
  if (config.boot.kernelPackages.kernel.kernelAtLeast "5.12")
  then mkDefault 2147483647 # INT_MAX, dynamically limited based on available memory
Inline review comment (Contributor):

Can we make boot.kernel.INT_MAX a variable?

Inline reply (Contributor, Author):

I guess it would belong more in stdenv, though, or lib, or even a builtin. I can make a quick follow-up PR once this is merged.
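
A sketch of what such a follow-up refactor could look like, mirroring the hunk quoted above; the binding name intMax is hypothetical, and the else value is an assumption (128 is the usual pre-5.12 kernel default, which the quoted hunk elides):

let
  # hypothetical module-local constant, so the magic number is named and reusable
  intMax = 2147483647; # 2^31 - 1, C INT_MAX
in {
  boot.kernel.sysctl."fs.inotify.max_user_instances" =
    if (config.boot.kernelPackages.kernel.kernelAtLeast "5.12")
    then mkDefault intMax # dynamically limited based on available memory (memcg)
    else mkDefault 128;   # assumed kernel default for older kernels
}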

@maralorn (Member)

maralorn commented Jun 23, 2021

I don't understand the max_user_instances default.

I guess it makes sense for max_user_watches: the commit explains that the default value changes with 5.11, so we set that same value for everything below 5.11. Seems reasonable.

For max_user_instances we only change the default for everything above 5.12, if I understand kernelAtLeast correctly. Why is that?

It might be just right, but it looks like two different cases, so describing them both with "Dynamic since X" confuses me. Can you maybe elaborate on that?

@blaggacao (Contributor, Author)

blaggacao commented Jun 23, 2021

Can you maybe elaborate on that?

The upstream commit message is intricate enough that I don't dare to give an approximate interpretation inline.

torvalds/linux@ac7b79f

It reads:

With inotify instances charged to memcg [since 5.12], the admin can simply set max_user_instances to INT_MAX and let the memcg limits of the jobs limit their inotify instances.


everything above 5.12

... and including 5.12.
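
A quick way to convince yourself of the inclusive comparison, assuming kernelAtLeast is implemented in terms of lib.versionAtLeast (that implementation detail is an assumption on my part):

nix-repl> lib.versionAtLeast "5.12.0" "5.12"
true
nix-repl> lib.versionAtLeast "5.11.22" "5.12"
false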

@maralorn (Member)

maralorn commented Jun 23, 2021

Okay, do you know what harm it would do to raise that limit unconditionally?

And related: backporting a new default is one thing, but are we sure it's a good idea to override an existing default for current and future kernels?


Edit: For reference, I routinely increase max_user_watches; I have never needed to increase max_user_instances.

@blaggacao (Contributor, Author)

blaggacao commented Jun 23, 2021

Thanks for the continued discussion!

Okay, do you know what harm it would do to raise that limit unconditionally?

I don't know for sure, but I assume it's an int value, so it cannot receive more than INT_MAX.

And related: backporting a new default is one thing, but are we sure it's a good idea to override an existing default for current and future kernels?

This was inspired by the argument in the related issue #36214: it seems to be a requirement for Docker, or more generally for namespacing. I innocently assume that is because those namespaces actually count towards this maximum.

Since this is an upper bound and implemented as a default, it is in any case as close to a Pareto improvement as we can get (it harms nobody, but benefits many). The limit is only hit if you actually use it.

@maralorn (Member)

maralorn commented Jun 23, 2021

Okay, do you know what harm it would do to raise that limit unconditionally?

I don't know for sure, but I assume it's an int value, so it cannot receive more than INT_MAX.

That's not what I meant. I was wondering why we only do the override from 5.12 and newer.

But since then I think I have understood it. I suggest adding a comment of the form:
From version 5.12 inotify watches count against a RAM resource limit, so it's safe to increase this limit to the maximum.
(If this is factually correct.)

And related: backporting a new default is one thing, but are we sure it's a good idea to override an existing default for current and future kernels?

This was inspired by the argument in the related issue #36214: it seems to be a requirement for Docker, or more generally for namespacing. I innocently assume that is because those namespaces actually count towards this maximum.

Since this is an upper bound and implemented as a default, it is in any case as close to a Pareto improvement as we can get (it harms nobody, but benefits many). The limit is only hit if you actually use it.

Well, the resulting harm would be if the user creates a resource exhaustion they don't want, so this wouldn't be a "Pareto improvement". The kernel has these limits for a reason. But if my phrasing above is correct, it's fine to raise it.

@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch from 0e8af3e to 24f2ff6 Compare June 23, 2021 20:10
# Dynamic since kernel v5.12
# https://github.com/torvalds/linux/commit/ac7b79fd190b02e7151bc7d2b9da692f537657f3
# From version 5.12 inotify watches count against a RAM resource limit, so kernel
# mainatainers (see commit) suggest to increase this limit to the maximum
Review comment (Member):

Suggested change:
- # mainatainers (see commit) suggest to increase this limit to the maximum
+ # maintainers (see commit) suggest to increase this limit to the maximum

Nowadays most applications require a good number of inotify watches,
so raise our default to what other distros do. If the kernel supports
it, enable the dynamic setting.

fixes NixOS#36214, NixOS#65001 (re-fix)
- NixOS/nixops#890
- divnix/digga#209

replaces: NixOS#112472
@blaggacao blaggacao force-pushed the raise-global-inotify-watches branch from 24f2ff6 to 7336ba3 Compare June 23, 2021 20:49
@maralorn (Member)

I think I am 100% in favor of the instances change. It does basically what the kernel devs recommend.

I am a little bit uncertain about the watches change. The kernel wants to establish a new, system-dependent default here, and this PR would raise the limit for older kernels higher than what new kernels would pick on most systems. I don't think this is likely to cause any problems, but maybe someone more confident can merge this.
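
Worth noting for anyone bitten by the raised limit later: the PR sets these values via mkDefault, so a plain assignment in a user's configuration wins without needing mkForce (a sketch; 8192 is the assumed pre-5.11 kernel default):

# opt back out of the raised default on a given machine
boot.kernel.sysctl."fs.inotify.max_user_watches" = 8192;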

@blaggacao (Contributor, Author)

Cleaning up my follow-up list. Please, anyone, feel free to pick this up.
