nixos/xserver: Implement configuration of NVIDIA Optimus via PRIME #42846
Conversation
Force-pushed from 3c8b16e to 25c56d4 (compare)
I made a fix in the definition of the …

Tried it, works for me! At last I see the nvidia card in …

👍 on making this work. Haven't tested it though.
Force-pushed from 25c56d4 to 9540ea1 (compare)
I have updated it to have separate patches in GDM.
So this makes the nvidia card always on? The idea of Optimus was to enable the nvidia card only when needed, e.g. for running a graphics-intensive program, and keep it off the rest of the time to save laptop battery. Why not just use one of these two solutions? https://nixos.wiki/wiki/Nvidia#Optimus
@dukzcry Yes, this makes the nvidia card always on and used for all rendering. As far as I know, this is the only approach officially supported by NVidia for using the NVidia driver (and getting good 3D performance) on mux-less Optimus laptops. The approaches you linked to are not supported by NVidia; they sometimes work, sometimes don't, or are partially broken in more-or-less subtle ways. Note that many people do not mind the increased power usage for the benefit of things just working with a simple configuration.
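(For readers skimming the thread, a minimal sketch of what enabling this looks like; the `intelBusId`/`nvidiaBusId` option names are my reading of the diff, not confirmed here, and the bus IDs are examples.)

```nix
{
  services.xserver.videoDrivers = [ "nvidia" ];

  hardware.nvidia.optimus_prime = {
    enable = true;             # render everything on the NVIDIA GPU
    intelBusId = "PCI:0:2:0";  # integrated GPU (example value; assumed option name)
    nvidiaBusId = "PCI:1:0:0"; # discrete GPU (example value; assumed option name)
  };
}
```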
Then you should add a warning about it in the documentation, because this is not the way it was meant to be used.
The second solution is exactly the official nvidia prime solution: http://us.download.nvidia.com/XFree86/Linux-x86/358.16/README/randr14.html
Nvidia has not brought Optimus properly to Linux, so there is no proper Nix way to use it, only non-Nix ways, until the vendor does something. There is slow progress though: …
@dukzcry The NVidia page you link to is exactly what this pull request enables: doing all rendering on the NVidia GPU while allowing output to displays connected only to the Intel GPU. Note that the X configuration done by my changes is what that page lists under "Older X servers require...". (I used that rather than the simpler configuration because the simpler one does not work on certain hardware, like my laptop.) The second solution in https://nixos.wiki/wiki/Nvidia#Optimus ("Nvidia PRIME") seems to be an unofficial approach for using the NVidia GPU only for specific applications (note that the nvidia page does not mention …). I will update the documentation to mention that the NVidia GPU will be always on.
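For context, the "Older X servers require..." configuration from that README looks roughly like this (copied in spirit from the NVidia randr14 README, with example bus IDs):

```
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"   # example bus ID; substitute your own
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"   # example bus ID; substitute your own
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
```

After X starts, the README then has you run `xrandr --setprovideroutputsource modesetting NVIDIA-0` followed by `xrandr --auto`; automating those per display manager is exactly what the `setupCommands` machinery in this PR is for.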
@dukzcry Also that wiki page does not mention that …
```nix
default = "";
example = "PCI:0:2:0";
description = ''
  Bus ID of the NVIDIA GPU. You can find it using lspci; for example if lspci …
```
NVIDIA => Intel?
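(Aside, since the option description above is truncated: converting `lspci` output into the `PCI:x:y:z` form is a common stumbling block. A hedged illustration follows — the device names are made up, and the `intelBusId`/`nvidiaBusId` option names are my reading of the diff.)

```nix
# `lspci | grep -E 'VGA|3D'` might print something like:
#   00:02.0 VGA compatible controller: Intel Corporation HD Graphics 630
#   01:00.0 3D controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile]
# lspci shows bus numbers in hexadecimal; the X BusID wants decimal,
# so "01:00.0" becomes "PCI:1:0:0" and "00:02.0" becomes "PCI:0:2:0".
hardware.nvidia.optimus_prime.intelBusId = "PCI:0:2:0";
hardware.nvidia.optimus_prime.nvidiaBusId = "PCI:1:0:0";
```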
Force-pushed from 9540ea1 to f261537 (compare)
I have updated descriptions: …
Thanks, this is new info to me. They stated that the simplification was possible due to an X server update, with no word that the simple config leads to issues on some hardware.

Well, nvidia-xrun (not supported on NixOS), primerun (supported on "any" distro) and your solution all work as written at http://us.download.nvidia.com/XFree86/Linux-x86/358.16/README/randr14.html. The key difference is whether it is the main X server or a secondary temporary X server.

You're right. It doesn't matter in the case of games though.
```nix
'';
};

hardware.nvidia.optimus_prime.enable = lib.mkOption {
```
We already have `hardware.nvidiaOptimus` (currently only to disable the optimus card). Perhaps you could combine namespaces into something like `hardware.nvidia.optimus`?
bumblebee might be worth integrating there too.
I'm not sure about this. I put the three options into the `hardware.nvidia.optimus_prime` section because one enables the Optimus/PRIME feature and the other two define required parameters for using it (bus IDs). I wouldn't want to have this and the existing options like `hardware.nvidiaOptimus.disable` (which I am not familiar with) and `hardware.bumblebee` all thrown in the same place. Maybe these two could be moved to somewhere under `hardware.nvidia` since they are both NVidia related? Possibility: `hardware.nvidia.disableOptimus`, `hardware.nvidia.bumblebee` (also: Bumblebee is not a piece of hardware).
Though we might indeed have an optimus subsection, ending up with:

- `hardware.nvidia.optimus.prime`
- `hardware.nvidia.optimus.bumblebee`
- `hardware.nvidia.optimus.disable` (but maybe there is a better name for this)
👍 for `hardware.nvidia.optimus.prime`. And then eventually move `hardware.nvidiaOptimus.disable` to `hardware.nvidia.optimus.disable`, and `hardware.bumblebee` to `hardware.nvidia.bumblebee`. We can wait on the latter part though.
Looking at https://wiki.archlinux.org/index.php/NVIDIA_Optimus it appears there is another option where you use the proprietary NVIDIA drivers.
@dukzcry I found the hint to do the "old style" config here after searching for the "No devices found" error: https://devtalk.nvidia.com/default/topic/1027345/linux/why-cannot-i-enable-drm-kernel-mode-setting-/post/5225539/#5225539 — it seems to be an issue that appears only when you enable modesetting. I have no idea why this isn't mentioned in any of the official places.
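(For reference, the modesetting toggle being discussed is, as I read this PR, roughly the following; treat the option path as my reading of the diff:)

```nix
# Sketch: the PR's modesetting option, which (as I understand it) adds
# nvidia-drm.modeset=1 to the kernel command line, fixing tearing on
# PRIME-connected screens.
hardware.nvidia.modesetting.enable = true;
```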
I've cherry-picked this on top of 18.03 (specifically 91b286c) and can finally get the HDMI out on my HP Pavilion 15-cb063tx to work properly. Thanks!

Hey, is this going to be merged?
@GrahamcOfBorg build gdm

No attempt on x86_64-linux, aarch64-linux, or x86_64-darwin: the build was skipped because gdm doesn't evaluate on those platforms.

@GrahamcOfBorg build gnome3.gdm

No attempt on x86_64-darwin: the build was skipped because gnome3.gdm doesn't evaluate there.

Success on x86_64-linux. Attempted: gnome3.gdm

Success on aarch64-linux. Attempted: gnome3.gdm
/cc maintainer @jtojnar about the …
@ambrop72 GDM has a few script entry points that can probably be used, in particular … With regards to #43992, this doesn't really seem to conflict in any meaningful way I think; it might not apply cleanly (doesn't seem to conflict), but the functionality seems distinct, so it should probably be easy to merge.
@hedning The /etc/gdm scripts no longer actually work despite being in the docs. See https://bugzilla.redhat.com/show_bug.cgi?id=449675

Hmm, I'm able to run the PreSession script at least, but yeah, Init doesn't work (looks like the code responsible is never called), so I'm guessing that's not usable.

I just tested this again with recent master; it merges without problems and works fine with all three display managers (sddm, lightdm, gdm).

Why is this not being merged? I think it would be very good to include this in 18.09; many people would finally get their graphics working well.

Is there a way to test this even if it's not merged? I've tried the first wiki solution (with both 18.03 and unstable) and some other configs found on stackexchange, and they didn't seem to work. I'm new to NixOS btw.
@mschonfinkel …

@mschonfinkel, @dermetfan …

And then configure and build like @dermetfan wrote.
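(For anyone else who wants to try it before the merge: the general pattern for testing a PR from a local nixpkgs checkout is roughly the following sketch — the branch name is invented, and these are not the exact commands from the elided comments above.)

```sh
# Sketch: fetch this PR into a local nixpkgs checkout and build against it.
git clone https://github.com/NixOS/nixpkgs.git
cd nixpkgs
git fetch origin pull/42846/head:optimus-prime   # GitHub exposes PRs as pull/<id>/head
git merge optimus-prime                          # or cherry-pick the PR's commit
sudo nixos-rebuild switch -I nixpkgs=$PWD
```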
It's working now 💯! Thanks @ambrop72 and @dermetfan!
Could we maybe get this merged before 19.03? Consistent Nvidia support seems like a great improvement and also good advertisement for a major release.

It's not a good thing when multiple people need to beg to get something merged. Is anything being done to address this? Looking at the history, it should have been merged 3 months ago. In particular, it's frustrating for the opener of the PR. It sends the message that contributions are not welcome.

I definitely agree this has been in the works a little too long. However, at some point we need to resolve the conflict between the naming schemes of all the nvidia stuff; that's been my main hesitancy so far. We can look into backporting this to 18.09 once that has been resolved. Besides naming, though, it looks to be a good addition.

Can we outline what changes need to be made to get this into 18.09? I went through the process today of explaining to someone who has never used nixos before, in IRC, how to set up a local nixpkgs checkout for this among other things. This module was very helpful.

For 18.09, merging/cherry-picking the commit in this pull request will work. I confirm it works with SDDM and LightDM. However, it does not work with GDM, but that seems to be an unrelated "GDM no longer works with NVidia" problem. The fatal error is …
About the GDM problem: the issue is indeed related to X running as non-root, but it is specific to this setup, not NVidia in general. The problem is that for this to work, X needs permission to access the VGA arbiter (/dev/vga_arbiter). I don't know what the proper solution here would be, but giving world access to it is a workaround that gets GDM to work: …
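(The actual snippet did not survive above; what follows is my guess at a minimal sketch of such a workaround, using a udev rule — the rule is an assumption, not the one from the comment.)

```nix
# Hypothetical sketch: grant world access to the VGA arbiter so GDM's
# non-root X server can open /dev/vga_arbiter. World-writable device
# nodes are a blunt instrument; use only for testing.
services.udev.extraRules = ''
  KERNEL=="vga_arbiter", MODE="0666"
'';
```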
I went ahead and cherry-picked it in 195a573 for now. I was just hoping we could get consensus on the "right" naming for all of these Optimus/Bumblebee/Prime/etc. options. Hopefully we can clean it up later, though. It's more bikeshedding than anything.
This adds configuration options which automate the configuration of NVIDIA Optimus using PRIME. This allows using the NVIDIA proprietary driver on Optimus laptops, in order to render using the NVIDIA GPU while outputting to displays connected only to the integrated Intel GPU. It also adds an option for enabling kernel modesetting for the NVIDIA driver (via a kernel command line flag); this is particularly useful together with Optimus/PRIME because it fixes tearing on PRIME-connected screens.

The user still needs to enable the Optimus/PRIME feature and specify the bus IDs of the Intel and NVIDIA GPUs, but this is still much easier for users and more reliable. The implementation handles both the X configuration file as well as getting display managers to run certain necessary `xrandr` commands just after X has started.

Configuration of commands run after X startup is done using a new configuration option, `services.xserver.displayManager.setupCommands`. Support for this option is implemented for LightDM, GDM and SDDM; all of these have been tested with this feature, including logging into a Plasma session.

Note: support of `setupCommands` for GDM is implemented by making GDM run the session executable via a wrapper; the wrapper will run the `setupCommands` before execing. This seemed like the simplest and most reliable approach, and solves running these commands both for GDM's X server and user X servers (GDM starts separate X servers for itself and user sessions). An alternative approach would be with autostart files, but that seems harder to set up and less reliable. A sketch of how `setupCommands` is used follows.
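For illustration, a hand-written equivalent of what the PRIME support puts into `setupCommands` might look like this (a sketch based on the NVidia randr14 README; the module generates the real thing, so you don't write this yourself):

```nix
# Inside a NixOS module, where `pkgs` is in scope.
services.xserver.displayManager.setupCommands = ''
  # Use the NVIDIA GPU as render source for outputs driven by the
  # modesetting (Intel) driver, then let xrandr pick a mode.
  ${pkgs.xorg.xrandr}/bin/xrandr --setprovideroutputsource modesetting NVIDIA-0
  ${pkgs.xorg.xrandr}/bin/xrandr --auto
'';
```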
Note that some simple features for X configuration file generation (in `xserver.nix`) are added which are used in the implementation:

- `services.xserver.extraConfig`: allows adding arbitrary new sections. This is used to add the Device section for the Intel GPU.
- `deviceSection` and `screenSection` within `services.xserver.drivers`: this allows the nvidia configuration module to add additional contents into the `Device` and `Screen` sections of the "nvidia" driver, and not into such sections for other drivers that may be enabled.

Motivation for this change
Setup of Optimus laptops is extremely hard and error-prone and in grave need of automation. Many users probably just give up even though Optimus laptops are technically supported by the NVIDIA driver via this setup.
It would be good for others to do some extra testing before merging this: …
P.S. I am aware of Bumblebee but I never bothered with it. Do we need checks to prevent PRIME and Bumblebee being enabled at the same time?
Things done

- Tested using sandboxing (option `sandbox` in `nix.conf` on non-NixOS)
- Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- Tested execution of all binary files (usually in `./result/bin/`)
- Determined the impact on package closure size (by running `nix path-info -S` before and after)