(Is #173 accurate?) Changing refresh-rate seems to have an effect on input lag with experimental-backends enabled #588

Closed
kwand opened this issue Feb 7, 2021 · 3 comments


kwand commented Feb 7, 2021

Platform

Manjaro Linux x86_64 (Kernel 4.19.167-1-MANJARO)

GPU, drivers, and screen setup

NVIDIA TU104 (GeForce RTX 2080 Rev. A), driver version 460.32.03

name of display: :0
display: :0  screen: 0
direct rendering: Yes
Memory info (GL_NVX_gpu_memory_info):
    Dedicated video memory: 8192 MB
    Total available memory: 8192 MB
    Currently available dedicated video memory: 7304 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce RTX 2080/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 460.32.03
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 4.6.0 NVIDIA 460.32.03
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)

OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 460.32.03
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

Environment

i3-gaps

picom version

**Version:** vgit-dac85

### Extensions:

* Shape: Yes
* XRandR: Yes
* Present: Present

### Misc:

* Use Overlay: No (Another compositor is already running) # NOTE: this output is from running 'picom --diagnostics' in a terminal while picom was already running in the background.
* Config file used: /home/dkwan/.config/picom.conf

### Drivers (inaccurate):

NVIDIA  

Configuration:

backend = "glx";
glx-no-stencil = true;
vsync = true;
unredir-if-possible = true;
xrender-sync-fence = true;

refresh-rate = 180;
use-damage = false;

# Shadow
shadow = true;
shadow-radius = 0;
shadow-offset-x = 7;
shadow-offset-y = 7;

shadow-opacity = 0.1;

shadow-exclude = [
    "n:e:Notification",
    "n:e:Docky",
    "n:e:Do",
    "g:e:Synapse",
    "g:e:Conky",
    "n:w:*Firefox*",
    "n:w:*Chromium*",
    "n:w:*dockbarx*",
    "n:w:*VirtualBox*",
    "class_g ?= 'Cairo-dock'",
    # "class_g ?= 'Xfce4-notifyd'",
    "class_g ?= 'Xfce4-power-manager'",
    "class_g ?= 'Notify-osd'",
    "_GTK_FRAME_EXTENTS@:c"
];

# Opacity
detect-client-opacity = true;

# Window type settings
wintypes:
{
    tooltip = { fade = true; shadow = true; opacity = 0.75; focus = true; };
    dock = { shadow = false; };
    dnd = { shadow = false; };
};

#################################
#           Fading              #
#################################
#
# Fade windows in/out when opening/closing and when opacity changes,
#  unless no-fading-openclose is used.
# fading = false
fading = true;

# Opacity change between steps while fading in. (0.01 - 1.0, defaults to 0.028)
# fade-in-step = 0.028
fade-in-step = 0.03;

# Opacity change between steps while fading out. (0.01 - 1.0, defaults to 0.03)
# fade-out-step = 0.03
fade-out-step = 0.03;

# The time between steps in fade step, in milliseconds. (> 0, defaults to 10)
fade-delta = 5;

# Specify a list of conditions of windows that should not be faded.
# fade-exclude = []

# Do not fade on window open/close.
# no-fading-openclose = false

# Do not fade destroyed ARGB windows with WM frame. Workaround of bugs in Openbox, Fluxbox, etc.
# no-fading-destroyed-argb = false

Steps of reproduction

  1. Run picom with picom --experimental-backends
  2. Change refresh-rate from 60 to 120 and then to 180 (preferably on a 60Hz monitor)
  3. Observe that input lag improves with higher refresh-rate settings.
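
For reference, this is roughly how each change can be applied between tests (just a sketch of doing it by hand; the config path is the one from the diagnostics above):

```
# edit refresh-rate in the config (60 -> 120 -> 180), then restart picom
pkill picom
picom --experimental-backends --config ~/.config/picom.conf &
```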

Expected behavior

According to #173, the refresh-rate option should be ignored.

Current Behavior

Changing refresh-rate seems to affect input latency: it is markedly improved (on NVIDIA) after switching from 60 to a higher setting such as 180. (I'm currently using 180 to type this out, and it is a much, much better experience compared to the frequent typos/lag caused by my typing speed apparently outpacing the latency introduced by picom.)

Some other notes

The TL;DR is that I'm wondering whether #173 is accurate, as there seems to be a noticeable change in input lag - at least to my subjective eyes. It's also possible I'm not running picom correctly; it should be started by i3 when the config is read:

exec_always --no-startup-id picom --experimental-backends
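
(A quick generic check to confirm the exact command line picom ended up running with, in case the i3 line isn't actually taking effect:)

```
pgrep -a picom
# expected to show something like: <pid> picom --experimental-backends
```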

According to #173, it seems that refresh-rate is slated for removal soon. I'm not sure whether refresh-rate affecting input latency is due to a change in the code in the time that has passed since #173, or whether the option still affects the code in some ways (but not in the already-removed ways).

I asked in #543 about improvements for NVIDIA drivers with the experimental backends, mainly because of this input lag (it was pretty unbearable, but so was tearing without vsync turned on in picom), and it seems there are still no improvements slated to land soon for NVIDIA (other than #147, which now seems to have been pushed back?).

While I would be happy to see this option removed for the sake of improving picom, I would be very concerned if what remains of refresh-rate were removed before input lag improvements are made, since this is the main 'improvement' that has made picom much more usable for me right now.


kwand commented Feb 7, 2021

Hmm, after taking a quick look through the code (using ripgrep plus some manual file viewing in VSCode), it appears that refresh-rate only matters if sw-opti is turned on, which it clearly isn't here (unless Manjaro's default installation ships with it enabled somewhere).
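
For context, the only place refresh-rate seemed to be consulted is in combination with sw-opti, i.e. something like the following legacy-style snippet (purely for illustration - this is not in my config, and I don't have sw-opti set anywhere):

```
# legacy software timing options; from my reading of the code, refresh-rate
# is only consulted when sw-opti is enabled
sw-opti = true;
refresh-rate = 180;
```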

If this is correct, I'm not quite sure where the improved input lag came from, or whether it was just illusory. Unfortunately, I don't have a measuring device to test objectively. Though, if it was real and didn't come from changing refresh-rate, could it have had anything to do with picom restarting every time the config is changed?

I'm clearly not familiar enough with the codebase to tell if any of this is true; I'm actually more confused now about what this has to do with the experimental backends (unless sw-opti being deprecated is related to the new backends - I'm not sure, as I haven't checked that).


kwand commented Feb 7, 2021

OK, I'm almost certain that what I experienced earlier was either an illusion or was caused by picom restarting when the settings changed (which somehow affected input lag); my unfamiliarity with the codebase is the only thing holding me back from saying this absolutely. This can be closed once confirmed (and I can open a separate issue if the below is worth exploring).

Went down a rabbit hole trying to figure out whether there were other solutions to input lag; I hadn't noticed until looking more closely at the code and commits that glFinish() has already been in use as a temporary measure for NVIDIA since April. After some searching, I discovered these related issues from KDE [1], [2] and tried implementing the same change from [1] in picom, since KDE removed its usleep call in favour of capping __GL_MaxFramesAllowed at 1, for allegedly better performance.
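
For reference, the change in [1] essentially caps the NVIDIA driver's queued-frame limit at 1. An untested way to try the same idea without patching picom would be to set the variable in picom's environment at launch, assuming the driver honours it there the same way it does when set programmatically before the GL context is created:

```
# cap NVIDIA's queued-frame limit (the knob the KDE change in [1] sets)
__GL_MaxFramesAllowed=1 picom --experimental-backends
```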

My patched picom seemed to solve the busy-waiting CPU usage problem as well and gives somewhat better smoothness, but it appeared to introduce somewhat more input lag? (I'm not quite sure, as it's too subjective for me to tell at this point; I originally thought my patched version had better input lag.) Though, KDE seems to have changed their compositing strategy entirely with [2], as they deleted the change from [1] in that commit.

@yshui Just wondering if you've tried setting __GL_MaxFramesAllowed as an alternative fix? I no longer assume it's much better with regard to input lag, though I could do some longer testing (even though the results would admittedly be quite subjective).

Personally, after going down that rabbit hole, I've switched to ForceFullCompositionPipeline via the NVIDIA driver for now, with picom's vsync disabled (just because I hadn't tried this setting before; as I'm typing this right now, it seems 'somewhat' better?).

EDIT: I went back to trying my patched version again, and plan to use it for a bit (since ForceFullCompositionPipeline has the added complication of needing to be set manually on every launch; allegedly, there are some issues when setting it in an Xorg conf file). Though, theoretically speaking (as I understand it now), a lower value for __GL_MaxFramesAllowed should give better input lag?
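
(By 'set manually on every launch' I mean running something along these lines after X starts; the metamode string here is just a placeholder and would need to match the actual output and mode:)

```
nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
```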

Also, I suppose I should note that I've been using a very subjective test (and possibly a terrible idea to begin with) of typing fast to measure input lag, so it's very difficult for me to tell whether it's truly picom input lag, application lag, or even keyboard lag.


kwand commented Feb 9, 2021

Closed. I think this was mostly a fluke - though perhaps I did actually notice an improvement in input lag, one caused by restarting picom or by something related to the USLEEP flag.

Started a new issue to discuss using __GL_MaxFramesAllowed and other possible improvements to frame timing and input lag.

kwand closed this as completed Feb 9, 2021