Automatic VRR management #5076
Comments
Does enabling/disabling adaptive sync work via
The second certainly seems like the preferable option, if it works in practice.
Probably would need to be an EGL and Vulkan extension, if Mesa wanted this at all.
This would only be desirable if we go for the first option. In this case, indeed having a way for users to override adaptive sync for each app would be nice.
Honestly, I think the second proposal is the only one that makes sense to me. And rather than having the compositor do that, the kernel should be doing it, since it is as easy to implement the logic there as in the compositor. As for "Force a page-flip if no client commits a new buffer": why is that needed? The HW or driver should do that transparently for you anyway; otherwise they are broken, because the KMS interface does not mandate that compositors push new frames constantly.
I asked about this last XDC, and kernel devs wanted user-space to do it. I'll try asking again to see if that's the consensus. Doing automatic page-flips at arbitrary times in the kernel could surprise user-space. It might just be fine, not sure.
Let's say a full-screen client renders at 60 FPS, and then suddenly decides to stop completely. To avoid flicker, we need to perform page-flips to slowly ramp down to the lowest possible refresh rate.
All of what I am saying is assuming what I would call a sane HW design: the HW will repeat a frame if the kernel asks to (write to a register) or if vblank_max_ns is exceeded (this value being set by the kernel based on the EDID). This enables soft-queuing of the frame and not having to wait for the exact time to push the 'trigger' bit.
Automatic flips should happen only to repeat a frame that has already been presented. This is something userspace gets no feedback about, AFAIK, and it is what the HW should be doing anyway (once vblank has been stretched to the max and no new frame came in).
Right, but the kernel has all the information needed to do the ramp-down on its own: just set vblank_max_ns to min(edid_vblank_max_ns, vblank_max_ns_cur + vblank_max_change_ns). A similar solution can be used for ramping up ;) Now, I think the ramping up and down could be improved by providing expected flip times for a frame (and maybe its expiration date?) so as to let the kernel select the frame closest to the actual flip time.
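The bounded-step update described here can be sketched as a tiny shell snippet. All the numbers and variable names below are hypothetical illustrations (plain shell variables, not real KMS properties): a panel whose EDID minimum is ~40 Hz, a per-frame change budget, and a current value of ~144 Hz.

```shell
#!/usr/bin/env bash
# Sketch of the proposed kernel-side ramp-down: each frame, extend the
# allowed vblank time by a bounded step instead of jumping straight to
# the panel's minimum refresh rate. Values are made up for illustration.
edid_vblank_max_ns=25000000   # panel minimum, ~40 Hz (assumed)
vblank_max_change_ns=3000000  # max allowed change per frame (assumed)
vblank_max_ns=6944444         # current value, ~144 Hz

ramp_down_step() {
    local next=$(( vblank_max_ns + vblank_max_change_ns ))
    # Clamp to the EDID limit: min(edid_vblank_max_ns, cur + change)
    if (( next > edid_vblank_max_ns )); then
        next=$edid_vblank_max_ns
    fi
    vblank_max_ns=$next
}

ramp_down_step
echo "$vblank_max_ns"
```

Repeated calls walk the vblank limit down to the panel minimum in bounded steps, which is exactly the flicker-avoiding behaviour the comment argues the kernel could implement itself.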
One more thought: we should add VRR_REFRESH_LIMIT (maximum change per frame), VRR_REFRESH_MIN (minimum refresh rate, clamped by the EDID mode) and VRR_REFRESH_MAX (maximum refresh rate, clamped by the EDID mode) as connector properties. This way, userspace would be in full control without needing to keep the CPU busy to guarantee pushing the next frame at exactly the right time.
Hi! First of all, regarding what's been discussed on the list: my monitor has a VRR range from 40 up to 144 Hz, and I don't have flickering with the current Sway implementation. It can also show the refresh rate in a HUD, so it's easy to see; if you need something to test, you could ping me. About having A-sync enabled on the "desktop":
E.g. if I play a 60 FPS video in the background and move my mouse, Sway renders at 144 Hz, and as 144 % 60 > 0, the video begins to stutter. The same is true for games with an overlay cursor or anything else. Having the refresh rate follow the mouse cursor unconditionally basically breaks the benefit of what VRR wants to achieve: smooth playback of content. But if it doesn't, we have problem 1 again. That's why I think it makes more sense for full screen, or needs more complex rules. I used to think it made sense to always enable VRR years ago, but changed my mind here.

Next, I consider flickering a hardware issue. After all, the display's EDID says something like "I can run from 35 to 144 Hz" and not "I can run 35-144, but please only change 10 Hz at a time". I've used 3 adaptive sync compatible setups with supported ranges of 35-90, 56-144 and 40-144 Hz. None of them flickered when going from min to max and back quickly.

There seems to be a reason why some LCDs would support a constant 30 Hz rate without flickering while their adaptive sync range still starts above 40 Hz. I could currently buy only 13 different models that claim their minimum AS rate is 30 Hz, versus 156 models with a 40 Hz minimum and 474 models with a 48 Hz minimum. My theory is that most screens aren't actually capable of going through their full range, but they still advertise it because a) AMD has a feature called LFC on Windows (see below), and b) A-sync is only enabled in specific cases (full screen, whitelisted) on Windows and xf86-video-amdgpu, which mitigates those issues.
This is what LFC (low framerate compensation) on AMD/Windows does. LFC doubles/triples/... frames in order to stay within adaptive sync ranges. E.g. if the display supports a 56-144 Hz a-sync range, but the framerate is 40 FPS, then 80 Hz would be sent to the screen, which is obviously better than showing unsynced 40 FPS at a 56 Hz rate. It might be worth asking AMD if they plan LFC in amdgpu as well; maybe the flickering issues would be resolved then without further changes. LFC is a useful feature after all, because it extends the usable VRR range, so IMHO that'd make more sense than the compositor working around display hardware issues, but that's really just an opinion.

One more thing about the current Sway VRR state: I've noticed Firefox "pollutes" the refresh rate across all workspaces of an output. So if it plays a video or animation, and I change to a different workspace, set another container to full screen or even have swaylock in the foreground, the display refreshes at FF's rate. I haven't noticed that with other programs; once they're invisible, they don't affect the refresh rate anymore. Maybe a blacklist would make sense after all.
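The frame-multiplication idea can be sketched like this. The function name and the simplified clamping are invented for illustration; this is not AMD's actual algorithm, just the arithmetic the comment describes (repeat each frame n times so fps * n lands back inside the supported range):

```shell
#!/usr/bin/env bash
# Sketch of low framerate compensation (LFC): when the content's frame
# rate falls below the panel's minimum, pick the smallest multiplier n
# such that fps * n is inside the supported [min_hz, max_hz] range.
lfc_rate() {
    local fps=$1 min_hz=$2 max_hz=$3
    if (( fps >= min_hz )); then
        echo "$fps"              # already in range, no compensation needed
        return
    fi
    local n=2
    while (( fps * n < min_hz )); do
        n=$(( n + 1 ))
    done
    if (( fps * n > max_hz )); then
        echo "$min_hz"           # no integer multiple fits; clamp (simplified)
    else
        echo $(( fps * n ))
    fi
}

lfc_rate 40 56 144   # 40 FPS on a 56-144 Hz panel: each frame shown twice
```

For the example in the comment (40 FPS on a 56-144 Hz panel), this yields 80 Hz, matching the doubled rate described above.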
Harry agrees:
Harry has pointed this out too. For fullscreen apps, it's pretty clear that we want to wait for the client to submit a frame. If we show multiple surfaces (regular desktop), it's less clear. We can't predict whether the client will submit a frame or not.
Wayland can't predict the refresh rate of clients. We'd probably need a protocol to let clients queue multiple frames and attach a timestamp ("don't display this frame prior to this timestamp").
Hm. Firefox requests frame callbacks even when it's not rendering. But we shouldn't send these frame callbacks if the window isn't visible, so there might be a Sway bug here.
This would work for something steady like video, but generally, when playing games or any other content that reacts to user input (even if it's just scrolling a web page), "pre-rendered" frames are advised against because they introduce significant lag for the benefit of smoother playback. VRR is for smooth playback, and I wouldn't want my compositor to introduce lag.

In my (unprofessional) opinion, the compositor shouldn't do this at all. It updates if there's something new, and the kernel driver determines whether the display needs an additional refresh or not. I mean, that's what they already do with the minimum refresh rate anyways. Also, I've just looked into the kernel, and it already has LFC (I wasn't aware of that). I think that should be the solution to the flicker issue. LFC also has the potential to add latency, e.g. if the driver decides to display the last frame again and the next frame comes in 1 ms after that. Then this frame has to wait e.g. ~6 ms (if 144 Hz is the upper bound), but that's still better than pre-rendering frames and showing each frame with a delay of 1-2 full refreshes.
Do you want me to open an issue for that? Need any more info from me?
Maybe a protocol for clients to tell the compositor the predicted/preferred framerate would be better, and use that information to set up frame doubling if necessary. Or detect a stable framerate over a certain number of frames, use that as the basis for future frame doubling, and restart the calculation when a frame outside of the range arrives.
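The "detect a stable framerate" idea could look something like this sketch. The function name, the millisecond units, and the tolerance-around-the-average check are all made up for illustration; a real compositor would of course do this on timestamps, not shell arrays:

```shell
#!/usr/bin/env bash
# Illustrative sketch: treat the framerate as "stable" when every recent
# frame interval (milliseconds) stays within a tolerance band around the
# average of the window. A frame outside the band restarts the detection.
is_stable() {
    local tolerance_ms=$1; shift
    local intervals=("$@")
    local n=${#intervals[@]} sum=0 i d
    if (( n == 0 )); then echo no; return; fi
    for i in "${intervals[@]}"; do sum=$(( sum + i )); done
    local avg=$(( sum / n ))
    for i in "${intervals[@]}"; do
        d=$(( i - avg ))
        if (( d < 0 )); then d=$(( -d )); fi
        if (( d > tolerance_ms )); then echo no; return; fi
    done
    echo yes
}

is_stable 2 16 17 16 16   # ~60 FPS with small jitter
is_stable 2 16 33 16 16   # one dropped frame breaks stability
```

Once the window reports stable, the detected average interval could serve as the basis for frame doubling until an outlier arrives.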
LFC is already supported in amdgpu (the kernel driver), and has been for some time. I've used it without problems in X, and I just tested it with Sway (and it worked correctly). It did have bugs in the past (which resulted in flickering) but they should be fixed by now. |
I just want to add that I get periodic stuttering with VRRTest and RetroArch with adaptive_sync enabled (which doesn't occur when using native X). With VRRTest the stutter seems to occur every 8 seconds or so, with a 144 Hz refresh rate, a target FPS of 139, and an actual FPS of 129. The discrepancies between target and actual don't seem to be relevant, however, as they are similar with X as well. If this should be considered a separate bug, I'm happy to open a new issue.
Started testing out VRR last night; here are some remarks. Using a 5700 XT, Mesa-git and a 5.6 kernel; displays are an Asus MG278 and an LG 27UK650-W. Tests were run on a workspace with only a wallpaper, also the only workspace assigned to the screen. The Asus has an FPS counter, and I've observed that VRR is working on it. The refresh rate drops to 40 Hz immediately after movement on the screen and after a couple of seconds rises to around 62 Hz. Is there a reason for this behaviour, why is it not staying down at 40 Hz? When moving the cursor around the screen to keep the refresh rate at 144 Hz, there are black frames every five seconds or so; this also happens when gaming. The LG does not have an FPS counter, so tests are a bit limited. Cursor movement is noticeably uneven compared to non-VRR; the display has extended and basic FreeSync modes, and both have the same problem. Also tested with a browser, and scrolling is very unpleasant compared to 60 Hz non-VRR. No black frames on this display.
@grmat If this issue persists on sway/wlroots master, please open a separate issue. It's hard to track issues if they only exist in comments.
@YaLTeR is implementing this for GNOME here: https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1620
AFAICS this is the frame_callback and compositing-time delaying we already have within sway; I don't see VRR-specific parts in this MR.
The innovation is the automatic delay estimation:
What I implemented would rather be #4734, but parts of it might be useful here indeed.
Oh, right, indeed. |
On my new VRR monitor (Gigabyte G27QC), I don't have flickering in Windows (unless it is a game with an extremely stuttery framerate), but in sway it is absolutely unusable. Every time I start or stop moving the mouse cursor, the display flickers. I disable VRR in Linux for this reason.

Sounds like many people in this thread are sceptical of the need to implement any smarter algorithm for managing the refresh rate, because they have monitors that don't flicker. So I wanted to comment to say that it is very much necessary, because for me (and likely many others) the feature is unusable without it. My monitor's VRR range is 48-165 Hz. I've also read that the flickering is common with VA-panel displays (which mine is), more so than other LCD technologies. Maybe all the people who have IPS panels don't have an issue, IDK. It would be nice to have the benefits of VRR for content (video, games, etc.) without the compositor causing the massive jumps in refresh rate due to cursor movement, etc., which cause the flickering.
I'll just add to that, I own two different model freesync monitors, and they both flicker bad enough that I would disable freesync during desktop use. They both happen to be VA panels. |
AFAIK VA panels are very sensitive to slight changes in voltage, and the voltage needs to change dynamically depending on the refresh rate. Since the display hardware cannot predict the timing of the next incoming frame, the flickering issue seems to be unavoidable. The larger and more inconsistent the frame delta, the worse it will be. It's an inherent limitation of the technology.

Suggesting that the advertised VRR window is "too large / misleading" and that they shouldn't be advertising it is just wrong. It's great that they are doing it; it's just that this panel technology has some inherent limitations. It allows for an awesome gaming experience with games at different levels of performance. Many of us prefer VA for its advantages with deep blacks and high contrast, so there are good reasons for this technology to be supported (it's not "inferior"), but it does have its quirks. When using it in Windows, it works great, but the current implementation in sway is awful, as it assumes that it is OK to just instantly jump between the min/max refresh rate.
The driver is responsible for avoiding refresh rate changes that cause flicker. Please open a bug for your kernel driver. |
Just here to say that I also had to disable VRR because of the flickering on my 165 Hz VA panel (mesa-git, RX 6800 XT).
I'd like to leave my anecdote and idiotic ramblings here. I have two LG 27GN950 monitors; both work fine running at their native 144 Hz refresh rate (which utilizes display stream compression) and the current sway implementation for VRR. Noticeably reduced input lag when moving the mouse / windows and typing lengthy GitHub comments is a very nice feature indeed. Zero flickering or any perceived brightness changes so far for the week that I've been using them.

In a perfect world, VRR would mean VRR, and not VRR with ramp-up and ramp-down or other nonsense edge cases. If the technology used in the display cannot support instant changes in frame timings, then the monitor's firmware should compensate in some way. Obviously, we are faced with a leaky abstraction here, and we cannot leave these monitors unsupported even though technically they are faulty by my definition.

I do like the idea of making the use of VRR completely transparent to applications, simply because, as a software developer, shortcuts always get taken whether you like to admit it or not, especially when there are time constraints. Having applications bear the burden of managing how VRR works, especially when we are talking about making these decisions based on monitors having a faulty VRR implementation that could depend on the specific monitor model, doesn't seem like a very sustainable solution.

Maybe we should have a configurable ramp-up/ramp-down interval and/or minimum refresh rate, so that users who previously did not have issues can disable the feature and mouse movements will not have a "warm up" period until they get to the desired refresh interval. Users that do experience flickering can adjust the smoothness factor or other variables to suit their specific hardware. I by no means think this is a perfect solution, and like most people I like things to "just work", but we are dealing with imperfect hardware, so any solution will be non-ideal at best.

However, it is a stopgap solution until monitors inevitably switch to superior technologies such as OLED/micro-LED, where these issues likely will no longer exist. X11 got into the state that it is in because technology has moved on and X11 hasn't. If we start baking protocols into Wayland regarding VRR, then there will be a precedent for us to maintain what are, in my opinion, hacks for faulty hardware. VRR should be as simple as "compositor has a new frame -> monitor should be told of the new frame", and I trust that things will eventually end up like that.
Just throwing in my findings: I have an ASUS TUF VG27AQL1A. Flickering issues seem to come and go, but usually when flickering happens it's when the refresh rate is at its minimum (48 Hz) on an idle desktop. Moving the mouse instantly jumps it up to 144 Hz and the flickering disappears. This happens regardless of whether ELMB is on or off. Playing a video usually makes it go away as well, as the rate goes up to at least 60 Hz. It also goes NUTS if I was just playing something in RetroArch with BFI on (even if it was just fine while in-game).
The frame rate is controlled by the GPU driver. Massive unwanted hacks aside, there isn't anything we can do. Assuming this monitor isn't just wrong for advertising a rate that it can't maintain, the GPU driver is what needs to change.
It would take either KDE or GNOME fully enabling VRR on the desktop to make the driver devs step in and try to come up with a workaround. My guess is that they will stay away from VRR on the desktop until VA panels start going away, and I can't really blame them. It's a hardware issue after all, and any workaround for this might negatively affect game performance, the bread and butter of VRR. It seems to me that whatever is the cheapest way to turn VRR on for games but off at the desktop is probably the best solution for these screens.
You can toggle the setting on when starting a game. |
There is a pretty reliable workaround. I did this years ago, in the early days of FreeSync: build yourself a custom EDID with a minimum framerate of e.g. 60 Hz. It is saved as a file, which you can then add to the kernel cmdline to be loaded at DRM init instead of the display's provided one. Ping me if you can't figure it out; I might find something more somewhere. One of us could then provide collected info and/or a simple tool if it affects more users. I don't think there is a need to work around this in display managers, especially as I think those problematic screens tend to disappear from the market.
IMHO, that's exactly what it is. In a way they do work, but not if the frequency changes in large jumps in a short amount of time, which doesn't usually happen in video games, which is why not many people experience this. In games, frame rates vary a lot, but not from 40 to 140 and back in under a second; that is, however, what happens if you idle on the desktop and move the mouse. And in fact, AMD tried to work around this by e.g. displaying the same picture twice at 80 Hz instead of dropping to 40 instantly. They called it LFC, and it had more reasons, but this is one of them.
Here are some: https://github.com/rpavlik/edid-json-tools or https://sourceforge.net/projects/wxedid/
Right, and that's the only good option right now. Ideally the process would be automatic, even something rudimentary, like an optional setting to enable VRR for fullscreen applications but disable it otherwise. Seems to me like that wouldn't cause long-term trouble, and also wouldn't be hard to implement (although I don't know the sway codebase, so I could be wrong on that one). That's what I meant by "cheap solution".
I have this issue even during gameplay sometimes (Factorio, which is limited to and constantly running at 60 FPS) on a 48-170 Hz IPS screen. GPU is AMD.
Workaround

I currently have this script:

```bash
#!/usr/bin/env bash
# ~/.local/bin/adaptive-sync
swaymsg "output * adaptive_sync on"
cleanup() {
    swaymsg "output * adaptive_sync off"
}
trap cleanup EXIT
"$@"
exit $?
```

It will then automatically enable/disable adaptive sync for whatever I'm running (e.g.
It would be great if it worked with
This seems reasonable and is already the way KDE Wayland works. If there are any fullscreen applications that break, the option can just be disabled in the usual way. |
Am I missing something, or is this not already supported with the VRR option set to 2?
Which VRR option?
Disregard my previous comment, I'm in the wrong repo.
VRR for fullscreen apps only

I stitched this script together. The idea behind this script is to only use VRR when an application is in fullscreen mode. Here are the features:
As a result, you don't have to manually toggle VRR (which mitigates the cursor stuttering in desktop mode, when multi-tasking while gaming, that would occur if VRR were enabled globally).
Update 1 Changelog:
@GrabbenD, your script works fine for most applications I've been using, but the kitty terminal does not seem to like fullscreen with it much. When using the

Edit:

Edit 2: The other possibility I'm wondering about is that GPU-accelerated terminals are not expecting a changing refresh rate, or the toggling of VRR.
With #5063, users have a way to unconditionally enable VRR. However, this can cause flickering on some monitors. Flickering seems to happen on higher-end monitors which have a larger VRR range: it seems like big enough refresh rate variations will cause flickering.
I think there are two ways to fix this. The two options aren't necessarily exclusive.
Do like xf86-video-amdgpu: add a way for clients to opt-in
This would mean designing a Wayland protocol to let clients opt in to VRR. Only clients with a variable frame rate would opt in. This excludes clients such as video players and web browsers, and includes games.
The compositor would only enable VRR if a client is fullscreen and has opted in.
The protocol should probably operate at the wl_surface level. We'll want to let Mesa opt in if possible. Not sure how to let clients override Mesa's decision (this is also an issue with the current X11 implementation).

This is not great, because we miss some use-cases (power savings for web browsers, lowering the refresh rate to the video's frame rate for video players) and it relies on Mesa's VRR blacklist (I don't really like the blacklist).
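As a sketch only, a wl_surface-level opt-in protocol might look like the following. No such protocol exists: every interface and request name here is invented for illustration, loosely following the conventions of wayland-protocols XML specs.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch only: these names are invented, not a real protocol. -->
<protocol name="adaptive_sync_unstable_v1">
  <interface name="zwp_adaptive_sync_manager_v1" version="1">
    <request name="get_adaptive_sync_surface">
      <arg name="id" type="new_id" interface="zwp_adaptive_sync_surface_v1"/>
      <arg name="surface" type="object" interface="wl_surface"/>
    </request>
  </interface>
  <interface name="zwp_adaptive_sync_surface_v1" version="1">
    <!-- The client declares it renders at a variable rate and would like
         VRR enabled while this surface is fullscreen; the compositor is
         free to ignore the hint. -->
    <request name="set_desired"/>
    <request name="destroy" type="destructor"/>
  </interface>
</protocol>
```

The open question from the discussion remains visible in this shape: if Mesa issues the opt-in on the client's behalf, the protocol would also need a way for the client to override that decision.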
Avoid flickering in the compositor
This is a little bit more involved. However, this allows us to take advantage of VRR for power savings and video playback. Also, this doesn't require any cooperation from clients or Mesa; the compositor can do everything on its own.