
HiDPI scaling finetuning #837

Closed
elinorbgr opened this issue Apr 9, 2019 · 42 comments
Labels
C - needs discussion Direction must be ironed out D - hard Likely harder than most tasks here P - high Vital to have S - meta Project governance S - platform parity Unintended platform differences

Comments

@elinorbgr
Contributor

Thanks to previous discussions (#105) and some awesome work by @francesca64 in #548, winit now has a fairly fleshed-out dpi-handling story. However, there is still some friction (notably, see #796), so I suggest putting everything on the table to figure out the details.

HiDPI handling on different platforms

I tried to gather information about how HiDPI is handled on different platforms; please correct me if I'm wrong.

Here, "physical pixels" represents the coordinate space of the actual pixels on the screen. "Logical pixels" represents the abstract coordinate space obtained from scaling the physical coordinates by the hidpi factor. Logical pixels generally give a better feeling of "how big the surface is on the screen".

Windows

On Windows, an app needs to declare itself as DPI aware. If it does not, the OS rescales everything to make it believe the screen has a DPI factor of 1.

The user can configure DPI factors for their screens by increments of 0.25 (so 1, 1.25, 1.5, 1.75, 2 ...)

If the app declares itself as DPI aware, it then handles everything in physical pixels, and is responsible for drawing correctly and scaling its input.

macOS

On macOS, the app can request to be assigned an OpenGL surface of the best size on retina (HiDPI) displays. If it does not, its contents are scaled up when necessary.

DPI factor is generally either 2 for retina displays or 1 for regular displays.

Input handling is done in logical pixels.

Linux X11

The Xorg server doesn't really have any concept of "logical pixels". Everything is done directly in physical pixels, and apps need to handle everything themselves.

The app can implement HiDPI handling by querying the DPI value of the screen it is displayed on and computing the HiDPI factor by dividing it by 96, which is considered the reference DPI for a regular non-HiDPI screen. This means the DPI factor can have basically any float value.
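As a rough sketch of that computation (plain arithmetic; where the DPI reading itself comes from is discussed just below):

// 96 DPI is the conventional "factor 1.0" baseline on X11.
const REFERENCE_DPI: f64 = 96.0;

// E.g. a 282-DPI panel reported by Xrandr gives a factor of about 2.94,
// while an Xft.dpi value of 144 gives exactly 1.5.
fn hidpi_factor_from_dpi(screen_dpi: f64) -> f64 {
    screen_dpi / REFERENCE_DPI
}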

There are several potential sources for obtaining the DPI of the screen:

  • retrieving it from Xrandr, which computes it from the physical dimensions of the monitor, meaning:
    • it'll almost never be a round number
    • it can be unreliable if the X server couldn't get the actual physical dimensions
  • retrieving it from a user configuration like Xft.dpi or the GNOME service configuration, meaning:
    • it is a globally set value, not per-monitor, so it won't work well on mixed-dpi setups
    • if several config sources are available, which one should be used?

Linux Wayland

Similarly to macOS, most of the handling is done only in logical pixels.

The app can check the HiDPI factor for each monitor (which is always an integer) and decide to draw its contents with any (integer) dpi factor it chooses. If the chosen dpi factor does not match that of the monitor displaying the window, the display server will upscale or downscale the contents accordingly.

Mobile platforms

I don't know about Android or iOS, but there should not be any difficulties coming from them: a device only has a single screen, and apps generally take up the whole screen.

Web platform

I don't know about it.

Current HiDPI model of winit

Currently winit follows a hidpi model similar to Wayland's or macOS's: almost everything is handled in logical pixels (marked by a dedicated type), and hidpi support is mandatory in the sense that the drawing surface always matches the physical dimensions of the app.

The LogicalSize and LogicalPosition types provide helper methods for converting to their physical counterparts given a hidpi factor.

The app is supposed to track HiDPIFactorChanged(..) events; whenever this event is emitted, winit has already resized the drawing surface to its new correct physical size, so that the logical size does not change.
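A minimal sketch of how an app consumes this model (winit 0.19-era API from memory; treat the names as approximate):

use winit::dpi::LogicalSize;

// The app keeps thinking in logical coordinates; only the drawing surface
// cares about physical pixels (physical = logical * hidpi_factor).
fn surface_extent(logical: LogicalSize, hidpi_factor: f64) -> (u32, u32) {
    let physical = logical.to_physical(hidpi_factor);
    (physical.width.round() as u32, physical.height.round() as u32)
}

// On WindowEvent::HiDpiFactorChanged(new_factor), the logical size is
// unchanged and winit has already resized the surface, so the app only
// needs to recompute its render-target extent:
// let extent = surface_extent(logical_size, new_factor);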

Reported friction

As can be expected from the previous description, most of the reported friction with winit's model comes from the X11 platform, for which hidpi awareness is very barebones.

I believe the main source of friction, however, comes from the fact that Linux developers are very used to the X11 model and their expectations do not map to the model winit has chosen: winit forces its users to mostly work in logical coordinates to handle hidpi, and to only use physical coordinates when actually drawing. This causes a lot of impedance mismatch, which is a large source of frustration for everyone.

Solutions

There doesn't appear to be a silver bullet that can solve everything, mostly because the X11 model for hidpi handling is so different from the other platforms' that unifying them in a satisfactory way seems really difficult.

So while this section is really the open question of this issue, there are some simple "solutions" that I can think of, in no particular order:

  • Winit could just keep its current model and work around X11 subtleties as best as it can, even if the dpi factor is 1.0866666666, and we can close this issue as WONTFIX or WORKS AS INTENDED
  • Winit could officially declare that X11 is a second-class citizen with regard to hidpi handling. As such, on X11 winit would always report a hidpi factor of 1 and have physical and logical coordinates always be the same. Winit would then expose some platform-specific methods like get_xrandr_dpi_for_monitor and get_xft_dpi_config, and let downstream crates deal with X11 hidpi themselves
  • Winit could shift its dpi model to be closer to X11's: do everything in physical pixels, only provide the dpi factor as information, and let downstream users handle everything themselves
  • Someone suddenly has a "Eureka!" moment and figures out a new revolutionary API that cleanly unifies everything

Now tagging the people who may be interested in this discussion: @Osspial @zegentzy @icefoxen @mitchmindtree @fschutt @chrisduerr

@elinorbgr elinorbgr added D - hard Likely harder than most tasks here P - high Vital to have C - needs discussion Direction must be ironed out S - meta Project governance S - platform parity Unintended platform differences labels Apr 9, 2019
@icefoxen
Contributor

icefoxen commented Apr 9, 2019

Thank you, this is a nice overview of the complexity of things! It definitely makes it clearer that a lot of the weird edge cases are due to X11 being half-assed about it. We'll be rid of it someday, honest.... I'm also not sure we can dismiss mobile platforms as easy though; mobile devices often have much higher DPIs than desktops, 300+ rather than ~100. Making stuff look nice on both is a non-trivial problem, and I sure don't know how to solve it.

For completeness, the definition of "hidpi factor" is physical pixels = logical pixels * hidpi factor. Also, Apple uses the term "points" for logical pixels, so one may stumble across that term in Apple-related docs.

Not to restate what I've already stated at too much length (:-P) but as a user, I am attracted to the Windows style. By default you get Something More Or Less Sane, and if you tell the system you know what you're doing it gets out of your way and lets you handle it. My use case is basically only gamedev though, which is fairly narrow and specialized.

@Osspial
Contributor

Osspial commented Apr 9, 2019

Android, at least, provides fairly detailed docs on how it handles DPI scaling. That's available here.

Regarding X11 DPI scaling: For winit, my general rule of thumb in solutions like this is to do what everyone else does. GDK, Qt, Firefox, and Chrome all have faced this problem before; what solutions did they come up with? If they collectively decided to just not handle DPI scaling, that's the solution we should use, but if there are cleaner solutions buried in their source code it would be silly not to find them and implement them here.

I've got thoughts on reducing the reported friction outside of X11 and solutions for it, but those thoughts aren't fully baked so I'll wait until I figure out what exactly I'm trying to say there before I post them here.

@elinorbgr
Contributor Author

elinorbgr commented Apr 9, 2019

@icefoxen

I'm also not sure we can dismiss mobile platforms as easy though; mobile devices often have much higher DPIs than desktops, 300+ rather than ~100. Making stuff look nice on both is a non-trivial problem, and I sure don't know how to solve it.

From the point of view of winit, I was rather thinking in terms of API constraints: a mobile platform will very likely have a constant dpi factor during the whole run of the app, and it also imposes the dimensions of the window (most often the whole screen). This makes questions like "How to handle resizing?" or "How to specify the initial window size?" basically nonexistent.

@icefoxen

Not to restate what I've already stated at too much length (:-P) but as a user, I am attracted to the Windows style. By default you get Something More Or Less Sane, and if you tell the system you know what you're doing it gets out of your way and lets you handle it. My use case is basically only gamedev though, which is fairly narrow and specialized.

Though if there were no question about X11, isn't that pretty similar to what winit currently does (and wayland and macOS), except everything is handled in "logical pixels" / "points" rather than physical pixels?

The main advantage I see is that it allows most of the app code to be invariant under DPI change: only the drawing logic needs to be DPI-aware if all input / UI processing is done in points.

@Osspial

Regarding X11 DPI scaling: For winit, my general rule of thumb in solutions like this is to do what everyone else does. GDK, Qt, Firefox, and Chrome all have faced this problem before; what solutions did they come up with?

I don't think these are actually what we want to compare winit with. Winit's scope is more similar to projects like FreeGlut or GLFW on this point.

Though, for the record, from my experience of having a hidpi laptop, both Qt and GTK (and to some extent Firefox) "solve" it mostly by being configured using env variables, defaulting to not doing any hidpi handling on X11; see https://wiki.archlinux.org/index.php/HiDPI#GUI_toolkits for example.

@Osspial
Contributor

Osspial commented Apr 9, 2019

I don't think these are actually what we want to compare winit with. Winit's scope is more similar to projects like FreeGlut or GLFW on this point.

Though, for the record, from my experience of having a hidpi laptop, both Qt and GTK (and to some extent Firefox) "solve" it mostly by being configured using env variables, defaulting to not doing any hidpi handling on X11; see https://wiki.archlinux.org/index.php/HiDPI#GUI_toolkits for example.

I was mainly talking about looking at how they calculate DPI scaling values. If they don't even bother, then I think we should just make X11 DPI support second-class - yes, it breaks multi-monitor DPI scaling, but trying to solve that has caused enough problems that I'm tempted to just not even bother with it any more.


Anyhow, my thoughts on reducing API friction:

TL;DR: I think we should drop LogicalSize and LogicalPosition entirely, since Winit is too low-level to expose them in a way that's useful for application developers.

Why do I think that's the case?

There are two broad classes of applications Winit aims to support: 3D/2D rendering for games, and 2D rendering for GUIs. Broadly speaking, logical pixels are useless for the 3D rendering, so I won't spend any time talking about them in that context.

It's less obvious why Winit's logical pixels are useless for GUI rendering: it's useful for GUI applications to express units in terms of logical pixels, right?

Well, yes. However, if a GUI application wants to render its assets without blurring, it still has to choose the proper assets for the pixel density and manually align those assets to the physical pixel grid. To do that, the GUI's layout engine has to distribute the fractional part of the logical pixels throughout the output image as described here (admittedly for text justification, but the same principle can be applied to widget layout). It can only do that effectively in particular, safe areas (mainly whitespace); if it doesn't take care to do it in those safe areas, it'll end up stretching out parts of the image that shouldn't be stretched, such as text, with the following result:

(image: text visibly stretched by naive fractional scaling; caption: "yes gui development is this finnicky. i've spent two years of my life learning this")

Winit's logical pixel model doesn't account for those subtleties, and using it as suggested just distributes fractional pixel values evenly through the rendered image, resulting in artifacts like the one shown above.

To reiterate, the proper way for an application to account for fractional DPI scaling values is by:

  • Choosing the right assets for the given DPI value.
  • Distributing out the fractional logical pixels in places where it's safe to do so.

Winit is, by design, so low in the application rendering stack that it cannot assist with either of those things. As such, we shouldn't even pretend to do it, and just provide physical pixel values w/ a scaling factor to downstream applications. Getting rid of LogicalSize also allows us to expose pixel values exclusively as integers instead of as floats which, as a downstream user, makes a whole lot more sense.

EDIT: To be clear: we shouldn't drop LogicalSize and LogicalPosition without adding in some documentation to fill in the gap - DPI is a hard problem, and if we don't make it a prominent problem, users will forget about it. We should also make get_hidpi_factor a more prominent function by moving it so it's close to the top of Window's documentation, so that it's one of the first functions people see; as of now, it's easy to miss since it's buried in the middle of a whole lot of other, less important functions.

@elinorbgr
Contributor Author

I see your point @Osspial. I'm not yet sure I fully agree, but it makes sense.

However, if we go that route, I believe hidpi awareness should be opt-in, maybe with a .with_hidpi_awareness() method on the WindowBuilder. Handling HiDPI solely from the physical coordinate space will likely require more care than from logical coordinates, so I believe winit users should actively request it.

@icefoxen
Contributor

icefoxen commented Apr 11, 2019

Though if there were no question about X11, isn't that pretty similar to what winit currently does (and wayland and macOS), except everything is handled in "logical pixels" / "points" rather than physical pixels?

No, because winit lacks a way to tell it to ignore the hidpi scaling factor.

only the drawing logic needs to be DPI-aware if all input / UI processing is done in points.

Usually when one clicks on a window one is clicking... well, on something. So your application would need to convert logical to physical pixels or vice versa at some point anyway.

...both Qt and GTK (and to some extent Firefox) "solve" it mostly by being configured using env variables, defaulting to not doing any hidpi handling on X11; see https://wiki.archlinux.org/index.php/HiDPI#GUI_toolkits for example.

...Or by the application itself, in Qt's case by calling QGuiApplication::setAttribute(Qt::AA_EnableHighDpiScaling); as demonstrated here.

With regards to get_hidpi_factor, I'd just like to remind us all of the sort of wacky edge cases we're dealing with: one of the exciting things that's just Hard to deal with in general is you can't always know what hidpi factor a window will actually have until it's created. You can't always control which monitor a window is created on, and if you have multiple monitors with different DPI factors, you don't know what DPI factor to use until the window is already opened, at which point it's too late. I THINK it currently immediately sends a resize event or HidpiChange event or whatever it's called when this happens, but that's really not a good solution, just the least bad one possible.

That said, I'm fine with either opting in or opting out, since unlike converting all logical pixels to physical ones, that's an easy choice to make that only happens in one place.

@chrisduerr
Contributor

With regards to get_hidpi_factor, I'd just like to remind us all of the sort of wacky edge cases we're dealing with: one of the exciting things that's just Hard to deal with in general is you can't always know what hidpi factor a window will actually have until it's created.

This is often a big problem with Linux WMs too, since it can lead to resizing twice immediately at startup which many don't really like very much.

On X11, I think Qt comes closest to providing proper HiDPI support. It supports both the Xrandr method of scaling and fixed scaling values for all screens. GTK is surprisingly lacking here, only reading Xft.dpi for fonts (which is fine and works for most users) while only allowing a fixed DPI UI scale via an environment variable.

I think if winit provided physical pixels, with a way to get the Xft.dpi value through get_hidpi_factor and a way to override this to use the dynamic Xrandr approach instead (maybe WindowExt on Linux could provide something like `with_xrandr_dpi(true)`?), it would likely give developers all the tools necessary to implement DPI however they want (and however a user might want it) on X11.
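To make that concrete, a purely hypothetical sketch of such an X11-specific hook (nothing like this exists in winit today; the names are invented):

/// Hypothetical extension trait, invented for illustration only.
pub trait WindowBuilderX11DpiExt {
    /// true: derive the hidpi factor per monitor from Xrandr's physical sizes;
    /// false (the default): use the global Xft.dpi value divided by 96.
    fn with_xrandr_dpi(self, enabled: bool) -> Self;
}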

In general I think the biggest difference in the change from physical pixels to logical pixels was the automatic window resizing. For Alacritty this is nice since we don't have to do it manually anymore; however, it would be trivial to implement ourselves but is very hard to disable. So if winit didn't do it, the user would mostly be in control of handling DPI themselves, which I'd generally be in favor of.

All I'd want personally is a way to get the hidpi factor; handling everything else myself seems straightforward. Note that this is with a pretty simple application that fully tries to comply with DPI, and we still convert everything to physical pixels anyway.

@elinorbgr
Contributor Author

elinorbgr commented Apr 11, 2019

I'm not going to answer individual points in a quote-reply manner, because I feel it would only exacerbate misunderstandings about details rather than actually advance the discussion.

What I understand is that use cases such as gaming really only deal in physical pixels, and, as @Osspial explained, advanced UI layout really needs to account for physical pixels rather than work only in logical pixels.

However, I feel most of the discussion is done taking the X11/Windows model as a basis. Please take into account that these intuitions don't map well to macOS/Wayland. Especially the question of automatic resizing.

On Wayland, if the dpi factor changes (by moving the window to a different monitor), the physical surface is resized so that the logical size does not change, period. On Wayland, it does not make any sense to request a specific physical size. (But on Wayland the hidpi factor is always an integer, so this simplifies things.)

The initial sequence for the creation of a Wayland window is something like this:

  • The client signifies to the compositor that it intends to create a window
  • The compositor replies to the client with a suggested logical size (typically for tiling WMs) or tells it to choose whatever it wants
  • The client draws its contents once at whatever dpi factor it chooses, usually 1
  • The surface is mapped on the screen; the client is notified of which monitor(s) it is displayed on and of their scaling factors
  • The client can redraw its contents using a new dpi factor; the size of the window as displayed on the screen does not change as a result

This all works together because when submitting its contents, the client actually tells the server "I'm drawing on this window using a dpi factor of XX".
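A tiny sketch of that relationship (plain arithmetic, not actual protocol calls; if I remember correctly, the protocol request involved is wl_surface.set_buffer_scale):

// The compositor derives the window's logical size from the submitted buffer
// dimensions and the scale the client declared for that buffer.
fn logical_size_from_buffer(buffer_px: (u32, u32), buffer_scale: u32) -> (u32, u32) {
    (buffer_px.0 / buffer_scale, buffer_px.1 / buffer_scale)
}

fn main() {
    // A 1600x1200 buffer at scale 2 and an 800x600 buffer at scale 1 describe
    // the same 800x600 logical window; only the content's pixel density differs.
    assert_eq!(logical_size_from_buffer((1600, 1200), 2), (800, 600));
    assert_eq!(logical_size_from_buffer((800, 600), 1), (800, 600));
}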

In this context, if winit is to always expose physical pixels to the user, I see a few possibilities. Assuming winit has some switch to enable or disable DPI-awareness:

  • if dpi-awareness is disabled, winit always tells the server that drawing is done using a dpi-factor of 1, and winit windows will be upscaled by the compositor if necessary and look slightly blurry
  • if dpi-awareness is enabled, I see three options:
    • whenever the dpi factor changes, winit computes the new physical size maintaining the logical size, and forwards the associated HiDPIFactorChanged and Resized events to the app
    • whenever the dpi factor changes, winit just forwards the new dpi factor to the app and lets it deal with resizing itself using winit::Window::set_inner_size()
      • note that here, if the app does not resize itself, the size of the window as perceived by the user will change (and become twice too small or too big, for example)
    • winit picks some arbitrary dpi factor (for example the max of the dpi factors of all monitors), and always uses this factor for everything, letting the compositor downscale the contents if necessary

I personally think the first option is the most correct, to ensure the best visual outcome for the user & avoid wasting resources rendering in hidpi when displayed on a lowdpi screen.

@xStrom

xStrom commented Apr 12, 2019

The user can configure DPI factors for their screens by increments of 0.25 (so 1, 1.25, 1.5, 1.75, 2 ...)

This is certainly true for the per-monitor DPI scaling mode that was introduced in Windows 8.1. However, it might be worth mentioning that the older system-wide DPI scaling mode, introduced in Vista, lets you choose an integer percentage between 100 and 500. Users can still opt in to this mode today, as seen in the screenshot I just took on Windows 10 1809.

(screenshot: the Windows custom scaling settings dialog)

What's more, this value doesn't actually get used directly in Windows. Whatever you enter gets multiplied by 96/100 and saved as an integer in the HKCU\Control Panel\Desktop\LogPixels registry key. This leads to configuration UI quirks: a value like 112% can't actually be set. 112 * (96/100) = 107.52, so LogPixels gets set to 108. When the configuration UI reads this value back, it shows 113% instead, which corresponds to 108.48.
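A tiny sketch of that round trip (the exact rounding mode is my guess; the stored and displayed values match the behaviour described above):

fn main() {
    let requested_percent: f64 = 112.0;
    // The dialog stores the value as an integer DPI in LogPixels: 112 * 96/100 = 107.52 -> 108.
    let log_pixels = (requested_percent * 96.0 / 100.0).round();
    // Reading it back, the UI converts the stored DPI into a percentage: 108 / 96 * 100 = 112.5 -> 113.
    let displayed_percent = (log_pixels / 96.0 * 100.0).round();
    println!("asked for {}%, LogPixels = {}, UI shows {}%", requested_percent, log_pixels, displayed_percent);
}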

@tangmi
Contributor

tangmi commented Apr 13, 2019

Oof, I'm familiar with that field. It's the ResolutionScale enum that gets cast to an int. The "named" settings have some interesting values that might help inform winit's design: 120, 140, and 160.

That being said, would it be reasonable to just round the DPI scale factor on X11 to the nearest integer (or 0.25 increment)? I believe DPI scaling exists to make UIs appear (about) the same size on monitors with different pixel densities, and that scaling is standardized in the system's compositor to make the UIs of different apps on the same system appear the same size. If X11 doesn't have a standard "blessed" way of getting a nice DPI scale factor, I think we can only make our best guess and hope that's what most other apps are doing.

In support of @Osspial's proposal to drop the logical units from winit, I'd like to make the observation that macOS/iOS and Windows (and likely others I'm not familiar with) use logical units in their windowing APIs since those APIs are tied to the OS's UI libraries. While all UI libraries need to work in some logical coordinate system (which could just be equal to the physical units), winit doesn't provide a UI library and can expose just physical units. It should, however, provide tools (like exposing the hi_dpi_factor before creating a window) for callers to create and manage their own UI coordinate system (e.g. they would simply divide all input by the hi_dpi_factor). winit could even still provide LogicalSize and LogicalPosition as helpers that must be created from a hi_dpi_factor and a PhysicalSize or PhysicalPosition, to help encourage and document DPI handling.
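A small sketch of the kind of helper being suggested (the types and constructor here are invented for illustration, not an existing winit API):

#[derive(Debug, Clone, Copy)]
pub struct PhysicalPosition { pub x: f64, pub y: f64 }

#[derive(Debug, Clone, Copy)]
pub struct LogicalPosition { pub x: f64, pub y: f64 }

impl LogicalPosition {
    /// The only way to obtain a logical position is to divide a physical one by
    /// an explicit hidpi factor, which keeps DPI handling visible at the call site.
    pub fn from_physical(p: PhysicalPosition, hidpi_factor: f64) -> Self {
        LogicalPosition { x: p.x / hidpi_factor, y: p.y / hidpi_factor }
    }
}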

@elinorbgr
Contributor Author

winit doesn't provide a UI library and can expose just physical units.

Yeah, that makes sense.

It should, however, provide tools (like exposing the hi_dpi_factor before creating a window) for callers to create and manage their own UI coordinate system (e.g. they would simply divide by the hi_dpi_factor on all input).

Absolutely, with one notable caveat: "exposing the hidpi factor before creating a window" is a hairy problem. Currently, not all platforms allow you to know in advance which monitor your window will be displayed on, which means that in a multi-dpi setup you cannot reliably know in advance what dpi factor will be applied to your window.

For this reason, I believe there is some uncertainty with regard to window creation: how should winit interpret the size requested by the user?

  • as a logical size? This works well on some platforms (all those dealing in logical coordinates), but causes spurious resizes at startup on X11
  • as a physical size? This would work well for X11, but can never work on Wayland in a multi-DPI context
  • in a platform-dependent way? This would require extensive documentation and careful consideration from app writers

winit could even still provide LogicalSize and LogicalPosition as helpers that must be created from a hi_dpi_factor and a PhysicalSize or PhysicalPosition to help encourage and document DPI handling.

Yeah, I believe these are worth keeping too.

@icefoxen
Contributor

icefoxen commented Apr 18, 2019

Honestly the more I think about it the more Windows's approach, of all things, makes sense from a conceptual level. Just 'cause it has an actual slider you can adjust in the display settings. If hidpi is a tool to scale things up to look good for the user, then it makes perfect sense to be able to tell applications "your window is X size, oh and also, the user has requested you display at 1.5x scale". The goal is to provide information to the application about what the user wants. Apple's auto-scaling approach is a hack to try to do something, anything, for all those iOS applications that assume they're always on a 1080p screen.

Under this approach, @Osspial 's proposal to drop logical units sounds reasonable, though it makes me sad 'cause it involves undoing a lot of hard work in winit. :-(

This, annoyingly, also sounds like the exact opposite of what Wayland does, so I want to do some extra research there.

For the matter of exposing a hidpi factor before creating a window, unless we can always specify which monitor to have a window on, I think we're going to have to choose a least-bad solution. Allow the programmer to either ask for a window to be created on a specific monitor and accept it won't always work, or do the current approach of letting them see which monitors have which DPI factors and handle it themselves. Or a bit of both.

For my use case, the Principle Of Least Surprise would dictate that when a program creates a window, it always gets a window that is the physical size it asks for, and also gets a HidpiChanged event or something if necessary.

This all works together because when submitting its contents, the client actually tells the server "I'm drawing on this window using a dpi factor of XX".

This might be the way to close the loop; let a programmer say "I want a window of physical size 800x600 and hidpi factor 1.0", and after it's created winit can say "here is your window of physical size 800x600, but it got created on a screen with hidpi factor 2.0 so have a HidpiChanged event as well." Programs that then want to resize themselves can do so, ideally before drawing or doing anything else so it's minimally invasive. Since we're always going to have to have the application adapt to what it actually gets instead of what it asks for, then it should be given as many tools to do so as necessary.

@elinorbgr
Contributor Author

This, annoyingly, also sounds like the exact opposite of what Wayland does, so I want to do some extra research there.
[...]
For my use case, the Principle Of Least Surprise would dictate that when a program creates a window, it always gets a window that is the physical size it asks for, and also gets a HidpiChanged event or something if necessary.

In its approach, wayland kind-of flips this concern on its head going the other way. The rationale is that the only thing that really matters to the user is "How big do things appear on the screen", and the actual pixel density used to draw them is a detail. This is especially relevant in multi-dpi setups, where logical coordinates reflect a common space independent of the screen actually displaying the window.

And there may be no simple mapping between the abstract space and the physical pixels: suppose the user has set up screen mirroring between a lowdpi and a hidpi monitor. Your app now has two different physical sizes at the same time. In this context, from the Wayland point of view, it does not make any sense for a client to request a size in physical units, as the app cannot reliably anticipate how the pixels in the buffer it submits will relate to the pixels on screen.

So what the client does is that it submits a buffer of pixel contents to the compositor, along with the information "btw, this buffer was drawn assuming a dpi factor of XX". The dimensions of the buffer define the logical dimensions of the window (by dividing them by the factor the client used), and the compositor then takes care of mapping this to the different monitors as necessary.

Wayland also tries to be somewhat security-oriented; as such, a general axiom is that the compositor does not trust the clients. So the clients will never know where exactly they are displayed on the screens or whether they are covered by another window, not even whether there are other windows. To still allow correct hidpi support, the compositor will provide the clients a list of the monitors the window is currently being displayed on, and the dpi factor of each. But the client will not be able to tell the difference between, for example, "being half on one screen and half on the other" and "being on both screens because they are mirrored".

The reasonable behavior for clients in this context is to draw their contents using the highest dpi factor of the monitors they are being displayed on, and let the compositor downscale the contents on the lower-dpi monitors. This is what winit currently does: whenever a new monitor displays the window, it recomputes the dpi factor, and changes the buffer size accordingly. As such the logical size of the window does not change, but its physical size does.

This might be the way to close the loop; let a programmer say "I want a window of physical size 800x600 and hidpi factor 1.0", and after it's created winit can say "here is your window of physical size 800x600, but it got created on a screen with hidpi factor 2.0 so have a HidpiChanged event as well." Programs that then want to resize themselves can do so, ideally before drawing or doing anything else so it's minimally invasive.

So, given what I just described I suppose you can see why this would be difficult. The way a wayland-native app would see it is rather "Here is your window of logical size 800x600, btw you were mapped on a screen with hidpi factor of 2.0, so your contents were upscaled by the compositor. If you want to adapt to it, tell me and submit your next buffer twice as big."

I hope this helped clarify the Wayland approach to dpi handling, and how it conflicts with what other platforms (especially X11) do.

@tangmi
Contributor

tangmi commented Apr 18, 2019

I think I'm with @icefoxen on using physical units to create a window.

If there is an API like get_hidpi_factor_before_window_creation() -> Option<f64>, I imagine a workflow going something like:

// the app knows the logical size of its content
let app_resolution = LogicalSize { width: 800.0, height: 600.0 };

// assume no dpi scaling if the hidpi factor is unavailable yet (e.g. on x11)
let hidpi_factor = get_hidpi_factor_before_window_creation().unwrap_or(1.0);

// at this point, this is the best guess we have for how big our app content should be
// on screen
let window_size = app_resolution.to_physical(hidpi_factor);

let wb = WindowBuilder::new().with_dimensions(window_size);

// Later in the main loop...
events_loop.poll_events(|event| {
    if let winit::Event::WindowEvent {
        event, ..
    } = event
    {
        match event
        {
            winit::WindowEvent::HiDpiFactorChanged(hidpi_factor) =>
            {
                // if we can only get the hidpi factor after window creation, this
                // event should be fired on the first frame and the app can respond
                // to it before the first frame is presented
            }
            _ => (),
        }
    }
});

This lets platforms like Wayland shine while supporting other platforms in a reasonable way. It does mean that winit has to pick a single "best" hidpi factor to provide to the app. I think this is reasonable, especially since an app cannot really do "better" (like rendering different parts at different DPIs) even if given the whole set of currently possible dpi factors for the window.

If an app just doesn't care about hidpi at all, it can work entirely in physical units (without even having to say "I'm rendering at a hidpi factor of 1"):

let window_size = PhysicalSize { width: 800, height: 600 };
let wb = WindowBuilder::new().with_dimensions(window_size);

// ...
winit::WindowEvent::HiDpiFactorChanged(hidpi_factor) =>
{
    // just ignore this event
}

If all events from winit are using physical units (e.g. window resize), then app developers can choose to opt-out by just ignoring all things hidpi.

@elinorbgr
Contributor Author

@tangmi I get your intent, but I'm afraid there has been some misunderstanding: the get_hidpi_factor_before_window_creation() function would not be possible on wayland in general, because a window cannot know in advance on which monitor it'll be displayed.

So your example code, rather than "letting Wayland shine", is really an example of what I described earlier as "interpreting initial window dimensions in a platform-dependent way": as a physical size on all platforms and as a logical size on Wayland.

@tangmi
Contributor

tangmi commented Apr 19, 2019

Ah, my apologies. Given that, however, I'm curious how wayland solves this problem... Is wayland strongly associated with any GUI library (qt, maybe?) or is it just the windowing system that has no opinions on what goes in each window? If it's the latter, I'm curious how people are intended to deal with hidpi settings if the creation of a window (and all mouse, touch, etc input) is in logical units. To me, this seems to imply that wayland is very opinionated that all users correctly handle DPI scaling (by doing everything in logical units) and to only dip into physical units when doing the "low level" thing of dealing directly with gpu surfaces.

I know I've been flip flopping on this, but if Wayland, macOS, and Windows all deal in logical units, I think that winit could also deal in logical units. I'm not sure about how X11 fits into this... Chromium and i3 do some gnarly looking stuff to calculate the DPI scale factors.

@elinorbgr
Contributor Author

Is wayland strongly associated with any GUI library (qt, maybe?) or is it just the windowing system that has no opinions on what goes in each window? If it's the latter, I'm curious how people are intended to deal with hidpi settings if the creation of a window (and all mouse, touch, etc input) is in logical units.

Wayland is not associated with any GUI library and has a pretty low-level approach to graphics: a window is defined by the buffer of its pixel contents, and clients are 100% responsible for drawing them. All input is done in logical units, but with sub-pixel precision (fixed-point numbers with a resolution of 1/256 of a logical pixel).

To me, this seems to imply that wayland is very opinionated that all users correctly handle DPI scaling (by doing everything in logical units) and to only dip into physical units when doing the "low level" thing of dealing directly with gpu surfaces.

Kind of yes. The idea is that if your app is not hidpi aware, you just treat logical pixels as if they were physical pixels and the display server will upscale your window on hidpi screens. If you are hidpi aware, the only part of your code that needs to deal with physical pixels is your drawing code.

This model is greatly facilitated by the fact that the possible dpi factors are only integers. Wayland is arguably pretty opinionated on this point (with the mantra "half-pixels don't exist"), arguing that your screen is either single, double, triple, ... pixel density, and that finer size adjustments should be done through application configuration (font size and friends).

@icefoxen
Contributor

icefoxen commented Apr 19, 2019

So if you ask for a window of 800x600 pixels on Wayland, and it creates a window of 1600x1200 physical pixels and says "this is an 800x600 logical pixel window with a hidpi factor of 2", how do you draw your stuff at full resolution? As I understand it, if you give it a buffer of 800x600 pixels and say "this is drawn at a hidpi factor of 1", it will upscale it to fit. Do you then hand it a buffer of 1600x1200 pixels and say "this can be drawn at a hidpi factor of 2", and it draws it on the screen without transformation?

Also, more importantly for my use case, how does this interact with the creation of OpenGL/Vulkan contexts? Those APIs have their own opinions about viewport sizes that may or may not be the same as Wayland's.

@elinorbgr
Contributor Author

elinorbgr commented Apr 19, 2019 via email

@elinorbgr
Contributor Author

elinorbgr commented Apr 19, 2019 via email

@elinorbgr
Contributor Author

I just realised I was not precise enough with regard to OpenGL/Vulkan:

For OpenGL, the EGL Wayland interface has an API to set the pixel size of the drawing surface; this is what glutin's WindowedContext::resize uses.

For Vulkan, I am no expert, but if I understood correctly, Vulkan swapchains are created for a given rendering surface size, and this size is the one used when exporting the buffers to the Wayland compositor.
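For illustration, handling this from an app using glutin might look roughly like the sketch below (glutin 0.21-era API from memory; treat the exact names and signatures as approximate):

use glutin::{PossiblyCurrent, WindowedContext};

// When the hidpi factor changes, the logical size stays the same, so only the
// GL drawable's pixel size needs updating; glutin forwards this to the
// EGL/Wayland "set the surface pixel size" mechanism mentioned above.
fn handle_dpi_change(ctx: &WindowedContext<PossiblyCurrent>, new_dpi_factor: f64) {
    if let Some(logical) = ctx.window().get_inner_size() {
        ctx.resize(logical.to_physical(new_dpi_factor));
    }
}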

@icefoxen
Contributor

icefoxen commented Apr 23, 2019

How does wayland dpi stuff interact with events, then? I assume everything is in logical coordinates again.

Wayland in general sounds excellent from the point of view of the compositor, and like utter hell from the point of view of the application developer who just wants some assurance that what is actually getting shown bears some vague resemblance to what they intend to be shown. Unsurprisingly. I've given enough presentations to be really sympathetic to the idea of wanting to mirror one's desktop on your laptop and on a crappy projector and have it actually look similar in both places. (Lots of projectors, even new, expensive, modern ones, have absolutely absurd resolutions like 854x480 or 1280x800.) But with integer-only DPI factors there's no way it's actually going to look and function very much the same in the not-uncommon cases of, say, the monitor and projector having sizes that are non-integer multiples of each other, or having different aspect ratios. All the compositor can hope to do in that case is draw the window at the larger size, and scale it down/mush it around on the smaller display to make it fit as well as it can, even if the result is ugly. Or ask the application to draw it twice at different sizes.

In which case, from the point of view of the application, you have to just accept "this is going to get scaled with ugly interpolation no matter what I do and I can't help it, the only thing to do is draw it large and pray it works out okay", or "I need to handle layout and drawing at two different resolutions at once". Neither of which are actually helped by having your window units be in logical pixels.

Ugh. Sorry if I'm beating a dead horse; nobody's sicker of thinking about this than I am. But what is an application even supposed to do if it's mirrored on two monitors, one 2560x1440 and the other 1920x1080? Both have the same aspect ratio, both are very common and reasonable modern monitor sizes. Very sane situation to have. One monitor is 4/3 times the size of the other. An integer hidpi factor isn't going to cut it. Again, the only options are "scale with interpolation" or "get the application to draw twice at different sizes".

@elinorbgr
Contributor Author

Yes, all events are done in logical coordinates.

I don't get your point for the rest, though. If you're mirroring on two monitors with very different dimensions, the display server (be it X11, Wayland, Windows or macOS) has only two solutions: either the two monitors won't cover the same rectangular surface of the "virtual desktop" (which is what Xorg does IIRC), or it'll have to scale things on one of the monitors. I don't see how using logical coordinates in the protocol has anything to do with the display server's decision to do one or the other (and the Wayland protocol does not mandate anything here).

In any case, complaining about it here will achieve nothing apart from venting your frustration at me. I didn't design the protocol and have no power to change it. Unless your point is that winit should not support Wayland at all, in which case I don't know what to say to you.

Anyway, at this point I just don't care anymore. I think I've pretty clearly laid out the constraints of the Wayland platform, so I'll just let you all decide on the API you want. Ping me when you have decided and I'll shoehorn the Wayland backend into it.

@icefoxen
Contributor

icefoxen commented Apr 23, 2019

Yeah. Sorry. That wasn't really directed at you personally as much as me just being frustrated. Sorry it came off that way. 😢

The point I'm trying to make is basically that hidpi is a false solution that doesn't actually solve any of the problems it pretends to solve. Trying to deal with it only makes life more complicated for no gain, because there are so many situations it doesn't cover.

I had hoped that Wayland would make the world's graphics ecosystem better, not worse, but that doesn't seem to be happening.

@elinorbgr
Contributor Author

elinorbgr commented Apr 23, 2019

@icefoxen
Yeah. Sorry. That wasn't really directed at you personally as much as me just being frustrated. Sorry it came off that way. 😢

Okay, I get that you are frustrated, even though I don't fully get the source of your frustration. Given that I'm mostly familiar with how Wayland handles DPI, I'm kinda frustrated, from my point of view, by how X11 seems to not handle it at all.

As @anderejd reminds us, our goal here is to find a pragmatic API that works reasonably well on all platforms. Let me make a proposal that attempts that (a rough code sketch follows the list below).

  • All winit-generated events only deal in PhysicalPosition and PhysicalSize
  • the HiDPIFactorChanged(f64) event is changed into HiDPIFactorChanged(f64, Option<PhysicalSize>).
    • The second value of the event may contain a new physical size suggested by the platform to account for the change in pixel density. This suggested size is not automatically applied by winit, and the app writer is free to do whatever they want with it. If the value is Some(_), it is however recommended to call Window::set_inner_size with the chosen size, even if it is not the one suggested by winit. Not doing so may result in graphical glitches (like the window decorations not being drawn at the correct size on Wayland, for example).
  • The window creation API takes as arguments a PhysicalSize and an Option<f64>.
    • If the Option is set to None, winit will just give you a window with this physical size and not try to be any smarter.
    • If the Option is set to some value, winit will interpret it as the intended DPI factor matching this physical size, and if the monitor your window is spawned on does not match this value, it'll generate a HiDPIFactorChanged suggesting a new physical size corrected for the actual DPI factor.
    • On Wayland, due to how the platform works, setting the option to None is equivalent to setting it to Some(1.0).
  • While no longer used in events, the LogicalSize and LogicalPosition types and the conversion methods to/from their physical counterparts are kept in winit's API for convenience.
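A rough code sketch of this proposal (hypothetical types and signatures, purely to illustrate the shapes described above):

// Hypothetical sketch — none of this is existing winit API.
pub struct PhysicalSize { pub width: u32, pub height: u32 }

pub enum WindowEvent {
    /// The second field is a physical size suggested by the platform to account
    /// for the new pixel density; winit would not apply it automatically.
    HiDpiFactorChanged(f64, Option<PhysicalSize>),
    // ... other events elided
}

pub struct WindowBuilder { inner_size: Option<(PhysicalSize, Option<f64>)> }

impl WindowBuilder {
    /// `assumed_factor == None`: just create a window of exactly this physical size.
    /// `Some(f)`: this size assumes a dpi factor of `f`; if the monitor's actual
    /// factor differs, a HiDpiFactorChanged with a corrected suggestion is emitted.
    pub fn with_inner_size(mut self, size: PhysicalSize, assumed_factor: Option<f64>) -> Self {
        self.inner_size = Some((size, assumed_factor));
        self
    }
}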

How does that sound?

@Osspial
Contributor

Osspial commented Apr 24, 2019

@vberger That looks good. The one point I'd add would be to change the units in PhysicalPosition to i32s and in PhysicalSize to u32s, since it doesn't make a whole lot of sense to expose physical pixel values as floats (you can't have a fractional physical pixel, or at least you shouldn't).

@elinorbgr
Contributor Author

Agreed about PhysicalSize; I'm not so sure about PhysicalPosition. Some platforms (Wayland, for example) provide sub-pixel precision for input events. Do we want to lose that?

@Osspial
Contributor

Osspial commented Apr 24, 2019

I wasn't aware that was something Wayland could do. If that's the case, I see no reason to not expose that.

@Ralith
Contributor

Ralith commented Apr 25, 2019

X11 with XI2 also provides subpixel precision for mouse positions. This is useful e.g. for painting in graphics applications, where it can noticeably reduce aliasing.

@icefoxen
Contributor

icefoxen commented May 6, 2019

If people agree that the way to go is to default to PhysicalPosition/Size I'm willing to make a PR to start the process. It sounds easy and tedious, so I'm probably qualified. Not sure if this is something we want rolled into the same release as EL2.0, thoughts?

@elinorbgr
Contributor Author

IMO it'd be better to roll it into the same release as EL2.0. I'd tend to cluster all breaking changes together rather than do several largely breaking releases in a short timespan.

@tangmi
Contributor

tangmi commented May 7, 2019

I believe that it's possible for input devices to have a higher resolution input space than the screen they're displaying to (e.g. pen digitizers), so PhysicalPosition might want to use floats.

An example is the POINTER_INFO struct on Windows, which can return values in HIMETRIC for higher precision with pen input.

@Ralith
Contributor

Ralith commented May 7, 2019

I believe that it's possible for input devices to have a higher resolution input space than the screen they're displaying to (e.g. pen digitizers)

This is actually pretty common even for conventional input devices, in my experience. There's really no reason for any analog input device's precision to be limited to the display resolution, of all things.

@tangmi
Contributor

tangmi commented May 7, 2019

...

This is actually pretty common even for conventional input devices, in my experience. There's really no reason for any analog input device's precision to be limited to the display resolution, of all things.

Yeah, as another example, it looks like Wacom wintab's PACKET struct has fields pkX, pkY, and pkZ, which (although the docs aren't clear on this) seem to be in either the tablet's "native" or "output" space, which maps to some region of the screen but is not directly equatable to screen coordinates.

Additionally, UIKit's UITouch.location returns a point in a given view's coordinate system. It's float-precision, presumably because views don't need to be at integer-multiple scale factors from each other, but this design has the side effect of allowing fractional input even on the root view.

@Osspial
Contributor

Osspial commented May 30, 2019

I'm gonna take a stab at implementing this.

@Osspial
Contributor

Osspial commented May 30, 2019

  • The window creation API takes as arguments a PhysicalSize and an Option<f64>.
    • If the Option is set to None, winit will just give you a window with this physical size and not try to be any smarter.
    • If the Option is set to some value, winit will interpret it as the intended DPI factor matching this physical size, and if the monitor your window is spawned on does not match this value, it'll generate a HiDPIFactorChanged suggesting a new physical size corrected for the actual DPI factor.
    • On Wayland, due to how the platform works, setting the option to None is equivalent to setting it to Some(1.0).

@vberger Actually, thinking about it, does it maybe make more sense to have the window creation API take either a PhysicalSize or a LogicalSize, like this?

pub enum Size {
    Physical(PhysicalSize),
    Logical(LogicalSize),
}

impl From<PhysicalSize> for Size {/* ... */}
impl From<LogicalSize> for Size {/* ... */}

pub struct WindowAttributes {
    pub inner_size: Option<Size>,
    pub min_inner_size: Option<Size>,
    pub max_inner_size: Option<Size>,
    /* ... */
}

impl WindowBuilder {
    pub fn with_inner_size<S>(mut self, size: S) -> WindowBuilder
        where S: Into<Size>
    {/* ... */}
    pub fn with_min_inner_size<S>(mut self, min_size: S) -> WindowBuilder
        where S: Into<Size>
    {/* ... */}
    pub fn with_max_inner_size<S>(mut self, max_size: S) -> WindowBuilder
        where S: Into<Size>
    {/* ...*/}
}

IMO this would be easier to document/implement than having an optional target DPI parameter, but I'm also the person that came up with it so I may not be the best judge of that.
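For illustration, calling code under that sketch might look like this (hypothetical usage matching the snippet above, not a released winit API):

// Thanks to the Into<Size> bound, callers pass whichever unit they actually have.
fn build_windows() {
    // A game asks for an exact physical framebuffer size...
    let game_builder = WindowBuilder::new()
        .with_inner_size(PhysicalSize::new(1600.0, 1200.0));
    // ...while a GUI asks for a DPI-independent logical size.
    let gui_builder = WindowBuilder::new()
        .with_inner_size(LogicalSize::new(800.0, 600.0));
}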

@elinorbgr
Contributor Author

@Osspial I guess it makes sense, yes. And in that case, such an API could also be used for Window::set_inner_size, I guess?

@Osspial
Contributor

Osspial commented Jun 1, 2019

@vberger Yep. I'll go ahead on that design, then.

@PSteinhaus

Actually, thinking about it, does it maybe make more sense to have the window creation API take either a PhysicalSize or a LogicalSize?

This helped us very much, thank you for providing it. (Sorry that this comment doesn't really provide any constructive feedback, I hope it's ok anyway.)

@dhardy
Contributor

dhardy commented Jun 6, 2021

I believe this is all implemented now and can be closed?

At least, the biggest issue I have with Wayland scaling is the lack of support for non-integer scale factors, but that's not really something we can solve here.

@kchibisov
Member

I'll close this issue, given that the semantics outlined here are now established in winit.
