Try moving Metal Layer handling to the pipeline #1554

Draft · wants to merge 2 commits into master
Conversation

@dpogue (Member) commented Jan 9, 2024

The goal here is to make the pipeline implementation responsible for constructing the renderer layer, so that everything related to Metal can be handled by the Metal pipeline and (eventually) everything related to GL can be handled by the GL pipeline. The client just has a layer as part of its view and doesn't need to care what renderer is being used.

CALayer* plMetalPipeline::GetRenderLayer()
{
    CA::MetalLayer* layer = CA::MetalLayer::layer();
    layer->setPixelFormat(MTL::PixelFormatBGR10A2Unorm);
@dpogue (Member Author):
maximumDrawableCount defaults to 3, so there's no need to set that explicitly here.

contentsScale is not exposed as part of CA::MetalLayer's API, so I added code to the PLSView handlers to set it whenever it changes (they also set the drawableSize).
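
(A minimal sketch of what such a handler could look like; the exact method PLSView uses is an assumption here:)

// Hypothetical: NSView notifies us when the backing scale factor changes.
- (void)viewDidChangeBackingProperties
{
    [super viewDidChangeBackingProperties];
    CGFloat scale = self.window.backingScaleFactor;
    self.layer.contentsScale = scale;
    if ([self.layer isKindOfClass:[CAMetalLayer class]]) {
        CGSize size = self.bounds.size;
        ((CAMetalLayer*)self.layer).drawableSize =
            CGSizeMake(size.width * scale, size.height * scale);
    }
}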

@colincornaby (Contributor):

contentsScale is independent from Metal, so it should probably go into the client. contentsScale should apply to both GL and Metal evenly, or whatever else comes along in the future.

self.layer.contentsScale = scaleFactor;

if ([self.layer isKindOfClass:[CAMetalLayer class]]) {
    ((CAMetalLayer*)self.layer).drawableSize = newSize;
@dpogue (Member Author):

I wanted to do this with respondsToSelector: and performSelector:withObject:, but CGSize isn't an id type so it can't be sent through there. I also can't use setValue:forKey:, which was going to be my other workaround, so we get ugly casting for now 😞

NSInvocation would probably work, but that looks awful to use.
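
(A short illustration of the limitation, using the newSize value from the diff above:)

// performSelector:withObject: can only forward `id` arguments, so a CGSize
// struct can't be passed through it:
//     [self.layer performSelector:@selector(setDrawableSize:)
//                      withObject:newSize];  // won't compile: CGSize is not an id
// ...which leaves the explicit isKindOfClass:/cast approach shown above.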

@colincornaby (Contributor):

NSInvocation is typically on the developer do-not-use list (Swift does not support it in any form), so it should be avoided.

I think the way it's phrased is fine. drawableSize exists on all versions of CAMetalLayer AFAIK, so the CAMetalLayer check is sufficient.

Co-Authored-By: Adam Johnson <AdamJohnso@gmail.com>
@colincornaby (Contributor) commented:

I'll add my misgivings here that I've said elsewhere, while also admitting that what's proposed might be the best way.

CAMetalLayer is not part of Metal - it's part of the window server API, which is why I originally bumped it into the client. The client has a 1:1 relationship with the window server: if we wanted to support a new type of window server, we'd write a new client.

Apple's window servers have all agreed on a common layer for Metal so far, which is at least good. But they have not agreed on a common OpenGL layer (which may also be due to their OpenGL and OpenGL ES vendor-specific APIs being vastly different). So we're starting to push more window-server-specific details into the pipeline, which is kind of meh.

For both Metal and OpenGL on Apple platforms, there are higher-level abstractions such as MTKView and NSOpenGLView. NSOpenGLView has fallen out of favor, but MTKView is still widely used and wouldn't be compatible with this change, since MTKView wants to supply its own layer. However, I don't have any plans to run Plasma under MTKView on any platform.

Another misgiving: on iOS it might be preferred that the view create its layer directly via layerClass, which this would prevent.
https://developer.apple.com/documentation/uikit/uiview/1622626-layerclass?language=objc
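
(For reference, the layerClass pattern this would preclude is roughly:)

// In a hypothetical UIView subclass on iOS, the view creates its own layer:
+ (Class)layerClass
{
    return [CAMetalLayer class];
}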

But my understanding is that all these issues are shared by the GL pipeline as well, in a much larger way: GL has to communicate with multiple window servers from multiple vendors to jumpstart a GL viewport and context.

I guess one question I would have: at least for GL, the GL layer will only work on macOS. Is there a disadvantage to leaving the layer type selection within the macOS client?

CAMetalLayer* layer = [CAMetalLayer layer];
layer.contentsScale = [[NSScreen mainScreen] backingScaleFactor];
layer.maximumDrawableCount = 3;
layer.pixelFormat = MTLPixelFormatBGR10A2Unorm;
@colincornaby (Contributor):

Previously we've talked about inspecting the screen bit depth and setting the layer pixel format based on that, i.e. don't set a 10-bit color depth if the screen isn't 10-bit. I don't know if that change is moving ahead, but moving it out of the client would complicate things. The answer might need to be some sort of engine-level flag for 10-bit color; in theory, Direct3D 9 supports 10-bit color output as well.

I've also seen Apple just set this to 16 bit in their examples (which is what our fragment shaders output in Metal) and call it a day. But memory usage would be higher.
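
(A rough sketch of what that depth inspection could look like, assuming NSScreen's depth is the right thing to ask:)

NSWindowDepth depth = [[NSScreen mainScreen] depth];
BOOL wideColor = NSBitsPerSampleFromDepth(depth) >= 10;  // 10 bits per channel or better
layer.pixelFormat = wideColor ? MTLPixelFormatBGR10A2Unorm
                              : MTLPixelFormatBGRA8Unorm;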

@@ -209,7 +201,7 @@ - (id)init
- (void)startRunLoop
{
    [[NSRunLoop currentRunLoop] addPort:[NSMachPort port] forMode:@"PlasmaEventMode"];
    [self.plsView setBoundsSize:self.plsView.bounds.size];
@colincornaby (Contributor):

Odd. But I think maybe this line was here to get the view to just enter setBoundsSize and do setup.

_renderLayer.backgroundColor = NSColor.blackColor.CGColor;

self.window.contentView.layer = _renderLayer;
self.window.contentView.wantsLayer = YES;
@colincornaby (Contributor):

Not sure how I feel. PLSView was meant to be self-contained and wrap all the details about the layer. Mostly.

@dpogue (Member Author):

There's some nuance here that I don't entirely understand about layer-hosted views vs. layer-backed views and the order of setting layer and wantsLayer.
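
(For reference, my understanding of the AppKit rule: assigning layer first and then wantsLayer makes a layer-hosting view whose layer you manage yourself, while setting wantsLayer first makes a layer-backed view whose layer AppKit creates and owns.)

// Layer-hosting: we supply the render layer, so assign it first.
view.layer = renderLayer;
view.wantsLayer = YES;

// Layer-backed: setting wantsLayer first tells AppKit to create its own layer.
view.wantsLayer = YES;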

@dpogue (Member Author) commented Jan 10, 2024

It's mostly a separation of concerns thing: the client shouldn't need to care what renderer is being used and handle special cases for each renderer.

Ideally everything related to a given renderer should be implemented in its pipeline implementation, and that seems to work fine everywhere except macOS, where they want different UI things used for different renderer types :(

                    options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionInitial
                    context:DeviceDidChangeContext];
}
#endif
@colincornaby (Contributor):

This is a good change, though I think it probably does extra checking. It could simply check the layer type and then check for device, since CAMetalLayer already has device defined. The respondsToSelector would be unnecessary (we wouldn't want some other device function floating in unintentionally in a different layer type). The ifdef might be redundant since it's a dynamic check, but we could skip the dynamic check if Metal wasn't present.

Longer term - we need a common code path for GL and Metal to handle screen and device changes. I think screen right now is all CGDisplay based so that's agnostic to the underlying API.
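
(A sketch of the slimmed-down check being suggested, reusing the observing options and context from the diff above:)

if ([self.layer isKindOfClass:[CAMetalLayer class]]) {
    // CAMetalLayer always defines device, so no respondsToSelector: needed.
    [self.layer addObserver:self
                 forKeyPath:@"device"
                    options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionInitial
                    context:DeviceDidChangeContext];
}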

@dpogue (Member Author):

As far as I can tell, the only thing this is currently being used for is to print the Metal device name in the window title for debugging purposes, so I'm hopeful it can be dropped entirely at some point in the future.

@colincornaby (Contributor):

Strangely, this has been useful. On Intel MacBook Pros I've run into situations in which Plasma is running on the wrong GPU, so it's a nice sanity check. However, it could maybe be re-implemented against the enumeration code that already exists in the engine, which may already be storing the same data.

There's another reason this is a helpful debug hint: Plasma may need to switch GPUs mid-run (which can be supported with Plasma's GPU crash recovery mechanisms), so double checking the GPU device would be helpful for proofing that out. But I haven't gotten to eGPU ejection or Plasma-follows-your-current-display's-GPU yet.

#ifdef __OBJC__
@class CALayer;
#else
class CALayer;
#endif
@colincornaby (Contributor):

Now I'm curious, but I bet I know the answer... Another reason CALayer was kept out of the renderer was to prevent Obj-C from creeping in. This forward declaration seems to be part of the abstraction against that. Metal can allocate a CALayer through Metal-cpp. Wouldn't this introduce Obj-C (or, I'm guessing Obj-C msg sending) into the GL renderer itself?

One risk of putting it into the pipeline is that a lot of stuff includes the pipeline, so you risk Obj-C leakage. I'm wondering if maybe there needs to be a secondary class that just pairs pipelines with layers, and that way the layer integration and all the mess that goes with can live outside of the pipeline.

@dpogue (Member Author):

Maybe this is why I'm playing with subclassing CAOpenGLLayer entirely in C++ using objc runtime APIs (🙊), but in a class that's only included when compiling for Darwin.

The alternative is that the client needs to have a CAOpenGLLayer subclass that knows the internal state of the GL pipeline, like the GL context object, and that feels prone to breaking due to a fuzzy API boundary between the two (whereas CAOpenGLLayer's expected API is well understood).
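
(A very rough sketch of the runtime-API approach, with the class name, method list, and type-encoding string all simplified assumptions:)

#include <objc/runtime.h>
#include <OpenGL/OpenGL.h>
#include <CoreVideo/CoreVideo.h>

// Hypothetical IMP for drawInCGLContext:pixelFormat:forLayerTime:displayTime:.
// The layer makes ctx current before invoking this, so GL calls can go here.
static void plGLLayerDraw(id self, SEL _cmd, CGLContextObj ctx,
                          CGLPixelFormatObj pf, double t, const CVTimeStamp* ts)
{
    // ... GL pipeline draw calls ...
}

static Class plGLLayerClass(void)
{
    Class cls = objc_allocateClassPair(objc_getClass("CAOpenGLLayer"),
                                       "plGLRenderLayer", 0);
    SEL sel = sel_registerName("drawInCGLContext:pixelFormat:forLayerTime:displayTime:");
    class_addMethod(cls, sel, (IMP)plGLLayerDraw, "v@:^v^vd^v");  // encoding simplified
    objc_registerClassPair(cls);
    return cls;
}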

@colincornaby (Contributor) commented:

I'm going to put in some more time on this review this weekend. I'm still thinking it over, but I need to do some work to form more concrete thoughts on this.

One thing I was thinking before is that PLSView would be the glue code that matches the renderer to a layer. The view is macOS-specific, but we're already in a decent amount of macOS specificity. Layers aren't technically part of SwiftUI, so the view provided some level of abstraction and swappability for whenever Apple gets around to replacing layers. (For now, Metal and OpenGL still require layers to operate on macOS.)

I know I've run into issues with this approach on macOS, and you may as well. Metal needs to talk to the layer to set the framebuffer size for resolution switches, and it also needs to make a request of the layer to swap framebuffers and get a new back buffer. I used to try to abstract that away with interfaces back to the client, but recently I've been allowing the pipeline to talk directly to the layer.
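
(That per-frame layer interaction looks roughly like this, assuming the pipeline holds the CAMetalLayer and an MTLCommandQueue:)

id<CAMetalDrawable> drawable = [metalLayer nextDrawable];  // request the next back buffer
if (drawable) {
    id<MTLCommandBuffer> cmdBuf = [commandQueue commandBuffer];
    // ... encode the frame's render passes against drawable.texture ...
    [cmdBuf presentDrawable:drawable];  // swap: schedule this drawable for display
    [cmdBuf commit];
}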

I don't know what you're thinking with OpenGL - but there are probably similar issues. OpenGL also comes with the complication that context creation is managed by the layer as well. I think macOS takes a reasonable position of setting the current context within a callback. OpenGL doesn't have a cross platform context type built in so it's reasonable for macOS to abstract that away. Trying to work with contexts directly will introduce a whole bunch of platform specific code in the renderer. However - not all window managers agree on that. Some window managers encourage direct context manipulation.

If it's helpful in general or here, it may be beneficial to not manage contexts within the GL renderer. I think the context problem is kind of similar to this problem: contexts are platform-specific and will introduce platform code into the renderer, and the client itself is already a pretty good heap of platform code. Multiple contexts also shouldn't be needed; we should be OK with the system-provided context. Multiple contexts are only needed for rendering in multiple threads at once. In theory, things like shadow buffers and other offscreen buffers could be rendered in a multithreaded manner, but Plasma has been pretty serial so far.

I'll think about it more. I'm still stuck on PLSView being meant to solve this, but PLSView not necessarily solving it in practice.
