
Coordinate systems #416

Closed
Richard-Yunchao opened this issue Aug 19, 2019 · 27 comments


@Richard-Yunchao
Contributor

Richard-Yunchao commented Aug 19, 2019

Introduction

There are many coordinate systems in native graphics APIs and WebGL, and they differ in some aspects. For example, the +Y axis may point up or down, and the origin (0, 0) or the point (-1, -1) may be at the top-left or bottom-left corner. These differences are quite confusing for developers, and require a Y-flip and/or an inverted winding direction for culling in order for applications to render correctly and behave consistently.

This document attempts to summarize all of these coordinate systems and propose an appropriate solution for WebGPU. It is based on a Google doc I wrote.

Coordinate Systems

There are three coordinate systems that need to be taken into consideration in the graphics pipeline for WebGPU:
1) NDC (Normalized Device Coordinates): this coordinate system is used by developers to construct their geometries and transform them in the vertex shader via model and view matrices. The point (-1, -1) in NDC is located at either the top-left corner (Y-down) or the bottom-left corner (Y-up).

Normalized vertex data (positions, normals, transform matrices) and clip coordinates follow NDC.

2) Framebuffer coordinates (viewport coordinates): when we write into an attachment, read from an attachment, or copy/blit between attachments, we use framebuffer coordinates to specify the location. The origin (0, 0) is located at either the top-left corner (Y-down) or the bottom-left corner (Y-up).

Viewport coordinates and fragment/pixel coordinates (like gl_FragCoord, which follows this convention by default, though it can be configured) follow framebuffer coordinates.

3) Texture coordinates: when we upload texture data into memory or sample from a texture, we use texture coordinates. The origin (0, 0) is located at either the top-left corner (Y-down, upside down) or the bottom-left corner (Y-up, right-side up).

We can divide texture coordinates into texture uploading coordinates, which define where texels are stored in texture memory (the lowest address stores either the top-left or the bottom-left texel), and texture sampling coordinates, which define where we sample/read from memory. For example, the API code that uploads data into a texture via texImage2D might be written by one developer, while the shader code that samples from the texture might be written by another; the two may have different mental models of texture coordinates. But these two coordinate systems are the same within any given graphics API, so as long as both developers follow that API's convention, everything is fine. What is interesting is that if both of them make mistakes (both the uploading and the sampling coordinates are flipped relative to the API's convention), you still fetch the correct texel!
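A minimal sketch (hypothetical, not modeled on any real API) of why two opposite mistakes cancel out:

```python
# Hypothetical sketch: a tiny 2x2 "texture" stored as a list of rows.
# Row 0 models the lowest memory address.

def upload(rows, flipped):
    """Store image rows into 'memory'; flipped=True models an uploader
    that uses the wrong Y convention for the API."""
    return rows[::-1] if flipped else list(rows)

def sample(texture, x, y, flipped):
    """Read a texel; flipped=True models a sampler that also uses the
    wrong Y convention."""
    height = len(texture)
    row = (height - 1 - y) if flipped else y
    return texture[row][x]

image = [["a", "b"], ["c", "d"]]

# Both correct, or both wrong: the fetched texel is the same.
assert sample(upload(image, False), 0, 0, False) == "a"
assert sample(upload(image, True), 0, 0, True) == "a"
# Only one of the two wrong: the fetch is vertically flipped.
assert sample(upload(image, True), 0, 0, False) == "c"
```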

A few more coordinate systems in graphics are listed below:
1) Tessellation coordinates: we won’t discuss these because WebGPU does not currently support tessellation.
2) Window coordinates (present coordinates): the window compositor/manager, window system, and display may need to be aware of these, but they are not WebGPU’s responsibility. All window coordinates are Y-down across different OSes.

OpenGL's framebuffer coordinates differ from window coordinates, so when the rendered image is presented, the driver flips Y under the hood to display the result on screen. This can hurt performance because it adds an extra pass that flips Y for every pixel at presentation time.
I therefore propose Y-down framebuffer coordinates, aligned with window coordinates. That way the proposals avoid this presentation-time performance issue in WebGPU implementations and in D3D/Vulkan/Metal drivers (OpenGL is different, but it is the least important backend for WebGPU).

3) Canvas coordinates: we need to consider these when implementing WebGPU in browsers, but they are not part of WebGPU's rendering pipeline. Canvas coordinates are Y-down, which is aligned with the framebuffer coordinates in my proposals.

Native APIs

OpenGL, OpenGL ES and WebGL

NDC: +Y is up. Point(-1, -1) is at the bottom left corner.
Framebuffer coordinate: +Y is up. Origin(0, 0) is at the bottom left corner.
Texture coordinate: +Y is up. Origin(0, 0) is at the bottom left corner. See OpenGL 4.6 spec, Figure 8.4

D3D12 and Metal

NDC: +Y is up. Point (-1, -1) is at the bottom left corner.
Framebuffer coordinate: +Y is down. Origin (0, 0) is at the top left corner.
Texture coordinate: +Y is down. Origin(0, 0) is at the top left corner.

Vulkan

NDC: +Y is down. Point(-1, -1) is at the top left corner. See the description about “PointCoord” in Vulkan 1.1 spec.
Framebuffer coordinate: +Y is down. Origin(0, 0) is at the top left corner. See the description about “VkViewport” and “FragCoord” in Vulkan 1.1 spec. But we can flip the viewport coordinate via a negative viewport height value.
Texture coordinate: +Y is down. Origin(0, 0) is at the top left corner.

Possible solutions for WebGPU

Factors we need to consider

When we propose a solution for WebGPU's coordinate systems, we need to consider the impact of these factors:
WebGPU developers: if web developers don't need to flip Y or invert the winding direction in a particular solution, then that solution is better.
Implementation: we need to consider the implementation and its performance impact on the different native graphics APIs: D3D12, Metal, Vulkan (and maybe OpenGL).
WebGL compatibility: it's impossible to run WebGL applications on a WebGPU runtime directly because the APIs are quite different, but we can reuse as many resources as possible when porting WebGL apps to WebGPU. It's better to propose a solution in which vertex data, shaders, and textures from WebGL can be reused with no change or small changes.

Possible solutions

Let's discuss the coordinate systems in reverse pipeline order:
1) Texture coordinates: Y is down in all three modern APIs (D3D12, Metal and Vulkan), although Y is up in WebGL and OpenGL. So I propose Y-down texture coordinates. I was told the group already discussed texture coordinates in #157, and the consensus was the same (Y-down).
2) Framebuffer coordinates: Y is down in all three modern APIs (D3D12, Metal and Vulkan), although Y is up in WebGL and OpenGL. So I propose Y-down framebuffer coordinates. In addition, since we tend to choose Y-down texture coordinates, it's better to align framebuffer coordinates with them. Otherwise, when we sample from a rendered texture (i.e., we read it using texture coordinates but just rendered it using framebuffer coordinates), having the two differ would be inconvenient for WebGPU developers.

Y-down framebuffer coordinates make it easier for WebGPU to render into a canvas, because canvas coordinates are Y-down: the two are aligned, so no Y-flip is needed when drawing or compositing a framebuffer image into a canvas.
Y-down framebuffer coordinates also make it easier for WebGPU to present on screen, because window coordinates are Y-down: again the two are aligned, so no Y-flip is needed when presenting the framebuffer image.

3) NDC: Y is up in D3D12 and Metal (and in WebGL and OpenGL), but Y is down in Vulkan. I propose Y-up NDC for WebGPU because two of the three modern APIs, as well as WebGL applications, follow this convention. Y-down is a possible alternative, though.

So there are two possible solutions for WebGPU's coordinate systems:

  1. Y up in NDC, Y down in other coordinate systems
  2. Y down in all coordinate systems.

Let's take a look at the implementation on top of the native graphics APIs and the impact of these two solutions (say, for porting WebGL apps).

Implementation and impact

Solution 1: Y up in NDC, Y down in other coordinate systems, see Dawn patch 10201

D3D12 and Metal: When we implement it on D3D12 and Metal, we need to do nothing.
Vulkan: When we implement it on Vulkan, we need to flip Y.
OpenGL: When we implement it on OpenGL, we need to flip Y in vertex shader and invert winding direction.

For this solution, we need to map (-1, 1) at the top-left corner of NDC to (0, 0) at the top-left corner of framebuffer coordinates, which might be mathematically incorrect: it would be more natural to map the smallest NDC value, the point (-1, -1), to the smallest framebuffer value.
The upside is that when we port a WebGL application, we can reuse the vertex data and vertex shaders directly.

Solution 2: Y down in all coordinate systems, see Dawn patch 8420

OpenGL: When we implement it on OpenGL, we need to invert winding direction.
D3D12 and Metal: when we implement it on D3D12 and Metal, we need to flip Y in vertex shader.
Vulkan: when we implement it on Vulkan, we need to do nothing.

For this solution, we map (-1, -1) in NDC to (0, 0) in framebuffer coordinates, which is mathematically correct. Furthermore, +Y is down in all coordinate systems; this consistency is quite good for WebGPU developers. WebGL also behaves consistently across its coordinate systems, though it is Y-up in all of them.
However, when we port a WebGL application to WebGPU, we need to flip Y in the WebGL vertex shaders and invert the winding direction in the application, because the NDCs differ between WebGL and WebGPU. Otherwise, the geometry rendered on screen might have the wrong shape.
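As a sketch, the viewport transform under the two solutions differs only in whether Y is flipped on the way from NDC to framebuffer coordinates (the function and parameter names below are illustrative, not from any API):

```python
# Illustrative viewport transform for the two solutions.

def ndc_to_fb(x, y, width, height, ndc_y_up):
    fb_x = (x + 1) / 2 * width
    if ndc_y_up:
        # Solution 1: Y-up NDC, Y-down framebuffer -> Y must be flipped,
        # so NDC (-1, 1) (top left) maps to FB (0, 0) (top left).
        fb_y = (1 - y) / 2 * height
    else:
        # Solution 2: Y-down NDC, Y-down framebuffer -> no flip, so the
        # smallest NDC point (-1, -1) maps to the smallest FB point (0, 0).
        fb_y = (y + 1) / 2 * height
    return (fb_x, fb_y)

assert ndc_to_fb(-1, 1, 640, 480, ndc_y_up=True) == (0, 0)
assert ndc_to_fb(-1, -1, 640, 480, ndc_y_up=False) == (0, 0)
```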

Implementation Details

How to invert winding direction

Changing the winding direction from CW/CCW to CCW/CW in a graphics API is simple. And we can query whether a primitive is front-facing or back-facing via gl_FrontFacing (a read-only built-in) in OpenGL shaders.
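A small sketch (illustrative, not from the thread) of why a Y-flip must be paired with an inverted winding setting: a triangle's winding is the sign of its signed area, and flipping Y negates that sign, turning CCW into CW.

```python
# Signed area of a 2D triangle; positive = counterclockwise.

def signed_area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # CCW in a Y-up view
flipped = [(x, -y) for x, y in tri]          # apply a Y-flip

assert signed_area(tri) > 0        # counterclockwise
assert signed_area(flipped) < 0    # after the flip: clockwise
```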

How to flip Y

We can flip Y by a few means:
1) Operate on the coordinate values directly in the shader.
In the vertex shader, we can negate gl_Position.y:

    gl_Position.y = -gl_Position.y;

In the fragment shader, we can assign 1.0 - texCoord.y to texCoord.y:

    texCoord.y = 1.0 - texCoord.y;

And we can do the same for framebuffer coordinates via FragCoord.

2) Use tools like SPIRV-Cross and/or SPIR-V/OpenGL semantics. SPIRV-Cross has the ability to flip Y for you, the shader can specify whether the origin is located at the top left or bottom left, or we can use the glClipControl API.

3) Set an appropriate viewport rect. We can change the viewport rect in Vulkan 1.1, or in Vulkan 1.0 with VK_KHR_maintenance1 support, as follows.

    VkViewport vp = originalViewport;
    vp.y = originalViewport.y + originalViewport.height;
    vp.height = -originalViewport.height;

This is a good option because changing the viewport rect is quite easy and we don't need to revise the shader. Sometimes developers would like to change only the API side; they don't want to revise shaders because a shader may be a private asset for which only the assembly code (like SPIR-V) or even only binary code is available.

Proposal

According to this investigation, solution 1 (Y-up NDC, Y-down in all other coordinate systems) can be supported on more native graphics APIs (D3D12 and Metal) without any change. Furthermore, it can be supported on Vulkan with a simple change on the API side only. In addition, it is friendlier for reusing WebGL's resources. So I propose solution 1.

One more issue: Z range in NDC

The Z range is [-1, 1] in WebGL and OpenGL, but it is [0, 1] in D3D12, Metal and Vulkan. So I propose that the Z range be [0, 1], following the modern APIs.
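A one-line sketch of the depth remapping a WebGL port would then need (the function name is illustrative):

```python
def gl_depth_to_zero_one(z_gl):
    """Map a GL-style clip-space depth in [-1, 1] to [0, 1]."""
    return (z_gl + 1) / 2

assert gl_depth_to_zero_one(-1) == 0.0
assert gl_depth_to_zero_one(0) == 0.5
assert gl_depth_to_zero_one(1) == 1.0
```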

Q/A about coordinate systems

I discussed with Intel driver team about a few questions related to coordinate systems. I'd like to list them as follows. Hope that I didn't misunderstand them.

  1. When do we do culling? I saw a statement saying that all APIs agree on the definition of a clockwise or counterclockwise polygon: the decision is made in framebuffer coordinates, as a human would see the polygon if it were presented on screen. Is this statement true? Is winding direction (clockwise vs. counterclockwise) based on NDC or on framebuffer coordinates? Previously I thought CW and CCW were defined in NDC, since backface culling is done after vertex shading but before the viewport transform, at which point framebuffer coordinates have not yet been computed.
    A: Culling is done after the viewport transform, right before rasterization, because the viewport transform can affect a geometry's winding (the winding can differ depending on the perspective). So culling is done on the basis of framebuffer coordinates. In addition, extensions/core features (like VK_KHR_maintenance1) let you flip Y by setting a negative viewport height in OpenGL/Vulkan, and since culling is done after the viewport transform, you need to take care of that: flipping Y changes the winding direction.

  2. How do we map points in NDC to points in framebuffer coordinates? I thought the smallest value in NDC, the point (-1, -1), would be mapped to the smallest value, the origin (0, 0), in framebuffer coordinates. That is true for Vulkan and OpenGL, but not for D3D. Is this weird for D3D? I mean, developers might be happier if we always mapped the smallest value in one coordinate system to the smallest value in another, whether framebuffer, texture, or screen coordinates.
    A: In Vulkan and OpenGL, we map (-1, -1) in NDC to (0, 0) in FB coordinates because the Y axes of NDC and FB coordinates point the same way (both down or both up). This behavior looks more mathematically correct; mapping (-1, 1) in NDC to (0, 0) in FB coordinates, as D3D/Metal do, seems mathematically incorrect. But if developers are used to it, it's not a big problem. And in fact game engines and 3D modeling tools follow D3D's coordinates when generating meshes for geometries, so developers don't mind it at all; they might even find mapping (-1, 1) in NDC to (0, 0) in FB coordinates natural.

  3. Shall we follow D3D/Metal's coordinate systems or Vulkan's for WebGPU? It looks like the only difference is whether NDC is Y-up or Y-down.
    A: I would say it is good to follow D3D's coordinate systems, because many game engines target D3D first and port to other platforms after the fact; different coordinate systems can lead to portability problems like Y-flips in existing apps. On the other hand, following Vulkan's coordinates is not bad if WebGPU looks to the future, because Y-down might be the trend: more and more coordinate systems (NDC, FB coordinates, texture coordinates, window coordinates) are Y-down. Being consistent across all coordinate systems is very good for developers, too.

  4. What about the performance impact on hardware?
    A: AFAIK, the performance impact is not a big deal because we only need to flip Y or invert winding direction. I don't have performance data, though.

  5. What about window coordinate (present coordinate)? It seems to me that every window manager or window system uses +Y = down. I mean, Y is down in all window coordinate (or present coordinate) and origin(0, 0) is located at the top left corner across different OSes. Is this correct?
    A: AFAIK, it is correct.

References

  1. Vulkan spec
  2. OpenGL spec
  3. Coordinate systems in MSDN
  4. Working with Viewport and Pixel Coordinate Systems in Metal Programming Guide
  5. Keeping the Blue Side Up: Coordinate Conventions for OpenGL, Metal and Vulkan
  6. Flipping the Vulkan viewport
@Kangz
Contributor

Kangz commented Aug 19, 2019

I think "Y up" vs. "Y down" is confusing because it is tied to how we represent things on paper. A better way to look at it imho is to describe how texel (0, 0) is represented in the different coordinate spaces, assuming texel (0, 0) is the first texel in a buffer that's copied to a texture.

For example in OpenGL texel (0, 0) is:

  • Texture coordinate (0, 0)
  • NDC coordinate (-1, -1)
  • Viewport coordinate (0, 0)
  • At the bottom left in present coordinates
  • At the top left in canvas coordinates

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Aug 19, 2019

I don't think there are many differences. In all graphics APIs, the origin of a framebuffer and the origin of a texture both correspond to the lowest byte in the memory backing that image. In other words, memory address 0 holds the origin (0, 0) in both FB and texture coordinates, and addresses increase as we move to the right and then to the next line. Whether we are dealing with texels in a texture or pixels in a framebuffer, this relationship holds. The only difference is where that origin is located: the top-left corner (Y-down) or the bottom-left corner (Y-up).
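The addressing rule above can be sketched as follows (the parameter names, like bytes_per_texel, are illustrative, not from any API):

```python
def texel_offset(x, y, width, bytes_per_texel=4):
    """Byte offset of texel (x, y) in a tightly packed row-major image."""
    return (y * width + x) * bytes_per_texel

assert texel_offset(0, 0, width=256) == 0     # origin = lowest address
assert texel_offset(1, 0, width=256) == 4     # increases to the right
assert texel_offset(0, 1, width=256) == 1024  # then to the next line
```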

In addition, present coordinates and canvas coordinates, as I said, don't belong to the native graphics APIs. IMHO, the former is the same as window coordinates; the display, window manager, and window system need to care about them, and AFAIK present coordinates are Y-down across different OSes (the origin is at the top-left corner). The latter, canvas coordinates, belong to the web platform only and are not part of the native graphics coordinate systems either; canvas coordinates are also Y-down (origin at the top-left corner).

@kvark
Contributor

kvark commented Aug 19, 2019

Thank you for moving the investigation here from the Google docs!

However, we need to map (-1, 1) at top left corner in NDC to (0, 0) at top left corner in framebuffer coordinate, which might be mathematically incorrect.

Could you elaborate on that?

invert winding direction from CW/CCW to CCW/CW in graphics API is simple, we may invert winding direction via setting gl_FrontFacing in shader.

Strictly speaking, gl_FrontFacing can't be set, it's read-only. Unless you meant having the shader translation to make it so gl_FrontFacing is inverted? That shouldn't be necessary if the implementation adjusts the frontFace property of a render pipeline at creation.

Now that solution 1 (Y up in NDC, and Y down in all other coordinates) can be supported on more native graphics APIs, and it's more friendly to reuse WebGL's resources. I tend to propose solution 1.

I agree. It's also appealing because only Vulkan needs to be patched, where we can achieve the NDC flip by just specifying the inverted view port rect, so no shader changes are needed.

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Aug 19, 2019

I think "Y up" vs. "Y down" is confusing because it is tied to how we represent things on paper. A better way to look at it imho is to describe how texel (0, 0) is represented in the different coordinate spaces, assuming texel (0, 0) is the first texel in a buffer that's copied to a texture.

For example in OpenGL texel (0, 0) is:

  • Texture coordinate (0, 0)
  • NDC coordinate (-1, -1)
  • Viewport coordinate (0, 0)
  • At the bottom left in present coordinates

Let's not use the term present coordinate; I think this concept might be misleading: does it mean the window coordinates used by the display to present the image on screen, or the framebuffer coordinates (fragment/pixel coordinates) used by rasterization to render fragments/pixels into an attachment? I thought it meant the former, but it looks like you meant the latter.

  • At the top left in canvas coordinates

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Aug 19, 2019

Thank you for moving the investigation here from the Google docs!

Looks like we are more active commenting on WebGPU issues than on a Google doc. :)

However, we need to map (-1, 1) at top left corner in NDC to (0, 0) at top left corner in framebuffer coordinate, which might be mathematically incorrect.

Could you elaborate on that?

In OpenGL/Vulkan, the +Y axis points the same way (up or down) in all coordinate systems, and the point (-1, -1) in NDC is mapped to the origin (0, 0) in FB/viewport coordinates. In D3D/Metal, however, Y is up in NDC but down in FB/viewport coordinates. In order to map the top-left corner of NDC to the top-left corner of FB/viewport coordinates, and keep the shape of geometries the same after the viewport transform, these APIs map the top-left NDC corner, (-1, 1), to the top-left viewport/FB corner, (0, 0). For OpenGL/Vulkan developers, I think this might be unintuitive and mathematically incorrect, because we usually map the smallest values in one coordinate system to the smallest values in another. D3D developers are probably used to it and don't notice it at all.

invert winding direction from CW/CCW to CCW/CW in graphics API is simple, we may invert winding direction via setting gl_FrontFacing in shader.

Strictly speaking, gl_FrontFacing can't be set, it's read-only. Unless you meant having the shader translation to make it so gl_FrontFacing is inverted? That shouldn't be necessary if the implementation adjusts the frontFace property of a render pipeline at creation.

I misunderstood this. Will remove this statement.

Now that solution 1 (Y up in NDC, and Y down in all other coordinates) can be supported on more native graphics APIs, and it's more friendly to reuse WebGL's resources. I tend to propose solution 1.

I agree. It's also appealing because only Vulkan needs to be patched, where we can achieve the NDC flip by just specifying the inverted view port rect, so no shader changes are needed.

@Richard-Yunchao
Contributor Author

@Kangz, it looks like you had a different proposal during the discussion today. What is it? Following OpenGL/WebGL and making Y up in all (default) coordinate systems in WebGPU? Could you clarify your reasons, so I can discuss with the Intel driver team to assess the performance impact?

In addition, all coordinate systems are Y down in Dawn's current implementation, which is different from your proposal. Please correct me if I made any mistakes.

@kenrussell, for WebGL we do have a performance issue on some devices which need to flip Y during presentation, and this prevents overlays when presenting WebGL on ChromeOS. There is a ChromiumOS bug. I think the reason is that framebuffer coordinates are Y-up in WebGL/OpenGL (ES) while canvas and window coordinates are Y-down, so we need to flip Y when we present the rendered framebuffer image to the canvas/screen. This post-processing (Y-flip) may prevent overlays and hurt performance on some devices. But I will investigate further in case I've misunderstood the issue.

@kenrussell
Member

@Richard-Yunchao I double-checked and confirm that on platforms where Chrome passes WebGL-rendered textures down to the system compositor - like on macOS, where they're handed to Core Animation for presentation - they need to be vertically flipped in order to handle the discrepancy between OpenGL's NDC, which is Y-up, and the window system's coordinates, which are basically always Y-down.

On ChromeOS the GL_MESA_framebuffer_flip_y extension was introduced in order to allow WebGL to be rendered directly in system overlays. It renders the framebuffer vertically flipped, so that the scanout hardware can consume it without needing to flip it vertically again. It's important to achieve good performance on lower-end devices, which are severely fill limited. However, this extension has caused some bugs; see http://crbug.com/973917 (clipping of wide lines) and http://crbug.com/996323 (problem with Intel's CMAA shader). Basically, it wasn't as easy as originally thought to perform the vertical flip under the hood.

Based on this experience it seems best to choose the alternative that diverges as little as possible relative to the native APIs. For this reason I think Solution 1 is the best direction, despite potential performance implications.

@Richard-Yunchao
Contributor Author

@kenrussell , thanks for pointing out the issues caused by GL_MESA_framebuffer_flip_y.

WebGPU's default coordinate systems should follow the majority of the native graphics APIs (Y-up NDC and Y-down in all other coordinate systems), because hardware design and the underlying drivers also follow the majority of native APIs in the long run, and then only minimal changes are needed to adapt WebGPU's coordinate systems to the native APIs. Moreover, we can reuse many resources from WebGL apps and native graphics apps (like meshes, vertices and vertex shaders) when porting them to WebGPU. In addition, it is easier for developers with a graphics background to learn WebGPU with little confusion about coordinate systems.

We could still offer different coordinate systems (like Y-up everywhere) as a non-default option, just as @Kangz suggested. Some WebGPU/WebGL JS libraries might use Y-up in all coordinate systems, and the non-default option would help those libraries during the transition from WebGL to WebGPU.

@kenrussell
Member

It would be best to avoid adding optional coordinate systems, because doing so would complicate both the API and its implementation. Would it be feasible to do these sorts of coordinate system changes easily in a library which uses the WebGPU API, or are they intrusive - for example, changing how shaders are compiled?

Note that there is some exploratory work ongoing outside this community group on how to best handle hardware on which optimal rendering requires transformations of the drawing buffer (relative to the way the developer thinks the page is being rendered) such as 90 degree rotations. See https://chromium-review.googlesource.com/1759338 by @sunnyps as a first prototype in the WebGL API. If this group decides to add optional flips then that functionality should be designed in a holistic manner that addresses those use cases too.

@Richard-Yunchao
Contributor Author

@kenrussell, true. It should not be difficult for JS libraries to align with the new, optimal coordinate systems and do a Y-flip in their legacy WebGPU demos and samples, and I suppose there are not many demos so far because WebGPU is far from widely used. So it is a good time for the WebGPU CG to choose an optimal solution as early as possible. Offering multiple options may confuse developers, complicate implementations, and hurt performance. I don't think it is worthwhile to have non-default coordinate systems just to support a small number of existing WebGPU demos (note that changing those demos is not difficult).

The only valid concern is that @Kangz said in another mailing thread that some mobile devices could be Y-up in their window coordinates. Is this true? If it is true for a large number of mobile devices, then an optional non-default coordinate system like Y-up framebuffer coordinates might be reasonable. I was told that window coordinates for presentation are Y-down across all mainstream OSes, no exception, but Intel has a small market share on mobile, consoles and some other markets, so our driver team may be mistaken.

@kenrussell
Member

Need @Kangz to comment. My knowledge only covers some desktop platforms and ChromeOS devices, and I'm advocating for a more general solution than offering an optional Y-flip on the framebuffer to solve those use cases.

@Kangz
Contributor

Kangz commented Aug 30, 2019

Sorry for the delay in answering this and making a proposal. I wanted to write a blog post outlining the reasons I think there is such diversity in graphics API coordinate systems, but decided against it because I was only 90% sure the explanations I found are true. With that out of the way, here's a proposal for WebGPU along with the reasoning for it.

Proposal for default coordinate systems in WebGPU

WebGPU uses "Y-down" coordinate systems for all types of texture coordinates. Concretely this means that texel (0, 0) corresponds to the following coordinates:

  • (-1, -1) in NDC (gl_Position.xy / gl_Position.w).
  • (0, 0) in normalized texture coordinates used for sampling.
  • (0, 0) in framebuffer coordinates used for rasterization.
  • By default the top-left corner of the canvas element (see below).

The notion of triangle winding is defined based on how the triangle would be if presented to the canvas "Y-down". This means that WebGPU uses a left-handed coordinate system (yay lefties!) and in particular the winding of a triangle is opposite in WebGL and WebGPU (given the same vertex gl_Position).

There is a mechanism to choose the orientation of the GPUSwapChain that is a bit similar to the handling of the swap chain's format. The application can choose any valid GPUSwapChainOrientation it wants, but can query the user agent for what the optimal choice would be.

// The flip happens before the rotation, if the rotation is 90 or 270
// the swap chain texture width/height are swapped compared to 0 or 180.
dictionary GPUSwapChainOrientation {
    bool flipY = false;
    unsigned long rotation = 0; // must be 0, 90, 180 or 270
};

partial dictionary GPUSwapChainDescriptor {
    GPUSwapChainOrientation orientation;
};

partial interface GPUCanvasContext {
    Promise<GPUSwapChainOrientation> getSwapChainPreferredOrientation(GPUDevice device);
};

Why Y-down all the things.

Having the NDC in a different orientation than the rest of the pipeline is really confusing; see for example the Hacks of Life blog post, or the fact that no one in this group has been able to wrap their head around what a flipped NDC entails for the application developer.

Also there is mathematical purity in having all coordinate systems have the same Y direction because then they all have the same "handedness" (even if it is left-handed).

Finally, Y-down rather than Y-up because Y-down is what most display controllers support for overlays, so user agents can do zero-copy presentation to the screen when possible. My understanding is that the <canvas> element and the rest of the Web platform use Y-down coordinates, so it matches those too.

Why this orientation craziness.

Two reasons. While Y-down is the most useful orientation for display controllers, it isn't the only orientation. An easy example is that on phones Y-down could be optimal but only on one of the portrait or landscape orientations. That's why GPUSwapChainOrientation gets both flipY and rotation: it allows expressing any transform that keeps the swapchain an aligned rectangle.

The other reason is that it helps developers port content to WebGPU: WebGL developers are used to Y-up everywhere, so naively porting to Y-down everywhere as in WebGPU would just be a matter of setting flipY: true and choosing the opposite triangle winding.

@ChasBoyd

ChasBoyd commented Sep 3, 2019

While Ken and Richard-Yunchao have laid out a persuasive argument for option 1, there is one more point to make here. For most users, it's easier to understand if there are simple rules for when Y is up vs. not. The general model we've been using is:
• All 3D coordinate systems should be Y-up (or Z-up).
• All 2D coordinate systems are ‘reading order’ (Y-down) because that’s how they got started originally.
NDC is a 3D coordinate system defining a volume in space. It is used mostly in the 3D/vertex pipeline to perform clipping and then 3D projection. Having to transform from a 3D world space with Y-up to a 3D space (clip space / NDC) with Y-down would drive current graphics devs crazy, and would certainly confuse new users. Let's just keep to the relatively coherent model where NDC is Y-up, like a 3D space should be.

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Sep 4, 2019

See my comments inline, feel free to correct me if I made mistakes.

Sorry for the delay answering this and making a proposal, I wanted to make a blog post outlining the reasons I think there is such diversity in graphics API coordinate systems but decided against it because I was only 90% sure the explanation I found are true. With that out of the way here's a proposal for WebGPU along with the reasoning for it.

Proposal for default coordinate systems in WebGPU

WebGPU uses "Y-down" coordinate systems for all types of texture coordinates. Concretely this means that texel (0, 0) corresponds to the following coordinates:

  • (-1, -1) in NDC (gl_Position.xy / gl_Position.w).
  • (0, 0) in normalized texture coordinates used for sampling.
  • (0, 0) in framebuffer coordinates used for rasterization.
  • By default the top-left corner of the canvas element (see below).

The notion of triangle winding is defined based on how the triangle would be if presented to the canvas "Y-down". This means that WebGPU uses a left-handed coordinate system (yay lefties!) and in particular the winding of a triangle is opposite in WebGL and WebGPU (given the same vertex gl_Position).

AFAIK, culling (based on triangle winding direction) is done after viewport projection and before rasterization; in the graphics pipeline it is far removed from presentation. So I don't think it depends on the canvas coordinate system (which only matters at presentation). But I will investigate this further.

There is a mechanism to choose the orientation of the GPUSwapChain, a bit similar to the handling of the swap chain's format. The application can choose any valid GPUSwapChainOrientation it wants for its application, but can query what would be the optimal choice from the user agent.

// The flip happens before the rotation, if the rotation is 90 or 270
// the swap chain texture width/height are swapped compared to 0 or 180.
dictionary GPUSwapChainOrientation {
    bool flipY = false;
    unsigned long rotation = 0; // must be 0, 90, 180 or 270
};

partial dictionary GPUSwapChainDescriptor {
    GPUSwapChainOrientation orientation;
};

partial interface GPUCanvasContext {
    Promise<GPUSwapChainOrientation> getSwapChainPreferredOrientation(GPUDevice device);
};
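To make the proposed semantics concrete, here is a sketch of how a texel coordinate would map to a presented pixel under a given GPUSwapChainOrientation. Note that the dictionary above was only a proposal; the helper below is hypothetical and just illustrates "flip before rotation" and the width/height swap for rotations of 90/270:

```javascript
// Hypothetical illustration of the proposed GPUSwapChainOrientation semantics:
// the Y flip is applied first, then the rotation. For rotation 90 or 270 the
// presented image is height×width instead of width×height.
function presentPosition(x, y, width, height, { flipY = false, rotation = 0 } = {}) {
  if (flipY) y = height - 1 - y; // the flip happens before the rotation
  switch (rotation) {
    case 0:   return { x: x, y: y };
    case 90:  return { x: height - 1 - y, y: x };
    case 180: return { x: width - 1 - x, y: height - 1 - y };
    case 270: return { x: y, y: width - 1 - x };
    default:  throw new Error("rotation must be 0, 90, 180 or 270");
  }
}

// Texel (0, 0) of a 4×2 swapchain with flipY lands at the bottom-left corner:
// presentPosition(0, 0, 4, 2, { flipY: true }) → { x: 0, y: 1 }
```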

What's the impact of providing an option to flip the swapchain? Window coordinates in most (if not all) OSes/devices are y-down. If we don't flip (the default), the framebuffer coordinate and texture coordinate are y-down, which is aligned with window coordinates (so we don't need to flip during presentation). However, if we set the option to true, does this mean that FB and texture coordinates are y-up? Making FB and texture coordinates y-up and doing a flip may hurt performance. We should not encourage developers to set this option to true. So the question is: why provide such an option at all? It will also make browser implementations more complicated.

In addition, I have a concern that some JS libs may set this option to true by default for their developers, which is really bad.

Why Y-down all the things.

Having the NDC be a different orientation than the rest of the pipeline is really confusing: see for example the Hacks of Life blog post, or the fact that no one in this group has been able to fully wrap their head around what a flipped NDC entails for the application developer.

The Hacks of Life blog is saying that porting OpenGL apps to Metal (and DX) is really confusing. However, we are not going to follow OpenGL's coordinate systems, so we don't have this issue. Whether we follow DX/Metal's coordinate systems (Y-up in NDC, Y-down in all the other coordinate systems, proposal 1 from my investigation above) or Vulkan's (Y-down in all coordinate systems, proposal 2 from my investigation above), developers don't need to wrap their heads around anything to write WebGPU apps. They just need to follow WebGPU's coordinate systems, without having to consider the differences in the native graphics APIs under the hood for every WebGPU app they write. It's the browser's job to map WebGPU's coordinate systems to the native graphics APIs, and this is not very difficult once we reach consensus.

Also there is mathematical purity in having all coordinate systems have the same Y direction because then they all have the same "handedness" (even if it is left-handed).

This is true.

Finally, Y-down instead of Y-up because Y-down is what most display controllers support for overlays, so user agents can do zero-copy presentation to the screen when possible. My understanding is that the <canvas> element and the rest of the Web platform use Y-down coordinates, so it matches that too.

The only difference between DX/Metal's coordinate systems and Vulkan's is y-up vs y-down in NDC; FB and texture coordinates are y-down in all of them, which is aligned with what display controllers support for overlays. Only framebuffer and texture coordinates relate to display/window coordinates during presentation; NDC, tessellation coordinates, etc. are too far from presentation. I think coordinate systems before rasterization are irrelevant to display/presentation, so none of them need a flip during presentation. So this is not a reason to choose Vulkan's coordinate systems over DX/Metal's. However, it is a good reason to object to OpenGL/WebGL's coordinate systems, whose FB and texture coordinates are y-up.

Why this orientation craziness.

There are two reasons. While Y-down is the most useful orientation for display controllers, it isn't the only one. An easy example: on phones, Y-down could be optimal in only one of the portrait or landscape orientations. That's why GPUSwapChainOrientation gets both flipY and rotation: together they can express any transform that keeps the swapchain an axis-aligned rectangle.

Same comment as I stated above: this is not a reason to choose Vulkan's coordinate systems over DX/Metal's. But it is a good reason to object to OpenGL's coordinate systems.

The other reason is that it helps developers port content to WebGPU: WebGL uses Y-up everywhere, so naively porting to an all-Y-down API like WebGPU would just be a matter of setting flipY: true and choosing the opposite triangle winding.

I don't think this is true. WebGL is y-up in NDC, so if we follow DX/Metal's coordinate systems we don't even need to flip Y and invert the winding direction in the vertex shader, because DX/Metal's NDC is also y-up. For FB and texture coordinates, what we need to do to port WebGL apps to WebGPU is the same regardless of whether we choose DX/Metal or Vulkan, because all three APIs use y-down FB and texture coordinates, which is the opposite of WebGL. So it's DX/Metal's coordinates for WebGPU that will make life easier when we port WebGL apps to WebGPU.
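For concreteness, the per-axis conversions under discussion are tiny. A sketch (the helper names are made up, not any API): WebGL puts texture coordinate (0,0) and framebuffer (0,0) at the bottom-left, while the y-down conventions put them at the top-left, so only the vertical component changes.

```javascript
// Hypothetical porting helpers from WebGL's Y-up conventions to Y-down
// framebuffer/texture conventions. Only the vertical axis changes.
const glToYDownTexCoordV = (v) => 1 - v;                 // normalized [0, 1]
const glToYDownFramebufferY = (y, height) => height - y; // pixel coordinates
// Under a DX/Metal-style NDC (Y-up), gl_Position can stay exactly as it was.
```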

And I want to emphasize this fact: most developers and 3D modeling tools follow DX's coordinate systems to generate and set up vertices in the first place, and may do some extra work like flipping y for other APIs after the fact. So I think it's good for WebGPU to follow DX's (and Metal's) coordinate systems. It will make WebGPU's coordinate systems easy to use, and lead to fewer mistakes and less misunderstanding.

@Kangz
Contributor

Kangz commented Sep 4, 2019

AFAIK, culling (based on triangle winding direction) is done after viewport projection and before rasterization; in the graphics pipeline it is far removed from presentation. So I don't think it depends on the canvas coordinate system (which only matters at presentation). But I will investigate this further.

What I meant is that the definition of winding should match what a developer sees if they present on the canvas with the default options. It would be very confusing if developers looked at the resulting texture and saw a clockwise triangle on their screen, but the triangle were actually counterclockwise for backface-culling purposes because of some flips in WebGPU.
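The winding flip being discussed here can be checked with a few lines of arithmetic (a sketch, not tied to any API): flipping the Y axis negates a triangle's signed area, which is exactly what inverts its winding.

```javascript
// Signed area of a 2D triangle: positive = counterclockwise when Y points up.
// Flipping the Y axis negates it, which is why the same vertex positions give
// the opposite winding in a Y-down coordinate system.
function signedArea([ax, ay], [bx, by], [cx, cy]) {
  return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}

const tri = [[0, 0], [1, 0], [0, 1]];         // counterclockwise with Y up
const flipped = tri.map(([x, y]) => [x, -y]); // the same triangle, Y flipped
// signedArea(...tri) is positive; signedArea(...flipped) is negative.
```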

What's the impact of providing an option to do flip for swapchain?

As discussed offline, these options only affect the presenting of the texture inside the Web page. So for example if you flipY:true, texel (0, 0) will become the following in the different coordinate systems:

  • (-1, -1) in NDC (gl_Position.xy / gl_Position.w).
  • (0, 0) in normalized texture coordinates used for sampling.
  • (0, 0) in framebuffer coordinates used for rasterization.
  • Bottom-left corner of the canvas element, the only thing that changed.

In addition, I have a concern that some JS libs may set this option to true by default for their developers, which is really bad.

Libraries porting from WebGL could use it this way while they implement WebGPU support, but they should eventually optimize to support the full range of GPUSwapChainOrientation.

I don't think this is true. WebGL is y-up in NDC, so if we follow DX/Metal's coordinate systems we don't even need to flip Y and invert the winding direction in the vertex shader, because DX/Metal's NDC is also y-up. For FB and texture coordinates, what we need to do to port WebGL apps to WebGPU is the same regardless of whether we choose DX/Metal or Vulkan, because all three APIs use y-down FB and texture coordinates, which is the opposite of WebGL. So it's DX/Metal's coordinates for WebGPU that will make life easier when we port WebGL apps to WebGPU.

I think that's one of the reasons D3D originally chose the coordinate systems it uses: content written for OpenGL would magically become Y-down for presentation and be more efficient. That said, this "free" WebGL port breaks down as soon as you use render-to-texture, because your rendered textures will be flipped compared to what you have in your WebGL app. The solution I outlined works 100% of the time for WebGL applications.

And I want to emphasize this fact: most developers and 3D modeling tools follow DX's coordinate systems to generate and set up vertices in the first place, and may do some extra work like flipping y for other APIs after the fact.

Or developers do gl_Position.y = -gl_Position.y in the vertex shader. I don't think we should worry about model formats' orientation because every engine defines its own world-space coordinate system. See this chart.

@Kangz
Contributor

Kangz commented Sep 4, 2019

For most users, it's easier to understand if there are simple rules for when Y is up vs not. The general model we've been using is:

  • All 3D coordinate systems should be Y- (or Z-) up.
  • All 2D coordinate systems are ‘reading order’ (Y-down) because that’s how they got started originally.

Thanks, that's a good rationale for D3D's coordinate system and makes a good case for "NDC Y-up". Whatever the result of the discussion, this group should write porting guides from D3D/Metal, OpenGL/WebGL and Vulkan to WebGPU to help developers understand all that mess.

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Sep 5, 2019

AFAIK, culling (based on triangle winding direction) is done after viewport projection and before rasterization; in the graphics pipeline it is far removed from presentation. So I don't think it depends on the canvas coordinate system (which only matters at presentation). But I will investigate this further.

What I meant is that the definition of winding should match what a developer sees if they present on the canvas with the default options. It would be very confusing if developers look at the resulting texture, see a clockwise triangle on their screen, but the triangle is actually counterclockwise for the backface culling purpose because of some flips in WebGPU.

Regardless of whether WebGPU chooses DX/Metal's coordinate systems or Vulkan's, a CTS-compliant browser will not change triangle winding unexpectedly and surprise WebGPU developers.

You know, native graphics (DX/Metal/Vulkan) developers don't need to flip y or invert the winding direction when they develop apps against a particular graphics API. The resulting image will be CW or CCW as expected when it is presented on the screen; it's the driver's duty to make this happen. Likewise, WebGPU developers shouldn't need to flip y or invert the winding direction in their apps; they just need to follow WebGPU's coordinate systems, and the resulting image will be presented as expected. It is the browser's responsibility to flip y and/or invert the winding direction and make behavior consistent across backends. Things like y-flips and winding-direction changes can be totally transparent to WebGPU developers if we choose appropriate coordinate systems for WebGPU.

BTW, we can classify the situations as follows (and shouldn't mix them together, otherwise it's confusing):

1) What we need to do for coordinate systems when we port WebGL (and OpenGL and other native graphics apps) to WebGPU: flip y and/or invert the winding direction if needed.

2) What WebGPU developers need to do when they develop WebGPU apps from scratch or on top of a JS lib: no flip, no winding-direction inversion; just follow WebGPU's coordinate systems, if we choose them appropriately.

3) What we need to do to implement WebGPU's coordinate systems in browsers on different backends (DX, Metal, Vulkan and maybe OpenGL): flip y and/or invert the winding direction if needed.

I analyzed the impact of the different proposals (DX/Metal's coordinate systems vs Vulkan's) on these three situations in the investigation above.

What's the impact of providing an option to do flip for swapchain?

As discussed offline, these options only affect the presenting of the texture inside the Web page. So for example if you flipY:true, texel (0, 0) will become the following in the different coordinate systems:

  • (-1, -1) in NDC (gl_Position.xy / gl_Position.w).
  • (0, 0) in normalized texture coordinates used for sampling.
  • (0, 0) in framebuffer coordinates used for rasterization.
  • Bottom-left corner of the canvas element, the only thing that changed.

That is your good intention, but JS libraries may use this option in a different way. If we add it, I think it can become a perfect workaround for JS libraries to follow OpenGL/WebGL's coordinate systems by setting it to true by default; then more and more WebGPU apps and demos will use OpenGL/WebGL's coordinate systems as time goes by. Without this option, however, it is much harder for JS libraries to follow OpenGL/WebGL's coordinate systems: they would need to add invasive code to flip y in every one of their users' fragment shaders.

You know, using OpenGL/WebGL's coordinate systems and flipping y during presentation is not good: it may prevent overlays and hurt performance. So I disagree with adding this option. The only exception would be evidence showing that window coordinates are y-up on a large number of devices, which would mean we need a flipY=true option for those devices during display/presentation. Otherwise, I don't think such an option is necessary: FB and texture coordinates are y-down in all modern graphics APIs (DX, Metal and Vulkan), which is aligned with the default canvas coordinates and window coordinates in almost all OSes, and no y-flip is needed at all during display.

IMHO, it's time to forget OpenGL/WebGL's coordinate systems and embrace the modern graphics APIs' coordinate systems (Y-down in FB and texture coordinates, no y-flip during display). Legacy support may turn more and more WebGPU apps into legacy apps from the perspective of coordinate systems.

In addition, I have a concern that some JS libs may set this option to true by default for their developers, which is really bad.

Libraries porting from WebGL could use it this way while they implement WebGPU support, but they should eventually optimize to support the full range of GPUSwapChainOrientation.

@Kangz
Contributor

Kangz commented Sep 5, 2019

Regardless of whether WebGPU chooses DX/Metal's coordinate systems or Vulkan's, a CTS-compliant browser will not change triangle winding unexpectedly and surprise WebGPU developers.

Yes, that's exactly what's described:

The notion of triangle winding is defined based on how the triangle would be if presented to the canvas "Y-down". This means that WebGPU uses a left-handed coordinate system (yay lefties!) and in particular the winding of a triangle is opposite in WebGL and WebGPU (given the same vertex gl_Position).

This explains what the proposed winding definition is for WebGPU-compliant browsers and gives a rationale for why it was chosen. Note that the definition of winding doesn't depend on GPUSwapChainOrientation.

But JS libraries may use this option in a different way. If we add this option, I think it can become a perfect workaround for JS libraries to follow OpenGL/WebGL's coordinate systems via setting this option to true by default.

Great, it makes it easy to port content from WebGL.

You know, using OpenGL/WebGL's coordinate systems and flipping y during presentation is not good: it may prevent overlays and hurt performance. So I disagree with adding this option. The only exception would be evidence showing that window coordinates are y-up on a large number of devices, which would mean we need a flipY=true option for those devices during display/presentation.

This is the reason behind the proposal for a getPreferredOrientation: the browser can tell the application that Y-down is the most efficient orientation. But it goes even further and lets applications on non-Y-down-preferred devices adapt to it and always use the most efficient path.

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Sep 5, 2019

You know, using OpenGL/WebGL's coordinate systems and flipping y during presentation is not good: it may prevent overlays and hurt performance. So I disagree with adding this option. The only exception would be evidence showing that window coordinates are y-up on a large number of devices, which would mean we need a flipY=true option for those devices during display/presentation.

This is the reason behind the proposal for a getPreferredOrientation: the browser can tell the application that Y-down is the most efficient orientation. But it goes even further and lets applications on non-Y-down-preferred devices adapt to it and always use the most efficient path.

You have said again and again that this option will be friendly to non-y-down-preferred devices, and I'd like to ask again whether such devices exist in an important market segment. Please show me evidence that we have such devices in an important market segment (like PC, Mac, Linux/Unix servers, mobile, game consoles).

Otherwise, if no devices or only a small number of old/legacy devices are non-y-down-preferred, I don't think adding this option is a priority, considering that we have a lot to do for the WebGPU MVP and shader formats. You know, this option will make browser implementations more complicated, and confuse developers too.

@kenrussell
Member

You know, using OpenGL/WebGL's coordinate systems and flipping y during presentation is not good: it may prevent overlays and hurt performance. So I disagree with adding this option. The only exception would be evidence showing that window coordinates are y-up on a large number of devices, which would mean we need a flipY=true option for those devices during display/presentation.

This is the reason behind the proposal for a getPreferredOrientation: the browser can tell the application that Y-down is the most efficient orientation. But it goes even further and lets applications on non-Y-down-preferred devices adapt to it and always use the most efficient path.

You have said again and again that this option will be friendly to non-y-down-preferred devices, and I'd like to ask again whether such devices exist in an important market segment. Please show me evidence that we have such devices in an important market segment (like PC, Mac, Linux/Unix servers, mobile, game consoles).

Otherwise, if no devices or only a small number of old/legacy devices are non-y-down-preferred, I don't think adding this option is a priority, considering that we have a lot to do for the WebGPU MVP and shader formats. You know, this option will make browser implementations more complicated, and confuse developers too.

The ChromeOS graphics team at Google has recently asked for the ability to tell applications the optimal orientation for rendering into overlay planes. This functionality is needed going forward, not to support legacy devices.

However, I think that the notion of "preferred orientation" needs to be specified more generally - it has to work for WebGL (at least), and maybe 2D canvas, too. The exact way it will be handled by the application and web page is still being discussed by multiple groups, including people outside the WebGPU working group.

I think the discussion of adding "getPreferredOrientation" can be set aside for the moment. It's not directly related to the choice of WebGPU's coordinate system.

@amerkoleci

As an engine developer I agree on having a standard WebGPU coordinate system, which to me looks good as either DX/Metal's or Vulkan's; but specifying nothing could be a problem going forward.

@devshgraphicsprogramming

devshgraphicsprogramming commented Sep 6, 2019

There is actually no OpenGL/DX/Vulkan mandate that says texture coordinate (0,0) is top-left or bottom-left; it only depends on how you load your input plus what your UVs/texcoords are.

Even with a compute shader, if we're talking about invocation (0,0,0), it is only in your head that it is at the bottom-left of the virtual work grid.

What is Fastest

PNG, JPEG and TGA (I don't know about DDS, .basis and KTX), with the sole exception of the "weirdo" BMP, all store their byte data with the first pixel in memory at the top-left.

Now it would be extremely important that neither the WebGPU backend nor the engine/user-space code using WebGPU has to flip texture rows, especially if we want to enable low-battery, high-performance texture streaming (i.e. transferring texture data straight to a mapped GPUBuffer pointer, directly from a file, socket, or other source).

Especially no flipping in the backend, as many of the APIs don't support row reversal in hardware on their transfer queues/during transfer ops:

  • OpenGL/ES row flipping is only a Mesa extension and ONLY FOR PACKING (not unpacking)
  • Vulkan, all row pitch, size, length etc. can be only given as an unsigned int
  • DirectX, no idea if LONG_PTR RowPitch can be made negative on the subresource struct
  • Metal I don't know

so if you're going to flip uploaded textures behind people's backs, that will force the use of a compute shader and disallow the transfer queue.
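To illustrate the cost being objected to, a user-space row flip of a tightly packed RGBA8 image looks like this (a sketch): a full extra pass over the data, which streaming straight from a file or socket into a mapped buffer is meant to avoid.

```javascript
// Reverse the row order of a tightly packed RGBA8 image in CPU memory.
// This touches every byte once, which is exactly the extra work that
// streaming directly into a mapped GPU buffer avoids.
function flipRowsRGBA8(pixels, width, height) {
  const rowBytes = width * 4;
  const out = new Uint8Array(pixels.length);
  for (let row = 0; row < height; row++) {
    const src = pixels.subarray(row * rowBytes, (row + 1) * rowBytes);
    out.set(src, (height - 1 - row) * rowBytes);
  }
  return out;
}
```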

@devshgraphicsprogramming

devshgraphicsprogramming commented Sep 6, 2019

Or developers do gl_Position.y = -gl_Position.y in the vertex shader. I don't think we should worry about model formats' orientation because every engine defines its own world-space coordinate system. See this chart.

I concur with @Kangz: Y/Z/X up, left-handed or right-handed makes absolutely no difference to me. I only care about texture coordinates (not having me or the API flip them on load from CPU memory).

Loads Happen More Often than Writes

Framebuffer output

I should hope that browsers use GPU-accelerated compositing, so if a WebGPU swapchain present is actually a blit/textured-quad/copy, the transfer will occur on the GPU in VRAM, or in normal memory on UMA.

I wouldn't count on direct-to-framebuffer/direct-to-browser rendering, because a lot of developers like their double-buffering.

Render To Texture

Doesn't matter, people have been dealing for years with the fact that textures that have been rendered to end up upside down (on some APIs).

Reading a Texture in a Shader

Doesn't matter in the extreme: textures are virtualized anyway (128 KB pages) and stored non-contiguously, probably along a space-filling curve such as Hilbert or Morton/Z-order (D3D12's only standardized layout other than linear).
They are also cached (even on mobile) and come across the highest-bandwidth bus possible (desktop GDDR5-6/HBM1-2).
So Y-up/Y-down really stops mattering for performance.
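As a small illustration of why row order stops mattering in such layouts, here is a Morton/Z-order index computation (a sketch): x and y bits are interleaved, so texels are grouped into 2D-local tiles rather than rows.

```javascript
// Interleave the low 16 bits of x and y into a Morton (Z-order) index.
// Neighboring texels in both x and y land near each other in memory,
// so there is no privileged "first row".
function mortonIndex(x, y) {
  let index = 0;
  for (let bit = 0; bit < 16; bit++) {
    index |= ((x >>> bit) & 1) << (2 * bit);
    index |= ((y >>> bit) & 1) << (2 * bit + 1);
  }
  return index >>> 0;
}

// The 2×2 block at the origin occupies indices 0..3:
// mortonIndex(0,0) = 0, mortonIndex(1,0) = 1, mortonIndex(0,1) = 2, mortonIndex(1,1) = 3
```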

Uploading a Texture

No API, AFAIK, can do automatic row flipping in its async DMA transfer queues for a texture upload operation.

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Sep 9, 2019

So Y-up/Y-down really stops mattering for performance.

Thanks for your analysis.

It doesn't matter for the scenarios above, so I think it isn't necessary to go to the trouble of adding a flipY option to the swapchain (which leaves web developers confused about whether to set it true or false, makes browser implementations complicated, and hurts performance on some devices in some cases; performance does matter for developers and implementations, you know).

It doesn't matter, so it is reasonable to follow the majority of modern graphics APIs: DX and Metal's coordinate systems (y-up in NDC, y-down in all other coordinates), instead of Vulkan's.

Then it's done: explicitly state that WebGPU follows DX/Metal's coordinate systems, period.

@Richard-Yunchao
Contributor Author

Richard-Yunchao commented Sep 9, 2019

You know, using OpenGL/WebGL's coordinate systems and flipping y during presentation is not good: it may prevent overlays and hurt performance. So I disagree with adding this option. The only exception would be evidence showing that window coordinates are y-up on a large number of devices, which would mean we need a flipY=true option for those devices during display/presentation.

This is the reason behind the proposal for a getPreferredOrientation: the browser can tell the application that Y-down is the most efficient orientation. But it goes even further and lets applications on non-Y-down-preferred devices adapt to it and always use the most efficient path.

You have said again and again that this option will be friendly to non-y-down-preferred devices, and I'd like to ask again whether such devices exist in an important market segment. Please show me evidence that we have such devices in an important market segment (like PC, Mac, Linux/Unix servers, mobile, game consoles).
Otherwise, if no devices or only a small number of old/legacy devices are non-y-down-preferred, I don't think adding this option is a priority, considering that we have a lot to do for the WebGPU MVP and shader formats. You know, this option will make browser implementations more complicated, and confuse developers too.

The ChromeOS graphics team at Google has recently asked for the ability to tell applications the optimal orientation for rendering into overlay planes. This functionality is needed going forward, not to support legacy devices.

However, I think that the notion of "preferred orientation" needs to be specified more generally - it has to work for WebGL (at least), and maybe 2D canvas, too. The exact way it will be handled by the application and web page is still being discussed by multiple groups, including people outside the WebGPU working group.

I think the discussion of adding "getPreferredOrientation" can be set aside for the moment. It's not directly related to the choice of WebGPU's coordinate system.

Yes. Orientation is not related to WebGPU's coordinate-system definition. And per my understanding, orientation in the swapchain is different from flipY in the swapchain: orientation will change x only, or both x and y, while flipY only changes y. In addition, orientation is set on the basis of the existing coordinate systems we defined (in WebGPU, WebGL and on the native side). Even if you set up the appropriate orientation, you still need to flip y for OpenGL during presentation, but not for the other backends. Orientation and flipY in the swapchain are separate issues. Please correct me if I made a mistake.

@kdashg
Contributor

kdashg commented Sep 9, 2019

Resolved on call:

  • Defer discussion of getPreferredOrientation to the html/css/canvas specs for now
  • DX/Metal style for NDC and resource coordinates

@Kangz
Contributor

Kangz commented Sep 2, 2021

Closing. WebGPU decided on a coordinate system like Metal's / D3D's following this discussion.

@Kangz Kangz closed this as completed Sep 2, 2021
ben-clayton pushed a commit to ben-clayton/gpuweb that referenced this issue Sep 6, 2022