
Add image tiling nodes to enable Tiled Upscaling #5142

Merged: 16 commits from ryan/tiled-upscaling-graph into main on Nov 30, 2023

Conversation

@RyanJDick (Collaborator) commented Nov 20, 2023

Contributors

Big thanks to @JPPhoto, @dwringer, @skunkworxdark and @dunkeroni for their contributions in this Discord discussion: https://discord.com/channels/1020123559063990373/1161727357309165608.

The implementation in this PR is based closely on their work.

Check out that discussion for a more-advanced, continually-evolving implementation of tiled upscaling.

What type of PR is this? (check all applicable)

  • Refactor
  • Feature
  • Bug Fix
  • Optimization
  • Documentation Update
  • Community Node Submission

Have you discussed this change with the InvokeAI team?

  • Yes
  • No, because:

Have you updated all relevant documentation?

  • Yes
  • No

Description

This PR adds several new invocations to enable image tiling workflows:

  • CalculateImageTilesInvocation
  • TileToPropertiesInvocation
  • PairTileImageInvocation
  • MergeTilesToImageInvocation

Here is a sample Tiled Upscaling workflow to illustrate how these tiling invocations are intended to be used: tiled_upscaling.json
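To make the splitting concrete, here is a minimal sketch of the coordinate math behind a fixed-tile-size scheme like CalculateImageTilesInvocation (function and parameter names are mine, not the node's actual API):

```python
def calc_tile_coords(image_h: int, image_w: int, tile_h: int, tile_w: int, overlap: int):
    """Return (top, bottom, left, right) boxes covering the image with >= overlap px shared."""
    def axis_starts(size: int, tile: int) -> list[int]:
        assert tile > overlap, "tile must be larger than the overlap"
        stride = tile - overlap
        starts = list(range(0, max(size - tile, 0) + 1, stride))
        # Snap the last tile to the image edge; it may overlap its neighbor by more.
        if starts[-1] + tile < size:
            starts.append(size - tile)
        return starts

    return [
        (top, top + tile_h, left, left + tile_w)
        for top in axis_starts(image_h, tile_h)
        for left in axis_starts(image_w, tile_w)
    ]

print(calc_tile_coords(768, 1024, 512, 512, 64))  # six overlapping 512x512 tiles
```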

Known Limitations / Weirdness

During development of this feature there was a lot of deliberation around how best to implement it within the 'nodes' ecosystem (a single tiled-upscaling node vs. multiple nodes to enable tile splitting/merging). Below is a list of the known shortcomings of the selected implementation. Many of these point to improvements that could be made to node workflows in general.

  • CustomTypes
  • Performance:
    • The current implementation does lots of unnecessary image encode/decode roundtrips to pass images between nodes, when we really just want to pass tensors in memory. This wastes time.
    • The current implementation results in much more model switching than is necessary. Specifically, we alternate between 'Denoise Latents' and 'VAE Decode' rather than performing 'Denoise Latents' on all images followed by 'VAE Decode'. On systems with limited RAM/VRAM model cache space this model switching could be very costly. It is possible to work around this issue, but it requires adding a bunch more complexity to the workflow graph in the form of more iterate/collect nodes. The takeaway here is that complex node-based workflows can encourage inefficient model switching behavior.
  • Algorithm constraints:
    • The selected node architecture prevents us from using information from previously-refined tiles while processing the current tile, because we do not support cyclic graphs. This has been proposed as a possible improvement to the algorithm, but would require a very different implementation and node interface.
  • Usability:
    • Unfortunately, users need a pretty complex graph to do anything useful with these new tiling nodes.
    • I fear that the API is likely to change. Should we make these 'experimental' nodes and hide them from the UI somehow?

Related Tickets & Documents

I previously posted this RFC with an alternative single-node implementation: #5109

After exploring it further, I decided to proceed with the approach in this PR. The biggest challenge with the single-node approach is that it is cumbersome to call into all of the necessary logic that is spread across the Stable Diffusion nodes (e.g. to support ControlNet, IP-Adapter, etc.).

QA Instructions, Screenshots, Recordings

Here is a sample Tiled Upscaling workflow: tiled_upscaling.json. Try it out and let me know what you think.

Please reflect on whether this is the right API for tiling. Now is the easiest time to change it.

Added/updated tests?

  • Yes
  • No

The core tiling logic has strong unit test coverage. Invocations are not currently covered.


@dwringer (Contributor) commented Nov 23, 2023

I've looked over the code for this PR and have a few thoughts. There are several things in the tiled upscaling process we've been using on Discord (via @skunkworxdark's nodes) that I've found pretty essential for getting the best results. Having tested and used those nodes extensively over the past few days, I'd like to share some comments on what they offer as it relates to what's included and discussed in this PR.

Most prominent is the inclusion of @JPPhoto's smart seam implementation, which locates an optimal seam between two adjacent tiles - this seam (plus a specified margin) is used for the overlap blending, which greatly helps to obscure the location of seams. This leads into the discussion about whether a VAE decode can simply be done at the end, rather than per-tile.

The smart seam algorithm operates in image space, so it outright requires decoding the tiles individually. Beyond that, another important factor in why I think individual tile decodes are essential is that VAE decode can be costly (unless itself tiled), and if a user is already generating tiles of a size they customarily generate successfully, decoding the tiles individually means users can be assured the decode will be successful as well.
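For concreteness, the core of a smart-seam search can be sketched as a shortest-path dynamic program over the pixel difference in the overlap region (my own illustration of the general technique, not @JPPhoto's actual code):

```python
import numpy as np

def find_seam(overlap_a: np.ndarray, overlap_b: np.ndarray) -> np.ndarray:
    """Given two (H, W, C) crops of the shared overlap, return the column index of the
    minimal-difference vertical seam for each row."""
    energy = np.abs(overlap_a.astype(np.float32) - overlap_b.astype(np.float32)).sum(axis=2)
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):  # cheapest connected path from the top row down
        left = np.roll(cost[y - 1], 1); left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):  # backtrack, staying within +/-1 column per row
        x = seam[y + 1]
        lo = max(x - 1, 0)
        seam[y] = lo + int(np.argmin(cost[y, lo : min(x + 2, w)]))
    return seam
```

Each tile is then composited up to the seam (plus the margin mentioned above) instead of being blended across the whole overlap.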

Regarding tile size, it's been my experience that large tiles are no particular problem, or at least, are a known problem for anyone who does non-tiling high res optimization node workflows already. Giving the user the power to specify the size of tiles allows users to make tiles as large as they want up to whatever will fit into their system's VRAM. Conversely, providing a mechanism to automatically calculate tile size based on a given number of divisions is very convenient when VRAM is not particularly limited.

Both of the aforementioned tile size calculation approaches are covered in @skunkworxdark's nodes, one with his own implementation of a tiler and the other with an implementation by @JPPhoto. In the latter, the user specifies a minimum overlap in pixels, whereas the former just takes an overlap percentage between images, which is assured by the algorithm. I think the different interfaces provided by those two tiling nodes are both useful tools and elegantly implemented.
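As a sketch of the even-split idea (names are illustrative; this is not the exact interface of @skunkworxdark's or @JPPhoto's nodes): the user picks a division count and an overlap fraction, and the tile size falls out of the math.

```python
def even_split_spans(size: int, divisions: int, overlap_frac: float) -> list[tuple[int, int]]:
    """(start, end) spans along one axis: n tiles of size t with overlap f*t
    must satisfy n*t - (n-1)*f*t = size."""
    tile = size / (divisions - (divisions - 1) * overlap_frac)
    stride = tile * (1 - overlap_frac)
    spans = []
    for i in range(divisions):
        start = round(i * stride)
        # Note: rounding can break multiple-of-8 alignment; a real implementation
        # must snap boundaries when the tiles are destined for latent space.
        spans.append((start, min(round(start + tile), size)))
    return spans

print(even_split_spans(1024, 3, 0.25))  # [(0, 410), (307, 717), (614, 1024)]
```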

Finally, the ability to sample a single noise tensor, generated at the full upscaled size [and cropped for each tile], is something we've been doing which has proven helpful in a couple of ways. Unfortunately, this requires each tile to align to pixel multiples of 8, which further complicates things, as these tiles may not all be exactly the same size. Thus, in order to crop segments of the large noise tensor for each individual tile generation, the coordinates of that tile must be provided along with the image for denoising. The purpose of the large noise field is twofold: one, the overlaps between upscaled tiles will both have the same underlying noise, which causes them to blend a bit more smoothly together; two, if a user is not using pure white noise [a subject for another discussion], then discontinuities could arise in the noise structures between two adjacent tiles.
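A sketch of the cropping step, assuming SD-style latents at 1/8 of the pixel resolution (illustrative, not the actual crop-latents node):

```python
import torch

# One noise tensor for the whole (hypothetical) 2048x2048 upscaled image.
full_noise = torch.randn(1, 4, 2048 // 8, 2048 // 8)

def crop_noise(noise: torch.Tensor, top: int, bottom: int, left: int, right: int) -> torch.Tensor:
    # Pixel-space tile coords map to latent space by dividing by 8, hence the constraint.
    assert all(v % 8 == 0 for v in (top, bottom, left, right)), "tile must align to multiples of 8"
    return noise[:, :, top // 8 : bottom // 8, left // 8 : right // 8]

tile_noise = crop_noise(full_noise, 0, 768, 512, 1280)  # overlapping tiles share this noise
```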

@RyanJDick (Collaborator, Author) commented:

@dwringer, thanks for the review! I'll do my best to respond to all of the points you raised.

Smart Seam

Most prominent is the inclusion of @JPPhoto's smart seam implementation, which locates an optimal seam between two adjacent tiles - this seam (plus a specified margin) is used for the overlap blending, which greatly helps to obscure the location of seams.

Smart seam sounds great. I think it makes more sense to add it in a future PR, though. It should be straightforward to add as a minor version bump to the MergeTilesToImageInvocation. It adds enough complexity that it should probably be reviewed and evaluated independently.

Merge in image vs. latent space

The smart seam algorithm operates in image space, so it outright requires decoding the tiles individually. Beyond that, another important factor in why I think individual tile decodes are essential is that VAE decode can be costly (unless itself tiled), and if a user is already generating tiles of a size they customarily generate successfully, decoding the tiles individually means users can be assured the decode will be successful as well.

The workflow shared in this PR does what you are describing (decodes tiles, then merges in image space) - so, I think we are on the same page here.

I haven't experimented with merging in latent space. If we did want to support this in the future, I can think of a few possible paths. The simplest would be to just do both tiling and merging in latent space - the current nodes should work as-is for this.

Preferred Tiling Interface

Regarding tile size, it's been my experience that large tiles are no particular problem, or at least, are a known problem for anyone who does non-tiling high res optimization node workflows already. Giving the user the power to specify the size of tiles allows users to make tiles as large as they want up to whatever will fit into their system's VRAM. Conversely, providing a mechanism to automatically calculate tile size based on a given number of divisions is very convenient when VRAM is not particularly limited.

Both of the aforementioned tile size calculation approaches are covered in @skunkworxdark's nodes, one with his own implementation of a tiler and the other with an implementation by @JPPhoto. In the latter, the user specifies a minimum overlap in pixels, whereas the former just takes an overlap percentage between images, which is assured by the algorithm. I think the different interfaces provided by those two tiling nodes are both useful tools and elegantly implemented.

It sounds like we have found that different schemes for specifying the tiling are convenient under different circumstances:

  • Tile size vs. number of tiles
  • Target overlap vs. min overlap vs. implied overlap
  • Overlap in pixels vs. %

To support different tiling schemes, we'll want to have a different tiling node for each scheme (since they have different input APIs). We can add as many tiling schemes as we need in future PRs, but I think the key right now is to agree on a tiling output representation that all of these schemes can share (to ensure that they can all share the same tile merging nodes). @dwringer What do you think of the current Tile representation?
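For reference, the representation in question is roughly of this shape (a sketch for discussion; the exact field names in the PR may differ):

```python
from pydantic import BaseModel

class TBLR(BaseModel):
    """A top/bottom/left/right box, in pixels."""
    top: int
    bottom: int
    left: int
    right: int

class Tile(BaseModel):
    coords: TBLR   # where the tile sits in the full image
    overlap: TBLR  # how many pixels are shared with each neighboring tile
```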

Global noise

Finally, the ability to sample a single noise tensor, generated at the full upscaled size [and cropped for each tile], is something we've been doing which has proven helpful in a couple of ways. Unfortunately, this requires each tile to align to pixel multiples of 8, which further complicates things, as these tiles may not all be exactly the same size. Thus, in order to crop segments of the large noise tensor for each individual tile generation, the coordinates of that tile must be provided along with the image for denoising. The purpose of the large noise field is twofold: one, the overlaps between upscaled tiles will both have the same underlying noise, which causes them to blend a bit more smoothly together; two, if a user is not using pure white noise [a subject for another discussion], then discontinuities could arise in the noise structures between two adjacent tiles.

The workflow attached to this PR generates a single global noise tensor, as you've suggested.

Regarding multiple-of-8 errors, the current implementation takes the following approach:

  • No multiple-of-8 validation on tiling-related nodes, because, as written, they should work with any tile/image/overlap shapes.
  • If you do violate the multiple-of-8 requirement in nodes such as CropLatentsInvocation or NoiseInvocation, those nodes will raise an exception.
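In other words, the guard lives where the constraint actually applies. A sketch of what such a check might look like in a latent-space node (illustrative, not the node's exact code):

```python
LATENT_SCALE_FACTOR = 8  # SD latents are 1/8 of the pixel resolution

def validate_multiple_of_8(**dims: int) -> None:
    """Raise a clear error only in nodes that genuinely require /8 alignment."""
    for name, value in dims.items():
        if value % LATENT_SCALE_FACTOR != 0:
            raise ValueError(f"'{name}' must be a multiple of {LATENT_SCALE_FACTOR}, got {value}")

validate_multiple_of_8(x=512, y=256, width=768, height=768)  # passes silently
```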

@dwringer, does this handling make sense to you? Or did you have something else in mind?

@skunkworxdark (Contributor) commented Nov 23, 2023

@RyanJDick,
Before I start, I agree with what @dwringer said and feel his points are all well made. I second that the smart seam is definitely worth including.

My first impression of using the workflow you provided is that you have gathered most of the essence of what has already been done in my nodes. I have only had a brief look at the code; hopefully I will get a chance to take a deeper dive into it tomorrow. I am not 100% sure why so much effort has gone into rewriting what already exists, but maybe when I look at the code in depth I will see.

  • Crop Latents - I assume you are currently using that from my set of nodes. It doesn't exist in the main branch at the moment, so I guess it would need to be brought into main as well.

  • What benefit do you get from cutting up the image in the workflow rather than doing it in a single node? Personally, the workflow around the Tile to Properties and the Image Crop nodes is a bit of a mess of connections and would be very easy for someone to get wrong or mess up. I think the approach of having a single node do the cutting is preferable; I can't see any benefits of having it done in the workflow like that.

  • The Calculate Image Tiles node is pretty much the same as the Default Tile Generator in my nodes, which just fits the last tiles in and potentially leaves a large overlap. I feel that choosing the tile size and overlap to get a good distribution of tiles and overlaps can be tricky for a user, especially for images of non-standard sizes. That is why we moved away from this; if you take a look at the minimum-overlap and even-split versions of the tile generators in my nodes, you can see alternatives to this approach.

  • Why do you have an image size input into the Merge Tiles to Image? This information is already available from the last tile. Having this as an input just gives the ability to pass incorrect values and mess up the image reconstruction.

  • Is there a reason for having a different blend amount compared to the overlap? If you choose a blend that is smaller than the overlap, the extra generated pixels are just wasted.

  • I've not debugged it, but I think you might have a problem with your blending masks. You start with the Y mask and then the X, but the first thing done is to set 0s for the non-masked part; I think this will overwrite part of the already-generated Y mask. If I am right, this would manifest in images as a hard-to-spot sharp cut in the blending.

  • I am not sure I agree with the idea that you should leave the 8-pixel validation to later nodes. This will probably just confuse users as to why it fails and how to fix it. Also, if you know at the point of generating the tiles, why not stop the process there and give the user a clear error?

  • The final image quality from this doesn't match up to what comes out of my nodes, but I am not sure exactly why. I think it might be the choice of using IP-Adapter and a Canny ControlNet, but it might also be down to the potential mask issue I mentioned earlier. I think on the column to the right of the door you can see a sharp cut in the vertical lines. You can also see the same kind of thing on the right-hand side of the left-most lower arch. Generally, the image seems just a bit blurry.

Source image:
[image]

Output from this workflow:
[image]

Output from my nodes:
[image]

@RyanJDick (Collaborator, Author) commented:

Thanks for the detailed review, @skunkworxdark.

I totally used Crop Latents without realizing that it was a custom node...

I'll address that and get to the rest of your comments, but probably not until Monday.

@psychedelicious (Collaborator) commented Nov 24, 2023

#5113 is superseded by #5157

Here's the tiled upscaling workflow, fixed to load on #5157:
tiled_upscaling_5157.json

I'm not sure where to get the missing crop node, so I haven't actually tested running it.

@psychedelicious (Collaborator) left a review comment:

I haven't reviewed yet, but I noticed that this PR somehow does not need a review to merge. This makes me uncomfortable, so I've requested changes to hopefully prevent this accidentally merging.

@skunkworxdark (Contributor) commented Nov 24, 2023

@psychedelicious

I'm not sure where to get the missing crop node, so I haven't actually tested running it.

FYI, the crop node is from my nodes and can be found in this branch: https://github.com/skunkworxdark/XYGrid_nodes/tree/Skunk-Exact-division-Tiling

@skunkworxdark (Contributor) commented Nov 24, 2023

@RyanJDick,

Just had a go at rebuilding the workflow after @psychedelicious's changes, as for some reason I couldn't load his fixed workflow; I was just hit with loads of errors. While rebuilding it I found a couple of silly things.

  • My impression of the Tile to Properties area of the workflow as being a hot mess and very easy to get wrong has really been reinforced after having to rebuild it. I have built more than my fair share of these types of workflows now, and if it is mildly confusing for me, it's going to be a stretch for others. If you keep the image cutting inside the iterate in the workflow, then maybe think about adding width and height as outputs to Tile To Properties to avoid having to calculate them with math nodes; that would simplify it a little.

  • The Calculate Image Tiles and Merge Tiles to Image nodes should have the Height and Width inputs reversed so they match every other node. This avoids node connections crossing which can lead to confusion. What kind of madman has them the other way around ;) (jk)

tiled_upscaling_5157-fix.json

@psychedelicious (Collaborator) commented:

@skunkworxdark, to use the fixed workflow I uploaded you'd need to rebase this PR's branch on my branch from #5157. It's not expected to work on this PR directly.

@skunkworxdark (Contributor) commented:

@psychedelicious, that's exactly what I thought I did. Do I need to do a yarn build as well?

@psychedelicious (Collaborator) commented:

Ah yeah, you'd need to rebuild the UI (or run it in dev mode with yarn dev, using the app at localhost:5173).

@skunkworxdark (Contributor) commented:

Oh yeah, I always run it with yarn dev, so I'm not sure why I had issues.

@psychedelicious (Collaborator) commented Nov 24, 2023

I identified and fixed some issues in #5157. But workflow loading itself works fine for me, after rebasing #5142 on #5157.

  1. I had removed certain UIType members which are now redundant (like UIType.FloatCollection), so on attempting to load your nodes, it threw an error. I've added some backcompat logic to prevent this. A warning is now issued when a deprecated UIType is used and it is just ignored.

    • In your nodes, you can remove all usage of ui_type=UIType.SomeType, they aren't needed anymore.
  2. I deprecated default_factory, which didn't actually work (just a coincidence this didn't cause issues). I've added backcompat logic for this. When default_factory is provided in InputField(), the provided function is called once on app init.

    • In your nodes, I noticed the use of default_factory=list. You can use default=[] instead.
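For example (a hypothetical field, just to illustrate the change):

```python
# Before (now deprecated; the factory was only called once at app init anyway):
#   images: list[ImageField] = InputField(default_factory=list)
# After:
#   images: list[ImageField] = InputField(default=[])
```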

The tiled upscaling works great. Here's my test image:
[image]

And the 4x upscaled result:
[image]

(I swapped out the scale image node for ESRGAN, used the RealESRGANx4 model, and then dropped denoise_start down a bit.)

@skunkworxdark (Contributor) commented:

Nice test image. Looks like I have some work to do on my nodes. Any idea what release this might appear in?

@psychedelicious (Collaborator) commented:

It will be released in 3.5, I believe, but I added backwards-compatibility logic so it shouldn't break any existing nodes; it will just print a warning.

I'll test some other community nodes to double check I haven't broken anything though.

Workflows made for 3.4 and before will be automatically migrated to the new version. The workflow in this PR needed fixing because it was based on changes in the first custom-types PR (which I will close now).

@RyanJDick (Collaborator, Author) commented:

I am not 100% sure why so much effort has gone into rewriting what already exists, but maybe when I look at the code in depth I will see.

I definitely don't want to duplicate the effort that you're already putting in. The spirit of this PR is to take the core ideas that you have developed and make them generally available to all Invoke users. Ideally, the version of this that gets merged is something that you would feel excited to contribute to as opposed to feeling like a diverging implementation of your nodes.
With those goals in mind, the main things that I'm trying to achieve here (and source of some of the differences) are:

  • Take the simplest version of your nodes, and not add more complexity unless it very clearly adds a lot of value.
  • Add unit tests so that these features can be feasibly maintained over time.
  • Consider what a general tiling API should look like so that these nodes are re-usable for all tiling workflows - not just tiled upscaling.
  • Crop Latents - I assume you are currently using that from my set of nodes. It doesn't currently exist in the main branch at the moment. I guess that would need to be brought into the main as well.

Yep, was totally using it without even realizing it was a custom node. I copied it into this PR now. Hope that's ok!

  • What benefit do you get from cutting up the image in the workflow rather than doing it in a single node? Personally, the workflow around the Tiles to Properties and the Image Crop nodes is a bit of a mess of connections and would be very easy for someone to get wrong or mess up. I think the approach of having a single node doing the cutting is preferable. I can't see any benefits of having it done in the workflow like that.

The main reason for doing it separately is just for better modularity:

  • Tiling workflows will benefit from improvements to the cropping node (e.g. speedups, bugfixes, support for new image formats, etc.) without having to touch the tiling nodes.
  • There are fewer assumptions about how the tiling nodes will be used in a workflow (i.e. no assumption that we are tiling a single PIL image), which makes them more re-usable.

Unfortunately, there is a cost to doing it this way: more unnecessary PNG encode/decode roundtrips. But, this should be fixed globally rather than allowing it to influence node API decisions.

  • The Calculate Image Tiles node is pretty much the same as the Default Tile Generator in my nodes, which just fits the last tiles in and potentially leaves a large overlap. I feel that choosing the tile size and overlap to get a good distribution of tiles and overlaps can be tricky for a user, especially for images of non-standard sizes. That is why we moved away from this; if you take a look at the minimum-overlap and even-split versions of the tile generators in my nodes, you can see alternatives to this approach.

Do you think it's important to support all of these options in core? Is this something that you'd be interested in contributing?

I'm thinking that it would probably make sense to add the alternative tiling schemes in follow-up PRs. But, we should make sure that the Tile representation chosen in this PR enables that (I think it does).

  • Why do you have an image size input into the Merge Tiles to Image? This information is already available from the last tile. Having this as an input just gives the ability to pass incorrect values and mess up the image reconstruction.

No strong use case in mind, so removed for now.

  • Is there a reason for having a different blend amount compared to the overlap? If you choose a blend that is smaller than the overlap, the extra generated pixels are just wasted.

My thinking was that having extra overlap context could be helpful during the image-to-image operation, but you may want a smaller blend amount to avoid 'shadowing' artifacts.

In practice, with global noise generation, 'shadowing' doesn't seem to be much of a problem, so the benefit is marginal.

  • I've not debugged it, but I think you might have a problem with your blending masks. You start with the Y mask and then the X, but the first thing done is to set 0s for the non-masked part; I think this will overwrite part of the already-generated Y mask. If I am right, this would manifest in images as a hard-to-spot sharp cut in the blending.

I moved things around a little bit in the blending code to tweak the corner blending behavior. I'm not sure that I fully follow the issue that you're describing here. If you still think there's a problem, can you add a comment directly to the relevant code?
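For context, one construction that is immune to the ordering concern is to build per-axis ramps and combine them with an elementwise minimum, so neither axis overwrites the other (a generic sketch, not the PR's actual masking code):

```python
import numpy as np

def blend_mask(h: int, w: int, blend: int) -> np.ndarray:
    """Weight mask fading in over `blend` px along the top and left edges (blend <= h, w)."""
    y_ramp = np.ones(h, dtype=np.float32)
    y_ramp[:blend] = np.linspace(0.0, 1.0, blend)  # fade in along the top edge
    x_ramp = np.ones(w, dtype=np.float32)
    x_ramp[:blend] = np.linspace(0.0, 1.0, blend)  # fade in along the left edge
    # The corner gets the weaker of the two ramps, with no sharp cut.
    return np.minimum(y_ramp[:, None], x_ramp[None, :])
```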

  • I am not sure I agree with the idea that you should leave the 8-pixel validation to later nodes. This will probably just confuse users as to why it fails and how to fix it. Also, if you know at the point of generating the tiles, why not stop the process there and give the user a clear error?

It's a bit of a trade-off. On one hand, it would be nice to exit as soon as an error can be detected. On the other hand, doing the multiple-of-8 check in the initial tiling node means that we are restricting it unnecessarily. This could be inconvenient for tiling workflows that do not have this constraint (e.g. if they operate entirely in image space).

As a rule of thumb, I think it is best for nodes to only assert input requirements that are strictly required for them to run.

  • The final image quality from this doesn't match up to what comes out of my nodes, but I am not sure exactly why. I think it might be the choice of using IP-Adapter and a Canny ControlNet, but it might also be down to the potential mask issue I mentioned earlier. I think on the column to the right of the door you can see a sharp cut in the vertical lines. You can also see the same kind of thing on the right-hand side of the left-most lower arch. Generally, the image seems just a bit blurry.

I created a workflow that roughly matches your configs (tile controlnet, no IP-Adapter, etc.). Visually, the result is looking nearly identical to me now.
Workflow: tiled_upscaling_controlnet_tile.json
[image]

@RyanJDick (Collaborator, Author) commented:

  • My impression of the Tile to Properties area of the workflow as being a hot mess and very easy to get wrong has really been reinforced after having to rebuild it. I have built more than my fair share of these types of workflows now, and if it is mildly confusing for me, it's going to be a stretch for others. If you keep the image cutting inside the iterate in the workflow, then maybe think about adding width and height as outputs to Tile To Properties to avoid having to calculate them with math nodes; that would simplify it a little.
  • The Calculate Image Tiles and Merge Tiles to Image nodes should have the Height and Width inputs reversed so they match every other node. This avoids node connections crossing which can lead to confusion. What kind of madman has them the other way around ;) (jk)

I have implemented both of these suggestions, thanks 🙂

@RyanJDick changed the base branch from feat/arbitrary-field-types to main on November 29, 2023, 15:09
@RyanJDick (Collaborator, Author) commented:

Just rebased now that #5175 was merged, and changed the base branch to main. (Apologies to everyone who has already pulled, but this seemed like the best option given how much changed in #5175.)

Here is an updated workflow for after the rebase:
tiled_upscaling_controlnet_tile_rebase.json

@RyanJDick (Collaborator, Author) commented:

Yes, I am very happy to see this in Core as it is a much better place than in my node pack. One additional thing is that maybe the order of inputs and names should match the image crop node to keep consistency. I think the image node version is x, y, width, and height.

Addressed in bb87c98

@RyanJDick (Collaborator, Author) commented:

OK, to summarize where we are at: I think the main open questions that remain are around the plan for 1) optimized tiling schemes (even-split, minimum-overlap), and 2) smart seams.

For both features, I see the following options:

  1. Include in this PR.
  2. Add in separate PRs, but merge together so that it all gets released in one version.
  3. Leave in custom nodes for now, with the option to add to core in a future release.

If there are no strong objections, I'd like to eliminate 1 as an option. Given how much conversation has already happened on this PR, I think separate PRs are needed to give each feature the attention it deserves. This will also help to keep a clear history of the merits of each feature for future reference.

Now or Later

From the little bit of experimentation that I've done, both of these features seem to add complexity for marginal benefit. This is why I've been hesitant to jump on them right away.

I'm definitely willing to change my mind though. @skunkworxdark Do you have test images you can share that showcase the benefits of either optimized tiling schemes or smart seam? Maybe the benefits just haven't been very pronounced on my choice of test images.

@hipsterusername (Member) commented:

I'm fine with eliminating 1, but for the sake of ensuring that this feature gets released "ready to go", I'd encourage us to aim for 2 as the best course of action. I think both of the concerns raised (tiling schemes and smart seam) are seemingly low-risk optimizations that help intelligently handle edge cases in the upscaling workflow.

@RyanJDick - would you prefer that the other PRs be created branched off this PR by @skunkworxdark and others, so that the full set of changes can be evaluated in those PRs?

@skunkworxdark (Contributor) commented:

@RyanJDick,

Sorry if this is duplicated, but I accidentally managed to delete my last comment. You have done a good job with this so far, and the workflow is looking much neater with the last few changes.

I will try and find time tomorrow to look into what is involved in adding the extra tiling methods and the smart seam logic and report back.

I can suggest two more minor aesthetic changes:

  • On the Tile to Properties node, move Coords Left & Coords Right to the top of the list, as this will prevent the connectors from crossing over when linking to the crop nodes. (Might stop OCD people from going slightly crazy.)
  • On the Crop Latents node, drop the word "Offset" from the X & Y inputs to match the Crop Image node.


@RyanJDick (Collaborator, Author) commented:

Addressed the latest UI requests in 07e7c9e.


Updated workflow:
tiled_upscaling_simple_final.json

@hipsterusername (Member) commented:

For the sake of integration testing for those on main, and ensuring that we don't run into major conflicts, I'm going to approve this and merge. It sounds like we're comfortable adding the other optimizations on top of these nodes as follow-on PRs, which we can push to have in before the next release, barring any major hiccups in the PR cycle.

@hipsterusername dismissed psychedelicious's stale review on November 30, 2023, 15:48

Review hold no longer needed since upstream is now main.

@hipsterusername enabled auto-merge (rebase) on November 30, 2023, 15:49
@hipsterusername merged commit 984e609 into main on Nov 30, 2023
7 checks passed
@hipsterusername deleted the ryan/tiled-upscaling-graph branch on November 30, 2023, 15:53