
Move to manual management of descriptor set and command buffer allocators #1957

Merged 17 commits into vulkano-rs:master on Oct 5, 2022

Conversation

@marc0246 (Contributor) commented Aug 20, 2022

Changelog:

### Breaking changes

Changes to command pools:
- Renamed `CommandPool` to `CommandBufferAllocator`, `StandardCommandPool` to `StandardCommandBufferAllocator`, and `UnsafeCommandPool` to `CommandPool` to better reflect what they actually do.
- Removed `Device::with_standard_command_pool`.
- Command buffer allocators must now be managed manually (see the sketch after this list).
  - `AutoCommandBufferBuilder::{primary, secondary}` now take an implementation of `CommandBufferAllocator` instead of the `Device`.
  - `DeviceLocalBuffer::{from_buffer, from_data, from_iter}` and `ImmutableImage::{from_iter, from_buffer}` now take an implementation of `CommandBufferAllocator`.
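
To make the new flow concrete, here is a minimal sketch of recording a command buffer after this change. The helper name, the `OneTimeSubmit` usage, and the exact constructor arguments of `StandardCommandBufferAllocator::new` are illustrative assumptions and may differ between vulkano versions; the point is that the allocator is created and owned by the application and passed to `AutoCommandBufferBuilder::primary` instead of the `Device`:

```rust
use std::sync::Arc;

use vulkano::command_buffer::allocator::StandardCommandBufferAllocator;
use vulkano::command_buffer::{AutoCommandBufferBuilder, CommandBufferUsage};
use vulkano::device::{Device, Queue};

// Hypothetical helper; `device` and `queue` come from the usual instance and
// device setup (not shown).
fn record_command_buffer(device: Arc<Device>, queue: Arc<Queue>) {
    // The application now creates and owns the allocator itself, rather than
    // fetching a standard pool from the `Device`. (Constructor arguments may
    // differ slightly between vulkano versions.)
    let command_buffer_allocator = StandardCommandBufferAllocator::new(device);

    // The builder takes the allocator in place of the `Device`.
    let builder = AutoCommandBufferBuilder::primary(
        &command_buffer_allocator,
        queue.queue_family_index(),
        CommandBufferUsage::OneTimeSubmit,
    )
    .expect("failed to begin recording a command buffer");

    // ... record commands on `builder`, then call `builder.build()` as before.
    let _command_buffer = builder.build().expect("failed to build the command buffer");
}
```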

Changes to descriptor pools:
- Renamed `DescriptorPool` to `DescriptorSetAllocator`, `StandardDescriptorPool` to `StandardDescriptorSetAllocator`, and `UnsafeDescriptorPool` to `DescriptorPool` to better reflect what they actually do.
- Renamed `SingleLayout[Variable]DescPool` to `SingleLayout[Variable]DescriptorSetPool` for consistency.
- Removed `Device::with_standard_descriptor_pool`.
- Descriptor set allocators must now be managed manually (see the sketch after this list).
  - `PersistentDescriptorSet::{new, new_variable}` now take an implementation of `DescriptorSetAllocator`; `PersistentDescriptorSet::new_with_pool` has been removed.
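
Likewise for descriptor sets, a hedged sketch of the new calling convention. The helper name and the placeholder resources (`pipeline`, `uniform_buffer`) are assumptions for illustration only, and exact signatures may vary between vulkano versions; what matters is that the allocator is created by the application and passed to `PersistentDescriptorSet::new`:

```rust
use std::sync::Arc;

use vulkano::buffer::BufferAccess;
use vulkano::descriptor_set::allocator::StandardDescriptorSetAllocator;
use vulkano::descriptor_set::{PersistentDescriptorSet, WriteDescriptorSet};
use vulkano::device::Device;
use vulkano::pipeline::{GraphicsPipeline, Pipeline};

// Hypothetical helper; `device`, `pipeline` and `uniform_buffer` are
// placeholders for the application's own resources.
fn create_descriptor_set(
    device: Arc<Device>,
    pipeline: &GraphicsPipeline,
    uniform_buffer: Arc<dyn BufferAccess>,
) -> Arc<PersistentDescriptorSet> {
    // As with command buffers, the allocator is created and kept by the
    // application instead of being stored inside the `Device`.
    let descriptor_set_allocator = StandardDescriptorSetAllocator::new(device);

    let layout = pipeline.layout().set_layouts()[0].clone();

    // `PersistentDescriptorSet::new` now takes the allocator as its first
    // argument instead of relying on the device's standard pool.
    PersistentDescriptorSet::new(
        &descriptor_set_allocator,
        layout,
        [WriteDescriptorSet::buffer(0, uniform_buffer)],
    )
    .expect("failed to create a descriptor set")
}
```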

In two of my previous PRs I addressed some performance concerns around the standard implementations of (then) descriptor and command pools. One of the optimizations involved removing the `Mutex` layer that was used to store these pools inside the device. My idea was to use thread-local storage as a solution with almost no overhead that would also work as conveniently as before, but after some discussion on Discord it was pointed out that in certain edge cases, the TLS might keep `Device`s alive that have already been dropped by the user. Because of that, putting the burden of managing these (then) pools onto the user seems inevitable; otherwise we would have to sacrifice some of the performance gains for a bit of convenience.

Other than that, the "pool" and related types have been renamed to hopefully be less confusing to someone coming from Vulkan, as they function quite differently. Also, `AutoCommandBufferBuilder` can now be constructed using arbitrary implementations of `CommandBufferAllocator`.

I tried to do `StandardMemoryPool` in this PR as well, but quickly realized that `CpuBufferPool` wasn't made with manually-managed memory allocators in mind, so I decided to do that in an upcoming PR together with my revamp of memory allocation in vulkano.

@marc0246 force-pushed the master branch 2 times, most recently from 3f975f7 to 30ec12b on October 4, 2022
@Rua (Contributor) commented Oct 5, 2022

It all works and looks good. It will be a big change for Vulkano, so hopefully there are no hidden bugs and users won't be too confused by it.

@Rua Rua merged commit ef65c98 into vulkano-rs:master Oct 5, 2022
@marc0246 (Contributor, Author) commented Oct 5, 2022

Agreed. I would have preferred to keep this automatic, but in the end this will be both faster and more flexible for the user, so hopefully people will take kindly to it.

Rua added a commit that referenced this pull request Oct 5, 2022
marc0246 added a commit to marc0246/vulkano that referenced this pull request Oct 6, 2022
marc0246 added a commit to marc0246/vulkano that referenced this pull request Oct 7, 2022
Rua pushed a commit that referenced this pull request Oct 7, 2022