
Allocating heap memory for low level DX12 backend fails #1379

Closed
ConorLPBoyle opened this issue Jul 17, 2017 · 7 comments

@ConorLPBoyle

ConorLPBoyle commented Jul 17, 2017

When attempting to compile and run the trianglell example on 64-bit Windows 10 (MSVC) with the default DX12 backend, the program panics at runtime with the following message:

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `0`,
 right: `-2147024809`', src\backend\dx12ll\src\factory.rs:215:8

The create_heap function asserts that:

assert_eq!(winapi::S_OK, unsafe {
    self.inner.CreateHeap(&desc, &dxguid::IID_ID3D12Heap, &mut heap)
});

With the assertion commented out, the example still fails to run, without providing a backtrace:

error: process didn't exit successfully: `target\debug\examples\trianglell.exe` (exit code: 2173)

The example works fine when compiled with --features vulkan.
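
For what it's worth, the HRESULT in the panic, -2147024809, is just 0x80070057 (E_INVALIDARG) printed as a signed integer. A tiny standalone Rust check of that claim (not project code):

// Decodes the value from the panic message; E_INVALIDARG here is my own constant
// copied from the Windows headers, not something the project defines.
fn main() {
    let hr: i32 = -2147024809;                      // value from the panic message
    let e_invalidarg: i32 = 0x8007_0057u32 as i32;  // E_INVALIDARG
    assert_eq!(hr, e_invalidarg);
    println!("CreateHeap returned 0x{:08X} (E_INVALIDARG)", hr as u32);
}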

@kvark
Member

kvark commented Jul 17, 2017

Interesting! We don't require that much memory for the heaps, so I wonder why it fails.

It'd be great to see the DebugView output from this crash; the debug runtime should have a clearer error message than just the return code.
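
For reference, the debug layer can be switched on before device creation with something along these lines; this is an untested sketch, and the module paths assume a winapi 0.3-style layout rather than whatever the project's bindings currently expose:

// Untested sketch: turn on the D3D12 debug layer so validation messages reach
// DebugView / the debugger output. Paths assume winapi 0.3-style modules.
use std::ptr;
use winapi::um::d3d12::D3D12GetDebugInterface;
use winapi::um::d3d12sdklayers::ID3D12Debug;
use winapi::Interface;

unsafe fn enable_d3d12_debug_layer() {
    let mut debug: *mut ID3D12Debug = ptr::null_mut();
    let hr = D3D12GetDebugInterface(&ID3D12Debug::uuidof(), &mut debug as *mut _ as *mut _);
    if hr == 0 && !debug.is_null() {
        (*debug).EnableDebugLayer();
    }
}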

@ConorLPBoyle
Author

Alright, this should be more helpful:

[12408] D3D12 ERROR: ID3D12Device::CreateHeap: D3D12_HEAP_FLAGS has invalid flag combinations set. The following flags may not all be set simultaneously. Exactly one must be left unset, or all may be left unset when the adapter supports D3D12_RESOURCE_HEAP_TIER_2 or creating a heap in conjunction with D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER: D3D12_FEATURE_DATA_D3D12_OPTIONS::ResourceHeapTier = D3D12_RESOURCE_HEAP_TIER_1, D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER = 0, D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES = 0, D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES = 0, and D3D12_HEAP_FLAG_DENY_BUFFERS = 0. [ STATE_CREATION ERROR #631: CREATEHEAP_INVALIDMISCFLAGS]

@kvark
Member

kvark commented Jul 17, 2017

Interesting, thanks for the info!
I'm not sure why I'm not seeing this on my Win10 machine.
Would you want to try making a patch?

@ConorLPBoyle
Author

I'll see what I can do.

The docs state that there are two resource heap tiers in DX12: on Tier 1, a heap can only hold resources from a single category (buffers, non-render-target/depth-stencil textures, or render-target/depth-stencil textures), while Tier 2 heaps can hold resources from any category. Evidently, my GeForce GTX 1050 only supports Tier 1.
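
For reference, the tier can be queried from the device via CheckFeatureSupport; a rough, untested sketch, with module paths assuming a winapi 0.3-style layout rather than the project's actual bindings:

// Untested sketch: ask the device which resource heap tier it supports.
use std::mem;
use winapi::um::d3d12::{
    ID3D12Device, D3D12_FEATURE_D3D12_OPTIONS, D3D12_FEATURE_DATA_D3D12_OPTIONS,
    D3D12_RESOURCE_HEAP_TIER_1,
};

unsafe fn resource_heap_is_tier_1(device: *mut ID3D12Device) -> bool {
    let mut options: D3D12_FEATURE_DATA_D3D12_OPTIONS = mem::zeroed();
    let hr = (*device).CheckFeatureSupport(
        D3D12_FEATURE_D3D12_OPTIONS,
        &mut options as *mut _ as *mut _,
        mem::size_of::<D3D12_FEATURE_DATA_D3D12_OPTIONS>() as u32,
    );
    hr == 0 && options.ResourceHeapTier == D3D12_RESOURCE_HEAP_TIER_1
}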

@ConorLPBoyle
Author

ConorLPBoyle commented Jul 18, 2017

The problem was exactly this. DX12 devices with Tier 1 resource heap support (i.e. every NVIDIA card from Kepler to Pascal!) require the supported resource type to be declared in the heap flags, but create_heap currently only passes D3D12_HEAP_FLAGS(0) in the heap descriptor.

After modifying the trianglell example and adding a custom create_heap_flagged function, I got it running as follows:

// in dx12ll factory.rs
pub fn create_heap_flagged(&mut self, heap_type: &core::HeapType, size: u64, flags: winapi::D3D12_HEAP_FLAGS) -> native::Heap {
    let mut heap = ptr::null_mut();
    let desc = winapi::D3D12_HEAP_DESC {
        SizeInBytes: size,
        Properties: data::map_heap_properties(heap_type.properties),
        Alignment: 0,
        Flags: flags,
    };
    ...
}


// in trianglell main.rs
let bufferheapflags = winapi::D3D12_HEAP_FLAGS(0) | winapi::D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES | winapi::D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES;
let textureheapflags = winapi::D3D12_HEAP_FLAGS(0) | winapi::D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES | winapi::D3D12_HEAP_FLAG_DENY_BUFFERS;
let rendertextureheapflags = winapi::D3D12_HEAP_FLAGS(0) | winapi::D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES | winapi::D3D12_HEAP_FLAG_DENY_BUFFERS; // unused

...

let heap = factory.create_heap_flagged(upload_heap, 1024, bufferheapflags);
...
let image_upload_heap = factory.create_heap_flagged(upload_heap, upload_size, bufferheapflags);
...
let image_heap = factory.create_heap_flagged(device_heap, image_req.size, textureheapflags);

In my view, the best course of action would be to expose separate create_texture_heap(), create_render_texture_heap(), and create_buffer_heap() functions, which would simply map to create_heap() in the vulkanll and metall factory.rs.
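
Roughly, the dx12ll side could then look like this; this is only a sketch built on the create_heap_flagged helper above, and the names are the proposed ones, not anything that exists yet:

// Sketch of the proposed split for dx12ll factory.rs, reusing create_heap_flagged from above.
pub fn create_buffer_heap(&mut self, heap_type: &core::HeapType, size: u64) -> native::Heap {
    self.create_heap_flagged(heap_type, size,
        winapi::D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES | winapi::D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES)
}

pub fn create_texture_heap(&mut self, heap_type: &core::HeapType, size: u64) -> native::Heap {
    self.create_heap_flagged(heap_type, size,
        winapi::D3D12_HEAP_FLAG_DENY_BUFFERS | winapi::D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES)
}

pub fn create_render_texture_heap(&mut self, heap_type: &core::HeapType, size: u64) -> native::Heap {
    self.create_heap_flagged(heap_type, size,
        winapi::D3D12_HEAP_FLAG_DENY_BUFFERS | winapi::D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES)
}

The vulkanll and metall versions would ignore the category and forward straight to their existing create_heap().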

@kvark
Member

kvark commented Jul 18, 2017

I don't understand how Vulkan efficiently runs on that tier-1 hardware, given that the memory you allocate there doesn't have the type of resources specified in advance.

@ConorLPBoyle
Author

Well, it runs. I leave the rest to your discretion.
