Proof of concept toucan integration #54

Closed

Conversation

darbyjohnston (Contributor)

Hi! I have been using raven to help me debug a new timeline renderer I am working on:
https://github.com/darbyjohnston/toucan

This is just a proof of concept, but I thought I would share the PR to see if there was interest in taking it further.

Note that toucan is a software renderer, so it's a bit slow compared to a playback application. However, it still might be useful for previewing/inspecting frames in raven.

Here is a screenshot showing raven rendering frames from a timeline with toucan:
[screenshot: raven_toucan]

Signed-off-by: Darby Johnston <darbyjohnston@yahoo.com>
jminor (Member) commented Aug 22, 2024

Wow! That's really impressive!

darbyjohnston (Contributor, Author)

Thanks! Having the Raven timeline + image preview has been very helpful for troubleshooting.

One downside is that Dear ImGui doesn't provide a backend-agnostic way to create and upload textures, so there would need to be three different implementations: one each for OpenGL, Metal, and Direct3D.
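Roughly, raven would need a small abstraction along these lines (a hypothetical sketch, not existing raven or toucan code; Dear ImGui only consumes the opaque handle):

// Hypothetical sketch: a per-backend texture abstraction. Dear ImGui only
// consumes an opaque ImTextureID; creating and filling the texture is up to
// each graphics backend (OpenGL, Metal, Direct3D).
#include "imgui.h"
#include <cstdint>

struct ImageTexture {
    virtual ~ImageTexture() = default;
    // Upload RGBA8 pixels for the frame to display; each backend implements
    // this with its own API calls.
    virtual void upload(const uint8_t* rgba, int width, int height) = 0;
    // Backend-specific handle, cast to the opaque type ImGui::Image() expects.
    virtual ImTextureID handle() const = 0;
};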

meshula (Member) commented Oct 7, 2024

@darbyjohnston Just noticed your note about texture loading. I've got ImGui texture loading in the bag if you ever want to have a go at that part. It can be done with very little code.

darbyjohnston (Contributor, Author)

@meshula Oh nice, how does that work? I closed this since I thought cross-platform texture loading might be too much effort.

meshula (Member) commented Oct 7, 2024

An ImTextureID is just a platform texture handle cast to void* under the hood :) It's complicated under Vulkan, but all the other targets are trivial.

Minimally, you can just cast the handle and use the texture as follows:

   ImGui::Image(texture, viewSize, uv0, uv1);
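
For instance, with the stock Dear ImGui OpenGL backends the ImTextureID is just the GL texture name (a sketch; pixels, width, and height are placeholders for the frame you want to show):

// Sketch: upload RGBA8 pixels to an OpenGL texture and hand it to Dear ImGui.
// Assumes a current OpenGL context; `pixels`, `width`, and `height` are
// placeholders for the frame data.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// The GL texture name is the ImTextureID for the OpenGL backends.
ImGui::Image((ImTextureID)(intptr_t)tex, ImVec2((float)width, (float)height));

The other stock backends follow the same idea: Metal takes an id<MTLTexture> bridged to the ImTextureID, and Direct3D 11 takes the ID3D11ShaderResourceView pointer.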

Here's an example of displaying a texture inside ImGui on Metal, with a color correction shader in an ImGui draw callback. I haven't included the shader; we can follow up on Slack if you like.



struct InspectorShaderConstants {
    simd::float2 textureSize;
    bool forceNearestSampling;
    float inK0;
    float inPhi;
    float inLinearBias;
    float inputGamma;
    float outK0;
    float outPhi;
    float outLinearBias;
    float outputGamma;
    float exposure; // precomputed as pow(2.0, exposure)
    float lift;
    simd::float4 grid;
    simd::float2 gridWidth;
    simd::float4x4 colorTransform;
    simd::float3 backgroundColor;
    float premultiplyAlpha;
    float disableFinalAlpha;
};

void BackEnd_SetShader(const ImDrawList*, const ImDrawCmd*, const Inspector* inspector) {
    static id<MTLRenderPipelineState> rps = nil;
    NSError* error = nil;
    while (!rps) {
        LabMetalResources* mr = GetCurrentLabMetalResources();
        id<MTLDevice> device = mr.device;
        if (!device)
            break;

        id<MTLLibrary> shaderLib = [device newDefaultLibrary];
        if(!shaderLib)
        {
            NSLog(@" ERROR: Couldn't create a default shader library");
            // assert here because if the shader library isn't loading, nothing good will happen
            break;
        }
        
        id<MTLFunction> vertexFunction = [shaderLib newFunctionWithName:@"textureMode_vertex_main"];
        if (!vertexFunction)
        {
            NSLog(@">> ERROR: Couldn't load vertex function from default library");
            break;
        }
        
        id<MTLFunction> fragmentFunction = [shaderLib newFunctionWithName:@"textureMode_fragment_main"];
        if (!fragmentFunction)
        {
            NSLog(@" ERROR: Couldn't load fragment function from default library");
            break;
        }
        
        // this vertex descriptor MUST match that in imgui_impl_metal
        // see renderPipelineStateForFramebufferDescriptor as a sanity check
        MTLVertexDescriptor* vertexDescriptor = [MTLVertexDescriptor vertexDescriptor];
        vertexDescriptor.attributes[0].offset = offsetof(ImDrawVert, pos);
        vertexDescriptor.attributes[0].format = MTLVertexFormatFloat2; // position
        vertexDescriptor.attributes[0].bufferIndex = 0;
        vertexDescriptor.attributes[1].offset = offsetof(ImDrawVert, uv);
        vertexDescriptor.attributes[1].format = MTLVertexFormatFloat2; // texCoords
        vertexDescriptor.attributes[1].bufferIndex = 0;
        vertexDescriptor.attributes[2].offset = offsetof(ImDrawVert, col);
        vertexDescriptor.attributes[2].format = MTLVertexFormatUChar4; // color
        vertexDescriptor.attributes[2].bufferIndex = 0;
        vertexDescriptor.layouts[0].stepRate = 1;
        vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
        vertexDescriptor.layouts[0].stride = sizeof(ImDrawVert);

        MTLRenderPassDescriptor* rpd = mr.renderpassDescriptor;
        
        MTLRenderPipelineDescriptor* pipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
        pipelineDescriptor.vertexFunction = vertexFunction;
        pipelineDescriptor.fragmentFunction = fragmentFunction;
        pipelineDescriptor.vertexDescriptor = vertexDescriptor;
        pipelineDescriptor.rasterSampleCount = rpd.colorAttachments[0].texture.sampleCount;
        pipelineDescriptor.colorAttachments[0].pixelFormat = rpd.colorAttachments[0].texture.pixelFormat;
        pipelineDescriptor.colorAttachments[0].blendingEnabled = YES;
        pipelineDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperationAdd;
        pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
        pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
        pipelineDescriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperationAdd;
        pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorOne;
        pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
        pipelineDescriptor.depthAttachmentPixelFormat = rpd.depthAttachment.texture.pixelFormat;
        pipelineDescriptor.stencilAttachmentPixelFormat = rpd.stencilAttachment.texture.pixelFormat;

        rps = [device newRenderPipelineStateWithDescriptor:pipelineDescriptor error:&error];
        if (error != nil) {
            rps = nil;
            NSLog(@"Error: failed to create Metal pipeline state: %@", error);
            break;
        }
    }
    
    if (!rps)
        return;
    
    const ShaderOptions* options = &inspector->CachedShaderOptions;
    static InspectorShaderConstants sc;
    sc.inK0 = options->inputK0;
    sc.inPhi = options->inputPhi;
    sc.inLinearBias = options->InputColorspace.linearBias;
    sc.inputGamma = options->InputColorspace.gamma;
    sc.outK0 = options->outputK0;
    sc.outPhi = options->outputPhi;
    sc.outLinearBias = options->OutputColorspace.linearBias;
    sc.outputGamma = options->OutputColorspace.gamma;
    sc.exposure = powf(2.0f, options->exposure);
    sc.lift = options->lift;
    sc.textureSize = {inspector->TextureSize.x, inspector->TextureSize.y};
    sc.forceNearestSampling = options->ForceNearestSampling;
    sc.grid = {options->GridColor.x, options->GridColor.y, options->GridColor.z, options->GridColor.w };
    sc.gridWidth = {options->GridWidth.x, options->GridWidth.y};
    memcpy(&sc.colorTransform, options->ColorTransform, sizeof(float) * 16);
    sc.backgroundColor = {options->BackgroundColor.x, options->BackgroundColor.y, options->BackgroundColor.z };
    sc.premultiplyAlpha = options->PremultiplyAlpha;
    sc.disableFinalAlpha = options->DisableFinalAlpha;

    auto commandEncoder = [DearImGui currentRenderCommandEncoder];
    [commandEncoder setFragmentBytes:&sc
                           length:sizeof(sc)
                          atIndex:2];
    [commandEncoder setRenderPipelineState:rps];
}
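
And a rough sketch of how the callback could be hooked up from the draw code (inspector, texture, viewSize, uv0, and uv1 are placeholders; the extra Inspector pointer rides through ImDrawCmd::UserCallbackData):

// Sketch: register the shader callback on the window's draw list, draw the
// image, then let the backend restore its default render state.
ImDrawList* drawList = ImGui::GetWindowDrawList();
drawList->AddCallback(
    [](const ImDrawList* dl, const ImDrawCmd* cmd) {
        BackEnd_SetShader(dl, cmd, static_cast<const Inspector*>(cmd->UserCallbackData));
    },
    (void*)inspector);

ImGui::Image(texture, viewSize, uv0, uv1);

// Backends that support ImDrawCallback_ResetRenderState restore their default
// pipeline for the remaining draw commands.
drawList->AddCallback(ImDrawCallback_ResetRenderState, nullptr);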
