Hardware accelerated encoding on Windows #976
Conversation
Caution: Review failed. The pull request is closed.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (20)
crates/export/src/gif.rs (1)
60-60: Potential panic on paths without a parent
output_path may be a bare filename; parent() can be None or empty, causing unwrap() to panic or create_dir_all("") to fail. Guard it.

```diff
- std::fs::create_dir_all(gif_output_path.parent().unwrap()).map_err(|e| e.to_string())?;
+ if let Some(parent) = gif_output_path.parent().filter(|p| !p.as_os_str().is_empty()) {
+     std::fs::create_dir_all(parent).map_err(|e| e.to_string())?;
+ }
```

crates/enc-ffmpeg/src/audio/opus.rs (3)
260-268: Fix infinite recursion in trait impl (calls itself).
impl AudioEncoder for OpusEncoder calls self.queue_frame/self.finish, which resolve to the trait methods, not the inherent ones, causing immediate recursion/panic. Apply:

```diff
 impl AudioEncoder for OpusEncoder {
     fn queue_frame(&mut self, frame: frame::Audio, output: &mut format::context::Output) {
-        self.queue_frame(frame, output);
+        OpusEncoder::queue_frame(self, frame, output);
     }

     fn finish(&mut self, output: &mut format::context::Output) {
-        self.finish(output);
+        OpusEncoder::finish(self, output);
     }
 }
```
137-156: Guard against encoder.frame_size() == 0 to avoid a tight infinite loop/panic.
If self.encoder.frame_size() == 0, frame_size_bytes is 0, the loop never breaks, and send_frame likely errors → panic via unwrap(). Apply this pattern in both loops:

```diff
-        loop {
-            let frame_size_bytes = self.encoder.frame_size() as usize
+        loop {
+            let enc_frame_size = self.encoder.frame_size() as usize;
+            if enc_frame_size == 0 {
+                tracing::error!("Opus encoder reported frame_size=0; skipping chunk to avoid dead-loop");
+                break;
+            }
+            let frame_size_bytes = enc_frame_size
                 * self.encoder.channels() as usize
                 * self.encoder.format().bytes();
```

Also applies to: 162-181
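The failure mode is easy to reproduce in isolation. A minimal sketch (names hypothetical, a plain VecDeque standing in for the encoder's byte buffer) of why the zero-size guard must come before the chunking loop:

```rust
use std::collections::VecDeque;

// Drains `buffer` in fixed-size chunks, mirroring the suggested guard:
// a reported chunk size of 0 breaks out instead of spinning forever.
fn drain_chunks(buffer: &mut VecDeque<u8>, frame_size_bytes: usize) -> Vec<Vec<u8>> {
    let mut chunks = Vec::new();
    loop {
        if frame_size_bytes == 0 {
            // Without this, the loop below would never terminate.
            break;
        }
        if buffer.len() < frame_size_bytes {
            break;
        }
        chunks.push(buffer.drain(..frame_size_bytes).collect());
    }
    chunks
}
```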
88-101: Resampler construction unwrap() can panic at init-time.
If format conversion is unsupported, this will crash recording. Return an error (extend OpusEncoderError) instead of unwrapping.
crates/enc-avfoundation/src/mp4.rs (3)
281-285: Audio-first start: avoid panic on segment_first_timestamp.unwrap().
If audio arrives before any video, this unwrap panics. Start the session on first audio too and seed segment_first_timestamp.

```diff
-        let pts = self
-            .start_time
-            .add(self.elapsed_duration)
-            .add(time.sub(self.segment_first_timestamp.unwrap()));
+        if !self.is_writing {
+            self.is_writing = true;
+            self.asset_writer.start_session_at_src_time(time);
+            self.start_time = time;
+            self.segment_first_timestamp = Some(time);
+        }
+        let pts = self
+            .start_time
+            .add(self.elapsed_duration)
+            .add(time.sub(self.segment_first_timestamp.unwrap()));
```
119-126: Use integer bitrate for AVVideoAverageBitRateKey.
Apple expects an integer bps NSNumber; using f32 risks rounding/bridging issues.

```diff
-            &[ns::Number::with_f32(bitrate).as_id_ref()],
+            &[ns::Number::with_u64(bitrate as u64).as_id_ref()],
```

Also change get_average_bitrate to return u64 to avoid cast surprises.
245-247: Make "NotReady" audio backpressure mirror the video path by returning Ok(())
This keeps backpressure handling consistent with the video path and avoids bubbling up an error when the input simply isn't ready:

```diff
-        if !audio_input.is_ready_for_more_media_data() {
-            return Err(QueueAudioFrameError::NotReady);
-        }
+        if !audio_input.is_ready_for_more_media_data() {
+            return Ok(());
+        }
```

crates/scap-direct3d/src/lib.rs (1)
477-533: Fix staging texture format and ensure Unmap is called (resource leak).

- CopyResource requires matching formats; currently hard-coded to R8G8B8A8 regardless of source. Use self.pixel_format.as_dxgi().
- You Map but never Unmap the staging texture; keep the texture and context in FrameBuffer and Unmap in Drop.

```diff
-        let texture_desc = D3D11_TEXTURE2D_DESC {
+        let texture_desc = D3D11_TEXTURE2D_DESC {
             Width: self.width,
             Height: self.height,
             MipLevels: 1,
             ArraySize: 1,
-            Format: DXGI_FORMAT(DirectXPixelFormat::R8G8B8A8UIntNormalized.0), // (self.color_format as i32),
+            Format: self.pixel_format.as_dxgi(),
             SampleDesc: DXGI_SAMPLE_DESC {
                 Count: 1,
                 Quality: 0,
             },
             Usage: D3D11_USAGE_STAGING,
             BindFlags: 0,
             CPUAccessFlags: D3D11_CPU_ACCESS_READ.0 as u32 | D3D11_CPU_ACCESS_WRITE.0 as u32,
             MiscFlags: 0,
         };
@@
-        Ok(FrameBuffer {
+        Ok(FrameBuffer {
             data,
             width: self.width,
             height: self.height,
             stride: mapped_resource.RowPitch,
             pixel_format: self.pixel_format,
-            resource: mapped_resource,
+            resource: mapped_resource,
+            texture,
+            d3d_context: self.d3d_context.clone(),
         })
```

Add these fields to FrameBuffer and implement Drop:

```diff
-pub struct FrameBuffer<'a> {
+pub struct FrameBuffer<'a> {
     data: &'a mut [u8],
     width: u32,
     height: u32,
     stride: u32,
     resource: D3D11_MAPPED_SUBRESOURCE,
     pixel_format: PixelFormat,
+    texture: ID3D11Texture2D,
+    d3d_context: ID3D11DeviceContext,
 }
```

```rust
// place near impl<'a> FrameBuffer<'a>
impl<'a> Drop for FrameBuffer<'a> {
    fn drop(&mut self) {
        unsafe { self.d3d_context.Unmap(&self.texture, 0) };
    }
}
```

crates/scap-ffmpeg/src/direct3d.rs (1)
24-34: Guard against short strides to avoid OOB panics.
Row copy slices will panic if src_stride or dest_stride < row_length. Add defensive checks (debug_assert at minimum).

```diff
 let dest_stride = ff_frame.stride(0);
 let dest_bytes = ff_frame.data_mut(0);
 let row_length = width * 4;
+debug_assert!(
+    src_stride >= row_length && dest_stride >= row_length,
+    "stride too small: src_stride={}, dest_stride={}, row_length={}",
+    src_stride, dest_stride, row_length
+);
```

Apply the same to the BGRA branch.

Also applies to: 45-55
crates/enc-ffmpeg/src/audio/aac.rs (4)
119-127: Hard-coded 2-channel buffer will break mono/surround.
Size the per-plane buffers from the encoder's channel count.

```diff
-            buffer: vec![VecDeque::new(); 2],
+            buffer: vec![VecDeque::new(); encoder.channels() as usize],
```
183-201: Finalize logic: per-channel math/conditions are off.

- Compute channel_size_bytes once.
- Use it in conditions and copies (not frame_size_bytes).

```diff
     pub fn finish(&mut self, output: &mut format::context::Output) {
         let frame_size_bytes = self.encoder.frame_size() as usize
             * self.encoder.channels() as usize
             * self.encoder.format().bytes();
+        let channel_size_bytes = frame_size_bytes / self.encoder.channels() as usize;
```
194-201: Use the correct plane when extending from resampler output.
Currently it always reads data(0).

```diff
-            for i in 0..self.resampled_frame.planes() {
-                self.buffer[i].extend(
-                    &self.resampled_frame.data(0)[0..self.resampled_frame.samples()
-                        * self.resampled_frame.format().bytes()],
-                );
-            }
+            for i in 0..self.resampled_frame.planes() {
+                let bytes = self.resampled_frame.samples() * self.resampled_frame.format().bytes();
+                self.buffer[i].extend(&self.resampled_frame.data(i)[0..bytes]);
+            }
```
201-219: Condition and plane writes in the flush loop are wrong.

- self.buffer.len() is the channel count, not readiness; check lengths per channel.
- Write to data_mut(i), not data_mut(0).

```diff
-        while self.buffer.len() >= frame_size_bytes {
+        while self.buffer.iter().all(|b| b.len() >= channel_size_bytes) {
             let mut frame = frame::Audio::new(
                 self.encoder.format(),
                 self.encoder.frame_size() as usize,
                 self.encoder.channel_layout(),
             );

             for i in 0..frame.planes() {
-                let bytes = self.buffer[i]
-                    .drain(0..frame_size_bytes)
+                let bytes = self.buffer[i]
+                    .drain(0..channel_size_bytes)
                     .collect::<Vec<_>>();
-                frame.data_mut(0)[0..frame_size_bytes].copy_from_slice(&bytes);
+                frame.data_mut(i)[0..channel_size_bytes].copy_from_slice(&bytes);
             }

             self.encoder.send_frame(&frame).unwrap();
             self.process_packets(output);
         }
```

crates/export/src/mp4.rs (2)
75-113: Avoid moving base and (likely) video_info into the blocking closure.
The closure currently captures base and video_info. This will either fail to compile (moved values used later) or over-capture more than needed. Apply this diff to capture only the clones you need:

```diff
@@
-        let encoder_thread = tokio::task::spawn_blocking(move || {
+        let output_path_for_encoder = output_path.clone();
+        let video_info_for_encoder = video_info.clone();
+        let encoder_thread = tokio::task::spawn_blocking(move || {
@@
-            let mut encoder = MP4File::init(
+            let mut encoder = MP4File::init(
                 "output",
-                base.output_path.clone(),
+                output_path_for_encoder.clone(),
                 |o| {
-                    H264Encoder::builder("output_video", video_info)
+                    H264Encoder::builder("output_video", video_info_for_encoder)
                         .with_bpp(self.compression.bits_per_pixel())
                         .build(o)
                 },
@@
-            Ok::<_, String>(base.output_path)
+            Ok::<_, String>(output_path_for_encoder)
         })
```
151-154: Round audio PTS to minimize drift at non-integer SR/FPS ratios.
Integer rounding prevents systematic truncation.

```diff
-        let pts = ((frame_number * frame.rate()) as f64 / fps as f64) as i64;
+        let pts = (((frame_number as i64) * (frame.rate() as i64)
+            + (fps as i64 / 2)) / (fps as i64));
         frame.set_pts(Some(pts));
```

crates/enc-ffmpeg/Cargo.toml (1)
1-15: Add a root rust-toolchain.toml to pin a toolchain that supports edition 2024
Create rust-toolchain.toml at the repo root specifying a stable Rust version that supports edition 2024 (edition 2024 stabilized in Rust 1.85.0, so e.g. channel = "1.85.0"), and update CI workflows to honor it. Workspace lints are already defined in the root Cargo.toml.
crates/recording/src/studio_recording.rs (1)
705-712: Avoid unwrap on pipeline construction; map the error into CreateSegmentPipelineError
Prevents a crash if pipeline setup fails (e.g., encoder init).

```diff
-        ScreenCaptureMethod::make_studio_mode_pipeline(
+        ScreenCaptureMethod::make_studio_mode_pipeline(
             pipeline_builder,
             (screen_source, screen_rx),
             screen_output_path.clone(),
-        )
-        .unwrap();
+        )
+        .map_err(|e| CreateSegmentPipelineError::Media(
+            MediaError::Any(e.to_string().into())
+        ))?;
```

crates/recording/src/sources/screen_capture/mod.rs (1)
247-256: Linux build risk: Config lacks crop_bounds on non-Windows/macOS.
With cfg-gated fields for Windows and macOS only, Linux builds of this module may fail where Config { crop_bounds, .. } is constructed. Provide a fallback field for other OSes to keep the shape consistent. Apply:

```diff
 #[derive(Clone, Debug)]
 pub struct Config {
     display: DisplayId,
     #[cfg(windows)]
     crop_bounds: Option<PhysicalBounds>,
     #[cfg(target_os = "macos")]
     crop_bounds: Option<LogicalBounds>,
+    #[cfg(all(not(windows), not(target_os = "macos")))]
+    crop_bounds: Option<PhysicalBounds>,
     fps: u32,
     show_cursor: bool,
 }

 impl Config {
     pub fn fps(&self) -> u32 {
         self.fps
     }
 }
```

Also applies to: 258-262
crates/recording/src/sources/screen_capture/windows.rs (2)
15-15: Fix missing tracing macro imports (build will fail).
warn! and error! are used unqualified but not imported; this fails under Rust 2018 unless referenced as tracing::warn!/tracing::error!. Apply:

```diff
-use tracing::{info, trace};
+use tracing::{error, info, trace, warn};
```

Also applies to: 136-146, 160-167, 334-339, 502-509
35-45: Avoid panics in audio_info() when there is no default output device/config.
The unwrap()s will panic on systems without a default output device or configuration (common on headless/VMs). Consider gracefully handling None/Err and falling back (e.g., warn and return a sane default), or change the API to return Result<AudioInfo, E>. If you want, I can draft a minimal, non-breaking fallback that preserves the current signature.
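A minimal non-breaking fallback could look like the sketch below. The types are simplified stand-ins, not the real cpal/AudioInfo API, and the 48 kHz stereo default is an assumption; the real function would emit a tracing warning before falling back:

```rust
// Hypothetical simplified shapes; the real audio_info() would map cpal's
// Option/Result lookups through the same pattern.
#[derive(Clone, Copy, Debug, PartialEq)]
struct AudioInfo {
    sample_rate: u32,
    channels: u16,
}

// Assumed safe default when no output device/config is available.
const FALLBACK: AudioInfo = AudioInfo { sample_rate: 48_000, channels: 2 };

fn audio_info_or_default(default_config: Option<AudioInfo>) -> AudioInfo {
    match default_config {
        Some(info) => info,
        None => {
            // In the real code: tracing::warn!("no default output device; using fallback");
            FALLBACK
        }
    }
}
```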
🧹 Nitpick comments (79)
packages/ui-solid/src/auto-imports.d.ts (1)
14-14: Harmless style-only change; prefer generator-driven consistency (don't hand-edit a generated file).
These quote flips have no type/behavior impact. Since this file is generated, avoid manual edits; ensure the unplugin-auto-import/unplugin-icons config yields consistent quoting across all entries (single vs double) to prevent churn. If this was hand-edited, revert and re-run generation, or exclude the file from VCS if it's reproducible. This also aligns with our learning to rely on icon auto-imports rather than manual imports.
Also applies to: 29-29, 41-41, 45-45, 47-47, 51-51, 56-56, 60-60, 79-79
crates/recording/examples/recording-cli.rs (3)
23-23: Use structured fields in tracing for better searchability
Prefer key-value fields over string formatting.

```diff
-    info!("Recording to directory '{}'", dir.path().display());
+    info!(path = %dir.path().display(), "Recording to directory");
```
32-35: Audio disabled; confirm intent and document
Setting capture_system_audio: false, mic_feed: None, and camera_feed: None yields video-only samples. Verify this is intentional for the current MF pipeline; otherwise muxing or downstream expectations may differ. Add a clarifying comment to avoid confusion:

```diff
 RecordingBaseInputs {
     capture_target: ScreenCaptureTarget::Display {
         id: Display::primary().id(),
     },
-    capture_system_audio: false,
+    // Audio intentionally disabled until MF audio/AVPacket integration lands.
+    capture_system_audio: false,
     camera_feed: None,
     mic_feed: None,
 },
```
18-20: Remove unused /tmp scaffolding (breaks on Windows, not used)
The hardcoded POSIX path is ignored on Windows and unused here.

```diff
-    let _ = std::fs::remove_dir_all("/tmp/bruh");
-    let _ = std::fs::create_dir("/tmp/bruh");
-
```

apps/desktop/src-tauri/src/lib.rs (2)
2241-2265: Avoid double map lookups and an unnecessary clone
Minor polish: capture webview_windows() once and reuse it; you can also call set_focus on the borrowed window without cloning the handle.

```diff
-    let has_window = _handle.webview_windows().iter().any(|(label, _)| {
-        label.starts_with("editor-")
-            || label.as_str() == "settings"
-            || label.as_str() == "signin"
-    });
+    let windows = _handle.webview_windows();
+    let is_relevant = |label: &str| {
+        label.starts_with("editor-") || label == "settings" || label == "signin"
+    };
+    let has_window = windows.iter().any(|(label, _)| is_relevant(label.as_str()));
@@
-    if let Some(window) = _handle
-        .webview_windows()
-        .iter()
-        .find(|(label, _)| {
-            label.starts_with("editor-")
-                || label.as_str() == "settings"
-                || label.as_str() == "signin"
-        })
-        .map(|(_, window)| window.clone())
+    if let Some((_, window)) =
+        windows.iter().find(|(label, _)| is_relevant(label.as_str()))
     {
         window.set_focus().ok();
     }
```
2247-2264: Consider a deterministic focus order (editor > settings > signin)
The current selection depends on map iteration order. If you want predictable behavior, pick in priority order.

```diff
-    if has_window {
-        if let Some((_, window)) =
-            windows.iter().find(|(label, _)| is_relevant(label.as_str()))
-        {
-            window.set_focus().ok();
-        }
-    } else {
+    if let Some((_, window)) = windows
+        .iter()
+        .find(|(label, _)| label.starts_with("editor-"))
+        .or_else(|| windows.iter().find(|(label, _)| label.as_str() == "settings"))
+        .or_else(|| windows.iter().find(|(label, _)| label.as_str() == "signin"))
+    {
+        window.set_focus().ok();
+    } else {
         let handle = _handle.clone();
         tokio::spawn(async move {
             let _ = ShowCapWindow::Main.show(&handle).await;
         });
     }
```

crates/rendering/src/composite_frame.rs (1)
129-137: Linear mipmap filter is a no-op without mip levels; consider enabling mips for stable downscale quality.
You switched mipmap_filter to Linear (line 136), but the frame texture is created with mip_level_count = 1 (lines 171-172). This yields no mip sampling and can accentuate aliasing before your shader-side unsharp mask. Suggest enabling mip chains to improve downscale stability and text clarity. Apply this diff to provision mip levels (you'll still need a mip-gen pass elsewhere):

```diff
@@
-        let sampler = device.create_sampler(
+        let sampler = device.create_sampler(
             &(wgpu::SamplerDescriptor {
@@
-                mipmap_filter: wgpu::FilterMode::Linear,
+                mipmap_filter: wgpu::FilterMode::Linear,
                 ..Default::default()
             }),
         );
@@
-    pub fn create_frame_texture(device: &wgpu::Device, width: u32, height: u32) -> wgpu::Texture {
+    pub fn create_frame_texture(device: &wgpu::Device, width: u32, height: u32) -> wgpu::Texture {
         device.create_texture(
             &(wgpu::TextureDescriptor {
                 size: wgpu::Extent3d {
                     width,
                     height,
                     depth_or_array_layers: 1,
                 },
-                mip_level_count: 1,
+                mip_level_count: Self::mip_count(width, height),
                 sample_count: 1,
                 dimension: wgpu::TextureDimension::D2,
                 format: wgpu::TextureFormat::Rgba8UnormSrgb,
                 usage: wgpu::TextureUsages::TEXTURE_BINDING
                     | wgpu::TextureUsages::RENDER_ATTACHMENT
                     | wgpu::TextureUsages::COPY_DST,
                 label: Some("Frame Composite texture"),
                 view_formats: &[],
             }),
         )
     }
+
+    #[inline]
+    fn mip_count(width: u32, height: u32) -> u32 {
+        let max_dim = width.max(height).max(1);
+        32 - max_dim.leading_zeros()
+    }
```

If you already generate mips for other paths, confirm they're being generated for this texture too; otherwise the above will just allocate extra levels without using them.
Also applies to: 163-181
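For reference, the mip_count helper suggested above computes the full mip chain length, floor(log2(max_dim)) + 1, via 32 - leading_zeros; a standalone copy is easy to sanity-check:

```rust
// Same arithmetic as the suggested wgpu mip_count helper: the number of mip
// levels needed to reduce the largest dimension down to 1.
fn mip_count(width: u32, height: u32) -> u32 {
    let max_dim = width.max(height).max(1);
    32 - max_dim.leading_zeros()
}
```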
crates/rendering/src/shaders/composite-video-frame.wgsl (5)
164-169: Guard against zero target_size and compute a reusable downscale factor.
Avoid a potential div-by-zero and reuse the factor elsewhere.

```diff
-    let source_size = uniforms.frame_size * (crop_bounds_uv.zw - crop_bounds_uv.xy);
-    let target_size = uniforms.target_size;
-    let scale_ratio = source_size / target_size;
-    let is_downscaling = max(scale_ratio.x, scale_ratio.y) > 1.1;
+    let source_size = uniforms.frame_size * (crop_bounds_uv.zw - crop_bounds_uv.xy);
+    let safe_target = max(uniforms.target_size, vec2<f32>(1.0));
+    let scale_ratio = source_size / safe_target;
+    let downscale_factor = max(scale_ratio.x, scale_ratio.y);
+    let is_downscaling = downscale_factor > 1.1;
```
171-199: Heavier sampling path: consider a toggle and/or an early-exit threshold.
The downscale sharpen path adds 5 texture reads per call (center + 4 neighbors), and it's used inside the motion blur sampling loop. Consider a uniform flag to disable sharpening during motion blur or on low-power presets, or short-circuit when downscale_factor is only slightly above threshold to save bandwidth.
If you add a boolean uniform like enable_sharpen_when_downscaling, gate the if is_downscaling block accordingly.
192-194: Use both axes for sharpness scaling.
You detect downscaling via max(x, y) but drive sharpness from x only. Use the same factor for consistency and better handling of anisotropic cases.

```diff
-    let sharpness = min(scale_ratio.x * 0.3, 0.7); // Cap at 0.7 to avoid over-sharpening
+    let sharpness = min(downscale_factor * 0.3, 0.7); // Cap at 0.7 to avoid over-sharpening
```
205-218: Rounded corners: remove the dead variable or restore the soft edge.
distance_blur is unused after switching to a hard cut. Either remove it or restore the soft fade for nicer edges.

```diff
-    let distance_blur = 1.0;
+    // let distance_blur = 1.0; // Uncomment if restoring soft edge
@@
-    if target_rounding_coord.x >= 0.0 && target_rounding_coord.y >= 0.0 && distance >= -distance_blur/2.0 {
-        return vec4<f32>(0.0);
-        // return mix(current_color, vec4<f32>(0.0), min(distance / distance_blur + 0.5, 1.0));
+    if target_rounding_coord.x >= 0.0 && target_rounding_coord.y >= 0.0 /* && distance >= -distance_blur/2.0 */ {
+        return vec4<f32>(0.0);
+        // Soft edge alternative:
+        // return mix(current_color, vec4<f32>(0.0), min(distance / distance_blur + 0.5, 1.0));
```
226-229: Custom smoothstep shadows the WGSL builtin.
Not wrong, but it can confuse maintainers and tooling. Prefer the builtin, or rename to smoothstep01 to avoid ambiguity.

```diff
-fn smoothstep(edge0: f32, edge1: f32, x: f32) -> f32 {
+fn smoothstep01(edge0: f32, edge1: f32, x: f32) -> f32 {
     let t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
     return t * t * (3.0 - 2.0 * t);
 }
```

Then update call sites accordingly, or just use the builtin smoothstep().
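Whichever name is kept, the Hermite curve is trivial to verify outside the shader; a Rust port of the same function:

```rust
// Rust port of the shader helper above: clamp to [0, 1], then apply the
// cubic Hermite curve 3t^2 - 2t^3 (identical to the WGSL smoothstep builtin).
fn smoothstep01(edge0: f32, edge1: f32, x: f32) -> f32 {
    let t = ((x - edge0) / (edge1 - edge0)).clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}
```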
Cargo.toml (1)
12-19: Workspace-wide Tokio features: consider scoping them to the crates that need them
Enabling many Tokio features at the workspace level increases compile time and binary size for crates that don't use them. Prefer enabling features per-crate unless they're truly universal.
crates/enc-gif/Cargo.toml (2)
1-15: Add crate metadata (license, repository, description) for clarity and reuse
Including standard metadata helps downstream users and tooling.

```diff
 [package]
 name = "cap-enc-gif"
 version = "0.1.0"
 edition = "2024"
+description = "GIF encoding utilities for Cap, powered by gifski"
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/CapSoftware/Cap"
+authors = ["Cap Software"]
```
6-12: Versioning nit: consider pinning gifski with a compatible range or via the workspace
To reduce surprise updates, you could use a tilde/caret range (e.g., "^1.32") or pull gifski/imgref/rgb into [workspace.dependencies] for consistency.
crates/export/src/gif.rs (2)
85-99: Progress callback fires before the frame is actually added
Call on_progress after a successful add_frame, and report the incremented count so the first frame is 1, not 0.

```diff
-        while let Some((frame, _frame_number)) = video_rx.blocking_recv() {
-            (on_progress)(frame_count);
-
-            if let Err(e) =
-                gif_encoder.add_frame(&frame.data, frame.padded_bytes_per_row as usize)
-            {
+        while let Some((frame, _frame_number)) = video_rx.blocking_recv() {
+            if let Err(e) = gif_encoder.add_frame(&frame.data, frame.padded_bytes_per_row as usize) {
                 return Err(ExportError::Other(format!(
                     "Failed to add frame to GIF: {e}"
                 )));
             }

-            frame_count += 1;
+            frame_count += 1;
+            (on_progress)(frame_count);
         }
```
44-45: Channel buffer size may throttle rendering at higher FPS
A capacity of 4 can cause avoidable backpressure on render_video_to_channel for large frames/high FPS. Consider making it configurable or bumping it (e.g., based on fps).
crates/enc-ffmpeg/src/audio/opus.rs (3)
145-153: Reduce per-frame allocations when draining the VecDeque.
drain(..).collect::<Vec<_>>() allocates every frame. Use make_contiguous() and copy a slice, then drain(..n) to drop the consumed bytes. Sketch:

```rust
let n = frame_size_bytes;
let buf = self.buffer.make_contiguous();
frame.data_mut(0)[..n].copy_from_slice(&buf[..n]);
self.buffer.drain(..n);
```

Also applies to: 170-178
106-118: Time base consistency.
You set the stream time base to 1/rate but the encoder time base to output_config.time_base. If they differ, rescale_ts will convert; verify this is intentional. Many encoders use 1/rate for both.
33-36: Bitrate configurability.
OUTPUT_BITRATE is hardcoded (128k). If the PR goal includes "configurable bitrate," consider threading the audio bitrate from config.

Also applies to: 106-107
crates/enc-mediafoundation/src/unsafe_send.rs (1)
32-39: Display bound should be Display, not Debug, if used as Display.
Minor: impl<T> Display for UnsafeSend<T> where T: Debug formats via Debug. Either rename to DebugDisplay or bound on Display and use fmt::Display.

```diff
-impl<T> Display for UnsafeSend<T>
-where
-    T: std::fmt::Debug,
+impl<T> Display for UnsafeSend<T>
+where
+    T: std::fmt::Display,
 {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        self.0.fmt(f)
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        write!(f, "{}", self.0)
     }
 }
```

crates/enc-avfoundation/src/mp4.rs (2)
318-323: Pause before start: unwrap can panic.
If pause() is called before any frame, segment_first_timestamp is None.

```diff
-        self.elapsed_duration = self
-            .elapsed_duration
-            .add(time.sub(self.segment_first_timestamp.unwrap()));
-        self.segment_first_timestamp = None;
+        if let Some(seg0) = self.segment_first_timestamp.take() {
+            self.elapsed_duration = self.elapsed_duration.add(time.sub(seg0));
+        }
```
98-105: Downscale rounding: ensure dimensions are multiples of 2.
H.264 typically requires even dimensions. Round output_width/output_height to even numbers to avoid encoder failures.

```diff
-        let output_width = (video_config.width as f32 * downscale) as u32;
-        let output_height = output_height.unwrap_or(video_config.height);
+        let mut output_width = (video_config.width as f32 * downscale) as u32;
+        let mut output_height = output_height.unwrap_or(video_config.height);
+        output_width &= !1;
+        output_height &= !1;
```

crates/enc-avfoundation/src/lib.rs (1)
5-5: Avoid re-exporting the entire module surface
Consider exporting only the intended public API instead of pub use mp4::* to keep internals private and reduce accidental API surface. Also add a brief crate-level doc. Apply:

```diff
 #![cfg(target_os = "macos")]
+//! AVFoundation-based MP4 encoder (macOS only).

 mod mp4;

-pub use mp4::*;
+pub use mp4::{MP4Encoder /*, other intended types */};
```
11-26: Unify the error surface with the MP4 muxer
MP4 uses a typed InitError; here Box<dyn Error> loses context. Introduce OggInitError with variants (Ffmpeg, EncoderInit) for parity and better diagnostics.
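A sketch of what such a typed error could look like (variant payloads are placeholders; the real thing would wrap the ffmpeg error type rather than String):

```rust
use std::fmt;

// Hypothetical typed init error for the OGG muxer, mirroring the MP4 muxer's
// InitError shape: one variant per failure stage.
#[derive(Debug)]
enum OggInitError {
    Ffmpeg(String),
    EncoderInit(String),
}

impl fmt::Display for OggInitError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            OggInitError::Ffmpeg(e) => write!(f, "ffmpeg error: {e}"),
            OggInitError::EncoderInit(e) => write!(f, "encoder init failed: {e}"),
        }
    }
}

impl std::error::Error for OggInitError {}
```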
27-29: Optional: API symmetry with the AudioEncoder trait
If other encoders return Results on queue, consider returning Result<(), E> here for parity and backpressure on encoder/output failures.
9-9: Magic number for MF_VERSION
Prefer deriving MF_VERSION from SDK/API constants if available (MF_SDK_VERSION/MF_API_VERSION), or document the derivation inline to avoid confusion.
11-13: Make the packing helper const and tighten casts
Small improvement: allows compile-time folding and clarifies intent. Apply:

```diff
-fn pack_2_u32_as_u64(high: u32, low: u32) -> u64 {
-    ((high as u64) << 32) | low as u64
-}
+const fn pack_2_u32_as_u64(high: u32, low: u32) -> u64 {
+    ((high as u64) << 32) | (low as u64)
+}
```
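The helper's behavior is easy to pin down with a standalone copy of the const fn from the diff above:

```rust
// High word lands in the upper 32 bits, low word in the lower 32; this is
// the layout IMFAttributes::SetUINT64 expects for 2xUINT32 attributes.
const fn pack_2_u32_as_u64(high: u32, low: u32) -> u64 {
    ((high as u64) << 32) | (low as u64)
}
```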
15-23: Remove unnecessary inner unsafe blocks
These methods are already in an unsafe fn; the calls themselves are safe. Drop the nested unsafe { ... } to reduce noise. Apply:

```diff
-unsafe fn MFSetAttribute2UINT32asUINT64(
+unsafe fn MFSetAttribute2UINT32asUINT64(
@@
-) -> Result<()> {
-    unsafe { attributes.SetUINT64(key, pack_2_u32_as_u64(high, low)) }
-}
+) -> Result<()> {
+    attributes.SetUINT64(key, pack_2_u32_as_u64(high, low))
+}
```
25-33: These wrappers can likely be safe
They simply set attributes on a valid IMFAttributes; consider making them safe fns to push unsafety to the true FFI boundaries. If call sites rely on unsafe, keep as-is for now. Apply:

```diff
-#[allow(non_snake_case)]
-pub unsafe fn MFSetAttributeSize(
+#[allow(non_snake_case)]
+pub fn MFSetAttributeSize(
@@
-) -> Result<()> {
-    unsafe { MFSetAttribute2UINT32asUINT64(attributes, key, width, height) }
+) -> Result<()> {
+    MFSetAttribute2UINT32asUINT64(attributes, key, width, height)
 }
```

Do the same for MFSetAttributeRatio, and adjust call sites accordingly if you adopt this.
crates/enc-mediafoundation/src/mft.rs (3)
56-58: Be explicit with the ActivateObject generic
Type inference can be brittle here. Specify the target interface for clarity. Apply:

```diff
-    pub fn create_transform(&self) -> Result<IMFTransform> {
-        unsafe { self.source.ActivateObject() }
-    }
+    pub fn create_transform(&self) -> Result<IMFTransform> {
+        unsafe { self.source.ActivateObject::<IMFTransform>() }
+    }
```
34-48: Pre-reserve capacity for discovered devices
A minor allocation win when many encoders are present. Apply:

```diff
-        let mut encoder_devices = Vec::new();
+        let mut encoder_devices = Vec::with_capacity(encoders.len());
```
23-33: API naming vs. behavior
enumerate(major_type, subtype) always queries MFT_CATEGORY_VIDEO_ENCODER. Either rename it to enumerate_video_encoders or add category: GUID to the signature for future-proofing.
crates/enc-mediafoundation/src/async_callback.rs (1)
34-36: Optional: return default parameters instead of E_NOTIMPL.
Not required, but you can set flags/queue to defaults to avoid E_NOTIMPL branches in some callers.

```diff
-    fn GetParameters(&self, _pdwflags: *mut u32, _pdwqueue: *mut u32) -> windows::core::Result<()> {
-        Err(windows::core::Error::from_hresult(E_NOTIMPL))
-    }
+    fn GetParameters(&self, pdwflags: *mut u32, pdwqueue: *mut u32) -> windows::core::Result<()> {
+        unsafe {
+            if !pdwflags.is_null() { *pdwflags = 0; }
+            if !pdwqueue.is_null() { *pdwqueue = 0; }
+        }
+        Ok(())
+    }
```

crates/enc-mediafoundation/src/video/video_processor.rs (3)
96-104: Color space bitfield composition is fragile; prefer the ColorSpace1 APIs when available.
Manually OR-ing _bitfield is error-prone. If your min OS supports it, use VideoProcessorSetOutputColorSpace1/SetStreamColorSpace1 with D3D11_VIDEO_PROCESSOR_COLOR_SPACE1 and set NominalRange explicitly. Otherwise, document the bit layout and constants used.
249-259: Guard against zero dimensions in the aspect-ratio helpers.
Division by zero is possible if any dimension is 0 (defensive programming).

```diff
 fn compute_scale_factor(output_size: Vector2, input_size: Vector2) -> f32 {
-    let output_ratio = output_size.X / output_size.Y;
-    let input_ratio = input_size.X / input_size.Y;
+    if output_size.X == 0.0 || output_size.Y == 0.0 || input_size.X == 0.0 || input_size.Y == 0.0 {
+        return 1.0;
+    }
+    let output_ratio = output_size.X / output_size.Y;
+    let input_ratio = input_size.X / input_size.Y;
```

Also applies to: 261-290
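The guard can be checked in isolation. This sketch swaps the Windows Vector2 for plain (f32, f32) tuples and assumes a letterbox/pillarbox fit for the rest of the function body, which isn't shown in the diff:

```rust
// Simplified stand-in for the video processor's scale computation: returns a
// neutral scale of 1.0 when any dimension is zero, otherwise fits the input
// inside the output (assumed letterbox/pillarbox behavior).
fn compute_scale_factor(output: (f32, f32), input: (f32, f32)) -> f32 {
    if output.0 == 0.0 || output.1 == 0.0 || input.0 == 0.0 || input.1 == 0.0 {
        return 1.0;
    }
    let output_ratio = output.0 / output.1;
    let input_ratio = input.0 / input.1;
    if output_ratio < input_ratio {
        output.0 / input.0 // output is narrower: fit to width
    } else {
        output.1 / input.1 // output is wider or same: fit to height
    }
}
```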
125-138: Bind flags: ensure they match format requirements.

- Output NV12 surfaces commonly work with D3D11_BIND_RENDER_TARGET for VP output, but some drivers are picky. If you hit CreateTexture2D failures on NV12, drop RENDER_TARGET and rely on VIDEO_ENCODER; otherwise keep as is.
- The input texture probably doesn't need D3D11_BIND_RENDER_TARGET; SHADER_RESOURCE is sufficient for VP input.

```diff
-    texture_desc.BindFlags = (D3D11_BIND_RENDER_TARGET.0 | D3D11_BIND_SHADER_RESOURCE.0) as u32;
+    texture_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE.0 as u32;
```

Also applies to: 166-176
crates/scap-direct3d/src/lib.rs (2)
193-198: Simplify and avoid the double unwrap; propagate errors cleanly.
The current map/transpose/unwrap chain is hard to read and can panic. Prefer explicit handling.

```diff
-        let (d3d_device, d3d_context) = d3d_device
-            .map(|d| unsafe { d.GetImmediateContext() }.map(|v| (d, v)))
-            .transpose()
-            .unwrap()
-            .unwrap();
+        let d = d3d_device.expect("D3D11CreateDevice should have initialized a device");
+        let d3d_context = unsafe { d.GetImmediateContext() }.map_err(StartRunnerError::D3DDevice)?;
+        let (d3d_device, d3d_context) = (d, d3d_context);
```
267-284: Handle the optional FrameArrived parameter without panicking.
The handler receives Option<…>. Don't expect(); ignore None safely.

```diff
-        let frame = frame_pool
-            .as_ref()
-            .expect("FrameArrived parameter was None")
-            .TryGetNextFrame()?;
+        let Some(frame_pool) = frame_pool.as_ref() else { return Ok(()); };
+        let frame = frame_pool.TryGetNextFrame()?;
```

crates/scap-ffmpeg/src/direct3d.rs (1)
18-35: Deduplicate the row-copy logic.
Both branches duplicate the blit; factor it into a small helper to reduce the surface for bugs.

```rust
fn copy_rows(src: &[u8], src_stride: usize, dst: &mut [u8], dst_stride: usize, row_len: usize, rows: usize) {
    for i in 0..rows {
        let s = &src[i * src_stride..i * src_stride + row_len];
        let d = &mut dst[i * dst_stride..i * dst_stride + row_len];
        d.copy_from_slice(s);
    }
}
```

Also applies to: 38-56
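A quick standalone exercise of that copy_rows helper, using arbitrary synthetic strides (2 rows of 3 meaningful bytes, source stride 4, destination stride 5):

```rust
// Self-contained copy of the suggested helper: copies `rows` rows of
// `row_len` bytes, skipping per-row padding implied by each stride.
fn copy_rows(src: &[u8], src_stride: usize, dst: &mut [u8], dst_stride: usize, row_len: usize, rows: usize) {
    for i in 0..rows {
        let s = &src[i * src_stride..i * src_stride + row_len];
        let d = &mut dst[i * dst_stride..i * dst_stride + row_len];
        d.copy_from_slice(s);
    }
}
```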
crates/scap-direct3d/examples/cli.rs (2)
15-16: Unused display (post-.raw_handle()).
If you go with Option B (stub), drop the unused variable to avoid warnings. If Option A, use it in Capturer::new.
10-13: Trim unused imports.
Prefer importing only AsFFmpeg. Remove D3D11_BOX unless cropping is re-enabled.

```diff
-use scap_ffmpeg::*;
+use scap_ffmpeg::AsFFmpeg;
-use windows::Win32::Graphics::Direct3D11::D3D11_BOX;
```

crates/enc-ffmpeg/src/audio/aac.rs (2)
34-35: Comment contradicts bitrate.Value is 320k, comment says 128k. Fix the comment or the constant.
- const OUTPUT_BITRATE: usize = 320 * 1000; // 128k
+ const OUTPUT_BITRATE: usize = 320 * 1000; // 320k
255-262: Disambiguate inherent vs trait method calls. Use fully-qualified syntax to avoid accidental recursion and silence clippy.
 fn queue_frame(&mut self, frame: frame::Audio, output: &mut format::context::Output) {
-    self.queue_frame(frame, output);
+    AACEncoder::queue_frame(self, frame, output);
 }

 fn finish(&mut self, output: &mut format::context::Output) {
-    self.finish(output);
+    AACEncoder::finish(self, output);
 }

crates/enc-ffmpeg/src/mux/mod.rs (1)
4-5: Consider feature-gating optional containers. If OGG isn't always needed, gate it behind a feature to reduce compile time for non-audio-only builds.
Example:
-mod ogg;
-pub use ogg::*;
+#[cfg(feature = "ogg")]
+mod ogg;
+#[cfg(feature = "ogg")]
+pub use ogg::*;

crates/export/Cargo.toml (1)
28-30: Gate the GIF stack behind a feature to avoid pulling heavy deps when not exporting GIFs. Optionalize these deps and add a "gif" feature.
-[dependencies]
+[dependencies]
 ...
-cap-enc-gif = { path = "../enc-gif" }
+cap-enc-gif = { path = "../enc-gif", optional = true }
 ...
-gifski = "1.32"
-imgref = "1.10"
-rgb = "0.8"
+gifski = { version = "1.32", optional = true }
+imgref = { version = "1.10", optional = true }
+rgb = { version = "0.8", optional = true }
+
+[features]
+default = []
+gif = ["dep:cap-enc-gif", "dep:gifski", "dep:imgref", "dep:rgb"]

crates/export/src/mp4.rs (4)
127-131: The 6s recv timeout is aggressive; risk of premature export truncation under load. Make it configurable or compute it from the frame budget.
-    match tokio::time::timeout(Duration::from_secs(6), video_rx.recv()).await {
+    let rx_timeout = Duration::from_secs(6); // TODO: make configurable
+    match tokio::time::timeout(rx_timeout, video_rx.recv()).await {
175-199: Screenshot file naming may overwrite previous exports. Optional: append a timestamp or frame number.
-    let screenshot_path = screenshots_dir.join("display.jpg");
+    let ts = chrono::Local::now().format("%Y%m%d-%H%M%S");
+    let screenshot_path = screenshots_dir.join(format!("display-{ts}.jpg"));
52-54: Channel sizing: consider increasing it if encoders are slower than render. Small buffers can cause backpressure and timeouts during IO spikes.
-    let (tx_image_data, mut video_rx) = tokio::sync::mpsc::channel::<(RenderedFrame, u32)>(4);
-    let (frame_tx, frame_rx) = std::sync::mpsc::sync_channel::<MP4Input>(4);
+    let (tx_image_data, mut video_rx) = tokio::sync::mpsc::channel::<(RenderedFrame, u32)>(8);
+    let (frame_tx, frame_rx) = std::sync::mpsc::sync_channel::<MP4Input>(8);
78-93: Future MF path: keep the H.264 profile/level aligned with the muxer. When swapping in hardware frames from Media Foundation, ensure the encoder's SPS/PPS (profile/level, time_base) match the muxer's expectations to avoid "non-monotone PTS" and "invalid extradata" at moov finalize.
crates/enc-ffmpeg/src/video/mod.rs (1)
1-2: Optional: feature-gate codec modules. If more codecs are added, consider per-codec features to trim builds.
-mod h264;
-pub use h264::*;
+#[cfg(feature = "h264")]
+mod h264;
+#[cfg(feature = "h264")]
+pub use h264::*;

crates/recording/Cargo.toml (1)
37-37: Remove tracing-subscriber from [dependencies]. It's only used in the crate's examples; keep it under [dev-dependencies] and align its feature set (e.g. features = ["fmt"]) to prevent feature-unification surprises.
crates/enc-mediafoundation/src/lib.rs (1)
1-10: Windows-gated crate root and re-exports look correct. Good target scoping. Optional: consider namespacing the re-export (e.g., MfH264Encoder) to avoid collisions with similarly named types from enc-ffmpeg in downstream imports.
crates/enc-ffmpeg/src/lib.rs (1)
1-8: Be cautious with root-level glob re-exports. pub use audio::*, video::*, and mux::* creates a very wide public surface and increases semver churn risk. Consider exposing modules (pub mod audio/video/mux) and curating root re-exports for only the intended types.
crates/enc-ffmpeg/src/audio/mod.rs (1)
1-8: Consider curating re-exports. Glob re-exports ease use but make the API less explicit. Optionally re-export only the stable entry points (e.g., AudioEncoder, OpusEncoder, AacEncoder) and keep the rest under audio::.
crates/recording/src/instant_recording.rs (1)
192-201: Prefer configurable FPS/cursor settings over hardcoded values. Hardcoding true and 30 makes tuning difficult. Thread these from inputs/config to keep Instant mode aligned with Studio.
crates/enc-mediafoundation/src/video/mod.rs (1)
1-7: Gate Media Foundation modules to Windows to avoid cross-platform build breaks. If this crate is compiled on non-Windows targets in the workspace, the submodules will fail to build. Either gate the crate at the Cargo level or gate the modules/exports here.
Apply:
-mod h264;
-mod sample_writer;
-mod video_processor;
+#[cfg(target_os = "windows")]
+mod h264;
+#[cfg(target_os = "windows")]
+mod sample_writer;
+#[cfg(target_os = "windows")]
+mod video_processor;

-pub use h264::*;
-pub use sample_writer::*;
-pub use video_processor::*;
+#[cfg(target_os = "windows")]
+pub use h264::*;
+#[cfg(target_os = "windows")]
+pub use sample_writer::*;
+#[cfg(target_os = "windows")]
+pub use video_processor::*;

crates/enc-ffmpeg/src/mux/mp4.rs (1)
115-121: Exposing mutable access to the inner H264Encoder: clarify invariants or narrow the API. Direct &mut access allows callers to call finish/configure in ways that can violate MP4File's lifecycle assumptions. Either document constraints or offer narrow methods on MP4File instead.
+    /// Access the video encoder. Do not call `finish` directly; use `MP4File::finish`.
     pub fn video(&self) -> &H264Encoder {
         &self.video
     }

+    /// Mutable access for runtime tuning; do not take ownership or finalize here.
     pub fn video_mut(&mut self) -> &mut H264Encoder {
         &mut self.video
     }

crates/recording/src/studio_recording.rs (1)
685-686: Max FPS is hardcoded to 60; confirm intent and consider making it configurable. If high-refresh displays (120/144/165Hz) are expected, wiring this from settings or capture_target avoids silent downsampling.
crates/enc-mediafoundation/Cargo.toml (1)
11-44: Trim the Windows feature set to what's actually used to cut build time. The feature list is broad; reducing it to only the referenced APIs (e.g., D3D11, DXGI, MediaFoundation, WinRT Direct3D11) speeds CI and avoids accidental API surface creep.
If you want, I can scan the crate and derive a minimal features list.
crates/recording/src/sources/screen_capture/mod.rs (3)
201-203: New D3D device field + unsafe Send/Sync: verify the thread-safety contract. Adding ID3D11Device changes the safety story of the existing unsafe impl Send + Sync. D3D11 devices are generally multi-thread capable, but please confirm the windows-rs wrapper's Send/Sync status and that the device is used per D3D11 multithread rules. Optionally enforce protection at device creation (in enc-mediafoundation/d3d.rs):
+use windows::Win32::Graphics::Direct3D11::ID3D11Multithread;

 // after successful creation:
-Ok(device.unwrap())
+// Enable D3D11 multithread protection (no-op if cast fails)
+unsafe {
+    if let Ok(mt) = device.cast::<ID3D11Multithread>() {
+        mt.SetMultithreadProtected(true);
+    }
+}
+Ok(device.unwrap())

If you want, I can add a compile-time check to assert Send+Sync on Windows only.
Also applies to: 220-222
290-290: Avoid the fps=0 edge case. If max_fps is 0, fps becomes 0. Clamp to at least 1 to avoid downstream divide-by-zero or encoder setup failures.

-    let fps = max_fps.min(display.refresh_rate() as u32);
+    let fps = max_fps.max(1).min(display.refresh_rate() as u32);
373-389: Even-dimension alignment is only applied when cropped (Windows). Some H.264 MFTs require even width/height; displays are usually even, but not guaranteed. Apply the alignment after computing output_size so it also covers the non-cropped path.

 let output_size = crop_bounds
     .and_then(|b| {
         #[cfg(target_os = "macos")]
         {
             let logical_size = b.size();
             let scale = display.raw_handle().scale()?;
             Some(PhysicalSize::new(
                 logical_size.width() * scale,
                 logical_size.height() * scale,
             ))
         }

         #[cfg(windows)]
-        Some(b.size().map(|v| (v / 2.0).floor() * 2.0))
+        Some(b.size())
     })
     .or_else(|| display.physical_size())
     .ok_or(ScreenCaptureInitError::NoBounds)?;
+
+// Ensure even dimensions for Windows encoders (cropped or full-screen).
+#[cfg(windows)]
+let output_size = output_size.map(|s| s.map(|v| (v / 2.0).floor() * 2.0));
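The even-alignment expression itself is easy to sanity-check in isolation (the function name below is invented for the sketch; the formula is the one from the diff):

```rust
// Round a physical dimension down to an even value, as the Windows branch
// does for H.264 encoders that require even width/height.
fn align_even(v: f64) -> f64 {
    (v / 2.0).floor() * 2.0
}

fn main() {
    assert_eq!(align_even(1081.0), 1080.0); // odd -> rounded down
    assert_eq!(align_even(1080.0), 1080.0); // even -> unchanged
    assert_eq!(align_even(733.9), 732.0);   // fractional -> floored to even
}
```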
48-53: Surface the WARP fallback decision (log or metric). When DXGI_ERROR_UNSUPPORTED triggers WARP, emit a warning/metric to aid diagnostics on systems without HW acceleration. If tracing is available in this crate:

+use tracing::warn;
 ...
 if let Err(error) = &result {
     if error.code() == DXGI_ERROR_UNSUPPORTED {
+        warn!("D3D11 hardware device unsupported; falling back to WARP.");
         result = create_d3d_device_with_type(D3D_DRIVER_TYPE_WARP, flags, &mut device);
     }
 }
39-56: Replace unwrap() on the device with a proper error. Very unlikely, but returning Ok with None would panic. Prefer converting to a Windows error.

-use windows::Win32::Foundation::HMODULE;
+use windows::Win32::Foundation::{HMODULE, E_UNEXPECTED};
 ...
-    result?;
-    Ok(device.unwrap())
+    result?;
+    let device = device.ok_or(windows::core::Error::from(E_UNEXPECTED))?;
+    Ok(device)
39-56: Enable D3D11 multithread protection after device creation. This protects against accidental multi-threaded use of the immediate context; harmless otherwise.
 use windows::Win32::Graphics::Direct3D11::{D3D11_CREATE_DEVICE_DEBUG, ID3D11Texture2D};
+use windows::Win32::Graphics::Direct3D11::ID3D11Multithread;
 ...
-    let device = device.ok_or(windows::core::Error::from(E_UNEXPECTED))?;
+    let device = device.ok_or(windows::core::Error::from(E_UNEXPECTED))?;
+    unsafe {
+        if let Ok(mt) = device.cast::<ID3D11Multithread>() {
+            mt.SetMultithreadProtected(true);
+        }
+    }
     Ok(device)
19-37: Optional: specify desired feature levels when creating the device. Passing an explicit feature-level array can avoid surprises on older adapters and make capability checks deterministic. I can wire D3D_FEATURE_LEVEL_11_0.._12_1 selection and plumb the negotiated level if useful.
crates/enc-mediafoundation/examples/cli.rs (3)
90-97: Use the borderless CLI flag to configure the capturer. You accept --borderless but hardcode is_border_required: Some(false). Wire it up.

-    scap_direct3d::Settings {
-        is_border_required: Some(false),
+    scap_direct3d::Settings {
+        is_border_required: Some(!args.borderless),
         ..Default::default()
     },

Also pass args.borderless into run(...) or plumb a parameter.
Also applies to: 306-313
219-220: Prefer logical OR (||) over bitwise OR (|) for booleans. Minor style/readability; avoids evaluating the RHS unnecessarily.

-    verbose | wait_for_debugger,
+    verbose || wait_for_debugger,
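The short-circuit difference is observable when the right-hand side has a side effect. A standalone sketch (the helper function is invented for illustration):

```rust
// Returns true and records that it ran.
fn side_effect(flag: &mut bool) -> bool {
    *flag = true;
    true
}

fn main() {
    // `|` on bools evaluates both operands unconditionally.
    let mut touched = false;
    let _ = true | side_effect(&mut touched);
    assert!(touched);

    // `||` short-circuits: the RHS never runs when the LHS is already true.
    let mut touched2 = false;
    let _ = true || side_effect(&mut touched2);
    assert!(!touched2);
}
```

For plain flags like verbose and wait_for_debugger either operator produces the same value, so this is purely a readability/convention fix.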
246-257: Make validate_path case-insensitive for .mp4. Windows file systems are typically case-insensitive; accept MP4 as well.

-    if let Some(extension) = path.extension() {
-        if extension != "mp4" {
+    if let Some(extension) = path.extension().and_then(|e| e.to_str()) {
+        if !extension.eq_ignore_ascii_case("mp4") {
             valid = false;
         }

crates/enc-mediafoundation/src/video/sample_writer.rs (1)
20-22: Reconsider unsafe Send/Sync on SampleWriter. The IMFSinkWriter threading model can be nuanced; marking Send + Sync blindly risks UB if used concurrently.
- If you only use SampleWriter from one thread, drop Sync and avoid cross-thread sharing.
- If cross-thread use is required, document/thread-confine actual calls and guard with a Mutex and a single worker thread.
66-83: Validate the stream setup sequence and attributes. When writing an H.264 bitstream, the Sink Writer typically needs codec config (e.g., MF_MT_MPEG_SEQUENCE_HEADER) on the input type. Ensure H264Encoder::output_type() carries SPS/PPS; if not, populate it before BeginWriting.
If needed, set:
- MF_MT_MPEG_SEQUENCE_HEADER on output_type
- MF_MT_FRAME_RATE / FRAME_SIZE already present
crates/recording/src/capture_pipeline.rs (2)
525-583: Deduplicate D3D device creation (use the shared helper). This re-implements create_d3d_device; prefer cap_enc_mediafoundation::d3d::create_d3d_device() to avoid divergence.
-#[cfg(windows)]
-pub fn create_d3d_device() ...
+#[cfg(windows)]
+pub use cap_enc_mediafoundation::d3d::create_d3d_device;
473-476: Avoid unwrap() on frame.as_ffmpeg() in Instant mode. This can panic on format conversion failure; log and continue.

-    mp4.queue_video_frame(frame.as_ffmpeg().unwrap());
+    if let Ok(ff) = frame.as_ffmpeg() {
+        mp4.queue_video_frame(ff);
+    } else {
+        tracing::warn!("Failed to convert frame to ffmpeg format; skipping frame");
+    }

crates/recording/src/sources/screen_capture/windows.rs (2)
419-422: Remove the unused timestamp computation (dead code). timestamp is computed but never used. Drop it or wire it into downstream PTS.

Apply:

-    let timestamp = TimeSpan {
-        Duration: frame_time.Duration - first_time.Duration,
-    };
151-170: Drop-rate abort threshold: consider configurability and debounce. Hard-exiting when the drop rate exceeds 25% over ~3s might be too aggressive on busy systems; users may prefer continued capture with a warning.
Make the threshold and window duration configurable (via settings/flags), and gate shutdown on sustained exceedance across multiple windows.
Also applies to: 172-182
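A debounced version of the idea above can be sketched without any capture code. All names, thresholds, and window counts below are illustrative, not from the crate:

```rust
// Sliding-window drop-rate monitor with debounce: abort only after the
// threshold is exceeded for `required_windows` consecutive windows.
struct DropRateMonitor {
    threshold: f64,
    required_windows: u32,
    consecutive: u32,
}

impl DropRateMonitor {
    fn new(threshold: f64, required_windows: u32) -> Self {
        Self { threshold, required_windows, consecutive: 0 }
    }

    /// Feed one window's counts; returns true when capture should abort.
    fn observe(&mut self, dropped: u64, total: u64) -> bool {
        let rate = if total == 0 { 0.0 } else { dropped as f64 / total as f64 };
        if rate > self.threshold {
            self.consecutive += 1;
        } else {
            self.consecutive = 0; // recovered: reset the streak
        }
        self.consecutive >= self.required_windows
    }
}

fn main() {
    let mut m = DropRateMonitor::new(0.25, 2);
    assert!(!m.observe(30, 100)); // one bad window: warn only
    assert!(m.observe(40, 100));  // sustained: abort

    let mut m2 = DropRateMonitor::new(0.25, 2);
    assert!(!m2.observe(30, 100));
    assert!(!m2.observe(10, 100)); // recovered in between: no abort
}
```

Making threshold and required_windows settings-driven gives users the knob the comment asks for.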
crates/enc-mediafoundation/src/video/h264.rs (3)
371-385: Set the sample duration to help encoders expecting CFR. Many H.264 MFTs behave better when MFSampleDuration is set; you already set SampleTime. Storing frame_rate on the encoder enables precise duration.

Apply:
 pub struct H264Encoder {
@@
-    output_type: IMFMediaType,
+    output_type: IMFMediaType,
+    frame_rate: u32,
 }
@@
-    Ok(Self {
+    Ok(Self {
         _d3d_device: d3d_device.clone(),
@@
-        output_type,
+        output_type,
+        frame_rate,
     })
@@
-    unsafe {
+    unsafe {
         mf_sample.AddBuffer(&input_buffer)?;
         mf_sample.SetSampleTime(sample.timestamp.Duration)?;
+        // 10_000_000 ticks per second in MF Time
+        mf_sample.SetSampleDuration((10_000_000u64 / self.frame_rate as u64) as i64)?;
         self.transform
             .ProcessInput(self.input_stream_id, &mf_sample, 0)?;
     };
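The duration arithmetic is worth double-checking: Media Foundation time is in 100-nanosecond ticks, 10,000,000 per second. A standalone check (function name invented for the sketch):

```rust
// Constant-frame-rate sample duration in Media Foundation time units
// (100-ns ticks): ticks_per_second / fps, with integer truncation.
fn sample_duration_100ns(fps: u32) -> i64 {
    (10_000_000u64 / fps as u64) as i64
}

fn main() {
    assert_eq!(sample_duration_100ns(30), 333_333); // truncated from 333_333.3…
    assert_eq!(sample_duration_100ns(60), 166_666);
    assert_eq!(sample_duration_100ns(25), 400_000); // exact
}
```

The truncation at 30/60 fps loses a fraction of a tick per frame, which is negligible since SampleTime is still set from real timestamps.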
407-409: Bitrate heuristic: avoid integer truncation and odd scaling. ((fps - 30) / 2 + 30) truncates; e.g., 59 → 44. Prefer a continuous function and float math to avoid step changes.

Example:
-fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
-    ((width * height * ((fps - 30) / 2 + 30)) as f32 * multiplier) as u32
-}
+fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
+    ((width as f32) * (height as f32) * (fps as f32) * multiplier) as u32
+}

Optionally clamp to a min/max to keep devices happy.
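The step behavior is easy to verify with toy dimensions (function bodies copied from the two variants above; the names are ours):

```rust
// Original formula: integer division makes the fps term step every 2 fps.
fn bitrate_stepped(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
    ((width * height * ((fps - 30) / 2 + 30)) as f32 * multiplier) as u32
}

// Suggested replacement: continuous in fps, computed in float.
fn bitrate_continuous(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
    ((width as f32) * (height as f32) * (fps as f32) * multiplier) as u32
}

fn main() {
    // Integer division truncates: (59 - 30) / 2 + 30 == 44, not 44.5.
    assert_eq!((59u32 - 30) / 2 + 30, 44);
    // 58 and 59 fps land on the same step under the original formula...
    assert_eq!(bitrate_stepped(100, 100, 58, 1.0), bitrate_stepped(100, 100, 59, 1.0));
    // ...but differ under the continuous one.
    assert!(bitrate_continuous(100, 100, 58, 1.0) < bitrate_continuous(100, 100, 59, 1.0));
}
```

Note the original also underflows for fps < 30 (u32 subtraction), another reason to prefer the float form with a clamp.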
226-228: Consider removing MF_MT_ALL_SAMPLES_INDEPENDENT for H.264. H.264 streams aren't all independent frames; some MFTs ignore this, but it can confuse stream descriptors.
Safe to omit this attribute unless you have a specific device requiring it.
| #[cfg(windows)] | ||
| let d3d_device = crate::capture_pipeline::create_d3d_device().unwrap(); | ||
|
|
||
| let (screen_source, screen_rx) = create_screen_capture( | ||
| &capture_target, | ||
| !custom_cursor_capture, | ||
| 120, | ||
| 60, | ||
| system_audio.0, | ||
| start_time, | ||
| #[cfg(windows)] | ||
| d3d_device, | ||
| ) | ||
| .await?; | ||
| .await | ||
| .unwrap(); | ||
|
|
Remove unwraps in the hot path; propagate errors instead (prevents user-facing panics)
Unwraps here will crash on device/init failures. Convert to typed errors consistent with CreateSegmentPipelineError.
#[cfg(windows)]
-let d3d_device = crate::capture_pipeline::create_d3d_device().unwrap();
+let d3d_device = crate::capture_pipeline::create_d3d_device()
+ .map_err(|e| CreateSegmentPipelineError::Media(MediaError::Any(e.to_string().into())))?;
let (screen_source, screen_rx) = create_screen_capture(
&capture_target,
!custom_cursor_capture,
- 120,
60,
system_audio.0,
start_time,
#[cfg(windows)]
d3d_device,
)
-.await
-.unwrap();
+.await?;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| #[cfg(windows)] | |
| let d3d_device = crate::capture_pipeline::create_d3d_device().unwrap(); | |
| let (screen_source, screen_rx) = create_screen_capture( | |
| &capture_target, | |
| !custom_cursor_capture, | |
| 120, | |
| 60, | |
| system_audio.0, | |
| start_time, | |
| #[cfg(windows)] | |
| d3d_device, | |
| ) | |
| .await?; | |
| .await | |
| .unwrap(); | |
| #[cfg(windows)] | |
| let d3d_device = crate::capture_pipeline::create_d3d_device() | |
| .map_err(|e| CreateSegmentPipelineError::Media(MediaError::Any(e.to_string().into())))?; | |
| let (screen_source, screen_rx) = create_screen_capture( | |
| &capture_target, | |
| !custom_cursor_capture, | |
| 60, | |
| system_audio.0, | |
| start_time, | |
| #[cfg(windows)] | |
| d3d_device, | |
| ) | |
| .await?; |
🤖 Prompt for AI Agents
In crates/recording/src/studio_recording.rs around lines 679–693, replace the
hot-path unwraps so failures are propagated as typed errors instead of
panicking: change crate::capture_pipeline::create_d3d_device().unwrap() to
propagate the error (use ? or map_err to CreateSegmentPipelineError) and remove
the final .unwrap() after create_screen_capture().await by returning its Result
(use ? or map_err to CreateSegmentPipelineError). Update the enclosing function
signature to return Result<..., CreateSegmentPipelineError> (or wrap/match
errors into that type), ensure the #[cfg(windows)] d3d_device handling compiles
for non-Windows (use an Option or cfg-specific binding), and propagate any
intermediate errors rather than panicking.
| if d3d_device.is_none() { | ||
| unsafe { | ||
| D3D11CreateDevice( | ||
| None, | ||
| D3D_DRIVER_TYPE_HARDWARE, | ||
| HMODULE::default(), | ||
| Default::default(), | ||
| None, | ||
| D3D11_SDK_VERSION, | ||
| Some(&mut d3d_device), | ||
| None, | ||
| None, | ||
| ) | ||
| } | ||
| .map_err(StartRunnerError::D3DDevice)?; | ||
| } |
🛠️ Refactor suggestion
Create D3D device with BGRA (and optionally VIDEO_SUPPORT) flags for WinRT capture interop.
Windows.Graphics.Capture interop typically requires D3D11_CREATE_DEVICE_BGRA_SUPPORT. Without it, CreateDirect3D11DeviceFromDXGIDevice may fail.
- D3D11CreateDevice(
+ let flags = windows::Win32::Graphics::Direct3D11::D3D11_CREATE_DEVICE_BGRA_SUPPORT;
+ D3D11CreateDevice(
None,
D3D_DRIVER_TYPE_HARDWARE,
HMODULE::default(),
- Default::default(),
+ flags,
None,
D3D11_SDK_VERSION,
Some(&mut d3d_device),
None,
None,
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if d3d_device.is_none() { | |
| unsafe { | |
| D3D11CreateDevice( | |
| None, | |
| D3D_DRIVER_TYPE_HARDWARE, | |
| HMODULE::default(), | |
| Default::default(), | |
| None, | |
| D3D11_SDK_VERSION, | |
| Some(&mut d3d_device), | |
| None, | |
| None, | |
| ) | |
| } | |
| .map_err(StartRunnerError::D3DDevice)?; | |
| } | |
| if d3d_device.is_none() { | |
| unsafe { | |
| // Enable BGRA support for WinRT capture interop | |
| let flags = windows::Win32::Graphics::Direct3D11::D3D11_CREATE_DEVICE_BGRA_SUPPORT; | |
| D3D11CreateDevice( | |
| None, | |
| D3D_DRIVER_TYPE_HARDWARE, | |
| HMODULE::default(), | |
| flags, | |
| None, | |
| D3D11_SDK_VERSION, | |
| Some(&mut d3d_device), | |
| None, | |
| None, | |
| ) | |
| } | |
| .map_err(StartRunnerError::D3DDevice)?; | |
| } |
🤖 Prompt for AI Agents
In crates/scap-direct3d/src/lib.rs around lines 176–191, the D3D11CreateDevice
call is creating the device without D3D11_CREATE_DEVICE_BGRA_SUPPORT (and
optionally D3D11_CREATE_DEVICE_VIDEO_SUPPORT) which is required for
Windows.Graphics.Capture/WinRT interop; update the creation flags argument
(currently Default::default()) to include D3D11_CREATE_DEVICE_BGRA_SUPPORT
(bitwise-OR with D3D11_CREATE_DEVICE_VIDEO_SUPPORT if video decoding is needed)
so CreateDirect3D11DeviceFromDXGIDevice can succeed; keep the rest of the
parameters unchanged and ensure the flags value is passed in the same position
as the current Default::default().
| let direct3d_device = (|| { | ||
| let dxgi_device = d3d_device.cast::<IDXGIDevice>()?; | ||
| let inspectable = unsafe { CreateDirect3D11DeviceFromDXGIDevice(&dxgi_device) }?; | ||
| inspectable.cast::<IDirect3DDevice>() | ||
| })() | ||
| .unwrap(); | ||
| // .map_err(StartRunnerError::Direct3DDevice)?; |
Replace panicking unwrap/expect with error propagation.
Numerous unwrap/expect calls can crash the app on recoverable OS/driver errors. Convert to Result and bubble via NewCapturerError/StartRunnerError.
- })()
- .unwrap();
+ })().map_err(StartRunnerError::Direct3DDevice)?;
@@
- ).unwrap();
+ ).map_err(StartRunnerError::FramePool)?;
@@
- let session = frame_pool.CreateCaptureSession(&item).unwrap();
+ let session = frame_pool.CreateCaptureSession(&item).map_err(StartRunnerError::CaptureSession)?;
@@
- session.SetIsBorderRequired(border_required).unwrap();
+ session.SetIsBorderRequired(border_required).map_err(StartRunnerError::CaptureSession)?;
@@
- .SetIsCursorCaptureEnabled(cursor_capture_enabled)
- .unwrap();
+ .SetIsCursorCaptureEnabled(cursor_capture_enabled)
+ .map_err(StartRunnerError::CaptureSession)?;
@@
- .SetMinUpdateInterval(min_update_interval.into())
- .unwrap();
+ .SetMinUpdateInterval(min_update_interval.into())
+ .map_err(StartRunnerError::CaptureSession)?;
@@
- .unwrap();
+ .map_err(StartRunnerError::RegisterFrameArrived)?;
@@
- )
- .unwrap();
+ ).map_err(StartRunnerError::RegisterClosed)?;
@@
- self.session.Close().unwrap();
+ self.session.Close().map_err(|_| StopCapturerError::PostMessageFailed)?;Also applies to: 211-219, 220-222, 223-237, 267-328, 331-337, 402-403
🤖 Prompt for AI Agents
In crates/scap-direct3d/src/lib.rs around lines 203-209 (and similarly for
ranges 211-219, 220-222, 223-237, 267-328, 331-337, 402-403), the code uses
unwrap/expect which panics on recoverable errors; change these sites to return
Result types and propagate errors instead of panicking. Replace the immediate
unwraps by using the ? operator (or map_err to convert underlying HRESULT/COM
errors) so the closure and calling functions return Result<...,
StartRunnerError> or Result<..., NewCapturerError> as appropriate, mapping
errors to the correct variant (e.g., StartRunnerError::Direct3DDevice or
NewCapturerError) and update function signatures to return Result so callers can
bubble the error up. Ensure all affected blocks and closures return Result,
convert all unwrap/expect to ?/map_err, and wire the error conversions
consistently across the listed line ranges.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (12)
crates/recording/src/capture_pipeline.rs (1)
533-547: Add missing d3d_device argument to create_screen_capture calls
crates/recording/src/instant_recording.rs (line 192) and crates/recording/src/studio_recording.rs (line 682) need to pass the d3d_device (e.g. from create_d3d_device().unwrap()). Consider centralizing device creation in a shared helper.
crates/camera-windows/src/lib.rs (2)
89-93: Don’t unwrap in the capture callback; handle Media Foundation errors gracefully.A failed
GetBufferCountwill panic and kill the capture thread.- let len = unsafe { sample.GetBufferCount() }.unwrap(); + let len = match unsafe { sample.GetBufferCount() } { + Ok(len) => len, + Err(e) => { + eprintln!("GetBufferCount failed: {e}"); + return; + } + };
128-129: Handle negative biHeight (top-down DIB) correctly. DirectShow can report negative height; casting it directly to usize yields a huge value.

-    width: video_info.bmiHeader.biWidth as usize,
-    height: video_info.bmiHeader.biHeight as usize,
+    width: video_info.bmiHeader.biWidth as usize,
+    height: if video_info.bmiHeader.biHeight < 0 {
+        (-video_info.bmiHeader.biHeight) as usize
+    } else {
+        video_info.bmiHeader.biHeight as usize
+    },

crates/recording/src/pipeline/audio_buffer.rs (2)
54-58: Bug: slice length for packed audio is off by 4× and drops samples. frame.data(0) is a byte slice; you must slice in bytes (samples × channels × 4 for f32). Also, deinterleave into per-channel queues so next_frame can consistently interleave.

-    self.data[0].extend(unsafe {
-        cast_bytes_to_f32_slice(
-            &frame.data(0)[0..frame.samples() * frame.channels() as usize],
-        )
-    });
+    let byte_len = frame.samples() * frame.channels() as usize * std::mem::size_of::<f32>();
+    let src = &frame.data(0)[..byte_len];
+    let samples = unsafe { cast_bytes_to_f32_slice(src) };
+    let ch = self.config.channels as usize;
+    for (i, &s) in samples.iter().enumerate() {
+        self.data[i % ch].push_back(s);
+    }
71-76: Always produce packed output and zero-pad the tail on drain. The current path relies on self.frame.is_planar() (it's created as packed), so the planar branch is dead and planar inputs never get emitted. Also, the drain path can leave stale bytes at the end of the frame.

-    if self.frame.is_planar() {
-        for channel in 0..self.config.channels {
-            for (index, byte) in self.data[channel]
-                .drain(0..actual_samples_per_channel)
-                .enumerate()
-            {
-                self.frame.plane_data_mut(channel)[index * 4..(index + 1) * 4]
-                    .copy_from_slice(&byte.to_ne_bytes());
-            }
-        }
-    } else {
-        for (index, byte) in self.data[0]
-            .drain(0..actual_samples_per_channel * self.config.channels)
-            .enumerate()
-        {
-            self.frame.plane_data_mut(0)[index * 4..(index + 1) * 4]
-                .copy_from_slice(&byte.to_ne_bytes());
-        }
-    }
+    // Interleave per-channel queues into packed f32 frame
+    let ch = self.config.channels as usize;
+    let dst = self.frame.plane_data_mut(0);
+    for s in 0..actual_samples_per_channel {
+        for c in 0..ch {
+            let sample = self.data[c].pop_front().unwrap_or(0.0);
+            let i = s * ch + c;
+            let off = i * 4;
+            dst[off..off + 4].copy_from_slice(&sample.to_ne_bytes());
+        }
+    }
+    // Zero-pad remaining samples on drain to avoid stale data
+    if drain && actual_samples_per_channel < self.frame_size {
+        let start = actual_samples_per_channel * ch * 4;
+        let end = self.frame_size * ch * 4;
+        for b in &mut dst[start..end] {
+            *b = 0;
+        }
+    }

Also applies to: 77-95, 97-98
crates/cpal-ffmpeg/Cargo.toml (1)
4-4: Pin Rust toolchain to support edition "2024"
Add a rust-toolchain.toml at the repository root pinning a Rust version that supports Edition 2024 for CI and local development.
crates/media-info/src/lib.rs (2)
119-121: Potential panic on oversized input; clamp the copy length. Copying data.len() bytes into frame.data_mut(0) can slice out of bounds. Clamp to the destination buffer size.

-    } else if self.channels == 1 || frame.is_packed() {
-        frame.data_mut(0)[0..data.len()].copy_from_slice(data)
+    } else if self.channels == 1 || frame.is_packed() {
+        let max = std::cmp::min(data.len(), frame.data_mut(0).len());
+        frame.data_mut(0)[..max].copy_from_slice(&data[..max])
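The clamped-copy pattern is simple to demonstrate in isolation (the helper below is illustrative; the real code copies into an ffmpeg frame plane):

```rust
// Copy as many bytes as both buffers allow; never panic on size mismatch.
fn copy_clamped(src: &[u8], dst: &mut [u8]) -> usize {
    let n = src.len().min(dst.len());
    dst[..n].copy_from_slice(&src[..n]);
    n
}

fn main() {
    let src = [1u8, 2, 3, 4, 5];

    // Destination smaller than source: truncated, no panic.
    let mut small = [0u8; 3];
    assert_eq!(copy_clamped(&src, &mut small), 3);
    assert_eq!(small, [1, 2, 3]);

    // Destination larger than source: copies everything, tail untouched.
    let mut big = [0u8; 8];
    assert_eq!(copy_clamped(&src, &mut big), 5);
    assert_eq!(&big[..5], &[1, 2, 3, 4, 5]);
}
```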
219-254: Stride check is wrong and NV12 second plane is ignored.
- The branch compares frame.stride(0) to self.width, which is unrelated (width != bytes/line).
- NV12 has two planes (Y + interleaved UV); only plane 0 is filled, producing incorrect colors.
 pub fn wrap_frame(&self, data: &[u8], timestamp: i64, stride: usize) -> frame::Video {
     let mut frame = frame::Video::new(self.pixel_format, self.width, self.height);
     frame.set_pts(Some(timestamp));

-    let frame_stride = frame.stride(0);
+    let frame_stride_y = frame.stride(0);
     let frame_height = self.height as usize;

-    // Ensure we don't try to copy more data than we have
-    if frame.stride(0) == self.width as usize {
-        let copy_len = std::cmp::min(data.len(), frame.data_mut(0).len());
-        frame.data_mut(0)[0..copy_len].copy_from_slice(&data[0..copy_len]);
+    // Fast path: identical line stride
+    if stride == frame_stride_y {
+        match self.pixel_format {
+            Pixel::NV12 => {
+                let y_bytes = frame_stride_y * frame_height;
+                let uv_stride = frame.stride(1);
+                let uv_height = frame_height / 2;
+                let uv_bytes = uv_stride * uv_height;
+
+                // Copy Y
+                let y_copy = std::cmp::min(y_bytes, data.len());
+                frame.data_mut(0)[..y_copy].copy_from_slice(&data[..y_copy]);
+
+                // Copy UV (if present in source)
+                if data.len() > y_bytes && uv_bytes > 0 {
+                    let uv_src = &data[y_bytes..std::cmp::min(y_bytes + uv_bytes, data.len())];
+                    let uv_dst = &mut frame.data_mut(1)[..uv_src.len()];
+                    uv_dst.copy_from_slice(uv_src);
+                }
+            }
+            _ => {
+                let needed = frame_stride_y * frame_height;
+                let copy_len = std::cmp::min(data.len(), needed.min(frame.data_mut(0).len()));
+                frame.data_mut(0)[..copy_len].copy_from_slice(&data[..copy_len]);
+            }
+        }
     } else {
-        for line in 0..frame_height {
-            if line * stride >= data.len() {
-                break; // Stop if we run out of source data
-            }
-
-            let src_start = line * stride;
-            let src_end = std::cmp::min(src_start + frame_stride, data.len());
-            if src_end <= src_start {
-                break; // Stop if we can't get any more source data
-            }
-
-            let dst_start = line * frame_stride;
-            let dst_end = dst_start + (src_end - src_start);
-
-            // Only copy if we have enough destination space
-            if dst_end <= frame.data_mut(0).len() {
-                frame.data_mut(0)[dst_start..dst_end]
-                    .copy_from_slice(&data[src_start..src_end]);
-            }
-        }
+        // Line-by-line copy
+        match self.pixel_format {
+            Pixel::NV12 => {
+                // Y plane
+                for line in 0..frame_height {
+                    let src_start = line * stride;
+                    if src_start >= data.len() { break; }
+                    let src_end = std::cmp::min(src_start + frame_stride_y, data.len());
+                    let dst_start = line * frame_stride_y;
+                    let dst_end = dst_start + (src_end - src_start);
+                    if dst_end <= frame.data_mut(0).len() {
+                        frame.data_mut(0)[dst_start..dst_end]
+                            .copy_from_slice(&data[src_start..src_end]);
+                    } else { break; }
+                }
+                // UV plane
+                let y_region = frame_height * stride;
+                let uv_stride = frame.stride(1);
+                let uv_height = frame_height / 2;
+                for line in 0..uv_height {
+                    let src_start = y_region + line * stride;
+                    if src_start >= data.len() { break; }
+                    let src_end = std::cmp::min(src_start + uv_stride, data.len());
+                    let dst_start = line * uv_stride;
+                    let dst_end = dst_start + (src_end - src_start);
+                    if dst_end <= frame.data_mut(1).len() {
+                        frame.data_mut(1)[dst_start..dst_end]
+                            .copy_from_slice(&data[src_start..src_end]);
+                    } else { break; }
+                }
+            }
+            _ => {
+                for line in 0..frame_height {
+                    let src_start = line * stride;
+                    if src_start >= data.len() { break; }
+                    let src_end = std::cmp::min(src_start + frame_stride_y, data.len());
+                    let dst_start = line * frame_stride_y;
+                    let dst_end = dst_start + (src_end - src_start);
+                    if dst_end <= frame.data_mut(0).len() {
+                        frame.data_mut(0)[dst_start..dst_end]
+                            .copy_from_slice(&data[src_start..src_end]);
+                    } else { break; }
+                }
+            }
+        }
     }

     frame
 }
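The two-plane NV12 copy with mismatched strides can be sanity-checked without ffmpeg. A standalone sketch with toy dimensions (all names and sizes here are illustrative):

```rust
// Copy an NV12 image (Y plane + interleaved UV plane) between buffers whose
// line strides differ. `width` is the meaningful bytes per row.
fn copy_nv12(src: &[u8], src_stride: usize, dst: &mut [u8], dst_stride: usize, width: usize, height: usize) {
    // Y plane: `height` rows.
    for row in 0..height {
        let s = &src[row * src_stride..row * src_stride + width];
        let d = &mut dst[row * dst_stride..row * dst_stride + width];
        d.copy_from_slice(s);
    }
    // UV plane: height/2 rows, same byte width (U and V interleaved).
    let src_uv = height * src_stride;
    let dst_uv = height * dst_stride;
    for row in 0..height / 2 {
        let s = &src[src_uv + row * src_stride..src_uv + row * src_stride + width];
        let d = &mut dst[dst_uv + row * dst_stride..dst_uv + row * dst_stride + width];
        d.copy_from_slice(s);
    }
}

fn main() {
    let (w, h, ss, ds) = (2usize, 2usize, 4usize, 2usize);
    // 2x2 Y plane + one UV row, source stride 4 (2 bytes padding per row).
    let mut src = vec![0u8; ss * h + ss * (h / 2)];
    src[0] = 10; src[1] = 11; // Y row 0
    src[4] = 12; src[5] = 13; // Y row 1
    src[8] = 90; src[9] = 91; // UV row 0
    let mut dst = vec![0u8; ds * h + ds * (h / 2)];
    copy_nv12(&src, ss, &mut dst, ds, w, h);
    assert_eq!(dst, vec![10, 11, 12, 13, 90, 91]);
}
```

The key point the review makes is visible here: forgetting the second loop would leave the UV bytes (90, 91) behind, i.e., wrong colors.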
.copy_from_slice(&data[src_start..src_end]); - } - } + // Line-by-line copy + match self.pixel_format { + Pixel::NV12 => { + // Y plane + for line in 0..frame_height { + let src_start = line * stride; + if src_start >= data.len() { break; } + let src_end = std::cmp::min(src_start + frame_stride_y, data.len()); + let dst_start = line * frame_stride_y; + let dst_end = dst_start + (src_end - src_start); + if dst_end <= frame.data_mut(0).len() { + frame.data_mut(0)[dst_start..dst_end] + .copy_from_slice(&data[src_start..src_end]); + } else { break; } + } + // UV plane + let y_region = frame_height * stride; + let uv_stride = frame.stride(1); + let uv_height = frame_height / 2; + for line in 0..uv_height { + let src_start = y_region + line * stride; + if src_start >= data.len() { break; } + let src_end = std::cmp::min(src_start + uv_stride, data.len()); + let dst_start = line * uv_stride; + let dst_end = dst_start + (src_end - src_start); + if dst_end <= frame.data_mut(1).len() { + frame.data_mut(1)[dst_start..dst_end] + .copy_from_slice(&data[src_start..src_end]); + } else { break; } + } + } + _ => { + for line in 0..frame_height { + let src_start = line * stride; + if src_start >= data.len() { break; } + let src_end = std::cmp::min(src_start + frame_stride_y, data.len()); + let dst_start = line * frame_stride_y; + let dst_end = dst_start + (src_end - src_start); + if dst_end <= frame.data_mut(0).len() { + frame.data_mut(0)[dst_start..dst_end] + .copy_from_slice(&data[src_start..src_end]); + } else { break; } + } + } + } } frame }I can extend this to handle 3-plane YUV (e.g., YUV420P) if needed.
crates/scap-ffmpeg/src/cpal.rs (1)
31-37: If planar ever becomes reachable: fix potential length mismatch panic.
plane_data_mut(i)likely yieldslinesize[i]bytes;plane_sizemay be smaller. Use slicing on the destination to avoid runtime panics.- ffmpeg_frame - .plane_data_mut(i as usize) - .copy_from_slice(&self.bytes()[base..base + plane_size]); + let dst = ffmpeg_frame.plane_data_mut(i as usize); + dst[..plane_size].copy_from_slice(&self.bytes()[base..base + plane_size]);crates/cpal-ffmpeg/src/lib.rs (1)
31-37: Avoid possible copy length mismatch panic in planar path.If you later enable planar, slice the destination to
plane_size.- ffmpeg_frame - .plane_data_mut(i as usize) - .copy_from_slice(&self.bytes()[base..base + plane_size]); + let dst = ffmpeg_frame.plane_data_mut(i as usize); + dst[..plane_size].copy_from_slice(&self.bytes()[base..base + plane_size]);crates/camera-mediafoundation/src/lib.rs (2)
51-63: Device enumeration leaks COM allocations; add Drop to release the IMFActivate array via CoTaskMemFree. MFEnumDeviceSources allocates an array that must be freed; the current iterator never releases it. This leaks both the array and the IMFActivate references.
Apply:
```diff
 use windows::Win32::{
 @@
-    System::Com::{CLSCTX_INPROC_SERVER, CoCreateInstance, CoInitialize},
+    System::Com::{CLSCTX_INPROC_SERVER, CoCreateInstance, CoInitialize, CoTaskMemFree},
 };
 @@
 impl DeviceSourcesIterator {
     pub fn new() -> Result<Self, windows_core::Error> {
 @@
         Ok(DeviceSourcesIterator {
             _attributes: attributes,
             devices: unsafe { devices.assume_init() },
             count,
             index: 0,
         })
     }
 }
+
+impl Drop for DeviceSourcesIterator {
+    fn drop(&mut self) {
+        unsafe {
+            if !self.devices.is_null() {
+                // Release each IMFActivate
+                for i in 0..self.count {
+                    let slot = self.devices.add(i as usize);
+                    // Move out the Option<IMFActivate> so Drop runs (decrements COM refcount)
+                    let _ = std::ptr::read(slot);
+                }
+                // Free the array memory allocated by MFEnumDeviceSources
+                CoTaskMemFree(Some(self.devices.cast()));
+                self.devices = std::ptr::null_mut();
+                self.count = 0;
+            }
+        }
+    }
+}
```

Also applies to: 38-39
290-293: Fragile equality check against the magic constant `0b1111`. `IMFMediaType::IsEqual` returning `Ok(0b1111)` is not a reliable equality test. Use `is_ok()` (S_OK) or check the documented flags explicitly.

```diff
-        if media_type.IsEqual(requested_format) == Ok(0b1111) {
+        if media_type.IsEqual(requested_format).is_ok() {
             maybe_format = Some((media_type, stream_index));
         }
```
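For the "check documented flags explicitly" option, a minimal sketch — the constants here are illustrative stand-ins mirroring the MF_MEDIATYPE_EQUAL_* bits, not imports from the windows crate:

```rust
// Illustrative stand-ins for the MF_MEDIATYPE_EQUAL_* bits (assumed values).
const EQUAL_MAJOR_TYPES: u32 = 0b0001;
const EQUAL_FORMAT_TYPES: u32 = 0b0010;

/// Treat the comparison as a match when the bits we actually care about
/// are set, instead of comparing the whole result against 0b1111.
fn formats_match(flags: u32) -> bool {
    let required = EQUAL_MAJOR_TYPES | EQUAL_FORMAT_TYPES;
    flags & required == required
}

fn main() {
    assert!(formats_match(0b1111));  // full match still accepted
    assert!(formats_match(0b0011));  // types match, data differs -> still accepted
    assert!(!formats_match(0b0001)); // major type only -> rejected
    println!("ok");
}
```

The point of the mask-based check is that it names which equality aspects matter, so a partially equal type (e.g. same formats, different user data) does not silently fail the comparison.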
♻️ Duplicate comments (1)
crates/recording/src/capture_pipeline.rs (1)
239-251: Pair COM/MF init with shutdown; avoid duplicate init in the task. You initialize Ro/MF at function entry and again inside the spawned task. Remove the inner init and call MFShutdown/RoUninitialize on task teardown to balance counts and prevent process-wide leaks.

```diff
-    unsafe {
-        let _ = RoInitialize(RO_INIT_MULTITHREADED);
-        let _ = MFStartup(MF_VERSION, MFSTARTUP_FULL);
-    }
+    unsafe {
+        let _ = RoInitialize(RO_INIT_MULTITHREADED);
+        let _ = MFStartup(MF_VERSION, MFSTARTUP_FULL);
+    }
@@
-    let _ = unsafe { RoInitialize(RO_INIT_MULTITHREADED) };
-    let _ = unsafe { MFStartup(MF_VERSION, MFSTARTUP_FULL) };
+    // Ro/MF already initialized at function entry.
@@
-    Ok(())
+    unsafe {
+        windows::Win32::Media::MediaFoundation::MFShutdown().ok();
+        windows::Win32::System::WinRT::RoUninitialize();
+    }
+    Ok(())
```

Also applies to: 328-330
🧹 Nitpick comments (38)
crates/recording/src/capture_pipeline.rs (4)
136-144: Confirm the meaning of the `1080` argument. If this is a height cap, ensure it matches UI/config and that aspect-ratio preservation is handled upstream. Otherwise, consider making it configurable like bitrate.
239-239: Remove the unused SampleWriter import and large commented blocks. Avoid dead/commented-out code in the hot path; it obscures the active muxing strategy and risks bitrot.

```diff
-    use cap_enc_mediafoundation::{media::MF_VERSION, video::SampleWriter};
+    use cap_enc_mediafoundation::media::MF_VERSION;
```

Also applies to: 293-305, 298-305, 382-385
255-266: Expose bitrate configuration instead of a hardcoded multiplier. The 0.07 multiplier is opaque. Wire this to the existing configurable bitrate or derive it from `screen_config` to keep behavior predictable.
553-611: Deduplicate D3D device creation. This implementation mirrors enc-mediafoundation's `create_d3d_device()`. Prefer reusing that helper to reduce divergence and testing surface; re-export it if needed.

Also, avoid `unwrap()` on `device` in case of unexpected API behavior:

```diff
-    Ok(device.unwrap())
+    Ok(device.ok_or_else(|| windows::core::Error::from(windows::core::HRESULT(0x80004005u32 as i32)))?)
```

crates/mediafoundation-utils/Cargo.toml (1)
8-14: Trim the Windows feature set to reduce compile time (optional). If this crate only needs IMFMediaBuffer, you can likely drop "Win32_Media_DirectShow". Keeps builds leaner.
crates/mediafoundation-utils/src/lib.rs (3)
11-25: Guard against a null buffer pointer from Lock. Add a debug assertion to catch an unexpected null pointer with a non-zero length.

Apply this diff:

```diff
     fn lock(&self) -> Result<IMFMediaBufferLock<'_>> {
         let mut bytes_ptr = null_mut();
         let mut size = 0;
         unsafe {
             self.Lock(&mut bytes_ptr, None, Some(&mut size))?;
         }
+        debug_assert!(
+            !bytes_ptr.is_null() || size == 0,
+            "IMFMediaBuffer::Lock returned null with non-zero size"
+        );
```

32-36: Unlock errors are swallowed; consider logging. Logging here helps diagnose rare MFT/driver issues.

Apply this diff:

```diff
 impl<'a> Drop for IMFMediaBufferLock<'a> {
     fn drop(&mut self) {
-        let _ = unsafe { self.source.Unlock() };
+        if let Err(e) = unsafe { self.source.Unlock() } {
+            #[cfg(debug_assertions)]
+            eprintln!("IMFMediaBuffer::Unlock failed: {e:?}");
+        }
     }
 }
```
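The Lock/Unlock pairing above is the classic RAII shape; a minimal, COM-free sketch of the same pattern (all names ours):

```rust
use std::cell::Cell;

/// Minimal RAII guard: runs its cleanup exactly once when dropped,
/// the same shape IMFMediaBufferLock uses to pair Lock with Unlock.
struct Guard<F: FnMut()> {
    cleanup: F,
}

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.cleanup)();
    }
}

fn main() {
    let unlocked = Cell::new(false);
    {
        let _lock = Guard { cleanup: || unlocked.set(true) };
        assert!(!unlocked.get()); // still "locked" inside the scope
    }
    assert!(unlocked.get()); // cleanup ran when the guard left scope
    println!("ok");
}
```

Because the cleanup runs in `Drop`, early returns and `?` propagation inside the scope cannot skip the unlock — which is exactly why the review prefers a guard over manual Unlock calls.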
38-50: Consider dropping DerefMut unless mutation is required. Exposing `&mut [u8]` invites accidental writes without updating CurrentLength. If callers only read, keep Deref and remove DerefMut.
crates/mediafoundation-ffmpeg/src/h264.rs (3)
1-1: Avoid a glob import. Import only the trait to keep the surface small.

Apply this diff:

```diff
-use cap_mediafoundation_utils::*;
+use cap_mediafoundation_utils::IMFMediaBufferExt;
```
96-100: Rescale may be redundant. You rescale MF time to `self.time_base` and set the stream's time_base to the same value; `rescale_ts` then becomes a no-op. Keep it only if output can mutate the stream time_base.
15-22: Consider carrying extradata via config. Add an optional avcC field so MP4 muxing works without ad-hoc extraction in the muxer.

Apply this diff:

```diff
 pub struct MuxerConfig {
     pub width: u32,
     pub height: u32,
     pub fps: u32,
     pub bitrate: u32,
+    // Optional AVC Decoder Configuration Record (avcC)
+    pub extradata: Option<Vec<u8>>,
 }
```

Then, inside add_stream after setting codecpar, copy extradata if provided. I can supply the exact av_malloc/zero-copy snippet if you want.
crates/media-info/Cargo.toml (1)
11-12: Prefer a workspace dependency over a local path (if the crate is in the workspace). Using `workspace = true` keeps versions/features consistent across the workspace and simplifies publishing.

```diff
-cap-ffmpeg-utils = { path = "../ffmpeg-utils" }
+cap-ffmpeg-utils.workspace = true
```

crates/camera-windows/src/lib.rs (1)
4-4: Avoid a wildcard import; bring only the needed items into scope. Prevents namespace pollution and makes trait usage explicit.

```diff
-use cap_mediafoundation_utils::*;
+use cap_mediafoundation_utils::{IMFMediaBufferExt, IMFMediaBufferLock};
```

crates/camera-windows/Cargo.toml (1)
12-12: Use a workspace dependency (if applicable). Keeps dependency management consistent across the repo.

```diff
-cap-mediafoundation-utils = { path = "../mediafoundation-utils" }
+cap-mediafoundation-utils.workspace = true
```

crates/recording/src/pipeline/audio_buffer.rs (1)
2-3: Import the trait explicitly instead of glob-importing the crate. Reduces accidental symbol leakage and clarifies intent.

```diff
-use cap_ffmpeg_utils::*;
+use cap_ffmpeg_utils::PlanarData;
```

crates/camera-mediafoundation/Cargo.toml (1)
12-12: Use a workspace dependency for the new utility crate (if it's part of the workspace). Aligns with other internal crates and eases version bumps.

```diff
-cap-mediafoundation-utils = { path = "../mediafoundation-utils" }
+cap-mediafoundation-utils.workspace = true
```

crates/cpal-ffmpeg/Cargo.toml (1)
9-9: Use a workspace dep to avoid path drift (optional). If ffmpeg-utils is a workspace member, prefer `workspace = true` to keep manifests uniform and avoid per-crate relative paths.

Apply in root: add `cap-ffmpeg-utils` under `[workspace.dependencies]`, then:

```diff
-cap-ffmpeg-utils = { path = "../ffmpeg-utils" }
+cap-ffmpeg-utils.workspace = true
```

crates/mediafoundation-ffmpeg/src/lib.rs (1)
1-7: Improve docs with doc-cfg (optional). Consider marking the Windows-only API in docs so consumers see platform gating on docs.rs.

```diff
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![cfg(windows)]
 mod audio;
 mod h264;
-pub use audio::AudioExt;
-pub use h264::{H264StreamMuxer, MuxerConfig};
+#[cfg_attr(docsrs, doc(cfg(windows)))]
+pub use audio::AudioExt;
+#[cfg_attr(docsrs, doc(cfg(windows)))]
+pub use h264::{H264StreamMuxer, MuxerConfig};
```

crates/media-info/src/lib.rs (2)
1-1: Avoid a glob import; bring the trait in explicitly. Narrow the import to the trait needed for method resolution; reduces namespace pollution.

```diff
-use cap_ffmpeg_utils::*;
+use cap_ffmpeg_utils::PlanarData;
```
144-153: Rename `RawVideoFormat::YUYV420` to `Yuv420P`. `YUYV` denotes packed 4:2:2, but this variant is mapped to planar 4:2:0 (`Pixel::YUV420P`); using `Yuv420P` makes the intent clear and aligns with the PascalCase style used elsewhere (e.g. `Nv12`).

crates/recording/Cargo.toml (1)
74-74: Deduplicate tracing-subscriber (optional). Having it in both deps and dev-deps with different features can surprise via feature unification in tests. Keep one.

```diff
-[dev-dependencies]
-tempfile = "3.20.0"
-tracing-subscriber = { version = "0.3.19", default-features = false }
+[dev-dependencies]
+tempfile = "3.20.0"
```

If you need a different feature set for tests, prefer enabling it via a feature flag rather than duplicating the crate in dev-deps.
crates/mediafoundation-ffmpeg/Cargo.toml (1)
13-47: Trim Windows features to only MediaFoundation
Narrow the `windows` features in Cargo.toml to just `"Win32_Media_MediaFoundation"` (covers IMFSample, MFSampleExtension_CleanPoint, MFCreateSample, MFCreateMemoryBuffer) to cut build time and binary size.

crates/mediafoundation-ffmpeg/README.md (2)
3-16: Use the correct product naming (“Media Foundation”) and codec spelling (“H.264”).Replace “MediaFoundation” → “Media Foundation” and “H264” → “H.264” throughout for accuracy and searchability. Minor but user-facing.
49-61: Clarify the expected raw H.264 bitstream format and extradata requirements. State whether `write_h264_data` expects Annex B (start codes) or AVCC (length-prefixed) and how SPS/PPS are provided (e.g., extracted from MF decoder config vs. first keyframe). This avoids integration pitfalls.

Proposed snippet to add below the example:

```diff
+Note: `write_h264_data` expects Annex B NAL units (00 00 00 01 start codes).
+Ensure SPS/PPS are present (e.g., preceding the first IDR) or passed as codec extradata
+when constructing the muxer. If your source is AVCC (length-prefixed), convert to Annex B first.
```

crates/scap-ffmpeg/src/cpal.rs (2)
11-12: Planar branch is unreachable; format type is hard-coded to Packed.
`format_typ` is set to `sample::Type::Packed`, so the planar copy path never runs. Either remove the dead branch or compute the type dynamically if you plan to support planar input here.

Apply one:

```diff
-        let format_typ = sample::Type::Packed;
+        let format_typ = sample::Type::Packed; // cpal delivers interleaved; keep packed and drop planar branch
@@
-        if matches!(format_typ, sample::Type::Planar) {
-            for i in 0..config.channels {
-                let plane_size = sample_count * sample_size;
-                let base = (i as usize) * plane_size;
-
-                ffmpeg_frame
-                    .plane_data_mut(i as usize)
-                    .copy_from_slice(&self.bytes()[base..base + plane_size]);
-            }
-        } else {
+        {
             ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
-        }
+        }
```

Also applies to: 29-41
16-27: Channel layout: prefer an explicit layout over `default(n)`. `ChannelLayout::default(n)` can be ambiguous for >2 channels. Consider mapping 1→MONO, 2→STEREO, else `ChannelLayout::from_channels(n)` if available, or explicit masks.

crates/mediafoundation-ffmpeg/src/audio.rs (2)
17-18: Guard against oversized buffers when casting to u32. If `length > u32::MAX`, `MFCreateMemoryBuffer(length as u32)` truncates. Check and error out.

```diff
-    let buffer = unsafe { MFCreateMemoryBuffer(length as u32)? };
+    let len_u32 = u32::try_from(length).map_err(|_| windows::core::Error::new(
+        windows::core::HRESULT(0x8007000E_u32 as i32), // E_OUTOFMEMORY-ish
+        "audio buffer too large".into(),
+    ))?;
+    let buffer = unsafe { MFCreateMemoryBuffer(len_u32)? };
```
21-31: Prefer RAII for buffer unlock to avoid leaks on future early returns. Wrap the lock in a guard that calls `Unlock()` in Drop.

crates/cpal-ffmpeg/src/lib.rs (2)
11-12: Same as scap-ffmpeg: planar branch never executes.
`format_typ` is hard-coded to `Packed`. Remove the dead code or compute it dynamically.

Apply:

```diff
-        let format_typ = sample::Type::Packed;
+        let format_typ = sample::Type::Packed;
@@
-        if matches!(format_typ, sample::Type::Planar) {
-            for i in 0..config.channels {
-                let plane_size = sample_count * sample_size;
-                let base = (i as usize) * plane_size;
-                ffmpeg_frame
-                    .plane_data_mut(i as usize)
-                    .copy_from_slice(&self.bytes()[base..base + plane_size]);
-            }
-        } else {
+        {
             ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
-        }
+        }
```

Also applies to: 29-41
16-27: Channel layout: prefer explicit mapping. Same note as in scap-ffmpeg about `ChannelLayout::default`.

crates/ffmpeg-utils/src/lib.rs (1)
1-4: Document plane length semantics. Add a brief doc comment describing that returned slices are exactly the frame's plane lengths (linesize per plane).

```diff
-pub trait PlanarData {
+/// Access per-plane audio data backed by FFmpeg frame storage.
+/// Returned slices reflect the frame's per-plane length (linesize for that plane).
+pub trait PlanarData {
```

crates/mediafoundation-ffmpeg/examples/usage.rs (4)
9-10: Initialize FFmpeg once (move `ffmpeg::init()` to main). Calling `ffmpeg::init()` multiple times is unnecessary; do it once at process start for clarity.

Apply:

```diff
 fn example_with_shared_output() -> Result<(), Box<dyn std::error::Error>> {
-    // Initialize FFmpeg
-    ffmpeg::init()?;
+    // FFmpeg is initialized in main()

 fn example_with_owned_muxer() -> Result<(), Box<dyn std::error::Error>> {
-    // Initialize FFmpeg
-    ffmpeg::init()?;
+    // FFmpeg is initialized in main()

 fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // Initialize FFmpeg once
+    ffmpeg::init()?;
```

Also applies to: 134-135, 162-162
41-51: Example H.264 payloads lack SPS/PPS; resulting MP4s likely won't be playable. An Annex B keyframe alone (0x65) is insufficient; the first GOP must include SPS/PPS (and the muxer needs extradata/AVCC). Consider documenting this or emitting SPS/PPS before the IDR in examples.
Also applies to: 146-154, 180-183
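To make the SPS/PPS requirement concrete, here is a small Annex B scanner (ours, simplified — no emulation-prevention handling) that lists NAL unit types; a playable first GOP should show SPS (7) and PPS (8) before the IDR slice (5):

```rust
/// List H.264 NAL unit types in an Annex B buffer, in order.
/// Scans for 00 00 01 / 00 00 00 01 start codes; the type is the low
/// 5 bits of the byte that follows the start code.
fn nal_types(data: &[u8]) -> Vec<u8> {
    let mut types = Vec::new();
    let mut i = 0;
    while i + 3 <= data.len() {
        let off = if data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1 {
            3
        } else if i + 4 <= data.len()
            && data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 0 && data[i + 3] == 1
        {
            4
        } else {
            i += 1;
            continue;
        };
        if i + off < data.len() {
            types.push(data[i + off] & 0x1F);
        }
        i += off;
    }
    types
}

fn main() {
    // SPS (0x67 -> 7) and PPS (0x68 -> 8) preceding the IDR slice (0x65 -> 5).
    let gop = [
        0, 0, 0, 1, 0x67, 0xAA, // SPS
        0, 0, 0, 1, 0x68, 0xBB, // PPS
        0, 0, 1, 0x65, 0xCC,    // IDR
    ];
    assert_eq!(nal_types(&gop), vec![7, 8, 5]);
    // A lone keyframe, as in the example payloads, carries no SPS/PPS.
    assert_eq!(nal_types(&[0, 0, 0, 1, 0x65, 0xCC]), vec![5]);
    println!("ok");
}
```

A decoder cannot interpret the IDR slice without the parameter sets, which is why the lone-keyframe payload in the examples produces an unplayable file.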
156-158: Owned muxer: explicit `finish()` is optional. If the trailer is written on drop, `finish()` becomes redundant; keep it if you want explicit error handling, otherwise you can remove it.
7-7: Unify error types across example functions. Mixing `Box<dyn std::error::Error>` and `ffmpeg::Error` is noisy in examples. Consider using a single Result type (e.g., `anyhow::Result`) for all example fns.

Example:

```diff
-fn example_with_shared_output() -> Result<(), Box<dyn std::error::Error>> {
+fn example_with_shared_output() -> anyhow::Result<()> {
```

Also applies to: 69-69, 130-130, 162-162
crates/camera-mediafoundation/src/lib.rs (3)
4-4: Avoid a wildcard import from cap_mediafoundation_utils. Import only the extension trait you need to keep the prelude clean and avoid collisions.

```diff
-use cap_mediafoundation_utils::*;
+use cap_mediafoundation_utils::IMFMediaBufferExt;
```
660-675: `wait_for_event` can block indefinitely; consider a bounded wait with `recv_timeout`. A hung device/engine will deadlock `start_capturing()`. Prefer a timeout and surface a clear error.
If acceptable, retrofit:
```diff
-fn wait_for_event(
-    rx: &Receiver<CaptureEngineEvent>,
-    variant: CaptureEngineEventVariant,
-) -> Result<CaptureEngineEvent, windows_core::HRESULT> {
-    rx.iter()
-        .find_map(|e| match e.variant() {
-            Some(v) if v == variant => Some(Ok(e)),
-            Some(CaptureEngineEventVariant::Error) => {
-                Some(Err(unsafe { e.0.GetStatus() }.unwrap()))
-            }
-            _ => None,
-        })
-        .ok_or(windows_core::HRESULT::from_win32(
-            MF_E_INVALIDREQUEST.0 as u32,
-        ))
-        .and_then(|v| v)
-}
+fn wait_for_event(
+    rx: &Receiver<CaptureEngineEvent>,
+    variant: CaptureEngineEventVariant,
+) -> Result<CaptureEngineEvent, windows_core::HRESULT> {
+    use std::time::Duration;
+    const TIMEOUT: Duration = Duration::from_secs(5);
+    loop {
+        match rx.recv_timeout(TIMEOUT) {
+            Ok(e) => match e.variant() {
+                Some(v) if v == variant => return Ok(e),
+                Some(CaptureEngineEventVariant::Error) => {
+                    return Err(unsafe { e.0.GetStatus() }.unwrap_or(E_FAIL))
+                }
+                _ => continue,
+            },
+            Err(_) => return Err(windows_core::HRESULT::from_win32(MF_E_INVALIDREQUEST.0 as u32)),
+        }
+    }
+}
```
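The same bounded-wait shape, reduced to plain `std::sync::mpsc` (no Media Foundation types; all names ours):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Wait for a specific event, but never longer than `timeout` per recv,
/// so a hung producer surfaces an error instead of deadlocking the caller.
fn wait_for<T: PartialEq>(
    rx: &mpsc::Receiver<T>,
    wanted: T,
    timeout: Duration,
) -> Result<T, mpsc::RecvTimeoutError> {
    loop {
        let event = rx.recv_timeout(timeout)?;
        if event == wanted {
            return Ok(event);
        }
        // Unrelated event: keep waiting within the same bound.
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send("started").unwrap();
        tx.send("capture-ready").unwrap();
    });
    let got = wait_for(&rx, "capture-ready", Duration::from_secs(1)).unwrap();
    assert_eq!(got, "capture-ready");

    // Nothing ever arrives here: the wait fails instead of blocking forever.
    let (_tx2, rx2) = mpsc::channel::<&str>();
    assert!(wait_for(&rx2, "never", Duration::from_millis(10)).is_err());
    println!("ok");
}
```

Note that `recv_timeout` also returns an error once the sender is dropped, so a crashed capture engine is reported rather than waited on.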
666-669: Avoid `unwrap()` on GetStatus in the error path. If GetStatus itself fails, `unwrap()` will panic. Fall back to E_FAIL.

```diff
-            Some(CaptureEngineEventVariant::Error) => {
-                Some(Err(unsafe { e.0.GetStatus() }.unwrap()))
-            }
+            Some(CaptureEngineEventVariant::Error) => {
+                Some(Err(unsafe { e.0.GetStatus() }.unwrap_or(E_FAIL)))
+            }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (23)
- `crates/camera-mediafoundation/Cargo.toml` (1 hunks)
- `crates/camera-mediafoundation/src/lib.rs` (1 hunks)
- `crates/camera-windows/Cargo.toml` (1 hunks)
- `crates/camera-windows/src/lib.rs` (1 hunks)
- `crates/cpal-ffmpeg/Cargo.toml` (1 hunks)
- `crates/cpal-ffmpeg/src/lib.rs` (1 hunks)
- `crates/ffmpeg-utils/Cargo.toml` (1 hunks)
- `crates/ffmpeg-utils/src/lib.rs` (1 hunks)
- `crates/media-info/Cargo.toml` (1 hunks)
- `crates/media-info/src/lib.rs` (1 hunks)
- `crates/mediafoundation-ffmpeg/Cargo.toml` (1 hunks)
- `crates/mediafoundation-ffmpeg/README.md` (1 hunks)
- `crates/mediafoundation-ffmpeg/examples/usage.rs` (1 hunks)
- `crates/mediafoundation-ffmpeg/src/audio.rs` (1 hunks)
- `crates/mediafoundation-ffmpeg/src/h264.rs` (1 hunks)
- `crates/mediafoundation-ffmpeg/src/lib.rs` (1 hunks)
- `crates/mediafoundation-utils/Cargo.toml` (1 hunks)
- `crates/mediafoundation-utils/src/lib.rs` (1 hunks)
- `crates/recording/Cargo.toml` (3 hunks)
- `crates/recording/src/capture_pipeline.rs` (8 hunks)
- `crates/recording/src/pipeline/audio_buffer.rs` (1 hunks)
- `crates/scap-ffmpeg/Cargo.toml` (1 hunks)
- `crates/scap-ffmpeg/src/cpal.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/ffmpeg-utils/Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (5)
crates/mediafoundation-ffmpeg/src/audio.rs (1)
crates/enc-mediafoundation/src/video/h264.rs (1)
`sample` (53-55)
crates/mediafoundation-ffmpeg/src/h264.rs (1)
crates/mediafoundation-ffmpeg/examples/usage.rs (3)
`new` (69-86), `write_sample` (89-97), `finish` (113-126)
crates/mediafoundation-utils/src/lib.rs (1)
crates/camera-windows/src/lib.rs (4)
`std` (194-194), `deref` (210-215), `deref` (481-483), `deref` (495-497)
crates/mediafoundation-ffmpeg/examples/usage.rs (1)
crates/mediafoundation-ffmpeg/src/h264.rs (3)
`add_stream` (28-81), `write_sample` (85-104), `finish` (153-165)
crates/recording/src/capture_pipeline.rs (6)
crates/recording/src/sources/screen_capture/mod.rs (3)
`init` (276-412), `info` (423-425), `d3d_device` (415-417)

crates/enc-mediafoundation/src/video/h264.rs (3)

`new` (43-45), `new` (96-294), `sample` (53-55)

crates/recording/src/sources/screen_capture/windows.rs (2)

`new` (355-361), `new` (489-519)

crates/mediafoundation-ffmpeg/src/h264.rs (1)

`add_stream` (28-81)

crates/enc-mediafoundation/src/video/sample_writer.rs (1)

`add_stream` (66-83)

crates/enc-mediafoundation/src/d3d.rs (2)

`create_d3d_device` (39-56), `create_d3d_device_with_type` (19-37)
🪛 LanguageTool
crates/mediafoundation-ffmpeg/README.md
[grammar] ~14-~14: Ensure spelling is correct
Context: ...ndling (converts from MediaFoundation's 100ns units to microseconds) - Support for mu...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[style] ~51-~51: To make your writing flow more naturally, try moving the adverb ‘already’ closer to the verb ‘extracted’.
Context: ...?; ### Using Raw H264 Data If you already have extracted H264 data: rust muxer.write_h264_da...
(PERF_TENS_ADV_PLACEMENT)
[grammar] ~76-~76: Ensure spelling is correct
Context: ...les the conversion: - MediaFoundation: 100ns units - This crate's API: microseconds ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~76-~76: There might be a mistake here.
Context: ...version: - MediaFoundation: 100ns units - This crate's API: microseconds - FFmpeg ...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ...s units - This crate's API: microseconds - FFmpeg internal: time_base units (config...
(QB_NEW_EN)
[grammar] ~88-~88: There might be a mistake here.
Context: ...Ext: Convert ffmpeg::frame::AudiotoIMFSample-PlanarData`: Access planar audio data ## Requireme...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ...indows (MediaFoundation is Windows-only) - FFmpeg libraries - A MediaFoundation H26...
(QB_NEW_EN)
[grammar] ~94-~94: There might be a mistake here.
Context: ...tion is Windows-only) - FFmpeg libraries - A MediaFoundation H264 encoder (hardware...
(QB_NEW_EN)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (13)
crates/recording/src/capture_pipeline.rs (3)
61-68: AVFoundation MP4Encoder swap looks good. Init + finish sequencing and error mapping look correct for Studio mode on macOS.
525-526: Windows capture method switch to Direct3D is correct. The alias update aligns with the new D3D11 capture flow.
309-323: Ensure H.264 SPS/PPS (avcC) extradata is included before `write_header()` — confirm `H264StreamMuxer::add_stream` populates the SPS/PPS extradata up-front, or defer calling `output.write_header()` until you've received the encoder's config (first keyframe) containing that data.

crates/scap-ffmpeg/Cargo.toml (1)
10-11: Dependency addition looks correct; confirm workspace wiring. The cap-ffmpeg-utils path dep is fine. Verify the crate is included in the workspace members and builds on all targets that compile scap-ffmpeg.

crates/mediafoundation-ffmpeg/src/h264.rs (1)
crates/mediafoundation-ffmpeg/src/h264.rs (1)
4-4: Gate Windows imports. This file otherwise pulls windows types on non-Windows builds.
Apply this diff:
```diff
-use windows::Win32::Media::MediaFoundation::{IMFSample, MFSampleExtension_CleanPoint};
+#[cfg(windows)]
+use windows::Win32::Media::MediaFoundation::{IMFSample, MFSampleExtension_CleanPoint};
```

If the crate itself is Windows-only, feel free to ignore.
crates/camera-mediafoundation/Cargo.toml (1)
15-18: LGTM on feature gating. Windows features look correct for Media Foundation/DirectShow/COM.
crates/mediafoundation-ffmpeg/src/lib.rs (1)
1-7: Windows gating and lean re-exports look good. Module-level cfg cleanly constrains the crate to Windows; exports are minimal.
crates/recording/Cargo.toml (2)
20-23: Good: explicit camera features and shared utils wiring. This aligns the recording crate with the new shared ffmpeg-utils and serde/specta needs.
61-62: Windows-only backends correctly target-gated. Keeping Media Foundation crates in a Windows cfg block avoids cross-platform build impact.
crates/mediafoundation-ffmpeg/Cargo.toml (1)
1-12: Manifest baseline looks solid. Workspace deps + media-info tie-in are appropriate for the MF–FFmpeg bridge.
crates/mediafoundation-ffmpeg/README.md (1)
15-16: Verify the container support claim (MKV/others) matches what's implemented. The doc says "Support for multiple output formats (MP4, MKV, etc.)", but the code in this PR appears to add MP4 (and an OGG module unrelated to H.264). Please confirm MKV is actually wired; otherwise narrow the claim to MP4 (and list a roadmap separately).
Apply if needed:
```diff
-- Support for multiple output formats (MP4, MKV, etc.)
+- Support for MP4 (additional containers like MKV may be added later)
```

crates/cpal-ffmpeg/src/lib.rs (1)
18-23: Handle `SampleFormat::U16` if cpal exposes it in your version. Avoid panics on platforms using unsigned 16-bit audio.

```diff
-            SampleFormat::U8 => Sample::U8(format_typ),
+            SampleFormat::U8 => Sample::U8(format_typ),
+            // If available in your cpal version:
+            // SampleFormat::U16 => Sample::U16(format_typ),
```

crates/mediafoundation-ffmpeg/examples/usage.rs (1)
43-50: Normalize timestamp units in `write_h264_data` examples – the examples currently pass raw microseconds (e.g. `33333`) but the muxer API expects PTS/DTS/duration in the stream's `time_base` units (1/frame rate). Confirm whether `H264StreamMuxer::write_h264_data` internally rescales from microseconds; if not, convert your inputs via `av_rescale_q(ts, Rational::new(1, 1_000_000), time_base)` or supply frame-count values (e.g. 1 per frame) to prevent A/V drift. Applies to usage.rs lines 43–50, 99–111, 149–154, 181–183.
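A minimal sketch of the µs → time_base conversion being suggested (integer round-to-nearest, ignoring the overflow protection real `av_rescale_q` has; the function name is ours):

```rust
/// Rescale a timestamp between (num, den) time bases with round-to-nearest,
/// the shape of what av_rescale_q does (minus its overflow handling).
fn rescale(ts: i64, from: (i64, i64), to: (i64, i64)) -> i64 {
    // ts * (from.0 / from.1) / (to.0 / to.1)
    let num = ts * from.0 * to.1;
    let den = from.1 * to.0;
    (num + den / 2) / den // round half up
}

fn main() {
    let micros = (1, 1_000_000); // microsecond timestamps
    let frame_tb = (1, 30);      // 1/30 time base for a 30 fps stream
    assert_eq!(rescale(1_000_000, micros, frame_tb), 30); // one second -> 30 ticks
    assert_eq!(rescale(33_333, micros, frame_tb), 1);     // ~one frame -> 1 tick
    println!("ok");
}
```

Note that plain truncating division would map `33_333` µs to 0 ticks, collapsing consecutive frames onto the same PTS — which is exactly the A/V drift the comment warns about.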
```rust
impl PlanarData for ffmpeg::frame::Audio {
    #[inline]
    fn plane_data(&self, index: usize) -> &[u8] {
        if index >= self.planes() {
            panic!("out of bounds");
        }

        unsafe {
            std::slice::from_raw_parts(
                (*self.as_ptr()).data[index],
                (*self.as_ptr()).linesize[0] as usize,
            )
        }
    }

    #[inline]
    fn plane_data_mut(&mut self, index: usize) -> &mut [u8] {
        if index >= self.planes() {
            panic!("out of bounds");
        }

        unsafe {
            std::slice::from_raw_parts_mut(
                (*self.as_mut_ptr()).data[index],
                (*self.as_ptr()).linesize[0] as usize,
            )
        }
    }
}
```
Unsafe slice construction is incorrect and brittle (uses `linesize[0]` for all planes, ignores `extended_data`). Replace with safe `data`/`data_mut` accessors.
Current code can return wrong lengths for planes >0 and may panic when lengths differ. The ffmpeg crate already exposes safe per-plane slices.
Apply:
```diff
 impl PlanarData for ffmpeg::frame::Audio {
     #[inline]
     fn plane_data(&self, index: usize) -> &[u8] {
-        if index >= self.planes() {
-            panic!("out of bounds");
-        }
-
-        unsafe {
-            std::slice::from_raw_parts(
-                (*self.as_ptr()).data[index],
-                (*self.as_ptr()).linesize[0] as usize,
-            )
-        }
+        // Delegates to bindings that handle extended_data and correct linesize per plane.
+        self.data(index)
     }

     #[inline]
     fn plane_data_mut(&mut self, index: usize) -> &mut [u8] {
-        if index >= self.planes() {
-            panic!("out of bounds");
-        }
-
-        unsafe {
-            std::slice::from_raw_parts_mut(
-                (*self.as_mut_ptr()).data[index],
-                (*self.as_ptr()).linesize[0] as usize,
-            )
-        }
+        self.data_mut(index)
     }
 }
```

🤖 Prompt for AI Agents
In crates/ffmpeg-utils/src/lib.rs around lines 6 to 34, the unsafe slice
construction uses (*self.as_ptr()).data[index] and linesize[0] for all planes
which is brittle and wrong for planes >0 and when extended_data is used; replace
the unsafe raw pointer/linesize logic with the ffmpeg crate's safe per-plane
accessors (use the provided data()/data_mut() or plane-specific slice getters)
so each plane's pointer and length come from the crate API (respecting
extended_data and per-plane linesizes) and remove the manual unsafe
slice_from_raw_parts usage.
```rust
        let length = (self.samples() * self.format().bytes())
            * (if self.is_planar() {
                self.channels() as usize
            } else {
                1
            });
        let buffer = unsafe { MFCreateMemoryBuffer(length as u32)? };
```
Planar audio is copied incorrectly; only plane 0 is written while buffer length includes all channels.
This corrupts/zeros the tail of the buffer and produces invalid MF samples. Interleave planar data per-sample into the MF buffer; use a direct copy for packed data.
Proposed fix for to_sample body:
```diff
 fn to_sample(&self) -> windows::core::Result<IMFSample> {
     let sample = unsafe { MFCreateSample()? };
-
-    let length = (self.samples() * self.format().bytes())
-        * (if self.is_planar() {
-            self.channels() as usize
-        } else {
-            1
-        });
+    let bytes_per_sample = self.format().bytes();
+    let channels = self.channels() as usize;
+    let samples = self.samples();
+    let length = samples * bytes_per_sample * channels;
     let buffer = unsafe { MFCreateMemoryBuffer(length as u32)? };
     unsafe { sample.AddBuffer(&buffer)? };
     let mut buffer_ptr: *mut u8 = std::ptr::null_mut();
     unsafe { buffer.Lock(&mut buffer_ptr, None, None)? };
-    unsafe {
-        std::ptr::copy_nonoverlapping(self.data(0).as_ptr(), buffer_ptr, length as usize);
-    }
+    // SAFETY: buffer_ptr points to `length` bytes we allocated above.
+    if self.is_planar() {
+        // Interleave: [s0.c0][s0.c1]...[s1.c0][s1.c1]...
+        for s in 0..samples {
+            for ch in 0..channels {
+                let src = &self.data(ch)[s * bytes_per_sample..(s + 1) * bytes_per_sample];
+                let dst_off = (s * channels + ch) * bytes_per_sample;
+                unsafe {
+                    std::ptr::copy_nonoverlapping(
+                        src.as_ptr(),
+                        buffer_ptr.add(dst_off),
+                        bytes_per_sample,
+                    );
+                }
+            }
+        }
+    } else {
+        let src = self.data(0);
+        unsafe {
+            std::ptr::copy_nonoverlapping(src.as_ptr(), buffer_ptr, src.len());
+        }
+    }
-    unsafe { buffer.SetCurrentLength(length as u32)? }
+    unsafe { buffer.SetCurrentLength(length as u32)? }
     unsafe { buffer.Unlock()? };
     if let Some(pts) = self.pts() {
         unsafe {
             sample.SetSampleTime((pts as f64 / self.rate() as f64 * 10_000_000_f64) as i64)?
         };
     }
     unsafe {
         sample.SetSampleDuration(
             ((self.samples() as f64 / self.rate() as f64) * 10_000_000_f64) as i64,
         )?
     };
     Ok(sample)
 }
```

Also applies to: 24-31
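The interleaving order used in the fix can be checked in isolation with plain byte planes (helper name ours; one byte per sample for brevity — the real fix copies `bytes_per_sample` bytes at a time):

```rust
/// Interleave planar audio (one Vec per channel) into packed
/// [s0.c0][s0.c1][s1.c0][s1.c1]... order, one byte per sample.
fn interleave(planes: &[Vec<u8>]) -> Vec<u8> {
    let channels = planes.len();
    let samples = planes.first().map_or(0, |p| p.len());
    let mut out = vec![0u8; samples * channels];
    for s in 0..samples {
        for (ch, plane) in planes.iter().enumerate() {
            out[s * channels + ch] = plane[s];
        }
    }
    out
}

fn main() {
    let left = vec![1, 2, 3];
    let right = vec![10, 20, 30];
    // Every channel of every sample lands in the buffer: no zeroed tail,
    // unlike copying only plane 0 into a buffer sized for all channels.
    assert_eq!(interleave(&[left, right]), vec![1, 10, 2, 20, 3, 30]);
    println!("ok");
}
```

Copying only plane 0 (as the original code does) would fill just the first `samples` bytes of the packed buffer and leave the rest as garbage/zeros, which is the corruption the comment describes.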
```rust
        let h264_codec = ffmpeg::codec::decoder::find(ffmpeg::codec::Id::H264)
            .ok_or(ffmpeg::Error::DecoderNotFound)?;
```
Use encoder codec when adding a stream (not decoder).
add_stream should pass an encoder codec to describe the stream for the muxer.
Apply this diff:
```diff
-    let h264_codec = ffmpeg::codec::decoder::find(ffmpeg::codec::Id::H264)
-        .ok_or(ffmpeg::Error::DecoderNotFound)?;
+    let h264_codec = ffmpeg::codec::encoder::find(ffmpeg::codec::Id::H264)
+        .ok_or(ffmpeg::Error::EncoderNotFound)?;
```
🤖 Prompt for AI Agents
In crates/mediafoundation-ffmpeg/src/h264.rs around lines 35 to 37, the code
uses ffmpeg::codec::decoder::find(...) and a DecoderNotFound error when
selecting the H264 codec for add_stream; muxers require an encoder codec
descriptor instead. Replace the decoder lookup with the encoder lookup
(ffmpeg::codec::encoder::find(ffmpeg::codec::Id::H264)) and return the
appropriate error (e.g., ffmpeg::Error::EncoderNotFound) so you pass an encoder
codec to add_stream; also rename the variable to reflect it is an encoder codec.
```rust
        // Configure stream parameters
        unsafe {
            let codecpar = (*stream.as_mut_ptr()).codecpar;
            (*codecpar).codec_type = ffmpeg::ffi::AVMediaType::AVMEDIA_TYPE_VIDEO;
            (*codecpar).codec_id = ffmpeg::ffi::AVCodecID::AV_CODEC_ID_H264;
            (*codecpar).width = config.width as i32;
            (*codecpar).height = config.height as i32;
            (*codecpar).bit_rate = config.bitrate as i64;
            (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NV12 as i32;
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Don’t set pixel format on packetized H.264; provide extradata for MP4.
AVCodecParameters.format=NV12 is incorrect for an H.264 bitstream. Also, MP4 typically requires SPS/PPS (AVCC) in extradata; without it, files may not play reliably.
Apply this minimal fix now and plan extradata plumb-through next:
```diff
- (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NV12 as i32;
+ // Packetized H.264: leave pixel format unspecified.
+ (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NONE as i32;
```

Follow-up (separate change): pass the AVC Decoder Configuration Record (e.g., from MF_MT_MPEG_SEQUENCE_HEADER) into codecpar.extradata/extradata_size. I can draft that if you expose it in MuxerConfig.
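For context on that follow-up: the SPS/PPS needed for extradata live inside the Annex B bitstream that the encoder emits. A self-contained sketch of splitting a buffer on start codes (helper name is hypothetical, not from this PR):

```rust
// Hypothetical helper: split an Annex B H.264 buffer into NAL units by
// scanning for 3- or 4-byte start codes (00 00 01 / 00 00 00 01).
fn split_annex_b(data: &[u8]) -> Vec<&[u8]> {
    let mut starts = Vec::new();
    let mut i = 0;
    while i + 3 <= data.len() {
        if data[i] == 0 && data[i + 1] == 0 {
            if data[i + 2] == 1 {
                starts.push(i + 3); // payload begins after the start code
                i += 3;
                continue;
            }
            if i + 4 <= data.len() && data[i + 2] == 0 && data[i + 3] == 1 {
                starts.push(i + 4);
                i += 4;
                continue;
            }
        }
        i += 1;
    }
    let mut nals = Vec::new();
    for (n, &s) in starts.iter().enumerate() {
        // Each NAL runs to the byte before the next start code (or to the end).
        let end = if n + 1 < starts.len() {
            let next = starts[n + 1];
            if next >= 4 && data[next - 4..next] == [0, 0, 0, 1] {
                next - 4
            } else {
                next - 3
            }
        } else {
            data.len()
        };
        nals.push(&data[s..end]);
    }
    nals
}

fn main() {
    // SPS (type 7) then PPS (type 8), with 4-byte and 3-byte start codes.
    let buf = [0, 0, 0, 1, 0x67, 0xAA, 0, 0, 1, 0x68, 0xBB];
    let nals = split_annex_b(&buf);
    assert_eq!(nals.len(), 2);
    // NAL type is the low 5 bits of the first payload byte.
    println!("nal types: {:?}", nals.iter().map(|n| n[0] & 0x1F).collect::<Vec<_>>());
}
```

NAL units of type 7 (SPS) and 8 (PPS) extracted this way are what an avcC extradata record would be built from.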
```rust
/// Write an H264 sample from MediaFoundation to the output
#[cfg(windows)]
pub fn write_sample(
    &mut self,
    sample: &IMFSample,
    output: &mut ffmpeg::format::context::Output,
) -> Result<(), Box<dyn std::error::Error>> {
    if self.is_finished {
        return Err("Muxer is already finished".into());
    }

    let mut packet = self.mf_sample_to_avpacket(sample)?;

    packet.rescale_ts(
        self.time_base,
        output.stream(self.stream_index).unwrap().time_base(),
    );

    packet.write_interleaved(output)?;

    Ok(())
}
```
Frame count never increments.
Increments are expected for logging/metrics.
Apply this diff:
```diff
  packet.write_interleaved(output)?;
+ self.frame_count += 1;
  Ok(())
```
🤖 Prompt for AI Agents
In crates/mediafoundation-ffmpeg/src/h264.rs around lines 83 to 104, the
function write_sample writes a packet but never increments the frame counter
used for logging/metrics; after packet.write_interleaved(output)? completes
successfully, increment the muxer's frame count (e.g. self.frame_count += 1 or
self.frames_written = self.frames_written.saturating_add(1)) before returning
Ok(()), ensuring any necessary mutability and visibility is handled (make sure
the field exists as a mutable integer on self and is updated only after
successful write to avoid counting failed writes).
```rust
fn mf_sample_to_avpacket(&self, sample: &IMFSample) -> windows::core::Result<ffmpeg::Packet> {
    let len = unsafe { sample.GetTotalLength()? };
    let mut packet = ffmpeg::Packet::new(len as usize);

    {
        let buffer = unsafe { sample.ConvertToContiguousBuffer()? };
        let data = buffer.lock()?;

        packet
            .data_mut()
            .unwrap()
            .copy_from_slice(&data[0..len as usize]);
    }

    let pts = unsafe { sample.GetSampleTime() }
        .ok()
        .map(|v| mf_from_mf_time(self.time_base, v));
    packet.set_pts(pts);
    packet.set_dts(pts);

    let duration = unsafe { sample.GetSampleDuration() }
        .ok()
        .map(|v| mf_from_mf_time(self.time_base, v))
        .unwrap_or_default();
    packet.set_duration(duration);

    if let Ok(t) = unsafe { sample.GetUINT32(&MFSampleExtension_CleanPoint) }
        && t != 0
    {
        packet.set_flags(packet::Flags::KEY);
    }

    packet.set_stream(self.stream_index);

    // if let Ok(decode_timestamp) =
    //     unsafe { sample.GetUINT64(&MFSampleExtension_DecodeTimestamp) }
    // {
    //     packet.set_dts(Some(mf_from_mf_time(
    //         self.time_base,
    //         decode_timestamp as i64,
    //     )));
    // }

    Ok(packet)
}
```
🛠️ Refactor suggestion
Set DTS from MFSampleExtension_DecodeTimestamp when available.
Using PTS for both PTS/DTS breaks B-frame streams. You already have the code—just enable it.
Apply this diff:
```diff
         packet.set_dts(pts);
@@
-        // if let Ok(decode_timestamp) =
-        //     unsafe { sample.GetUINT64(&MFSampleExtension_DecodeTimestamp) }
-        // {
-        //     packet.set_dts(Some(mf_from_mf_time(
-        //         self.time_base,
-        //         decode_timestamp as i64,
-        //     )));
-        // }
+        #[cfg(windows)]
+        if let Ok(decode_timestamp) = unsafe {
+            sample.GetUINT64(&windows::Win32::Media::MediaFoundation::MFSampleExtension_DecodeTimestamp)
+        } {
+            packet.set_dts(Some(mf_from_mf_time(
+                self.time_base,
+                decode_timestamp as i64,
+            )));
+        }
```
🤖 Prompt for AI Agents
In crates/mediafoundation-ffmpeg/src/h264.rs around lines 106 to 150, the code
currently sets both PTS and DTS to the same value which breaks B-frame streams;
re-enable and use the existing GetUINT64(MFSampleExtension_DecodeTimestamp)
logic to set DTS when available: after computing pts, attempt unsafe
sample.GetUINT64(&MFSampleExtension_DecodeTimestamp) and if Ok(decode_timestamp)
set packet.set_dts(Some(mf_from_mf_time(self.time_base, decode_timestamp as
i64))) else fallback to packet.set_dts(pts); remove the commented block and
ensure types match (cast u64->i64) and you only call set_dts once with the
chosen value.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
crates/recording/src/capture_pipeline.rs (3)
77-89: macOS: First video frame is dropped before the loop.

You recv() once to prime timestamp_tx but never queue that first frame, causing a missing frame and timestamp drift.

```diff
  let Ok(frame) = source.1.recv() else { return Ok(()); };
  if let Some(timestamp_tx) = timestamp_tx.take() {
      let _ = timestamp_tx.send(frame.1);
  }
+ // Queue the first frame too to avoid dropping it.
+ let _ = screen_encoder.queue_video_frame(frame.0.as_ref());
```
let Ok(frame) = source.1.recv() else { return Ok(()); }; if let Some(timestamp_tx) = timestamp_tx.take() { let _ = timestamp_tx.send(frame.1); } + // Queue the first frame too to avoid dropping it. + let _ = screen_encoder.queue_video_frame(frame.0.as_ref());
171-179: macOS: PTS conversion risks divide-by-zero and integer truncation. Use av_rescale_q.time.scale / AV_TIME_BASE_Q.den is integer division and can hit zero. Convert with av_rescale_q.
- let elapsed = (pts as f64 / AV_TIME_BASE_Q.den as f64) - time.1; + let elapsed = (pts as f64 / AV_TIME_BASE_Q.den as f64) - time.1; let time = time.0.add(cm::Time::new( (elapsed * time.0.scale as f64 + time.1 * time.0.scale as f64) as i64, time.0.scale, )); - frame.set_pts(Some(time.value / (time.scale / AV_TIME_BASE_Q.den) as i64)); + use ffmpeg::ffi::{av_rescale_q, AVRational}; + let pts = unsafe { + av_rescale_q( + time.value, + AVRational { num: 1, den: time.scale as i32 }, + AV_TIME_BASE_Q, + ) + }; + frame.set_pts(Some(pts));
494-512: Trailer write races audio write; coordinate finalization.Video thread writes trailer while audio task may still be queueing frames to the same Output, risking a corrupted file or errors.
Minimal fix: signal when audio task completes and wait before writing trailer.
- let output = Arc::new(std::sync::Mutex::new(output)); + let output = Arc::new(std::sync::Mutex::new(output)); + // Coordinate trailer with audio task (if any). + let (audio_done_tx, audio_done_rx) = flume::bounded::<()>(1); @@ - builder.spawn_task("audio_encoding", move |ready| { + let audio_done = audio_done_tx.clone(); + builder.spawn_task("audio_encoding", move |ready| { let _ = ready.send(Ok(())); while let Ok(frame) = audio_rx.recv() { @@ } - Ok(()) + let _ = audio_done.send(()); + Ok(()) }); } @@ - // finish() already called when initiating drain - let mut out = output.lock().unwrap(); - out.write_trailer().map_err(|e| format!("WriteTrailer: {e}"))?; + // Wait for audio to finish (if present) before writing trailer. + let _ = audio_done_rx.recv_timeout(std::time::Duration::from_secs(2)); + let mut out = output.lock().unwrap(); + out.write_trailer().map_err(|e| format!("WriteTrailer: {e}"))?;If Output isn’t Send/Sync in this crate, prefer a single-threaded writer task receiving encoded audio/video packets and owning the Output.
Also applies to: 568-571
♻️ Duplicate comments (9)
crates/scap-direct3d/examples/cli.rs (1)
53-53: Build break from prior commit resolved: dangling `capturer.stop()` removed.

Thanks for addressing the compile error noted earlier.
crates/enc-mediafoundation/examples/cli.rs (3)
85-85: Wire the CLI bit_rate into the encoder (and avoid overflow).

Currently bit_rate is computed but unused; the encoder uses a hardcoded 0.07. Map Mbps→bps with saturating_mul and derive a bitrate_multiplier from pixels/sec.

```diff
- let bit_rate = bit_rate * 1000000;
+ // Mbps -> bps (saturating to avoid u32 overflow on large inputs)
+ let bit_rate = bit_rate.saturating_mul(1_000_000);
@@
- let mut video_encoder = H264Encoder::new(
-     &d3d_device,
-     capturer.settings().pixel_format.as_dxgi(),
-     resolution,
-     frame_rate,
-     0.07,
- )
+ let pixels_per_sec = (resolution.Width as u32)
+     .saturating_mul(resolution.Height as u32)
+     .saturating_mul(frame_rate);
+ let bitrate_multiplier = if pixels_per_sec == 0 {
+     0.07
+ } else {
+     (bit_rate as f32) / (pixels_per_sec as f32)
+ };
+
+ let mut video_encoder = H264Encoder::new(
+     &d3d_device,
+     capturer.settings().pixel_format.as_dxgi(),
+     resolution,
+     frame_rate,
+     bitrate_multiplier,
+ )
  .unwrap();
```

Also applies to: 127-134
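The mapping suggested above is pure arithmetic, so it can be checked in isolation; a standalone sketch with the same saturating behavior (function name hypothetical):

```rust
// Derive a per-pixel-per-second bitrate multiplier from a target bitrate
// in bits/s, falling back to a default when the dimensions are degenerate.
fn bitrate_multiplier(bit_rate_bps: u32, width: u32, height: u32, fps: u32) -> f32 {
    let pixels_per_sec = width.saturating_mul(height).saturating_mul(fps);
    if pixels_per_sec == 0 {
        0.07 // default used elsewhere in this PR
    } else {
        bit_rate_bps as f32 / pixels_per_sec as f32
    }
}

fn main() {
    // 8 Mbps at 1920x1080@60 -> ~0.064 bits per pixel per second.
    let m = bitrate_multiplier(8_000_000, 1920, 1080, 60);
    assert!((m - 0.0643).abs() < 0.001);
    // Degenerate size falls back to the default.
    assert_eq!(bitrate_multiplier(8_000_000, 0, 1080, 60), 0.07);
}
```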
148-152: Avoid duplicate MFStartup and pair with shutdown.

MFStartup is called in run() and again in the worker. Remove the worker call and add MFShutdown/RoUninitialize once at the end of run().

```diff
- unsafe { MFStartup(MF_VERSION, MFSTARTUP_FULL) }.unwrap();
+ // MF already started in run(); avoid duplicate startup.
```

```diff
- Ok(())
+ unsafe {
+     windows::Win32::Media::MediaFoundation::MFShutdown().ok();
+     windows::Win32::System::WinRT::RoUninitialize();
+ }
+ Ok(())
```

Also applies to: 181-182
163-166: Respect handle_has_output() returning None.

write() expects an IMFSample; handle_has_output() returns Option.

```diff
- let output_sample = video_encoder.handle_has_output().unwrap();
- sample_writer.write(stream_index, &output_sample).unwrap();
+ if let Some(output_sample) = video_encoder.handle_has_output().unwrap() {
+     sample_writer.write(stream_index, &output_sample).unwrap();
+ }
```

crates/enc-mediafoundation/src/video/h264.rs (1)
225-226: Use the negotiated frame_rate instead of hardcoded 60.

Hardcoding 60 fps can break type negotiation and A/V sync when frame_rate differs.

```diff
- MFSetAttributeRatio(&attributes, &MF_MT_FRAME_RATE, 60, 1)?;
+ MFSetAttributeRatio(&attributes, &MF_MT_FRAME_RATE, frame_rate, 1)?;
```

Also applies to: 254-254
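As background on what MFSetAttributeRatio does with those two arguments: it packs the ratio into a single UINT64 attribute, numerator in the high 32 bits, denominator in the low 32 bits. The packing itself is easy to sanity-check standalone:

```rust
// MF ratio/size attributes store two u32s in one u64: high word = numerator
// (or width), low word = denominator (or height).
fn pack_ratio(num: u32, den: u32) -> u64 {
    ((num as u64) << 32) | den as u64
}

fn unpack_ratio(packed: u64) -> (u32, u32) {
    ((packed >> 32) as u32, packed as u32)
}

fn main() {
    let packed = pack_ratio(60, 1); // 60/1 fps
    assert_eq!(packed, 0x0000_003C_0000_0001);
    assert_eq!(unpack_ratio(packed), (60, 1));
    // NTSC-style rates survive the round trip too.
    assert_eq!(unpack_ratio(pack_ratio(30000, 1001)), (30000, 1001));
}
```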
crates/recording/src/capture_pipeline.rs (4)
345-347: Avoid panic on timestamp send when receiver is gone.

send(...).unwrap() can panic during shutdown.

```diff
- timestamp_tx.send(frame.1).unwrap();
+ let _ = timestamp_tx.send(frame.1);
```

Also applies to: 406-408
247-250: Pair MFStartup/RoInitialize with shutdown; avoid duplicate startup.

Init is done both before spawning and inside the task; also missing MFShutdown/RoUninitialize. Initialize per-thread and balance shutdown.

```diff
- unsafe {
-     let _ = RoInitialize(RO_INIT_MULTITHREADED);
-     let _ = MFStartup(MF_VERSION, MFSTARTUP_FULL);
- }
+ // Defer to per-thread init inside the task(s).
```

```diff
- let _ = unsafe { RoInitialize(RO_INIT_MULTITHREADED) };
- let _ = unsafe { MFStartup(MF_VERSION, MFSTARTUP_FULL) };
+ cap_mediafoundation_utils::thread_init();
```

```diff
  let _ = encoder.finish();
- // let _ = sample_writer.stop();
+ // Balance MF init calls on this thread.
+ unsafe {
+     windows::Win32::Media::MediaFoundation::MFShutdown().ok();
+     windows::Win32::System::WinRT::RoUninitialize();
+ }
```

Also applies to: 327-329
308-321: Don't unwrap container/muxer/trailer operations; propagate or map errors.

Unwraps here will crash the pipeline on I/O/format errors. Replace with proper error handling.

```diff
- let mut output = ffmpeg::format::output(&output_path).unwrap();
+ let mut output = ffmpeg::format::output(&output_path)
+     .map_err(|e| MediaError::Any(format!("OpenOutput: {e}").into()))?;
@@
- .unwrap();
+ .map_err(|e| MediaError::Any(format!("AddStream: {e}").into()))?;
@@
- output.write_header().unwrap();
+ output
+     .write_header()
+     .map_err(|e| MediaError::Any(format!("WriteHeader: {e}").into()))?;
@@
- mf_ffmpeg_muxer
-     .write_sample(&output_sample, &mut output)
-     .unwrap();
+ mf_ffmpeg_muxer
+     .write_sample(&output_sample, &mut output)
+     .map_err(|e| format!("MuxWrite: {e}"))?;
@@
- output.write_trailer().unwrap();
+ output
+     .write_trailer()
+     .map_err(|e| format!("WriteTrailer: {e}"))?;
```

Also applies to: 370-373, 390-391
335-388: Drain the MFT and write trailer only after DrainComplete; don't break on NeedInput when source closes.

On source close you break immediately, which can drop queued output. Initiate drain, keep handling outputs until METransformDrainComplete, then write trailer.

```diff
- while let Ok(e) = encoder.get_event() {
+ let mut draining = false;
+ while let Ok(e) = encoder.get_event() {
      match e {
          MediaFoundation::METransformNeedInput => {
@@
-             let Ok(frame) = source.1.recv() else {
-                 break;
-             };
+             if draining {
+                 continue;
+             }
+             let Ok(frame) = source.1.recv() else {
+                 // No more input: begin draining
+                 encoder
+                     .finish()
+                     .map_err(|e| format!("Finish/Drain: {e}"))?;
+                 draining = true;
+                 continue;
+             };
@@
          }
          MediaFoundation::METransformHaveOutput => {
              if let Some(output_sample) = encoder
                  .handle_has_output()
                  .map_err(|e| format!("HasOutput: {e}"))?
              {
@@
              }
          }
+         MediaFoundation::METransformDrainComplete => {
+             break;
+         }
          _ => {}
      }
  }
@@
- let _ = encoder.finish();
+ // finish() already called to initiate drain
```

Also applies to: 390-396
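The drain logic proposed above is a small state machine; a self-contained mock (event and encoder types are stand-ins, not the MF API) shows why output must keep flowing after the source closes and only DrainComplete may end the loop:

```rust
// Mock of the MFT event loop: NeedInput is ignored once draining, output is
// still collected, and only DrainComplete ends the loop.
#[derive(Clone, Copy)]
enum Event {
    NeedInput,
    HaveOutput(u32),
    DrainComplete,
}

fn run_loop(events: &[Event], mut frames: Vec<u32>) -> Vec<u32> {
    let mut out = Vec::new();
    let mut draining = false;
    for &e in events {
        match e {
            Event::NeedInput => {
                if draining {
                    continue;
                }
                if frames.pop().is_none() {
                    draining = true; // source closed: initiate drain
                }
            }
            Event::HaveOutput(sample) => out.push(sample),
            Event::DrainComplete => break,
        }
    }
    out
}

fn main() {
    let events = [
        Event::NeedInput,     // consumes the only frame
        Event::NeedInput,     // source empty -> start draining
        Event::HaveOutput(1), // queued output still arrives
        Event::NeedInput,     // ignored while draining
        Event::HaveOutput(2),
        Event::DrainComplete,
        Event::HaveOutput(3), // never reached
    ];
    let out = run_loop(&events, vec![10]);
    assert_eq!(out, vec![1, 2]);
}
```

Breaking on the first failed recv() instead would have dropped samples 1 and 2.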
🧹 Nitpick comments (18)
crates/recording/src/pipeline/control.rs (1)
34-56: Confirm that last() intentionally blocks unless the last value is Play.

As written, `last()` will block whenever `last_value != Some(Control::Play)`. If the API contract expects a non-blocking read, consider renaming to `peek()` or adding a truly non-blocking variant (and document the current behavior).

crates/scap-direct3d/examples/cli.rs (2)
18-36: Option: restore a minimal working capture demo (keeps example useful).

If you want this example to actually exercise the capture path, uncomment and adjust as below; it does not depend on the `raw_handle()` from Line 16.

```diff
- // let mut capturer = Capturer::new(
- //     display.try_as_capture_item().unwrap(),
- //     Settings {
- //         is_border_required: Some(false),
- //         is_cursor_capture_enabled: Some(true),
- //         pixel_format: PixelFormat::R8G8B8A8Unorm,
- //         // crop: Some(D3D11_BOX {
- //         //     left: 0,
- //         //     top: 0,
- //         //     right: 500,
- //         //     bottom: 400,
- //         //     front: 0,
- //         //     back: 1,
- //         // }),
- //         ..Default::default()
- //     },
- // )
- // .unwrap();
+ let capturer = Capturer::new(
+     Display::primary().try_as_capture_item().unwrap(),
+     Settings {
+         is_border_required: Some(false),
+         is_cursor_capture_enabled: Some(true),
+         pixel_format: PixelFormat::R8G8B8A8Unorm,
+         // crop: Some(D3D11_BOX { left: 0, top: 0, right: 500, bottom: 400, front: 0, back: 1 }),
+         ..Default::default()
+     },
+ )
+ .unwrap();
```
51-55: Placeholder sleeps: consider clarifying or trimming.

If the example stays as a stub, either log that it's a no-op or remove one sleep to avoid confusion.

```diff
- std::thread::sleep(Duration::from_secs(3));
-
- // capturer.stop().unwrap();
-
- std::thread::sleep(Duration::from_secs(3));
+ eprintln!("Windows example stub: sleeping for 3s…");
+ std::thread::sleep(Duration::from_secs(3));
```

crates/camera-avfoundation/examples/cli.rs (3)
1-3: Emit a clear message on non-macOS builds.

Currently, main() is a no-op on non-macOS targets, which is confusing at runtime. Print a message so users know why nothing happens.

```diff
  fn main() {
      #[cfg(target_os = "macos")]
      macos::main();
+     #[cfg(not(target_os = "macos"))]
+     {
+         eprintln!("camera-avfoundation example requires macOS; skipping.");
+     }
  }
```
16-20: Fix unused CLI parser (prevents dead_code/unused warnings, keeps path for future subcommands).

Cli/Commands are defined but never used. Make the subcommand optional and parse argv so this compiles cleanly even with -D warnings.

```diff
  #[derive(Parser)]
  struct Cli {
      #[command(subcommand)]
-     command: Commands,
+     command: Option<Commands>,
  }
```

```diff
  pub fn main() {
+     // Parse flags (subcommand optional for now)
+     let _ = Cli::parse();
      let _devices = list_video_devices();
```

Also applies to: 28-29
147-152: Drop the extra sleep after stop_running().

The second sleep delays exit without benefit in this example.

```diff
  std::thread::sleep(std::time::Duration::from_secs(10));
-
  session.stop_running();
-
- std::thread::sleep(std::time::Duration::from_secs(10));
```

crates/enc-mediafoundation/examples/cli.rs (2)
74-77: Several CLI options are ignored.

display (index), borderless, encoder, and console_mode are parsed but unused; display_index is also unused in run(). Wire these or drop the flags.
Would you like a patch that:
- selects the requested display index,
- maps borderless -> Settings.is_border_required = Some(!borderless),
- supports encoder selection (plumb index to H264Encoder/new or constructor that accepts an EncoderDevice),
- skips pause() when console_mode is true?
Also applies to: 201-208, 279-337
247-257: Make extension check case-insensitive on Windows.

Users may pass ".MP4"; compare lowercase.

```diff
- if let Some(extension) = path.extension() {
-     if extension != "mp4" {
+ if let Some(extension) = path.extension().and_then(|e| e.to_str()) {
+     if extension.to_ascii_lowercase() != "mp4" {
          valid = false;
      }
```

crates/enc-mediafoundation/src/video/h264.rs (3)
205-211: Avoid transmute; pass the manager pointer explicitly.

Use a typed pointer cast to reduce unsafe surface and keep ref-count clear.

```diff
- unsafe {
-     let temp = media_device_manager.clone();
-     transform
-         .ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, std::mem::transmute(temp))
-         .map_err(NewVideoEncoderError::EncoderTransform)?;
- };
+ unsafe {
+     use windows::core::Interface;
+     let ptr = media_device_manager.as_raw() as isize;
+     transform
+         .ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, ptr)
+         .map_err(NewVideoEncoderError::EncoderTransform)?;
+ };
```
227-228: Question: all samples independent for H.264.

Setting MF_MT_ALL_SAMPLES_INDEPENDENT=1 is typically false for inter-dependent frames (P/B). Suggest omitting or setting 0 unless an encoder requires it.

```diff
- output_type.SetUINT32(&MF_MT_ALL_SAMPLES_INDEPENDENT, 1)?;
+ // Usually not true for H.264 GOP streams; consider omitting or 0.
+ // output_type.SetUINT32(&MF_MT_ALL_SAMPLES_INDEPENDENT, 0)?;
```
96-103: Typo in API: bitrate_multipler.

Consider renaming to bitrate_multiplier for clarity; defer if it's already public API.
crates/recording/src/capture_pipeline.rs (3)
316-318: Use encoder-provided bitrate consistently (avoid hard-coded 5 Mbps).

Studio path hard-codes 5_000_000 while instant path uses encoder.bitrate(). Prefer the latter.

```diff
- bitrate: 5_000_000,
+ bitrate: encoder.bitrate(),
```

Also applies to: 478-479
616-674: De-duplicate D3D device creation; reuse enc-mediafoundation::d3d::create_d3d_device.

This function duplicates logic already provided in enc-mediafoundation. Prefer re-export to avoid divergence.

```diff
-#[cfg(windows)]
-pub fn create_d3d_device()
--> windows::core::Result<windows::Win32::Graphics::Direct3D11::ID3D11Device> {
-    ...
-    Ok(device.unwrap())
-}
+#[cfg(windows)]
+pub use cap_enc_mediafoundation::d3d::create_d3d_device;
```
254-265: H264Encoder quality/bitrate parameter (0.07) needs explicit config or rationale.

The magic constant lacks context; tie it to a user-configurable bitrate/CRF or document the expected units.
I can wire this to your “configurable bitrate” setting for consistency with other paths.
crates/mediafoundation-utils/src/lib.rs (4)
48-51: Optional: log Unlock failures in Drop for diagnosability.

You can't return errors from Drop, but a debug log helps trace mismatched Lock/Unlock cases.

Example:

```diff
  impl<'a> Drop for IMFMediaBufferLock<'a> {
      fn drop(&mut self) {
-         let _ = unsafe { self.source.Unlock() };
+         if let Err(e) = unsafe { self.source.Unlock().ok() } {
+             #[cfg(debug_assertions)]
+             eprintln!("IMFMediaBuffer::Unlock failed: {e:?}");
+         }
      }
  }
```
32-34: If you plan to write into the buffer, request max length.

Currently you pass None for pcb_max_length and use pcb_current_length as the slice size. That's perfect for read paths. For write paths, expose a separate method that returns a &mut [u8] sized to pcb_max_length and remembers to SetCurrentLength() after writes.

23-25: Add docs about thread-safety and usage constraints.

Clarify that the read-only lock is re-entrant but each guard balances an Unlock; document that the mutable lock (if added) requires &mut self and is not re-entrant.
43-46: Consider explicitly forbidding Send/Sync on the guard.

Depending on windows-rs auto-traits for IMFMediaBuffer, the guard might auto-implement Send/Sync. It's safer to forbid cross-thread transfer of a live lock.

Add:

```rust
impl<'a> !Send for IMFMediaBufferLock<'a> {}
impl<'a> !Sync for IMFMediaBufferLock<'a> {}
```

Alternatively, use a `std::marker::PhantomData<*mut ()>` marker field to inhibit Send/Sync.

Would you like me to confirm whether `IMFMediaBuffer` in your pinned windows-rs version is Send/Sync? If so, I'll provide a quick check.
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (9)
- crates/camera-avfoundation/examples/cli.rs (1 hunks)
- crates/cursor-capture/src/position.rs (1 hunks)
- crates/enc-mediafoundation/examples/cli.rs (1 hunks)
- crates/enc-mediafoundation/src/video/h264.rs (1 hunks)
- crates/mediafoundation-utils/src/lib.rs (1 hunks)
- crates/recording/Cargo.toml (2 hunks)
- crates/recording/src/capture_pipeline.rs (10 hunks)
- crates/recording/src/pipeline/control.rs (1 hunks)
- crates/scap-direct3d/examples/cli.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/cursor-capture/src/position.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/recording/Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (6)
crates/mediafoundation-utils/src/lib.rs (1)
crates/camera-windows/src/lib.rs (4)
std(194-194)deref(210-215)deref(481-483)deref(495-497)
crates/enc-mediafoundation/examples/cli.rs (5)
crates/enc-mediafoundation/src/d3d.rs (1)
create_d3d_device(39-56)crates/scap-direct3d/src/lib.rs (4)
d3d_device(204-204)d3d_device(356-358)d3d_device(469-471)new(151-346)crates/enc-mediafoundation/src/video/h264.rs (2)
new(43-45)new(97-296)crates/enc-mediafoundation/src/video/sample_writer.rs (1)
new(24-64)crates/enc-mediafoundation/src/mft.rs (2)
enumerate(23-50)display_name(52-54)
crates/scap-direct3d/examples/cli.rs (2)
crates/scap-ffmpeg/examples/cli.rs (1)
main(6-81)crates/scap-targets/src/lib.rs (2)
display(146-148)primary(20-22)
crates/recording/src/capture_pipeline.rs (4)
crates/recording/src/sources/screen_capture/mod.rs (2)
init(276-412)d3d_device(415-417)crates/recording/src/sources/screen_capture/windows.rs (2)
new(355-361)new(489-519)crates/mediafoundation-utils/src/lib.rs (1)
thread_init(18-21)crates/enc-mediafoundation/src/d3d.rs (2)
create_d3d_device(39-56)create_d3d_device_with_type(19-37)
crates/camera-avfoundation/examples/cli.rs (3)
crates/enc-mediafoundation/examples/cli.rs (2)
main(1-4)main(184-228)crates/scap-direct3d/examples/cli.rs (2)
main(1-4)main(14-56)crates/camera-avfoundation/src/lib.rs (1)
list_video_devices(14-36)
crates/enc-mediafoundation/src/video/h264.rs (4)
crates/enc-mediafoundation/src/media.rs (2)
MFSetAttributeRatio(36-43)MFSetAttributeSize(26-33)crates/enc-mediafoundation/src/async_callback.rs (1)
new(22-27)crates/enc-mediafoundation/src/video/video_processor.rs (1)
new(55-213)crates/enc-mediafoundation/src/mft.rs (1)
enumerate(23-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (7)
crates/scap-direct3d/examples/cli.rs (1)
37-50: Incorrect suggestion for Direct3D example.

The `Capturer::start` in this crate has signature `fn start(&mut self) -> Result<()>`, so it doesn't return a handle. If you re-enable capture, simply call:

```rust
capturer.start().unwrap();
// …sleep or do work…
capturer.stop().unwrap();
```

Likely an incorrect or invalid review comment.
crates/camera-avfoundation/examples/cli.rs (3)
89-96: Double-check plane metrics for non-planar buffers.

For non-planar pixel buffers, plane_count() is 0; calling plane_* with index 0 may be undefined depending on the wrapper. Confirm that plane_bytes_per_row(0)/plane_height(0) are valid for non-planar formats in cidre; otherwise use a non-planar accessor (e.g., bytes_per_row()) if available.

128-135: Verify that desc.media_sub_type() matches kCVPixelBufferPixelFormatTypeKey expectations.

Some AVFormat descriptions expose codec subtypes; VideoDataOutput expects CVPixelBuffer pixel formats (e.g., 420f, 2vuy, BGRA). Confirm these values align for all devices; otherwise map to a supported CV pixel format list before applying.

136-145: Good: start_running() under the config lock.

This prevents AVFoundation from clobbering the active format at start. Nice catch and aligned with Apple's guidance.
crates/recording/src/capture_pipeline.rs (1)
494-512: Confirm ffmpeg Output Send/Sync before sharing across threads.

Arc<Mutex> assumes thread safety. Verify Output is Send + Sync in the ffmpeg bindings. If not, route all writes through a single writer task.
Would you like a small refactor sketch for a single-writer task that receives frames via channels?
crates/mediafoundation-utils/src/lib.rs (2)
15-16: MF_VERSION constant looks correct for Win7+ (0x00020070 = 131_184).

No changes needed.

54-60: Read-only Deref is fine.

Exposing &[u8] via Deref keeps this ergonomic and safe.
```rust
pub fn main() {
    let _devices = list_video_devices();
    let devices = _devices
        .iter()
        .enumerate()
        .map(|(i, v)| CaptureDeviceSelectOption(v, i))
        .collect::<Vec<_>>();

    let selected = Select::new("Select a device", devices).prompt().unwrap();
    let mut selected_device = _devices.get(selected.1).unwrap();
```
🛠️ Refactor suggestion
Handle “no devices” and avoid indexing back into the array for the selected item.
Selecting from an empty list panics; also you can use AsRef to avoid a second lookup.
```diff
 let selected = Select::new("Select a device", devices).prompt().unwrap();
-let mut selected_device = _devices.get(selected.1).unwrap();
+// Guard: no devices
+if selected_device_count == 0 {
+    // If needed, compute this as: let selected_device_count = _devices.len();
+    eprintln!("No video capture devices found.");
+    return;
+}
+// Prefer the selected item directly
+let mut selected_device = selected.as_ref();
```

Note: If you don't track selected_device_count elsewhere, insert before the prompt:
```rust
if devices.is_empty() {
    eprintln!("No video capture devices found.");
    return;
}
```

🤖 Prompt for AI Agents
In crates/camera-avfoundation/examples/cli.rs around lines 28 to 38, the code
can panic when no devices are returned and then does a second lookup into
_devices for the selected item; first check devices.is_empty() and print a
message and return early (e.g. eprintln!("No video capture devices found.");
return;) to handle the empty-list case, and avoid indexing back into the array
by using the selected value directly (use the captured reference/AsRef from the
selected option or destructure selected to obtain the device) so you don’t
perform a second get on _devices.
```rust
let mut _formats = vec![];

for (i, format) in formats.iter().enumerate() {
    let desc = format.format_desc();

    let color_space = desc
        .ext(cm::FormatDescExtKey::ycbcr_matrix())
        .map(|v| {
            v.try_as_string()
                .and_then(|v| YCbCrMatrix::try_from(v).ok())
        })
        .unwrap_or(Some(YCbCrMatrix::Rec601));

    let fr_ranges = format.video_supported_frame_rate_ranges();

    for fr_range in fr_ranges.iter() {
        _formats.push(Format {
            index: i,
            width: desc.dims().width,
            height: desc.dims().height,
            fourcc: desc.media_sub_type(),
            color_space,
            max_frame_rate: (
                fr_range.min_frame_duration().value,
                fr_range.min_frame_duration().scale,
            ),
        });
    }
}
```
🛠️ Refactor suggestion
Guard against devices that report zero supported frame-rate ranges.
An empty `_formats` leads to a panic at `remove(0)`.
```diff
-let selected_format = if _formats.len() > 1 {
+if _formats.is_empty() {
+    eprintln!("The selected device reports no supported frame-rate ranges.");
+    return;
+}
+let selected_format = if _formats.len() > 1 {
     inquire::Select::new("Select a format", _formats)
         .prompt()
         .unwrap()
 } else {
     _formats.remove(0)
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let formats = selected_device.formats();
let mut _formats = vec![];
for (i, format) in formats.iter().enumerate() {
    let desc = format.format_desc();
    let color_space = desc
        .ext(cm::FormatDescExtKey::ycbcr_matrix())
        .map(|v| {
            v.try_as_string()
                .and_then(|v| YCbCrMatrix::try_from(v).ok())
        })
        .unwrap_or(Some(YCbCrMatrix::Rec601));
    let fr_ranges = format.video_supported_frame_rate_ranges();
    for fr_range in fr_ranges.iter() {
        _formats.push(Format {
            index: i,
            width: desc.dims().width,
            height: desc.dims().height,
            fourcc: desc.media_sub_type(),
            color_space,
            max_frame_rate: (
                fr_range.min_frame_duration().value,
                fr_range.min_frame_duration().scale,
            ),
        });
    }
}
if _formats.is_empty() {
    eprintln!("The selected device reports no supported frame-rate ranges.");
    return;
}
let selected_format = if _formats.len() > 1 {
    inquire::Select::new("Select a format", _formats)
        .prompt()
        .unwrap()
} else {
    _formats.remove(0)
};
```
🤖 Prompt for AI Agents
In crates/camera-avfoundation/examples/cli.rs around lines 41 to 71, the code
assumes each device format has at least one supported frame‑rate range and thus
later removing index 0 panics when _formats is empty; update the loop to skip
formats whose video_supported_frame_rate_ranges() is empty (i.e., if
fr_ranges.is_empty() continue) so no entries are pushed for formats with no
frame‑rate ranges (optionally log or warn when skipping), ensuring _formats
remains nonempty only when valid entries were added and preventing the remove(0)
panic.
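The `remove(0)` panic being guarded against is easy to reproduce with any `Vec`; a minimal sketch of the suggested `is_empty` check, using a plain `u32` in place of the example's `Format` type:

```rust
// Return the first entry, or None when the list is empty.
// Without the guard, `remove(0)` on an empty Vec panics.
fn select_first(mut formats: Vec<u32>) -> Option<u32> {
    if formats.is_empty() {
        return None;
    }
    Some(formats.remove(0))
}
```

In the example CLI the `None` case maps to printing a message and returning early.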
```rust
// (diff context; several hunks from the same file)
if s.can_add_input(&input) {
    s.add_input(&input);
} else {
    panic!("can't add input");
}

s.add_output(&output);

// frame-callback hunk:
    image_buf.plane_bytes_per_row(0) * image_buf.plane_height(0)
};

let mut format = image_buf.pixel_format().0.to_be_bytes();
let format_fourcc = four_cc_to_str(&mut format);

println!(
    "New frame: {}x{}, {:.2}pts, {total_bytes} bytes, format={format_fourcc}",
    image_buf.width(),
    image_buf.height(),
    data.sample_buf.pts().value as f64 / data.sample_buf.pts().scale as f64,
)
})));

let mut output = av::capture::VideoDataOutput::new();
let mut _lock = selected_device.config_lock().unwrap();
```
🛠️ Refactor suggestion
Check can_add_output() and remove redundant pre-start format set.
Add the missing capability check and avoid setting the active format here since you set it again while holding the lock before start_running().
```diff
 session.configure(|s| {
     if s.can_add_input(&input) {
         s.add_input(&input);
     } else {
         panic!("can't add input");
     }
-
-    s.add_output(&output);
+    if s.can_add_output(&output) {
+        s.add_output(&output);
+    } else {
+        panic!("can't add output");
+    }
-
-    let mut _lock = selected_device.config_lock().unwrap();
-
-    _lock.set_active_format(&formats[selected_format.index]);
+    // Defer active format change until start_running() while holding the lock.
 });
```

📝 Committable suggestion
```rust
session.configure(|s| {
    if s.can_add_input(&input) {
        s.add_input(&input);
    } else {
        panic!("can't add input");
    }
    if s.can_add_output(&output) {
        s.add_output(&output);
    } else {
        panic!("can't add output");
    }
    // Defer active format change until start_running() while holding the lock.
});
```
🤖 Prompt for AI Agents
In crates/camera-avfoundation/examples/cli.rs around lines 112 to 121, the
session configuration is missing a can_add_output() capability check and
redundantly sets the device's active format before acquiring the config lock;
modify the block to first call s.can_add_output(&output) and only call
s.add_output(&output) when that returns true (otherwise panic or return an
error), and remove the earlier active format assignment from this configure
closure so the active format is only set while holding
selected_device.config_lock() prior to start_running().
```rust
std::thread::spawn({
    let sample_writer = sample_writer.clone();
    move || {
        unsafe { MFStartup(MF_VERSION, MFSTARTUP_FULL) }.unwrap();

        video_encoder.start().unwrap();

        while let Ok(e) = video_encoder.get_event() {
            match e {
                MediaFoundation::METransformNeedInput => {
                    let Some(frame) = frame_rx.recv().unwrap() else {
                        break;
                    };

                    if video_encoder.handle_needs_input(frame).is_err() {
                        break;
                    }
                }
                MediaFoundation::METransformHaveOutput => {
                    let output_sample = video_encoder.handle_has_output().unwrap();
                    sample_writer.write(stream_index, &output_sample).unwrap();
                }
                _ => {}
            }
        }

        video_encoder.finish().unwrap();
    }
});
```
🛠️ Refactor suggestion
Join the worker before stopping the writer to avoid races.
The thread may still write while main stops the writer; capture the JoinHandle and join before stop.
```diff
-std::thread::spawn({
+let worker = std::thread::spawn({
     let sample_writer = sample_writer.clone();
     move || {
@@
     }
 });
 pause();
 capturer.stop().unwrap();
-sample_writer.stop()?;
+let _ = worker.join();
+sample_writer.stop()?;
```

Also applies to: 175-179
🤖 Prompt for AI Agents
In crates/enc-mediafoundation/examples/cli.rs around lines 145-173 (and
similarly for the other spawn at 175-179), the worker thread is spawned but its
JoinHandle is not captured so main may stop the writer while the thread is still
running; capture the JoinHandle returned by std::thread::spawn (assign it to a
variable that lives until shutdown), and before stopping/closing the
sample_writer call join() on the handle to ensure the thread has finished
writing; apply the same change to the second spawn block at lines 175-179.
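The join-before-stop ordering can be sketched without Media Foundation. `SampleWriter` here is a hypothetical stand-in that errors if a write arrives after `stop()`, which is exactly the race the comment describes:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

struct SampleWriter {
    stopped: AtomicBool,
}

impl SampleWriter {
    fn write(&self) -> Result<(), &'static str> {
        if self.stopped.load(Ordering::SeqCst) {
            Err("write after stop")
        } else {
            Ok(())
        }
    }

    fn stop(&self) {
        self.stopped.store(true, Ordering::SeqCst);
    }
}

fn shutdown_in_order() -> Result<(), &'static str> {
    let writer = Arc::new(SampleWriter {
        stopped: AtomicBool::new(false),
    });

    let worker = thread::spawn({
        let writer = Arc::clone(&writer);
        move || -> Result<(), &'static str> {
            for _ in 0..100 {
                writer.write()?;
            }
            Ok(())
        }
    });

    // Join the worker BEFORE stopping the writer, as the review suggests.
    worker.join().expect("worker panicked")?;
    writer.stop();
    Ok(())
}
```

Stopping the writer before the join would let the worker observe `stopped == true` mid-loop and fail.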
```rust
let Some(frame) = frame_rx.recv().unwrap() else {
    break;
};
```
Handle channel close without panic.
recv().unwrap() will panic when senders drop; break cleanly instead.
```diff
-let Some(frame) = frame_rx.recv().unwrap() else {
-    break;
-};
+let frame = match frame_rx.recv() {
+    Ok(v) => v,
+    Err(_) => break,
+};
+let Some(frame) = frame else { break };
```

📝 Committable suggestion
```rust
let frame = match frame_rx.recv() {
    Ok(v) => v,
    Err(_) => break,
};
let Some(frame) = frame else { break };
```
🤖 Prompt for AI Agents
In crates/enc-mediafoundation/examples/cli.rs around lines 155-157, the code
uses recv().unwrap() which will panic when the channel is closed; replace the
unwrap with proper error handling by matching on frame_rx.recv() (or its
Result/Option) and break cleanly when it returns Err or None, otherwise proceed
with the received frame.
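The graceful-close pattern from the suggestion works with any `std::sync::mpsc` channel; here is a runnable sketch (the `Option<u32>` payload mimics the example's `Option<frame>` sentinel):

```rust
use std::sync::mpsc;

// Returns how many frames were processed before the channel closed.
fn drain_frames(rx: mpsc::Receiver<Option<u32>>) -> u32 {
    let mut processed = 0;
    loop {
        // Match on recv() instead of unwrap(): a closed channel ends the
        // loop cleanly rather than panicking.
        let frame = match rx.recv() {
            Ok(v) => v,
            Err(_) => break,
        };
        // A None payload is the explicit end-of-stream sentinel.
        let Some(_frame) = frame else { break };
        processed += 1;
    }
    processed
}

fn demo() -> u32 {
    let (tx, rx) = mpsc::channel();
    tx.send(Some(1)).unwrap();
    tx.send(Some(2)).unwrap();
    drop(tx); // senders gone: recv() returns Err, loop exits without panic
    drain_frames(rx)
}
```

Dropping all senders is indistinguishable from a deliberate shutdown here, which is exactly the behavior the review asks for.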
```rust
pub struct IMFMediaBufferLock<'a> {
    source: &'a IMFMediaBuffer,
    bytes: &'a mut [u8],
}
```
🛠️ Refactor suggestion
Make guard hold &[u8], not &mut [u8].
This avoids creating a unique mutable reference you cannot guarantee is unique.
Apply:
```diff
 pub struct IMFMediaBufferLock<'a> {
     source: &'a IMFMediaBuffer,
-    bytes: &'a mut [u8],
+    bytes: &'a [u8],
 }
```

📝 Committable suggestion
```rust
pub struct IMFMediaBufferLock<'a> {
    source: &'a IMFMediaBuffer,
    bytes: &'a [u8],
}
```
🤖 Prompt for AI Agents
In crates/mediafoundation-utils/src/lib.rs around lines 43 to 46, the
IMFMediaBufferLock guard currently stores bytes: &'a mut [u8] which creates an
exclusive mutable reference that may not be provably unique; change the field to
bytes: &'a [u8] instead, update any methods/uses to treat the guard as holding
an immutable slice (adjust signatures and internal logic to avoid returning or
storing &mut from the guard), and if mutable access is required, use controlled
safe APIs (e.g., provide explicit methods that perform copies or use interior
mutability/unsafe only in tightly-scoped, documented places) so no unique &mut
slice is retained on the guard.
```rust
impl<'a> DerefMut for IMFMediaBufferLock<'a> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        self.bytes
    }
}
```
🛠️ Refactor suggestion
Remove DerefMut; provide a separate lock_mut API if/when needed.
DerefMut reintroduces the unsafe aliasing problem discussed above.
Apply:
```diff
-impl<'a> DerefMut for IMFMediaBufferLock<'a> {
-    fn deref_mut(&mut self) -> &mut Self::Target {
-        self.bytes
-    }
-}
+// No DerefMut on read-only guard.
```

🤖 Prompt for AI Agents
In crates/mediafoundation-utils/src/lib.rs around lines 62–66, remove the impl
DerefMut for IMFMediaBufferLock<'a> (delete the block implementing deref_mut)
and remove any unused DerefMut import; instead, add a dedicated safe
mutable-access API (e.g., a pub fn lock_mut(&mut self) -> &mut [u8] or a guarded
mutable-lock type) that requires &mut self to obtain mutable bytes, ensuring no
DerefMut impl is present to avoid unsafe aliasing.
```rust
    pub last_value: Option<Control>,
    pub receiver: Receiver<Control>,
}
```
💡 Verification agent
🧩 Analysis chain
Don’t expose PipelineControlSignal internals; keep last_value private and hand out the receiver via an explicit API.
Making last_value public weakens invariants: external mutation can desync the cached state from the channel and alter blocking behavior. Public receiver also allows replacement, which combined with Clone can create multi-consumer drains that violate the “naive broadcast” intent.
Preferred:
- Keep both fields private; add an owning accessor to move the receiver out when truly needed (e.g., for selects).
- If you must expose one, keep `last_value` private at minimum and document the cloning caveat on `receiver`.
Apply this minimal revert and add an owning accessor:
```diff
 pub struct PipelineControlSignal {
-    pub last_value: Option<Control>,
-    pub receiver: Receiver<Control>,
+    last_value: Option<Control>,
+    receiver: Receiver<Control>,
 }
```

Add outside the shown range:
```rust
impl PipelineControlSignal {
    /// Consume self and return the underlying receiver.
    pub fn into_receiver(self) -> Receiver<Control> {
        self.receiver
    }
}
```

Optionally, add doc comments warning that cloning the receiver creates additional consumers and that `last_value` is an internal cache driving blocking semantics.
If external crates already depend on these being public, I can draft a transitional plan (doc warnings now, privatize in the next minor with a deprecation window). Want me to scan the workspace for direct .last_value/.receiver field access to gauge impact?
🏁 Script executed:
```bash
#!/bin/bash
# Find direct field accesses that would be affected by re-privatizing
rg -nP --type=rust -C2 '\.(last_value|receiver)\b'
```

Length of output: 3692
Privatize last_value and receiver, keeping accessor API
No external direct field usages were found; rely on the existing pub fn receiver(&self) and pub fn last_cached(&self). Make both fields private and add an owning accessor:
```diff
 pub struct PipelineControlSignal {
-    pub last_value: Option<Control>,
-    pub receiver: Receiver<Control>,
+    last_value: Option<Control>,
+    receiver: Receiver<Control>,
 }
```

```rust
impl PipelineControlSignal {
    /// Consume self and return the underlying receiver.
    pub fn into_receiver(self) -> Receiver<Control> {
        self.receiver
    }
}
```

Document that cloning the receiver creates additional consumers and that `last_value` is an internal cache driving blocking semantics.
🤖 Prompt for AI Agents
In crates/recording/src/pipeline/control.rs around lines 13 to 15, make the
struct fields last_value and receiver private (remove pub) since there are no
external direct usages, keep the existing pub fn receiver(&self) and pub fn
last_cached(&self) accessors, add a consuming accessor pub fn
into_receiver(self) -> Receiver<Control> that returns the owned receiver, and
update docs: note that cloning the receiver creates additional consumers and
that last_value is an internal cache driving blocking semantics and should not
be used externally.
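The privatized shape being requested can be sketched end to end in plain Rust. The exact body of `blocking_last` below is an assumption about the cached-recv semantics, not the crate's actual implementation:

```rust
use std::sync::mpsc::{channel, Receiver};

#[derive(Debug, PartialEq, Clone, Copy)]
enum Control {
    Play,
}

// Fields stay private so the cached value cannot drift from the channel.
struct PipelineControlSignal {
    last_value: Option<Control>,
    receiver: Receiver<Control>,
}

impl PipelineControlSignal {
    fn new(receiver: Receiver<Control>) -> Self {
        Self { last_value: None, receiver }
    }

    // Read the cache without touching the channel.
    fn last_cached(&self) -> Option<Control> {
        self.last_value
    }

    // Block for the next value and update the cache (assumed semantics).
    fn blocking_last(&mut self) -> Option<Control> {
        if let Ok(v) = self.receiver.recv() {
            self.last_value = Some(v);
        }
        self.last_value
    }

    /// Consume self and hand out the underlying receiver (e.g. for selects).
    fn into_receiver(self) -> Receiver<Control> {
        self.receiver
    }
}

fn demo() -> Option<Control> {
    let (tx, rx) = channel();
    let mut signal = PipelineControlSignal::new(rx);
    tx.send(Control::Play).unwrap();
    signal.blocking_last()
}
```

Because `into_receiver` takes `self` by value, a caller can opt out of the caching wrapper without ever being able to desync it.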
```rust
use scap_ffmpeg::*;
use scap_targets::*;
use std::time::Duration;
use windows::Win32::Graphics::Direct3D11::D3D11_BOX;
```
Fix crate/module name collision for the windows crate import.
Inside mod windows, windows::… resolves to the module, not the external crate; this import fails on Windows. Use an absolute path or rename the module.
```diff
-use windows::Win32::Graphics::Direct3D11::D3D11_BOX;
+// Use absolute path to the external `windows` crate to avoid colliding with `mod windows`.
+use ::windows::Win32::Graphics::Direct3D11::D3D11_BOX;
```
In crates/scap-direct3d/examples/cli.rs around line 12, the line "use
windows::Win32::Graphics::Direct3D11::D3D11_BOX;" conflicts with a local mod
windows (so it resolves to the module, not the external crate) and fails on
Windows; fix by referencing the external crate with an absolute path ("use
::windows::Win32::Graphics::Direct3D11::D3D11_BOX;") or by renaming the local
module (and updating its references) so "windows::…" unambiguously refers to the
external crate.
Actionable comments posted: 1
🧹 Nitpick comments (3)
crates/scap-screencapturekit/examples/cli.rs (3)
1-4: Optional: emit a helpful message on non-macOS instead of a silent no-op. Prevents confusion when running the example on unsupported platforms.
```diff
 fn main() {
     #[cfg(target_os = "macos")]
     macos::main();
+    #[cfg(not(target_os = "macos"))]
+    {
+        eprintln!("This example is macOS-only (ScreencaptureKit).");
+    }
 }
```
19-24: Remove commented-out exploratory code. These commented lines add noise to the example; rely on git history if needed later.
```diff
-// let windows = block_on(Window::list()).expect("Failed to list windows");
-// let window = windows
-//     .iter()
-//     .find(|w| w.title().map(|t| t.starts_with("native")).unwrap_or(false))
-//     .expect("No native window found");
```
35-40: Avoid per-frame `dbg!` spam in the callback. At 60 FPS this floods stderr. Gate it to debug builds.
```diff
-.with_output_sample_buf_cb(|frame| {
-    dbg!(frame.output_type());
-    // if let Some(image_buf) = buf.image_buf() {
-    //     image_buf.show();
-    // }
-})
+.with_output_sample_buf_cb(|frame| {
+    #[cfg(debug_assertions)]
+    {
+        eprintln!("{:?}", frame.output_type());
+    }
+})
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- crates/enc-mediafoundation/examples/cli.rs (1 hunks)
- crates/scap-screencapturekit/examples/cli.rs (1 hunks)
- crates/scap-targets/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/scap-targets/Cargo.toml
- crates/enc-mediafoundation/examples/cli.rs
🧰 Additional context used
🧬 Code graph analysis (1)
crates/scap-screencapturekit/examples/cli.rs (2)
crates/enc-mediafoundation/examples/cli.rs (2): `main` (1-4), `main` (187-231)
crates/scap-direct3d/examples/cli.rs (2): `main` (1-4), `main` (14-56)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
🔇 Additional comments (2)
crates/scap-screencapturekit/examples/cli.rs (2)
6-7: LGTM: clean OS-gated module organization. The `#[cfg(target_os = "macos")] mod macos` pattern keeps platform code isolated and maintains cross-platform buildability.
47-54: Start/stop sequence looks fine for an example. The waits bracket capture cleanly; error paths are surfaced via `expect`, which is acceptable for a demo.
| macos::main(); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix visibility: macos::main is private, causing a compile error on macOS.
The parent module calls macos::main() (Line 3), but the function inside the macos module is not public (Line 15). This won’t compile on macOS due to Rust’s module privacy rules.
Apply this diff:
mod macos {
@@
- fn main() {
+ pub(super) fn main() {Also applies to: 15-15
🤖 Prompt for AI Agents
In crates/scap-screencapturekit/examples/cli.rs around lines 2-3 (and
referencing the macos module definition at line 15), the call to macos::main()
fails because the function inside the macos module is private; change the
function's declaration to be public (pub fn main) or re-export it (pub use
self::macos::main) so the parent module can call macos::main(), then recompile
to confirm visibility is resolved.
Actionable comments posted: 4
♻️ Duplicate comments (7)
crates/recording/src/capture_pipeline.rs (7)
247-250: Media Foundation cleanup is missing. MFStartup/RoInitialize are called but never paired with MFShutdown/RoUninitialize. This could cause resource leaks.
308-309: Replace unwraps with proper error handling. Multiple `unwrap()` calls could cause panics. Use proper error propagation instead. Also applies to: 321-321, 390-390.
341-342: Proper drain handling needed for encoder. When the source channel closes, the encoder should be properly drained to avoid losing buffered frames. Call `encoder.finish()` to initiate the drain and continue processing `METransformHaveOutput` events until `METransformDrainComplete`. Also applies to: 532-534.
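The drain sequence described above can be simulated without any Windows APIs. `FakeEncoder` below is a hypothetical stand-in: `finish()` plays the role of the MFT drain command, and `next_event()` stands in for the Media Foundation event generator.

```rust
// Simulated MFT events; the real values come from the MF event stream.
#[derive(Clone, Copy)]
enum Event {
    NeedInput,
    HaveOutput,
    DrainComplete,
}

struct FakeEncoder {
    buffered: u32, // frames still held inside the encoder
    draining: bool,
}

impl FakeEncoder {
    // Stands in for sending MFT_MESSAGE_COMMAND_DRAIN.
    fn finish(&mut self) {
        self.draining = true;
    }

    fn next_event(&mut self) -> Event {
        if self.buffered > 0 {
            self.buffered -= 1;
            Event::HaveOutput
        } else if self.draining {
            Event::DrainComplete
        } else {
            Event::NeedInput
        }
    }
}

// Drain loop: after finish(), keep pulling HaveOutput until DrainComplete.
fn drain(encoder: &mut FakeEncoder) -> u32 {
    encoder.finish();
    let mut flushed = 0;
    loop {
        match encoder.next_event() {
            Event::HaveOutput => flushed += 1,
            Event::DrainComplete => return flushed,
            Event::NeedInput => {} // no more input is supplied during drain
        }
    }
}

fn demo() -> u32 {
    let mut enc = FakeEncoder { buffered: 3, draining: false };
    drain(&mut enc)
}
```

Stopping at the first event after `finish()` would lose the buffered frames; looping until `DrainComplete` recovers all of them.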
346-347: Don't panic on channel send failure. Replace `unwrap()` with error handling to avoid panics if the receiver is dropped. Also applies to: 407-408.
327-328: Remove duplicate Media Foundation initialization. MFStartup/RoInitialize are already called at lines 247-250, so this duplicate initialization is unnecessary.

```diff
-let _ = unsafe { RoInitialize(RO_INIT_MULTITHREADED) };
-let _ = unsafe { MFStartup(MF_VERSION, MFSTARTUP_FULL) };
+// Already initialized at function entry
```
568-570: Incorrect order and error handling in instant-mode finish. Similar issues with trailer/finish order and error handling.

```diff
-screen_encoder.finish().unwrap();
-
-output.lock().unwrap().write_trailer().unwrap();
+screen_encoder.finish()
+    .map_err(|e| format!("Failed to finish encoder: {e}"))?;
+
+output.lock().unwrap().write_trailer()
+    .map_err(|e| format!("Failed to write trailer: {e}"))?;
```
390-392: Incorrect order: write the trailer after the encoder finishes. The trailer should be written after `encoder.finish()` completes to ensure all frames are flushed.

```diff
-output.write_trailer().unwrap();
 let _ = encoder.finish();
+output.write_trailer().unwrap();
```
🧹 Nitpick comments (11)
crates/recording/src/sources/screen_capture/macos.rs (1)
2-2: Remove unused import `PlanarData`. `cap_ffmpeg_utils::PlanarData` isn't referenced in this module and will trigger an unused-import warning (may fail CI if warnings are denied).

```diff
-use cap_ffmpeg_utils::PlanarData;
```

crates/mediafoundation-ffmpeg/examples/usage.rs (7)
15-17: Initialize FFmpeg once (avoid redundant init calls). `ffmpeg::init()` is process-wide; call it once in `win::main` instead of in helpers.

```diff
-// Initialize FFmpeg
-ffmpeg::init()?;
```
41-46: Redundant `#[cfg(windows)]` block inside a Windows-only module. This module is already gated by `#[cfg(windows)]`. Remove the nested cfg to reduce noise.

```diff
-#[cfg(windows)]
-{
-    // Example: Write samples from MediaFoundation
-    // let sample: IMFSample = get_sample_from_media_foundation();
-    // h264_muxer.write_sample(&sample)?;
-}
+// Example: Write samples from MediaFoundation (this module is already Windows-only)
+// let sample: IMFSample = get_sample_from_media_foundation();
+// h264_muxer.write_sample(&sample)?;
```
106-118: Same timestamp-units concern as above. Ensure `write_h264_data` here also uses stream time_base units (or clearly document the conversion).
137-142: Avoid duplicate `ffmpeg::init()` calls here too. Defer init to `win::main`.

```diff
-// Initialize FFmpeg
-ffmpeg::init()?;
```
164-166: Comment contradicts the explicit finish call. Either rely on Drop or keep the explicit finish; align the comment.

```diff
-// The muxer automatically finishes and writes trailer when dropped
-muxer.finish()?;
+// Finish explicitly (alternatively rely on Drop)
+muxer.finish()?;
```
170-176: Centralize initialization in `win::main`. Initializing FFmpeg once at entry reduces duplication and clarifies lifecycle.

```diff
 fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // Initialize FFmpeg once
+    ffmpeg::init()?;
     println!("Example 1: Using H264StreamMuxer with shared output");
```
188-191: Keep timestamp units consistent in the integrated example. Mirror the earlier change: use a 1-tick duration for 30 fps if time_base is 1/fps.

```diff
-mp4_file.write_h264_data(&example_h264_data, 0, 0, 33333, true)?;
+mp4_file.write_h264_data(&example_h264_data, 0, 0, 1, true)?;
```

crates/enc-avfoundation/src/mp4.rs (1)
366-372: Unchecked unsafe operation needs proper documentation. The `result_unchecked` function performs an unchecked unwrap. While the name indicates this is intentional, it should have safety documentation explaining when this is safe to use. Add safety documentation:

```diff
+/// # Safety
+/// The caller must ensure that `op` will populate the Option on success.
+/// This function will perform an unchecked unwrap, causing undefined behavior if the Option is None.
 unsafe fn result_unchecked<T, R>(op: impl FnOnce(&mut Option<T>) -> R) -> cidre::os::Result<T>
 where
     R: Into<cidre::os::Result>,
```

crates/recording/src/capture_pipeline.rs (2)
616-674: Extract D3D device creation to a shared utility. This D3D device creation code duplicates logic from `crates/enc-mediafoundation/src/d3d.rs`. Consider using the existing implementation or moving this to a shared location. Since `cap_enc_mediafoundation::d3d::create_d3d_device()` already exists with the same logic, consider reusing it:

```diff
 #[cfg(windows)]
 pub fn create_d3d_device() -> windows::core::Result<windows::Win32::Graphics::Direct3D11::ID3D11Device> {
-    // ... entire implementation ...
+    cap_enc_mediafoundation::d3d::create_d3d_device()
 }
```
263-263: Magic number for bitrate multiplier should be documented. The value `0.07` lacks explanation. Consider extracting it to a named constant with documentation.

```diff
+// Bitrate multiplier of 0.07 provides a good balance between quality and file size
+const BITRATE_MULTIPLIER: f32 = 0.07;
 SizeInt32 {
     Width: screen_config.width as i32,
     Height: screen_config.height as i32,
 },
 source.0.config().fps(),
-0.07,
+BITRATE_MULTIPLIER,
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (7)
- crates/enc-avfoundation/Cargo.toml (1 hunks)
- crates/enc-avfoundation/src/mp4.rs (2 hunks)
- crates/mediafoundation-ffmpeg/examples/usage.rs (1 hunks)
- crates/recording/src/capture_pipeline.rs (10 hunks)
- crates/recording/src/sources/screen_capture/macos.rs (1 hunks)
- crates/scap-cpal/src/lib.rs (1 hunks)
- crates/scap-targets/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/scap-targets/Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (2)
crates/mediafoundation-ffmpeg/examples/usage.rs (1)
crates/mediafoundation-ffmpeg/src/h264.rs (3)
`add_stream` (28-81), `write_sample` (85-104), `finish` (153-165)
crates/recording/src/capture_pipeline.rs (6)
crates/recording/src/sources/screen_capture/mod.rs (3)
`init` (276-412), `info` (423-425), `d3d_device` (415-417)
crates/enc-ffmpeg/src/mux/mp4.rs (2): `init` (31-67), `video` (115-117)
crates/enc-mediafoundation/src/video/h264.rs (2): `new` (43-45), `new` (97-296)
crates/mediafoundation-ffmpeg/src/h264.rs (1): `add_stream` (28-81)
crates/mediafoundation-utils/src/lib.rs (1): `thread_init` (18-21)
crates/enc-mediafoundation/src/d3d.rs (2): `create_d3d_device` (39-56), `create_d3d_device_with_type` (19-37)
🪛 GitHub Check: Clippy
crates/recording/src/capture_pipeline.rs
[failure] 339-339: failed to resolve: use of unresolved module or unlinked crate windows
error[E0433]: failed to resolve: use of unresolved module or unlinked crate windows
--> crates/recording/src/capture_pipeline.rs:339:37
|
339 | ... use windows::Foundation::TimeSpan;
| ^^^^^^^ use of unresolved module or unlinked crate windows
|
= help: if you wanted to use a crate named windows, use cargo add windows to add it to your Cargo.toml
[failure] 338-338: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
error[E0433]: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
--> crates/recording/src/capture_pipeline.rs:338:37
|
338 | ... use cap_enc_mediafoundation::video::VideoEncoderInputSample;
| ^^^^^^^^^^^^^^^^^^^^^^^ use of unresolved module or unlinked crate cap_enc_mediafoundation
|
help: there is a crate or module with a similar name
|
338 - use cap_enc_mediafoundation::video::VideoEncoderInputSample;
338 + use cap_enc_avfoundation::video::VideoEncoderInputSample;
|
[failure] 325-325: failed to resolve: use of unresolved module or unlinked crate windows
error[E0433]: failed to resolve: use of unresolved module or unlinked crate windows
--> crates/recording/src/capture_pipeline.rs:325:25
|
325 | use windows::Win32::Media::MediaFoundation::{self, MFSTARTUP_FULL, MFStartup};
| ^^^^^^^ use of unresolved module or unlinked crate windows
|
= help: if you wanted to use a crate named windows, use cargo add windows to add it to your Cargo.toml
[failure] 324-324: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
error[E0433]: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
--> crates/recording/src/capture_pipeline.rs:324:25
|
324 | use cap_enc_mediafoundation::media::MF_VERSION;
| ^^^^^^^^^^^^^^^^^^^^^^^ use of unresolved module or unlinked crate cap_enc_mediafoundation
|
help: there is a crate or module with a similar name
|
324 - use cap_enc_mediafoundation::media::MF_VERSION;
324 + use cap_enc_avfoundation::media::MF_VERSION;
|
[failure] 239-239: failed to resolve: use of unresolved module or unlinked crate windows
error[E0433]: failed to resolve: use of unresolved module or unlinked crate windows
--> crates/recording/src/capture_pipeline.rs:239:13
|
239 | use windows::{
| ^^^^^^^ use of unresolved module or unlinked crate windows
|
= help: if you wanted to use a crate named windows, use cargo add windows to add it to your Cargo.toml
[failure] 238-238: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
error[E0433]: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
--> crates/recording/src/capture_pipeline.rs:238:13
|
238 | use cap_enc_mediafoundation::{media::MF_VERSION, video::SampleWriter};
| ^^^^^^^^^^^^^^^^^^^^^^^ use of unresolved module or unlinked crate cap_enc_mediafoundation
|
help: there is a crate or module with a similar name
|
238 - use cap_enc_mediafoundation::{media::MF_VERSION, video::SampleWriter};
238 + use cap_enc_avfoundation::{media::MF_VERSION, video::SampleWriter};
|
[warning] 438-438: unused imports: H264Encoder and MP4File
warning: unused imports: H264Encoder and MP4File
--> crates/recording/src/capture_pipeline.rs:438:56
|
438 | use cap_enc_ffmpeg::{AACEncoder, AudioEncoder, H264Encoder, MP4File};
| ^^^^^^^^^^^ ^^^^^^^
|
= note: #[warn(unused_imports)] on by default
[failure] 439-439: failed to resolve: use of unresolved module or unlinked crate windows
error[E0433]: failed to resolve: use of unresolved module or unlinked crate windows
--> crates/recording/src/capture_pipeline.rs:439:13
|
439 | use windows::Graphics::SizeInt32;
| ^^^^^^^ use of unresolved module or unlinked crate windows
|
= help: if you wanted to use a crate named windows, use cargo add windows to add it to your Cargo.toml
[failure] 530-530: failed to resolve: use of unresolved module or unlinked crate windows
error[E0433]: failed to resolve: use of unresolved module or unlinked crate windows
--> crates/recording/src/capture_pipeline.rs:530:29
|
530 | use windows::Foundation::TimeSpan;
| ^^^^^^^ use of unresolved module or unlinked crate windows
|
= help: if you wanted to use a crate named windows, use cargo add windows to add it to your Cargo.toml
[failure] 529-529: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
error[E0433]: failed to resolve: use of unresolved module or unlinked crate cap_enc_mediafoundation
--> crates/recording/src/capture_pipeline.rs:529:29
|
529 | use cap_enc_mediafoundation::video::VideoEncoderInputSample;
| ^^^^^^^^^^^^^^^^^^^^^^^ use of unresolved module or unlinked crate cap_enc_mediafoundation
|
help: there is a crate or module with a similar name
|
529 - use cap_enc_mediafoundation::video::VideoEncoderInputSample;
529 + use cap_enc_avfoundation::video::VideoEncoderInputSample;
|
[failure] 518-518: failed to resolve: use of unresolved module or unlinked crate windows
error[E0433]: failed to resolve: use of unresolved module or unlinked crate windows
--> crates/recording/src/capture_pipeline.rs:518:17
|
518 | use windows::Win32::Media::MediaFoundation;
| ^^^^^^^ use of unresolved module or unlinked crate windows
|
= help: if you wanted to use a crate named windows, use cargo add windows to add it to your Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (9)
crates/scap-cpal/src/lib.rs (1)
2-3: Cleanup of unused CPAL error imports verified.

No remaining references to DefaultStreamConfigError or BuildStreamError were found in the codebase; the removal is safe and complete.
crates/mediafoundation-ffmpeg/examples/usage.rs (3)
120-134: Finish is idempotent; nice guard.

is_finished prevents double-finalization, and the trailer is then written exactly once. Looks good.
95-104: No action required: Win32_Media_MediaFoundation is already enabled in Cargo.toml.

The crates/mediafoundation-ffmpeg/Cargo.toml under [target.'cfg(windows)'.dependencies] includes "Win32_Media_MediaFoundation" in its windows features.
49-57: Verify timestamp units and SPS/PPS handling
- Confirm whether write_h264_data expects timestamps in microseconds or in stream time_base units, and update the example's comments accordingly.
- Raw Annex B NALs (e.g. vec![0,0,0,1,0x65]) lack SPS/PPS; MP4 (avcC) requires SPS/PPS in extradata or via in-band extraction; include them or document feeding them.

crates/enc-avfoundation/Cargo.toml (2)
2-2: LGTM! Package rename aligns with modular encoder architecture.

The rename from cap-media-encoders to cap-enc-avfoundation better reflects the crate's macOS-specific purpose and follows the new naming convention alongside cap-enc-ffmpeg and cap-enc-mediafoundation.
8-8: LGTM! Good extraction of shared FFmpeg utilities.

Moving PlanarData and other FFmpeg utilities to a dedicated cap-ffmpeg-utils crate promotes code reuse across encoder implementations.

crates/enc-avfoundation/src/mp4.rs (2)
1-2: LGTM! Import changes align with the architectural refactor.

Moving PlanarData to cap_ffmpeg_utils and keeping only AudioInfo/VideoInfo in cap_media_info creates a cleaner separation of concerns.
8-8: Public API rename improves clarity.

The rename from MP4AVAssetWriterEncoder to MP4Encoder is more concise while still being descriptive.

Also applies to: 15-17
crates/recording/src/capture_pipeline.rs (1)
410-412: Handle frame conversion errors gracefully.

The as_ffmpeg() call can fail on format mismatches. Handle the error instead of using unwrap().

- let ff_frame = frame
-     .0
-     .as_ffmpeg()
-     .map_err(|e| format!("FrameAsFfmpeg: {e}"))?;
+ let ff_frame = match frame.0.as_ffmpeg() {
+     Ok(ff) => ff,
+     Err(e) => {
+         tracing::error!("Failed to convert frame: {e}");
+         continue;
+     }
+ };

Likely an incorrect or invalid review comment.
fn main() {
    #[cfg(windows)]
    win::main();
}
🛠️ Refactor suggestion
Don't discard the Result from win::main; propagate errors and handle non-Windows gracefully.
Ignoring Result triggers a must_use warning and may fail CI with deny(warnings). Return a Result from the top-level main and provide a no-op path on non-Windows.
-fn main() {
- #[cfg(windows)]
- win::main();
-}
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+ #[cfg(windows)]
+ return win::main();
+ #[cfg(not(windows))]
+ {
+ eprintln!("This example runs only on Windows.");
+ Ok(())
+ }
+}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    #[cfg(windows)]
    return win::main();
    #[cfg(not(windows))]
    {
        eprintln!("This example runs only on Windows.");
        Ok(())
    }
}
🤖 Prompt for AI Agents
In crates/mediafoundation-ffmpeg/examples/usage.rs around lines 1 to 4, the call
to win::main() currently discards its Result which triggers must_use warnings
and breaks CI; change the top-level main to return a Result (e.g., fn main() ->
Result<(), Box<dyn std::error::Error>> or use anyhow::Result<()>), forward the
Result from the Windows entry (use win::main()? inside a cfg(windows) branch),
and provide a no-op Ok(()) path for non-Windows builds (cfg(not(windows)) {
Ok(()) }) so errors are propagated and non-Windows platforms compile cleanly.
use cap_mediafoundation_ffmpeg::{H264StreamMuxer, MuxerConfig};
use ffmpeg::format;
use std::path::PathBuf;
💡 Verification agent
🧩 Analysis chain
Crate import name mismatch (likely compile failure).
The first import uses cap_mediafoundation_ffmpeg while later you use mediafoundation_ffmpeg (Line 138). Unify to the actual crate name (hyphens in Cargo.toml become underscores in use paths).
Run the script below to confirm the crate name(s) defined in Cargo.toml and fix imports accordingly.
- use cap_mediafoundation_ffmpeg::{H264StreamMuxer, MuxerConfig};
+ use mediafoundation_ffmpeg::{H264StreamMuxer, MuxerConfig};

🏁 Script executed:
#!/bin/bash
# Verify crate names and imports
rg -nP '^\s*name\s*=\s*".*mediafoundation.*ffmpeg.*"' -g '**/Cargo.toml' -C2
rg -nP '\buse\s+(cap_)?mediafoundation_ffmpeg' -g '**/*.rs' -C1

Length of output: 945
Use the correct crate import path (cap_mediafoundation_ffmpeg) consistently.
Replace the later import of mediafoundation_ffmpeg with the actual crate name defined in Cargo.toml:
- use mediafoundation_ffmpeg::H264SampleMuxerOwned;
+ use cap_mediafoundation_ffmpeg::H264SampleMuxerOwned;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
use cap_mediafoundation_ffmpeg::{H264StreamMuxer, MuxerConfig};
use ffmpeg::format;
use std::path::PathBuf;
use cap_mediafoundation_ffmpeg::H264SampleMuxerOwned;
🤖 Prompt for AI Agents
In crates/mediafoundation-ffmpeg/examples/usage.rs around lines 8 to 11, the
example imports use an inconsistent crate path: you currently import
cap_mediafoundation_ffmpeg at the top but later import mediafoundation_ffmpeg;
update that later import to use cap_mediafoundation_ffmpeg so all crate imports
match the name in Cargo.toml, and ensure any symbols referenced from the later
import are adjusted to the same cap_mediafoundation_ffmpeg path.
Actionable comments posted: 1
♻️ Duplicate comments (4)
crates/mediafoundation-ffmpeg/src/h264.rs (4)
93-102: Avoid unwrap and bump the frame counter only after a successful write.

Propagate a clear error if the stream is missing, and increment the counter only after a successful interleaved write.
- packet.rescale_ts(
-     self.time_base,
-     output.stream(self.stream_index).unwrap().time_base(),
- );
-
- packet.write_interleaved(output)?;
-
- Ok(())
+ let dst_tb = output
+     .stream(self.stream_index)
+     .ok_or("invalid stream index")?
+     .time_base();
+ packet.rescale_ts(self.time_base, dst_tb);
+
+ packet.write_interleaved(output)?;
+ self.frame_count = self.frame_count.saturating_add(1);
+ Ok(())
34-40: Use the encoder codec when adding a stream (not the decoder).

Muxers expect an encoder descriptor. Using the decoder here can yield wrong defaults and header-writing failures.
Apply:
- // Find H264 codec
- let h264_codec = ffmpeg::codec::decoder::find(ffmpeg::codec::Id::H264)
-     .ok_or(ffmpeg::Error::DecoderNotFound)?;
+ // Find H264 encoder descriptor for the stream
+ let h264_codec = ffmpeg::codec::encoder::find(ffmpeg::codec::Id::H264)
+     .ok_or(ffmpeg::Error::EncoderNotFound)?;
45-54: Don't set a pixel format on packetized H.264; add (or plan) extradata for MP4.

For compressed H.264, codecpar.format should be NONE. MP4 also needs SPS/PPS (AVCC) in codecpar.extradata (e.g., from MF_MT_MPEG_SEQUENCE_HEADER).
Apply minimal fix now:
- (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NV12 as i32;
+ // Packetized H.264 bitstream: leave pixel format unspecified
+ (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NONE as i32;

Follow-up: plumb the AVC Decoder Configuration Record into codecpar.extradata/extradata_size (I can draft this once you expose it on MuxerConfig or via an init call).
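For the in-band route, SPS/PPS can be located by scanning Annex B start codes. A minimal sketch under that assumption; find_sps_pps is a hypothetical helper for illustration, not part of this crate:

```rust
/// Scan an Annex B H.264 buffer and return the (SPS, PPS) payloads if present.
/// Hypothetical helper for building avcC extradata; illustration only.
fn find_sps_pps(data: &[u8]) -> (Option<Vec<u8>>, Option<Vec<u8>>) {
    let mut sps = None;
    let mut pps = None;
    // Collect (start_code_offset, payload_offset) for 3- and 4-byte start codes.
    let mut starts = Vec::new();
    let mut i = 0;
    while i + 3 <= data.len() {
        if data[i] == 0 && data[i + 1] == 0 {
            if data[i + 2] == 1 {
                starts.push((i, i + 3));
                i += 3;
                continue;
            }
            if i + 4 <= data.len() && data[i + 2] == 0 && data[i + 3] == 1 {
                starts.push((i, i + 4));
                i += 4;
                continue;
            }
        }
        i += 1;
    }
    for (idx, &(_, payload_start)) in starts.iter().enumerate() {
        // Each NAL runs until the next start code (or end of buffer).
        let end = starts.get(idx + 1).map(|&(s, _)| s).unwrap_or(data.len());
        let nal = &data[payload_start..end];
        if nal.is_empty() {
            continue;
        }
        // Low 5 bits of the first payload byte are the NAL unit type.
        match nal[0] & 0x1F {
            7 => sps = Some(nal.to_vec()),
            8 => pps = Some(nal.to_vec()),
            _ => {}
        }
    }
    (sps, pps)
}
```

The collected payloads can then be packed into an avcC box for codecpar.extradata.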
119-124: Set DTS from MFSampleExtension_DecodeTimestamp when available.

PTS == DTS breaks B-frames. Use MF's decode timestamp when present; otherwise fall back to PTS.
- let pts = unsafe { sample.GetSampleTime() }
+ let pts = unsafe { sample.GetSampleTime() }
      .ok()
      .map(|v| mf_from_mf_time(self.time_base, v));
  packet.set_pts(pts);
- packet.set_dts(pts);
+ let mut dts = pts;
+ #[cfg(windows)]
+ if let Ok(decode_timestamp) = unsafe {
+     sample.GetUINT64(
+         &windows::Win32::Media::MediaFoundation::MFSampleExtension_DecodeTimestamp
+     )
+ } {
+     dts = Some(mf_from_mf_time(self.time_base, decode_timestamp as i64));
+ }
+ packet.set_dts(dts);
@@
- // if let Ok(decode_timestamp) =
- //     unsafe { sample.GetUINT64(&MFSampleExtension_DecodeTimestamp) }
- // {
- //     packet.set_dts(Some(mf_from_mf_time(
- //         self.time_base,
- //         decode_timestamp as i64,
- //     )));
- // }
+ // (DTS handled above)

Also applies to: 139-147
🧹 Nitpick comments (5)
crates/recording/examples/recording-cli.rs (4)
16-16: Make the log level configurable via RUST_LOG.

Initialize tracing with EnvFilter so developers can control verbosity without code changes.
- tracing_subscriber::fmt::init();
+ use tracing_subscriber::EnvFilter;
+ tracing_subscriber::fmt()
+     .with_env_filter(EnvFilter::from_default_env()) // e.g., RUST_LOG=cap_recording=debug,recording_cli=trace
+     .init()
33-35: Expose camera/mic toggles for easier manual testing.

Right now both camera_feed and mic_feed are None. Consider wiring simple CLI flags (e.g., --mic, --camera) to pass through optional feeds for validation during development.
25-25: Rename _ready_rx to _done_rx (reflects semantics).

The returned channel resolves when the actor finishes, not when it is "ready"; the current name could mislead.
- let (handle, _ready_rx) = cap_recording::instant_recording::spawn_instant_recording_actor(
+ let (handle, _done_rx) = cap_recording::instant_recording::spawn_instant_recording_actor(
18-20: Remove leftover /tmp/bruh scaffolding.

These filesystem ops are unused and may be surprising on Windows. The tempfile::tempdir below is sufficient.
- let _ = std::fs::remove_dir_all("/tmp/bruh");
- let _ = std::fs::create_dir("/tmp/bruh");

crates/mediafoundation-ffmpeg/src/h264.rs (1)
42-44: Time base choice: consider 1/fps and let the muxer refine.

1/(fps*1000) works but is unusual; 1/fps is simpler, and the muxer will often override it on write_header anyway.
- let time_base = ffmpeg::Rational::new(1, config.fps as i32 * 1000);
+ let time_base = ffmpeg::Rational::new(1, config.fps as i32);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
- crates/enc-ffmpeg/src/audio/aac.rs (1 hunks)
- crates/mediafoundation-ffmpeg/src/h264.rs (1 hunks)
- crates/recording/examples/recording-cli.rs (2 hunks)
- crates/recording/src/capture_pipeline.rs (10 hunks)
- crates/recording/src/studio_recording.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/recording/src/capture_pipeline.rs
- crates/recording/src/studio_recording.rs
🧰 Additional context used
🧬 Code graph analysis (2)
crates/mediafoundation-ffmpeg/src/h264.rs (2)
crates/mediafoundation-ffmpeg/examples/usage.rs (2)
  new (76-93), finish (120-133)
crates/camera-mediafoundation/src/lib.rs (2)
  new (39-64), new (490-498)
crates/recording/examples/recording-cli.rs (2)
crates/scap-targets/src/lib.rs (1)
  primary (20-22)
crates/recording/src/instant_recording.rs (1)
spawn_instant_recording_actor(161-262)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (rust)
🔇 Additional comments (6)
crates/recording/examples/recording-cli.rs (1)
5-5: Good move: switch to tracing + structured logging.

Replacing println! with tracing::info! is the right call for consistent, structured logs.
Also applies to: 23-23
crates/mediafoundation-ffmpeg/src/h264.rs (4)
131-135: Keyframe flagging via CleanPoint looks good.

Good use of the MF clean point to mark IDR frames for muxers.
15-22: Struct design is clean and minimal.

Fields are sufficient for muxing state; the public surface is narrow. LGTM.
151-164: Finish semantics are sane.

Idempotent, and trailer writing is left to the caller; that fits the examples flow.
177-181: MF timebase conversion looks correct.

Using 1/10,000,000 and av_rescale_q is appropriate.
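The conversion itself is just a rational rescale from MF's fixed 1/10,000,000 tick base into the stream time base. A plain-integer sketch standing in for av_rescale_q (mf_time_to_tb is a hypothetical name; unlike av_rescale_q, this truncates rather than rounds):

```rust
/// Rescale a Media Foundation timestamp (100 ns ticks) into a stream
/// time base of tb_num/tb_den. Plain-integer stand-in for av_rescale_q.
fn mf_time_to_tb(ticks: i64, tb_num: i64, tb_den: i64) -> i64 {
    // MF timestamps always use a fixed 1/10_000_000 time base.
    const MF_DEN: i64 = 10_000_000;
    // ticks * (1/MF_DEN) / (tb_num/tb_den) = ticks * tb_den / (MF_DEN * tb_num).
    // Widen to i128 so long recordings cannot overflow the intermediate product.
    ((ticks as i128 * tb_den as i128) / (MF_DEN as i128 * tb_num as i128)) as i64
}
```

For example, one second of ticks lands on exactly one second's worth of 90 kHz units.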
crates/enc-ffmpeg/src/audio/aac.rs (1)
8-10: Import placement follows module organization standards.

Moving the standard library import (std::collections::VecDeque) and the crate import (crate::AudioEncoder) after the external crate imports is a common Rust convention. This creates a clear separation between external dependencies and internal/standard library code, improving readability and maintainability.
let (handle, _ready_rx) = cap_recording::instant_recording::spawn_instant_recording_actor(
    "test".to_string(),
    dir.path().into(),
    RecordingBaseInputs {
        capture_target: ScreenCaptureTarget::Display {
            id: Display::primary().id(),
        },
        // ScreenCaptureTarget::Window {
        //     id: Window::list()
        //         .into_iter()
        //         .find(|w| w.owner_name().unwrap_or_default().contains("Brave"))
        //         .unwrap()
        //         .id(),
        // },
        capture_system_audio: true,
        mic_feed: &None,
        camera_feed: None,
        mic_feed: None,
    },
    None,
    true,
    // true,
)
💡 Verification agent
🧩 Analysis chain
API migration looks correct; consider cleaning up call-site leftovers.
- The new three-arg instant_recording::spawn_instant_recording_actor usage matches the current signature.
- Please remove the stale trailing comment (// true) to avoid confusion with the previous API.
- },
- // true,
+ },

Optionally verify no old call sites remain:
🏁 Script executed:
#!/bin/bash
# Find legacy spawner usage to ensure all sites migrated.
rg -nP '\bspawn_studio_recording_actor\b' -g '!**/target/**' -S

Length of output: 501
Migrate remaining legacy API calls and remove stale comment
- Remove the obsolete // true trailing comment in crates/recording/examples/recording-cli.rs.
- Replace all spawn_studio_recording_actor invocations with instant_recording::spawn_instant_recording_actor in:
  - apps/cli/src/record.rs:61
  - apps/desktop/src-tauri/src/recording.rs:393
🤖 Prompt for AI Agents
In crates/recording/examples/recording-cli.rs around lines 25 to 37, remove the
obsolete trailing comment "// true" and ensure the call ends cleanly;
additionally replace legacy spawn_studio_recording_actor calls with
instant_recording::spawn_instant_recording_actor in the two other files listed
(apps/cli/src/record.rs at line ~61 and apps/desktop/src-tauri/src/recording.rs
at line ~393), updating the function name and module path for each invocation
and any corresponding imports or argument shape differences so the calls compile
against the new instant_recording API.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/recording/src/sources/screen_capture/windows.rs (1)
15-15: Import the tracing macros used below.

error! and warn! are used but not in scope; this won't compile.
-use tracing::{info, trace};
+use tracing::{error, info, trace, warn};
♻️ Duplicate comments (10)
crates/recording/examples/recording-cli.rs (1)
36-38: Remove the obsolete trailing comment.

The leftover "// true" confuses the current API usage.
- false,
- // true,
+ false,

crates/recording/src/capture_pipeline.rs (1)
319-321: Don't panic if the timestamp receiver is gone.

Unwrap on send can crash the encoder task when the consumer shuts down.
- timestamp_tx.send(timestamp).unwrap();
+ let _ = timestamp_tx.send(timestamp);

crates/enc-mediafoundation/src/video/h264.rs (1)
425-427: Fix fps underflow in the bitrate calculation.

(fps - 30) underflows for fps < 30. Use saturating_sub.
-fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
-    ((width * height * ((fps - 30) / 2 + 30)) as f32 * multiplier) as u32
-}
+fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
+    let fps_factor = 30 + fps.saturating_sub(30) / 2;
+    let pixels_per_sec = width.saturating_mul(height).saturating_mul(fps_factor);
+    (pixels_per_sec as f32 * multiplier) as u32
+}

crates/enc-mediafoundation/examples/cli.rs (7)
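The saturating variant can be sanity-checked in isolation: below 30 fps the factor clamps at 30 instead of wrapping. The function below copies the suggestion above for illustration only; it is not the crate's code:

```rust
// Saturating version of the bitrate heuristic from the suggestion above.
// fps below 30 contributes a factor of exactly 30 instead of underflowing.
fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
    let fps_factor = 30 + fps.saturating_sub(30) / 2;
    let pixels_per_sec = width.saturating_mul(height).saturating_mul(fps_factor);
    (pixels_per_sec as f32 * multiplier) as u32
}
```

With the original (fps - 30), a 24 fps recording would wrap to a factor near u32::MAX; here 24 fps and 30 fps produce the same bitrate, and 60 fps produces a strictly larger one.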
41-44: Add proper cleanup for MF and WinRT initialization.

The code initializes WinRT and Media Foundation but never cleans them up, which can lead to resource leaks. Add corresponding shutdown calls in a finally block or use RAII patterns.
Apply this diff to add proper cleanup:
- Ok(())
+ unsafe {
+     windows::Win32::Media::MediaFoundation::MFShutdown().ok();
+     windows::Win32::System::WinRT::RoUninitialize();
+ }
+ Ok(())
85-85: The computed bit_rate is unused and should incorporate the configurable bitrate.

The bit_rate variable calculated here is not used in the H264Encoder::new call, which hardcodes 0.07 as the bitrate multiplier.
Based on the past review comments, this needs to be addressed by calculating a proper bitrate multiplier and using it in the encoder initialization. The bit_rate should be converted and used as shown in the previous review suggestions.
124-130: Fix the H264Encoder::new call to use the computed bit_rate instead of a hardcoded value.

The encoder is being created with a hardcoded bitrate multiplier (0.07) instead of using the computed bit_rate from the CLI arguments.
From the relevant code snippets, the H264Encoder::new signature expects
(d3d_device, format, resolution, frame_rate, bitrate_multiplier). The computed bit_rate should be converted to a proper bitrate multiplier.
142-179: Capture the worker thread's JoinHandle to prevent race conditions.

The spawned worker thread is not joined before stopping the sample writer, which could cause race conditions where the writer is stopped while the thread is still encoding.
Apply this diff to capture and join the worker thread:
- std::thread::spawn({
+ let worker = std::thread::spawn({
      let sample_writer = sample_writer.clone();
      move || {
          // ... existing code ...
      }
  });
  pause();
  capturer.stop().unwrap();
+ let _ = worker.join();
  sample_writer.stop()?;
145-145: Remove the redundant MFStartup call in the worker thread.

MFStartup is already called in the main thread at Line 44. Calling it again in the worker thread is redundant and should be removed.
Apply this diff:
- unsafe { MFStartup(MF_VERSION, MFSTARTUP_FULL) }.unwrap();
+ // MFStartup already called in main thread
152-154: Handle channel close gracefully without panic.

Using recv().unwrap() will panic when the channel is closed. This should be handled gracefully.

Apply this diff to handle channel closure:
- let Some(frame) = frame_rx.recv().unwrap() else {
-     break;
- };
+ let frame = match frame_rx.recv() {
+     Ok(v) => v,
+     Err(_) => break,
+ };
+ let Some(frame) = frame else { break };
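The same pattern can be exercised with a plain std::sync::mpsc channel: a dropped sender ends the loop instead of panicking. Illustrative sketch only; drain_frames is a hypothetical stand-in for the encoder worker loop:

```rust
use std::sync::mpsc;
use std::thread;

/// Drain Option-wrapped frames until the channel closes or a None
/// end-of-stream marker arrives. Never panics on a closed channel.
fn drain_frames(rx: mpsc::Receiver<Option<u32>>) -> Vec<u32> {
    let mut frames = Vec::new();
    loop {
        // A closed channel yields Err, which we treat as end of input,
        // instead of panicking via recv().unwrap().
        let msg = match rx.recv() {
            Ok(v) => v,
            Err(_) => break,
        };
        // None is the explicit end-of-stream marker.
        let Some(frame) = msg else { break };
        frames.push(frame);
    }
    frames
}
```

Dropping the sender without ever sending the None marker still terminates the loop cleanly.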
241-251: Fix undefined VideoEncoderDevice type.
VideoEncoderDevice is not defined. Based on the past review comments, this should use mft::EncoderDevice.

Apply this diff to fix the type:
  fn enum_encoders() -> Result<()> {
+     use cap_enc_mediafoundation::mft::EncoderDevice;
-     let encoder_devices = VideoEncoderDevice::enumerate()?;
+     let encoder_devices = EncoderDevice::enumerate()?;
      if encoder_devices.is_empty() {
          exit_with_error("No hardware H264 encoders found!");
      }
🧹 Nitpick comments (7)
crates/recording/src/lib.rs (1)
9-11: Re-export InstantRecordingHandle for API symmetry.

Expose the handle alongside the actor and spawn fn to mirror the Studio API surface and simplify imports at call sites.
  pub use instant_recording::{
-     CompletedInstantRecording, InstantRecordingActor, spawn_instant_recording_actor,
+     CompletedInstantRecording,
+     InstantRecordingActor,
+     InstantRecordingHandle,
+     spawn_instant_recording_actor,
  };

crates/recording/examples/recording-cli.rs (1)
18-20: Drop debug filesystem ops.

The /tmp/bruh deletes/creates are unused here.
- let _ = std::fs::remove_dir_all("/tmp/bruh");
- let _ = std::fs::create_dir("/tmp/bruh");

crates/recording/src/capture_pipeline.rs (2)
240-241: Pair MF/Ro startup with shutdown at task end.

Initialization goes through thread_init(), but MFShutdown()/RoUninitialize() are never called. Add shutdown in task epilogues to avoid resource pinning across runs.
Example (at end of the relevant task closures):
+ unsafe {
+     windows::Win32::Media::MediaFoundation::MFShutdown().ok();
+     windows::Win32::System::WinRT::RoUninitialize();
+ }

Also applies to: 306-307, 510-511
628-686: Deduplicate D3D device creation.

Reuse enc-mediafoundation's create_d3d_device() to avoid drift and double maintenance.
  #[cfg(windows)]
  pub fn create_d3d_device() -> windows::core::Result<windows::Win32::Graphics::Direct3D11::ID3D11Device> {
-     use windows::Win32::Graphics::{
-         Direct3D::{D3D_DRIVER_TYPE, D3D_DRIVER_TYPE_HARDWARE},
-         Direct3D11::{D3D11_CREATE_DEVICE_FLAG, ID3D11Device},
-     };
-     ...
-     Ok(device.unwrap())
+     cap_enc_mediafoundation::create_d3d_device()
  }

crates/recording/src/sources/screen_capture/windows.rs (2)
2-6: Remove the unused import (TimeSpan).

Keep imports minimal to avoid warnings.
-use ::windows::{
-    Foundation::TimeSpan,
+use ::windows::{
     Graphics::Capture::GraphicsCaptureItem,
     Win32::Graphics::Direct3D11::{D3D11_BOX, ID3D11Device},
 };
251-251: Avoid setting min_update_interval twice.

This initial 16 ms value is immediately overridden below; drop the first set to reduce confusion.
- min_update_interval: Some(Duration::from_millis(16)),

crates/enc-mediafoundation/examples/cli.rs (1)
373-373: Avoid unwrap() during error cleanup.

Using unwrap() in the Drop implementation could panic during cleanup, which is problematic in destructors.

Apply this diff to handle errors gracefully:
- unsafe { UnregisterHotKey(None, self.id).ok().unwrap() }
+ unsafe {
+     let _ = UnregisterHotKey(None, self.id);
+ }
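The underlying rule is that Drop must not panic, because a panic while unwinding aborts the process. A platform-neutral sketch with a hypothetical Resource type standing in for the hotkey guard:

```rust
use std::cell::Cell;
use std::rc::Rc;

struct Resource {
    released: Rc<Cell<bool>>,
}

impl Resource {
    // Fallible cleanup standing in for a call like UnregisterHotKey.
    fn try_release(&self) -> Result<(), &'static str> {
        self.released.set(true);
        Ok(())
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        // `let _ =` swallows the error; unwrap() here could panic mid-unwind
        // and turn an ordinary error into a process abort.
        let _ = self.try_release();
    }
}
```

The shared flag makes the cleanup observable after the guard goes out of scope.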
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- crates/enc-mediafoundation/examples/cli.rs (1 hunks)
- crates/enc-mediafoundation/src/video/h264.rs (1 hunks)
- crates/mediafoundation-ffmpeg/src/h264.rs (1 hunks)
- crates/recording/examples/recording-cli.rs (3 hunks)
- crates/recording/src/capture_pipeline.rs (10 hunks)
- crates/recording/src/lib.rs (1 hunks)
- crates/recording/src/sources/screen_capture/windows.rs (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/mediafoundation-ffmpeg/src/h264.rs
🧰 Additional context used
🧬 Code graph analysis (5)
crates/recording/src/lib.rs (1)
crates/recording/src/instant_recording.rs (1)
spawn_instant_recording_actor(161-262)
crates/enc-mediafoundation/examples/cli.rs (4)
crates/enc-mediafoundation/src/d3d.rs (1)
  create_d3d_device (39-56)
crates/scap-direct3d/src/lib.rs (4)
  d3d_device (204-204), d3d_device (356-358), d3d_device (469-471), new (151-346)
crates/enc-mediafoundation/src/video/h264.rs (1)
  new (290-305)
crates/enc-mediafoundation/src/video/sample_writer.rs (1)
  new (24-64)
crates/enc-mediafoundation/src/video/h264.rs (4)
crates/enc-mediafoundation/src/media.rs (2)
  MFSetAttributeRatio (36-43), MFSetAttributeSize (26-33)
crates/enc-mediafoundation/src/mft.rs (1)
  enumerate (23-50)
crates/enc-mediafoundation/src/async_callback.rs (1)
  new (22-27)
crates/enc-mediafoundation/src/video/video_processor.rs (1)
  new (55-213)
crates/recording/src/capture_pipeline.rs (4)
crates/recording/src/sources/screen_capture/mod.rs (3)
  init (276-412), info (423-425), d3d_device (415-417)
crates/mediafoundation-utils/src/lib.rs (1)
  thread_init (18-21)
crates/enc-mediafoundation/src/video/h264.rs (2)
  new (290-305), new_with_scaled_output (87-288)
crates/enc-mediafoundation/src/d3d.rs (2)
  create_d3d_device (39-56), create_d3d_device_with_type (19-37)
crates/recording/src/sources/screen_capture/windows.rs (2)
crates/recording/src/sources/screen_capture/mod.rs (1)
  d3d_device (415-417)
crates/scap-direct3d/src/lib.rs (4)
  d3d_device (204-204), d3d_device (356-358), d3d_device (469-471), new (151-346)
🔇 Additional comments (7)
crates/enc-mediafoundation/examples/cli.rs (7)
135-137: API usage is correct for the new SampleWriter interface.

The code properly uses the new SampleWriter API by creating the writer, adding a stream, and obtaining the stream index for later use in write operations.
167-171: Correct usage of the new SampleWriter API.

The code correctly handles the optional return from handle_has_output() and passes the stream index to the writer.
253-264: Path validation logic is sound.

The validation correctly checks for the .mp4 extension requirement and handles edge cases appropriately.

271-283: Capture feature validation is comprehensive.

The functions properly check for Windows Graphics Capture API availability and support, including the required API contract version.

285-344: CLI argument structure is well-designed.

The Args struct and Commands enum provide a comprehensive set of configuration options with sensible defaults. The use of clap derives makes the CLI interface clean and user-friendly.

346-376: HotKey implementation has proper RAII cleanup.

The HotKey struct correctly registers hotkeys on creation and unregisters them on drop, following good RAII patterns.

378-396: Test coverage for path validation is thorough.

The tests cover both valid and invalid path cases comprehensively, including edge cases like empty extensions and wrong file types.
    MFCreateSample, MFMediaType_Video, MFT_MESSAGE_COMMAND_FLUSH,
    MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, MFT_MESSAGE_NOTIFY_END_OF_STREAM,
    MFT_MESSAGE_NOTIFY_END_STREAMING, MFT_MESSAGE_NOTIFY_START_OF_STREAM,
    MFT_MESSAGE_SET_D3D_MANAGER, MFT_OUTPUT_DATA_BUFFER, MFT_SET_TYPE_TEST_ONLY,
    MFVideoFormat_H264, MFVideoFormat_NV12, MFVideoInterlace_Progressive,
},
🛠️ Refactor suggestion
Use DRAIN sequence (no flush) for proper tail frame emission.
Flushing around EOS can drop pending frames on some MFTs. Prefer DRAIN + EOS/END_STREAMING and consume output until drain-complete at the call site.
use windows::{
@@
- MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, MFT_MESSAGE_NOTIFY_END_OF_STREAM,
+ MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, MFT_MESSAGE_NOTIFY_END_OF_STREAM,
MFT_MESSAGE_NOTIFY_END_STREAMING, MFT_MESSAGE_NOTIFY_START_OF_STREAM,
- MFT_MESSAGE_SET_D3D_MANAGER, MFT_OUTPUT_DATA_BUFFER, MFT_SET_TYPE_TEST_ONLY,
+ MFT_MESSAGE_SET_D3D_MANAGER, MFT_OUTPUT_DATA_BUFFER, MFT_SET_TYPE_TEST_ONLY,
+ MFT_MESSAGE_COMMAND_DRAIN,
@@
pub fn finish(&self) -> windows::core::Result<()> {
unsafe {
- self.transform
- .ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, 0)?;
self.transform
.ProcessMessage(MFT_MESSAGE_NOTIFY_END_OF_STREAM, 0)?;
+ self.transform
+ .ProcessMessage(MFT_MESSAGE_COMMAND_DRAIN, 0)?;
self.transform
.ProcessMessage(MFT_MESSAGE_NOTIFY_END_STREAMING, 0)?;
- self.transform
- .ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, 0)?;
}
Ok(())
}Also applies to: 315-325
🤖 Prompt for AI Agents
In crates/enc-mediafoundation/src/video/h264.rs around lines 19-24 (and
similarly at 315-325), the code issues a FLUSH around EOS which can drop pending
frames on some MFTs; change the EOS handling to use a DRAIN sequence instead:
remove or stop calling flush on the MFT at end-of-stream, send the appropriate
DRAIN/END_STREAMING messages, and then loop to consume output buffers until the
MFT signals drain-complete (no more output) before finishing EOS processing;
ensure the call site consumes outputs until drain completes and then call
end-of-stream notifications rather than performing an immediate flush.
let scale_factor = (1080.0 / screen_config.height as f32).min(1.0);

let screen_encoder = {
let native_encoder = cap_enc_mediafoundation::H264Encoder::new_with_scaled_output(
source.0.d3d_device(),
screen_capture::Direct3DCapture::PIXEL_FORMAT.as_dxgi(),
SizeInt32 {
Width: screen_config.width as i32,
Height: screen_config.height as i32,
},
)
.map_err(|e| MediaError::Any(e.to_string().into()))?,
));
SizeInt32 {
Width: (screen_config.width as f32 * scale_factor) as i32,
Height: (screen_config.height as f32 * scale_factor) as i32,
},
30,
0.15,
);

match native_encoder {
Ok(screen_encoder) => {
let screen_muxer = cap_mediafoundation_ffmpeg::H264StreamMuxer::add_stream(
&mut output,
cap_mediafoundation_ffmpeg::MuxerConfig {
width: screen_config.width,
height: screen_config.height,
fps: 30,
bitrate: screen_encoder.bitrate(),
},
)
.map_err(|e| MediaError::Any(format!("NativeH264Muxer/{e}").into()))?;

screen_encoder
Fix muxer width/height to match scaled encoder output (instant mode).
The muxed stream declares the original dimensions while the encoder scales to <=1080p, so the container headers disagree with the encoded bitstream.
- let scale_factor = (1080.0 / screen_config.height as f32).min(1.0);
+ let scale_factor = (1080.0 / screen_config.height as f32).min(1.0);
+ let output_width = (screen_config.width as f32 * scale_factor).round() as u32;
+ let output_height = (screen_config.height as f32 * scale_factor).round() as u32;
@@
- let native_encoder = cap_enc_mediafoundation::H264Encoder::new_with_scaled_output(
+ let native_encoder = cap_enc_mediafoundation::H264Encoder::new_with_scaled_output(
source.0.d3d_device(),
screen_capture::Direct3DCapture::PIXEL_FORMAT.as_dxgi(),
SizeInt32 {
Width: screen_config.width as i32,
Height: screen_config.height as i32,
},
- SizeInt32 {
- Width: (screen_config.width as f32 * scale_factor) as i32,
- Height: (screen_config.height as f32 * scale_factor) as i32,
- },
+ SizeInt32 { Width: output_width as i32, Height: output_height as i32 },
30,
0.15,
);
@@
- let screen_muxer = cap_mediafoundation_ffmpeg::H264StreamMuxer::add_stream(
+ let screen_muxer = cap_mediafoundation_ffmpeg::H264StreamMuxer::add_stream(
&mut output,
cap_mediafoundation_ffmpeg::MuxerConfig {
- width: screen_config.width,
- height: screen_config.height,
+ width: output_width,
+ height: output_height,
fps: 30,
bitrate: screen_encoder.bitrate(),
},
)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

let scale_factor = (1080.0 / screen_config.height as f32).min(1.0);
let screen_encoder = {
let native_encoder = cap_enc_mediafoundation::H264Encoder::new_with_scaled_output(
source.0.d3d_device(),
screen_capture::Direct3DCapture::PIXEL_FORMAT.as_dxgi(),
SizeInt32 {
Width: screen_config.width as i32,
Height: screen_config.height as i32,
},
)
.map_err(|e| MediaError::Any(e.to_string().into()))?,
));
SizeInt32 {
Width: (screen_config.width as f32 * scale_factor) as i32,
Height: (screen_config.height as f32 * scale_factor) as i32,
},
30,
0.15,
);
match native_encoder {
Ok(screen_encoder) => {
let screen_muxer = cap_mediafoundation_ffmpeg::H264StreamMuxer::add_stream(
&mut output,
cap_mediafoundation_ffmpeg::MuxerConfig {
width: screen_config.width,
height: screen_config.height,
fps: 30,
bitrate: screen_encoder.bitrate(),
},
)
.map_err(|e| MediaError::Any(format!("NativeH264Muxer/{e}").into()))?;
screen_encoder

Suggested:

let scale_factor = (1080.0 / screen_config.height as f32).min(1.0);
let output_width = (screen_config.width as f32 * scale_factor).round() as u32;
let output_height = (screen_config.height as f32 * scale_factor).round() as u32;
let screen_encoder = {
let native_encoder = cap_enc_mediafoundation::H264Encoder::new_with_scaled_output(
source.0.d3d_device(),
screen_capture::Direct3DCapture::PIXEL_FORMAT.as_dxgi(),
SizeInt32 {
Width: screen_config.width as i32,
Height: screen_config.height as i32,
},
SizeInt32 { Width: output_width as i32, Height: output_height as i32 },
30,
0.15,
);
match native_encoder {
Ok(screen_encoder) => {
let screen_muxer = cap_mediafoundation_ffmpeg::H264StreamMuxer::add_stream(
&mut output,
cap_mediafoundation_ffmpeg::MuxerConfig {
width: output_width,
height: output_height,
fps: 30,
bitrate: screen_encoder.bitrate(),
},
)
.map_err(|e| MediaError::Any(format!("NativeH264Muxer/{e}").into()))?;
screen_encoder
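As a standalone sketch of the suggested sizing logic, the helper below mirrors the scale computation. Rounding the result to even values is an extra assumption added here (NV12/H.264 pipelines generally want even dimensions); the suggestion above only rounds to the nearest integer.

```rust
/// Compute the <=1080p output size used for both the encoder and the
/// muxer headers. Rounding down to even values is an assumption of this
/// sketch, not part of the suggested diff.
fn scaled_output(width: u32, height: u32, max_height: f32) -> (u32, u32) {
    let scale = (max_height / height as f32).min(1.0);
    let even = |v: f32| ((v.round() as u32) / 2) * 2;
    (even(width as f32 * scale), even(height as f32 * scale))
}

fn main() {
    // 4K input is scaled down to 1080p...
    assert_eq!(scaled_output(3840, 2160, 1080.0), (1920, 1080));
    // ...while inputs already at or below the cap pass through unchanged.
    assert_eq!(scaled_output(1280, 720, 1080.0), (1280, 720));
}
```

Computing the pair once and passing it to both the encoder and the muxer config is what keeps the container headers and the bitstream in agreement.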
Actionable comments posted: 6
♻️ Duplicate comments (6)
crates/mediafoundation-ffmpeg/src/h264.rs (4)
99-102: Increment frame counter after successful interleaved write. Currently the finish() log will always report 0.

  packet.write_interleaved(output)?;
+ self.frame_count = self.frame_count.saturating_add(1);
  Ok(())
35-40: Use encoder codec when adding a stream (not decoder). Muxers expect an encoder descriptor. Passing a decoder here is incorrect and may misconfigure the stream.

- // Find H264 codec
- let h264_codec = ffmpeg::codec::decoder::find(ffmpeg::codec::Id::H264)
-     .ok_or(ffmpeg::Error::DecoderNotFound)?;
+ // Find H264 encoder codec for the muxer
+ let h264_codec = ffmpeg::codec::encoder::find(ffmpeg::codec::Id::H264)
+     .ok_or(ffmpeg::Error::EncoderNotFound)?;
45-64: Don't set a pixel format for packetized H.264; plumb extradata (AVCC) for MP4. AVCodecParameters.format=NV12 is wrong for H.264 bitstreams; set NONE. Also provide SPS/PPS in extradata to ensure MP4 playback.
Minimal fix now:

- (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NV12 as i32;
+ // Packetized H.264 is not raw video; leave pixel format unspecified.
+ (*codecpar).format = ffmpeg::ffi::AVPixelFormat::AV_PIX_FMT_NONE as i32;

Plumb AVCC extradata next (I can wire this if you expose it):
- Extend MuxerConfig with pub extradata: Option<Vec<u8>> (AVCC from MF_MT_MPEG_SEQUENCE_HEADER).
- In new(), copy it into codecpar.extradata / codecpar.extradata_size with AV_INPUT_BUFFER_PADDING_SIZE.
- Optionally set codecpar.codec_tag to 'avc1' for mov/mp4.
118-123: Set DTS from MFSampleExtension_DecodeTimestamp when present (fallback to PTS). Using PTS for DTS breaks B-frame streams.

- packet.set_pts(pts);
- packet.set_dts(pts);
+ packet.set_pts(pts);
+ let mut dts = pts;
+ #[cfg(windows)]
+ if let Ok(decode_timestamp) =
+     unsafe { sample.GetUINT64(&windows::Win32::Media::MediaFoundation::MFSampleExtension_DecodeTimestamp) }
+ {
+     dts = Some(mf_from_mf_time(self.time_base, decode_timestamp as i64));
+ }
+ packet.set_dts(dts);

Also applies to: 138-145
crates/enc-mediafoundation/src/video/h264.rs (2)
393-395: Fix bitrate calc underflow/overflow; use saturating ops. For fps < 30, (fps - 30) underflows, and width * height can overflow u32 before the cast.

-fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
-    ((width * height * ((fps - 30) / 2 + 30)) as f32 * multiplier) as u32
-}
+fn calculate_bitrate(width: u32, height: u32, fps: u32, multiplier: f32) -> u32 {
+    let fps_factor = 30 + fps.saturating_sub(30) / 2;
+    let pixels = width.saturating_mul(height).saturating_mul(fps_factor);
+    (pixels as f32 * multiplier) as u32
+}
312-324: Use DRAIN (no FLUSH) at EOS and consume output until drain-complete. Flushing around EOS can drop pending frames. Switch to DRAIN; ensure the caller keeps processing METransformHaveOutput until METransformDrainComplete before the mux trailer.

- pub fn finish(&self) -> windows::core::Result<()> {
+ pub fn finish(&self) -> windows::core::Result<()> {
      unsafe {
-         self.transform
-             .ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, 0)?;
          self.transform
              .ProcessMessage(MFT_MESSAGE_NOTIFY_END_OF_STREAM, 0)?;
+         self.transform
+             .ProcessMessage(MFT_MESSAGE_COMMAND_DRAIN, 0)?;
          self.transform
              .ProcessMessage(MFT_MESSAGE_NOTIFY_END_STREAMING, 0)?;
-         self.transform
-             .ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, 0)?;
      }
      Ok(())
  }

Follow-up: update event loops in callers to continue handling HaveOutput and stop on METransformDrainComplete before writing the trailer.
🧹 Nitpick comments (9)
packages/ui-solid/src/auto-imports.d.ts (1)
5-5: Confirm this diff is generator-produced; avoid hand-edits to auto-generated d.ts. Since this file is generated by unplugin-auto-import, please ensure the change came from a regeneration rather than manual edits. If quote churn keeps appearing, consider pinning unplugin-auto-import/unplugin-icons versions so the output stays stable. This aligns with our preference to rely on icon auto-imports rather than manual imports.
crates/mediafoundation-ffmpeg/src/h264.rs (3)
42-44: Time base choice is unusual; prefer 1/fps or MF_TIMEBASE to avoid rounding. 1/(fps*1000) creates avoidable jitter and large rescale factors.
Option A (common): make stream tb 1/fps.

- let time_base = ffmpeg::Rational::new(1, config.fps as i32 * 1000);
+ let time_base = ffmpeg::Rational::new(1, config.fps as i32);

Option B (MF-native): keep internal tb as MF_TIMEBASE and rescale once to stream tb; if you choose this, also set stream tb to 1/fps and keep self.time_base = MF_TIMEBASE for precise PTS/DTS conversion. Want a patch for B?
94-99: Avoid unwrap on the stream lookup. Prevent panics if the stream index ever desyncs.

- packet.rescale_ts(
-     self.time_base,
-     output.stream(self.stream_index).unwrap().time_base(),
- );
+ let dst_tb = output
+     .stream(self.stream_index)
+     .ok_or("invalid stream index")?
+     .time_base();
+ packet.rescale_ts(self.time_base, dst_tb);
1-5: Gate Windows-specific imports and APIs. If this crate ever builds in a multi-target workspace, unconditional windows usage will fail.

-use windows::Win32::Media::MediaFoundation::{IMFSample, MFSampleExtension_CleanPoint};
+#[cfg(windows)]
+use windows::Win32::Media::MediaFoundation::{IMFSample, MFSampleExtension_CleanPoint};

- pub fn write_sample(
+ #[cfg(windows)]
+ pub fn write_sample(

- fn mf_sample_to_avpacket(&self, sample: &IMFSample) -> windows::core::Result<ffmpeg::Packet> {
+ #[cfg(windows)]
+ fn mf_sample_to_avpacket(&self, sample: &IMFSample) -> windows::core::Result<ffmpeg::Packet> {

If the entire crate is Windows-only, alternatively guard the module in Cargo.toml or with cfg(target_os = "windows").
Also applies to: 83-88, 104-107
98-105: Verify nominal range/color space selection (full vs limited).Output is set to 0–255 while stream is 16–235. Confirm this intentional conversion; otherwise align both to avoid level shifts.
crates/enc-mediafoundation/src/video/h264.rs (2)
81-81: Reconsider unsafe Send; many MF/D3D COM interfaces aren’t Send.If this crosses threads, you risk COM apartment violations. Prefer keeping the encoder on one thread or wrap fields with a documented UnsafeSend guard and confine usage.
193-199: Avoid transmute for MFT_MESSAGE_SET_D3D_MANAGER.Pass the raw interface pointer cast to isize; clearer and avoids redundant AddRef on temp clones.
- unsafe { - let temp = media_device_manager.clone(); - transform - .ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, std::mem::transmute(temp)) - .map_err(NewVideoEncoderError::EncoderTransform)?; - }; + unsafe { + use windows::core::Interface; + let ptr = media_device_manager.as_raw() as isize; + transform + .ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, ptr) + .map_err(NewVideoEncoderError::EncoderTransform)?; + }crates/recording/src/capture_pipeline.rs (2)
240-241: Pair MF init with shutdown at task end.Call MFShutdown and RoUninitialize after trailer to balance thread_init, preventing leaks in long-lived processes.
Also applies to: 301-309, 398-399
626-684: De-duplicate D3D device creation; reuse the shared helper.This locally duplicates encoder crate logic. Prefer calling cap_enc_mediafoundation::d3d::create_d3d_device() (or re-export) to avoid drift.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
apps/web/app/(org)/dashboard/caps/components/Folders.tsx (1 hunks)
crates/enc-ffmpeg/src/audio/aac.rs (1 hunks)
crates/enc-mediafoundation/src/lib.rs (1 hunks)
crates/enc-mediafoundation/src/mft.rs (1 hunks)
crates/enc-mediafoundation/src/video/h264.rs (1 hunks)
crates/enc-mediafoundation/src/video/mod.rs (1 hunks)
crates/enc-mediafoundation/src/video/video_processor.rs (1 hunks)
crates/mediafoundation-ffmpeg/examples/usage.rs (1 hunks)
crates/mediafoundation-ffmpeg/src/h264.rs (1 hunks)
crates/recording/src/capture_pipeline.rs (10 hunks)
crates/recording/src/instant_recording.rs (1 hunks)
packages/ui-solid/src/auto-imports.d.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- apps/web/app/(org)/dashboard/caps/components/Folders.tsx
🚧 Files skipped from review as they are similar to previous changes (4)
- crates/enc-mediafoundation/src/video/mod.rs
- crates/enc-ffmpeg/src/audio/aac.rs
- crates/enc-mediafoundation/src/mft.rs
- crates/mediafoundation-ffmpeg/examples/usage.rs
🧰 Additional context used
📓 Path-based instructions (2)
{apps/desktop,packages/ui-solid}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Component naming (Solid): components in PascalCase; hooks/utilities in camelCase starting with 'use' where applicable
Files:
packages/ui-solid/src/auto-imports.d.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
Files:
packages/ui-solid/src/auto-imports.d.ts
🧠 Learnings (1)
📚 Learning: 2025-08-25T10:58:06.142Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.142Z
Learning: Applies to apps/desktop/**/*.{ts,tsx} : In the desktop app, rely on unplugin-icons auto-imports; do not manually import icon modules
Applied to files:
packages/ui-solid/src/auto-imports.d.ts
🧬 Code graph analysis (6)
crates/enc-mediafoundation/src/lib.rs (1)
crates/enc-ffmpeg/src/mux/mp4.rs (1)
video (115-117)
crates/enc-mediafoundation/src/video/video_processor.rs (3)
crates/enc-mediafoundation/src/video/h264.rs (1)
new (287-302)
crates/recording/src/sources/screen_capture/windows.rs (2)
new (355-361), new (480-510)
crates/scap-direct3d/src/lib.rs (1)
texture (465-467)
crates/recording/src/instant_recording.rs (1)
crates/recording/src/capture_pipeline.rs (2)
create_d3d_device (627-684), create_screen_capture (600-624)
crates/mediafoundation-ffmpeg/src/h264.rs (1)
crates/mediafoundation-ffmpeg/examples/usage.rs (3)
new (76-93), write_sample (96-104), finish (120-133)
crates/enc-mediafoundation/src/video/h264.rs (3)
crates/enc-mediafoundation/src/media.rs (2)
MFSetAttributeRatio (36-43), MFSetAttributeSize (26-33)
crates/enc-mediafoundation/src/mft.rs (1)
enumerate (23-50)
crates/enc-mediafoundation/src/video/video_processor.rs (1)
new (57-215)
crates/recording/src/capture_pipeline.rs (8)
crates/recording/src/sources/screen_capture/mod.rs (3)
init (276-412), info (423-425), d3d_device (415-417)
crates/enc-ffmpeg/src/mux/mp4.rs (1)
init (31-67)
crates/mediafoundation-utils/src/lib.rs (1)
thread_init (18-21)
crates/enc-mediafoundation/src/video/h264.rs (2)
new (287-302), new_with_scaled_output (84-285)
crates/mediafoundation-ffmpeg/examples/usage.rs (1)
new (76-93)
crates/mediafoundation-ffmpeg/src/h264.rs (1)
new (28-80)
crates/recording/src/sources/screen_capture/windows.rs (2)
new (355-361), new (480-510)
crates/enc-mediafoundation/src/d3d.rs (2)
create_d3d_device (39-56), create_d3d_device_with_type (19-37)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Clippy
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (5)
packages/ui-solid/src/auto-imports.d.ts (1)
41-41: Quote style change is harmless; OK to merge. Switching to single quotes in these type-imports has no semantic impact.
Also applies to: 45-45, 47-47, 51-51, 79-79
crates/enc-mediafoundation/src/lib.rs (1)
1-9: Windows gating and re-exports look good. Crate correctly gates to Windows and exposes H264Encoder via the video module.
crates/recording/src/instant_recording.rs (1)
189-202: Good: avoid panic on D3D device creation and plumb device into capture. Error is mapped and propagated; the Windows-only param is gated.
crates/recording/src/capture_pipeline.rs (2)
60-67: macOS MP4Encoder init error mapping — LGTM. Properly maps init errors instead of panicking.
135-143: Instant-mode MP4Encoder (macOS) — LGTM. Correctly initializes with optional audio and 1080p cap.
let d3d_context = unsafe { d3d_device.GetImmediateContext() }
.map_err(NewVideoProcessorError::GetDevice)?;

// Setup video conversion
let video_device: ID3D11VideoDevice = d3d_device
.cast()
.map_err(NewVideoProcessorError::GetDevice)?;
let video_context: ID3D11VideoContext = d3d_context
.cast()
.map_err(NewVideoProcessorError::GetDevice)?;
🛠️ Refactor suggestion
Map errors to the correct variants (GetContext vs GetDevice).
GetImmediateContext and the cast to ID3D11VideoContext should map to GetContext, not GetDevice.
- let d3d_context = unsafe { d3d_device.GetImmediateContext() }
- .map_err(NewVideoProcessorError::GetDevice)?;
+ let d3d_context = unsafe { d3d_device.GetImmediateContext() }
+ .map_err(NewVideoProcessorError::GetContext)?;
@@
- let video_context: ID3D11VideoContext = d3d_context
- .cast()
- .map_err(NewVideoProcessorError::GetDevice)?;
+ let video_context: ID3D11VideoContext = d3d_context
+ .cast()
+ .map_err(NewVideoProcessorError::GetContext)?;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

let d3d_context = unsafe { d3d_device.GetImmediateContext() }
.map_err(NewVideoProcessorError::GetDevice)?;
// Setup video conversion
let video_device: ID3D11VideoDevice = d3d_device
.cast()
.map_err(NewVideoProcessorError::GetDevice)?;
let video_context: ID3D11VideoContext = d3d_context
.cast()
.map_err(NewVideoProcessorError::GetDevice)?;

Suggested:

let d3d_context = unsafe { d3d_device.GetImmediateContext() }
.map_err(NewVideoProcessorError::GetContext)?;
// Setup video conversion
let video_device: ID3D11VideoDevice = d3d_device
.cast()
.map_err(NewVideoProcessorError::GetDevice)?;
let video_context: ID3D11VideoContext = d3d_context
.cast()
.map_err(NewVideoProcessorError::GetContext)?;
🤖 Prompt for AI Agents
In crates/enc-mediafoundation/src/video/video_processor.rs around lines 65 to
75, the error mapping is incorrect: the call to d3d_device.GetImmediateContext()
and the cast to ID3D11VideoContext should use the
NewVideoProcessorError::GetContext variant instead of
NewVideoProcessorError::GetDevice. Change the map_err on the GetImmediateContext
call to NewVideoProcessorError::GetContext and change the map_err on the
d3d_context.cast() to NewVideoProcessorError::GetContext, leaving the
video_device cast mapping to GetDevice as-is.
// Convert to NV12
let video_stream = D3D11_VIDEO_PROCESSOR_STREAM {
Enable: true.into(),
OutputIndex: 0,
InputFrameOrField: 0,
pInputSurface: ManuallyDrop::new(Some(self.video_input.clone())),
..Default::default()
};
self.video_context.VideoProcessorBlt(
&self.video_processor,
&self.video_output,
0,
&[video_stream],
)
}
}
Fix COM ref leak: drop ManuallyDrop pInputSurface after VideoProcessorBlt.
The ManuallyDrop-wrapped Option will not be dropped, leaking an AddRef each frame. Capture the HRESULT, explicitly drop the field, then return.
Apply:
- let video_stream = D3D11_VIDEO_PROCESSOR_STREAM {
+ let mut video_stream = D3D11_VIDEO_PROCESSOR_STREAM {
Enable: true.into(),
OutputIndex: 0,
InputFrameOrField: 0,
pInputSurface: ManuallyDrop::new(Some(self.video_input.clone())),
..Default::default()
};
- self.video_context.VideoProcessorBlt(
+ let hr = self.video_context.VideoProcessorBlt(
&self.video_processor,
&self.video_output,
0,
&[video_stream],
- )
+ )?;
+ // Release cloned COM ptr to avoid leaks.
+ unsafe { ManuallyDrop::drop(&mut video_stream.pInputSurface) };
+ Ok(())

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

// Convert to NV12
let video_stream = D3D11_VIDEO_PROCESSOR_STREAM {
Enable: true.into(),
OutputIndex: 0,
InputFrameOrField: 0,
pInputSurface: ManuallyDrop::new(Some(self.video_input.clone())),
..Default::default()
};
self.video_context.VideoProcessorBlt(
&self.video_processor,
&self.video_output,
0,
&[video_stream],
)
}
}

Suggested:

// Convert to NV12
let mut video_stream = D3D11_VIDEO_PROCESSOR_STREAM {
Enable: true.into(),
OutputIndex: 0,
InputFrameOrField: 0,
pInputSurface: ManuallyDrop::new(Some(self.video_input.clone())),
..Default::default()
};
let hr = self.video_context.VideoProcessorBlt(
&self.video_processor,
&self.video_output,
0,
&[video_stream],
)?;
// Release cloned COM ptr to avoid leaks.
unsafe { ManuallyDrop::drop(&mut video_stream.pInputSurface) };
Ok(())
}
}
🤖 Prompt for AI Agents
In crates/enc-mediafoundation/src/video/video_processor.rs around lines 233 to
248, the ManuallyDrop-wrapped pInputSurface inside D3D11_VIDEO_PROCESSOR_STREAM
is never dropped, leaking an AddRef each frame; capture the HRESULT returned by
VideoProcessorBlt, then immediately explicitly drop the ManuallyDrop-wrapped
pInputSurface (e.g., unsafe { ManuallyDrop::drop(&mut
video_stream.pInputSurface) } or equivalent) after the VideoProcessorBlt call to
release the COM reference, and finally return the captured HRESULT so the
function both avoids the leak and preserves the original result.
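The leak mechanism can be demonstrated in pure Rust: below, Rc stands in for the cloned COM interface pointer, with Rc::strong_count playing the role of the COM refcount that each `.clone()` bumps. The parallel to the IDirect3DSurface case is an analogy, not the actual D3D types.

```rust
use std::mem::ManuallyDrop;
use std::rc::Rc;

fn main() {
    // `surface` models `self.video_input`; cloning it models AddRef.
    let surface = Rc::new("input surface");

    // Wrapping the clone in ManuallyDrop (as D3D11_VIDEO_PROCESSOR_STREAM
    // requires) suppresses the automatic Release at scope exit.
    let mut wrapped = ManuallyDrop::new(Some(surface.clone()));
    assert_eq!(Rc::strong_count(&surface), 2); // refcount bumped for the Blt call

    // Without an explicit drop the count would stay elevated forever --
    // one leaked reference per frame. Dropping after the call restores it.
    unsafe { ManuallyDrop::drop(&mut wrapped) };
    assert_eq!(Rc::strong_count(&surface), 1);
}
```

This is exactly why the suggestion captures the Blt result first and then explicitly drops `pInputSurface` before returning.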
pub fn new(
output: &mut ffmpeg::format::context::Output,
config: MuxerConfig,
) -> Result<Self, ffmpeg::Error> {
info!("Adding H264 stream to output context");
🛠️ Refactor suggestion
Validate config to prevent invalid stream parameters (fps/width/height).
Zero fps or dimensions will yield invalid time bases or stream params.
pub fn new(
output: &mut ffmpeg::format::context::Output,
config: MuxerConfig,
) -> Result<Self, ffmpeg::Error> {
+ if config.fps == 0 || config.width == 0 || config.height == 0 {
+ return Err(ffmpeg::Error::InvalidData);
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

pub fn new(
output: &mut ffmpeg::format::context::Output,
config: MuxerConfig,
) -> Result<Self, ffmpeg::Error> {
info!("Adding H264 stream to output context");

Suggested:

pub fn new(
output: &mut ffmpeg::format::context::Output,
config: MuxerConfig,
) -> Result<Self, ffmpeg::Error> {
// Reject zero-valued parameters to avoid invalid timebases or stream setups.
if config.fps == 0 || config.width == 0 || config.height == 0 {
return Err(ffmpeg::Error::InvalidData);
}
info!("Adding H264 stream to output context");
// …rest of implementation…
}
🤖 Prompt for AI Agents
In crates/mediafoundation-ffmpeg/src/h264.rs around lines 28–33, add validation
at the start of new(...) to ensure config.fps, config.width and config.height
are non-zero; if any are zero return an appropriate Err(ffmpeg::Error) (or
convert/construct an ffmpeg error representing invalid input) rather than
proceeding. This prevents creating streams with invalid time bases or
parameters; keep the checks simple (if fps == 0 || width == 0 || height == 0)
and return early with a clear error message indicating which parameter is
invalid.
while let Ok(e) = encoder.get_event() {
match e {
MediaFoundation::METransformNeedInput => {
let Ok((frame, timestamp)) = source.1.recv() else {
break;
};

if let Some(timestamp_tx) = timestamp_tx.take() {
timestamp_tx.send(timestamp).unwrap();
}

let frame_time = frame
.inner()
.SystemRelativeTime()
.map_err(|e| format!("FrameTime: {e}"))?;

encoder
.handle_needs_input(frame.texture(), frame_time)
.map_err(|e| format!("NeedsInput: {e}"))?;
}
MediaFoundation::METransformHaveOutput => {
if let Some(output_sample) = encoder
.handle_has_output()
.map_err(|e| format!("HasOutput: {e}"))?
{
muxer
.write_sample(&output_sample, &mut output)
.map_err(|e| format!("WriteSample: {e}"))?;
}
}
_ => {}
}
}

while let Ok(frame) = source.1.recv() {
if let Some(timestamp_tx) = timestamp_tx.take() {
timestamp_tx.send(frame.1).unwrap();
encoder
.finish()
.map_err(|e| format!("EncoderFinish: {e}"))?;
}
Drain encoder on EOS; don’t break on NeedInput closure; write trailer after drain.
Breaking immediately can drop tail frames; pair EOS with DRAIN and keep consuming HaveOutput until DrainComplete, then trailer.
- let mut timestamp_tx = Some(timestamp_tx);
+ let mut timestamp_tx = Some(timestamp_tx);
+ let mut draining = false;
@@
- MediaFoundation::METransformNeedInput => {
- let Ok((frame, timestamp)) = source.1.recv() else {
- break;
- };
+ MediaFoundation::METransformNeedInput => {
+ if draining { continue; }
+ let Ok((frame, timestamp)) = source.1.recv() else {
+ // No more input: initiate drain
+ encoder.finish().map_err(|e| format!("Finish/Drain: {e}"))?;
+ draining = true;
+ continue;
+ };
@@
- muxer
+ muxer
.write_sample(&output_sample, &mut output)
.map_err(|e| format!("WriteSample: {e}"))?;
}
}
+ MediaFoundation::METransformDrainComplete => {
+ break;
+ }
@@
- encoder
- .finish()
- .map_err(|e| format!("EncoderFinish: {e}"))?;
+ // finish() already called when initiating drain
@@
- output
+ output
.write_trailer()
.map_err(|e| format!("WriteTrailer: {e}"))?;

Also applies to: 371-373
🤖 Prompt for AI Agents
In crates/recording/src/capture_pipeline.rs around lines 312-349 (and similarly
at 371-373), the loop currently breaks when the source recv fails on
METransformNeedInput which drops tail frames; instead, when the source channel
closes you must signal EOS/DRAIN to the encoder and keep consuming
METransformHaveOutput until the encoder reports drain complete, then finish the
encoder and write the muxer trailer. Concretely: replace the break on recv
failure with code that marks/requests encoder drain (EOS), do not exit the event
loop immediately on NeedInput, continue handling HaveOutput events and write any
remaining samples, exit the loop only when the encoder reports drain complete,
then call encoder.finish() and write the muxer trailer/finalize after the drain
completes; apply the same change at the other location (lines 371-373).
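The suggested control flow can be modeled without Media Foundation at all. `Event` and `MockEncoder` below are hypothetical stand-ins; only the draining-flag logic mirrors the suggestion (initiate drain when input closes, keep consuming output, exit only on DrainComplete).

```rust
// Stand-ins for the MF media events and the hardware MFT.
#[derive(Clone, Copy)]
enum Event {
    NeedInput,
    HaveOutput,
    DrainComplete,
}

struct MockEncoder {
    queued: Vec<u64>, // frames submitted but not yet emitted
}

/// Replays a fixed event sequence, returning the timestamps that reached
/// the "muxer". Input is consumed front-to-back from `input`.
fn run(events: &[Event], mut input: Vec<u64>, enc: &mut MockEncoder) -> Vec<u64> {
    input.reverse(); // so pop() yields frames in submission order
    let mut written = Vec::new();
    let mut draining = false;
    for &e in events {
        match e {
            Event::NeedInput if !draining => {
                if let Some(ts) = input.pop() {
                    enc.queued.push(ts);
                } else {
                    draining = true; // channel closed: initiate drain, don't break
                }
            }
            Event::NeedInput => {} // ignored while draining
            Event::HaveOutput => {
                if !enc.queued.is_empty() {
                    written.push(enc.queued.remove(0)); // tail samples still muxed
                }
            }
            Event::DrainComplete => break, // only now is the trailer safe to write
        }
    }
    written
}

fn main() {
    let mut enc = MockEncoder { queued: vec![] };
    let events = [
        Event::NeedInput, Event::NeedInput, Event::NeedInput,
        Event::HaveOutput, Event::HaveOutput, Event::DrainComplete,
    ];
    // Two frames of input; both must reach the muxer before DrainComplete.
    assert_eq!(run(&events, vec![1, 2], &mut enc), vec![1, 2]);
}
```

Breaking out of the loop at the first failed recv, by contrast, would leave `queued` non-empty when the trailer is written, which is exactly the dropped-tail-frames failure the review describes.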
timestamp_tx.send(timestamp).unwrap();
}
|
|
🛠️ Refactor suggestion
Avoid unwrap on timestamp send.
Channel may be dropped; don’t panic.
- if let Some(timestamp_tx) = timestamp_tx.take() {
- timestamp_tx.send(timestamp).unwrap();
- }
+ if let Some(timestamp_tx) = timestamp_tx.take() {
+ let _ = timestamp_tx.send(timestamp);
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if let Some(timestamp_tx) = timestamp_tx.take() {
    let _ = timestamp_tx.send(timestamp);
}
```
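The suggestion relies on `std::sync::mpsc::Sender::send` returning a `Result` instead of panicking when the receiving half has been dropped. A minimal, self-contained sketch (the `send_timestamp` helper is hypothetical, not part of the Cap codebase) showing why ignoring the `Err` is safe:

```rust
use std::sync::mpsc::{channel, Sender};

// Hypothetical helper: deliver a timestamp, reporting (not panicking)
// when the consumer side of the channel has already been dropped.
fn send_timestamp(tx: &Sender<u64>, ts: u64) -> bool {
    // `send` only fails when the Receiver is gone; the value is handed
    // back inside the error, so nothing is lost silently.
    tx.send(ts).is_ok()
}

fn main() {
    let (tx, rx) = channel::<u64>();
    assert!(send_timestamp(&tx, 1)); // receiver alive: delivered
    drop(rx); // consumer task exits early
    assert!(!send_timestamp(&tx, 2)); // closed channel: Err, but no panic
    println!("capture thread survives a dropped receiver");
}
```

With `unwrap()`, the second call would abort the whole capture thread merely because the one-shot consumer had already gone away.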
🤖 Prompt for AI Agents
In crates/recording/src/capture_pipeline.rs around lines 319 to 321, the code
currently calls timestamp_tx.send(timestamp).unwrap(), which will panic if the
receiver has been dropped; replace the unwrap with proper error handling —
attempt to send and handle the Result (for example: match or if let Err(e) =
timestamp_tx.send(timestamp) { /* log debug/info about dropped receiver or
ignore */ }) so the task does not panic when the channel is closed.
```rust
match e {
    MediaFoundation::METransformNeedInput => {
        let Ok((frame, _)) = source.1.recv() else {
            break;
        };

        let frame_time = frame
            .inner()
            .SystemRelativeTime()
            .map_err(|e| format!("Frame Time: {e}"))?;

        encoder
            .handle_needs_input(frame.texture(), frame_time)
            .map_err(|e| format!("NeedsInput: {e}"))?;
    }
    MediaFoundation::METransformHaveOutput => {
        if let Some(output_sample) = encoder
            .handle_has_output()
            .map_err(|e| format!("HasOutput: {e}"))?
        {
            let mut output = output.lock().unwrap();

            muxer
                .write_sample(&output_sample, &mut *output)
                .map_err(|e| format!("WriteSample: {e}"))?;
        }
    }
    _ => {}
}
}

mp4.queue_video_frame(frame);
encoder
    .finish()
    .map_err(|e| format!("EncoderFinish: {e}"))?;
}
either::Right(mut encoder) => {
```
Apply the same EOS drain pattern in instant-mode (Windows).
Mirror the drain/HaveOutput/DrainComplete handling before writing trailer.
```diff
-while let Ok(e) = encoder.get_event() {
+let mut draining = false;
+while let Ok(e) = encoder.get_event() {
     match e {
         MediaFoundation::METransformNeedInput => {
-            let Ok((frame, _)) = source.1.recv() else {
-                break;
-            };
+            if draining { continue; }
+            let Ok((frame, _)) = source.1.recv() else {
+                encoder.finish().map_err(|e| format!("Finish/Drain: {e}"))?;
+                draining = true;
+                continue;
+            };
@@
         MediaFoundation::METransformHaveOutput => {
             if let Some(output_sample) = encoder
                 .handle_has_output()
                 .map_err(|e| format!("HasOutput: {e}"))?
             {
                 let mut output = output.lock().unwrap();
                 muxer
                     .write_sample(&output_sample, &mut *output)
                     .map_err(|e| format!("WriteSample: {e}"))?;
             }
         }
+        MediaFoundation::METransformDrainComplete => {
+            break;
+        }
         _ => {}
     }
 }
-
-encoder
-    .finish()
-    .map_err(|e| format!("EncoderFinish: {e}"))?;
+// finish() already called when initiating drain
@@
-output
+output
     .lock()
     .map_err(|e| format!("OutputLock: {e}"))?
     .write_trailer()
     .map_err(|e| format!("WriteTrailer: {e}"))?;
```

Also applies to: 576-581
🤖 Prompt for AI Agents
In crates/recording/src/capture_pipeline.rs around lines 512-547 (and similarly
576-581), the instant-mode (Windows) path lacks the EOS drain sequence: after
signaling encoder finish you must call the encoder drain routine and then loop
on encoder.get_event to collect METransformHaveOutput events until you observe
the DrainComplete (or equivalent EOS) event, writing any produced output samples
to the muxer as you receive them; only after the drain loop completes should you
proceed to encoder.finish() and writing the trailer. Implement this mirrored
drain/HaveOutput/DrainComplete handling in both places: signal drain, repeatedly
handle HaveOutput by extracting samples and calling muxer.write_sample, and exit
the loop on the drain-complete event before finalizing the encoder and writing
the trailer.
Adds support for hardware encoding via Media Foundation on Windows. This is currently only implemented for the Studio recording flow, with Instant mode support coming later.
For encoding audio and video at the same time, I'm planning on converting IMFSamples to AVPackets, so we can reuse FFmpeg's mp4 audio encoding and muxing while keeping hardware acceleration. This will also make it simpler to use for exporting.
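One detail that IMFSample-to-AVPacket bridge will have to handle is timestamp units: Media Foundation expresses sample times and durations in 100-nanosecond ticks, while an AVPacket's pts/dts are counted in the muxer stream's time_base. A rough sketch with plain structs standing in for `IMFSample`/`AVPacket` (the struct names and the `rescale` helper are hypothetical, for illustration only):

```rust
// Plain-struct stand-ins for IMFSample / AVPacket; names are hypothetical.
const MF_UNITS_PER_SEC: i64 = 10_000_000; // Media Foundation: 100 ns ticks

struct Sample {
    data: Vec<u8>,
    time: i64,     // presentation time in 100 ns units
    duration: i64, // in 100 ns units
}

struct Packet {
    data: Vec<u8>,
    pts: i64,
    duration: i64,
}

// Rescale 100 ns ticks into `den` units per second (e.g. a 1/90000 mp4
// video time_base). Multiplying first keeps precision; a real
// implementation would use FFmpeg's rescaling helpers to avoid overflow.
fn rescale(ticks: i64, den: i64) -> i64 {
    ticks * den / MF_UNITS_PER_SEC
}

fn sample_to_packet(s: Sample, time_base_den: i64) -> Packet {
    Packet {
        data: s.data, // encoded bytes move across unchanged
        pts: rescale(s.time, time_base_den),
        duration: rescale(s.duration, time_base_den),
    }
}

fn main() {
    // one frame at t = 0.5 s, ~1/30 s long, in a 1/90000 time_base
    let s = Sample { data: vec![0u8; 4], time: 5_000_000, duration: 333_333 };
    let p = sample_to_packet(s, 90_000);
    assert_eq!(p.data.len(), 4);
    assert_eq!(p.pts, 45_000);
    assert_eq!(p.duration, 2_999);
    println!("pts={} dur={}", p.pts, p.duration);
}
```

The payload itself is already compressed by the hardware encoder, so only the container metadata needs translating before FFmpeg's muxer can interleave it with the audio packets.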
Summary by CodeRabbit
New Features
Improvements
Documentation
Chores