Major audio sync improvements with new timestamp system #998
Walkthrough

Replaces workspace cpal git source; adds a new cross-platform

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant App
    participant ActorBuilder
    participant ActorRuntime
    participant ActorHandle
    App->>ActorBuilder: Actor::builder(path, target)
    App->>ActorBuilder: .with_system_audio(true).with_custom_cursor(false)
    App->>ActorBuilder: .build().await
    ActorBuilder-->>ActorRuntime: spawn internal actor (async)
    ActorRuntime-->>App: (ActorHandle, ready_rx)
    App->>ActorHandle: send(ActorControlMessage::Stop)
    ActorHandle->>ActorRuntime: forward control message
    ActorRuntime-->>ActorHandle: acknowledge / stop
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
🧰 Additional context used

📓 Path-based instructions (2)

`{crates,apps/cli,apps/desktop/src-tauri}/**/*.rs` — 📄 CodeRabbit inference engine (AGENTS.md)

Files:

`{crates,apps/cli,apps/desktop/src-tauri}/**/{src,tests}/**/*.rs` — 📄 CodeRabbit inference engine (AGENTS.md)

Files:
🧬 Code graph analysis (1)

crates/recording/src/studio_recording.rs (7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
🔇 Additional comments (9)
Actionable comments posted: 26
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (12)
crates/recording/src/pipeline/audio_buffer.rs (1)
53-56: Bug: missing byte-size multiplier when slicing interleaved data

You're indexing a byte slice with a count of samples, dropping 3/4 of the data for F32.
Apply:
```diff
-            cast_bytes_to_f32_slice(
-                &frame.data(0)[0..frame.samples() * frame.channels() as usize],
-            )
+            cast_bytes_to_f32_slice(
+                &frame.data(0)[..frame.samples() * frame.channels() as usize * 4],
+            )
```

crates/recording/src/sources/screen_capture/windows.rs (2)
265-267: Guard against zero/invalid FPS to avoid a `Duration::from_secs_f64(inf)` panic.

If `config.fps` is zero, `1.0 / 0.0` produces `inf` and `Duration` will panic. Clamp to a sane range.

```diff
-                settings.min_update_interval =
-                    Some(Duration::from_secs_f64(1.0 / config.fps as f64));
+                let fps = u32::max(1, u32::min(config.fps, 240));
+                settings.min_update_interval =
+                    Some(Duration::from_secs_f64(1.0 / fps as f64));
```
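The panic path is easy to reproduce: `Duration::from_secs_f64` rejects non-finite values at runtime. A minimal sketch of the suggested clamp, with a hypothetical `min_update_interval` helper standing in for the real call site:

```rust
use std::time::Duration;

// Hypothetical helper mirroring the clamp above: a zero configured FPS would
// make 1.0 / 0.0 == f64::INFINITY, and Duration::from_secs_f64 panics on
// non-finite input, so clamp before dividing.
fn min_update_interval(config_fps: u32) -> Duration {
    let fps = config_fps.clamp(1, 240);
    Duration::from_secs_f64(1.0 / fps as f64)
}

fn main() {
    // fps == 0 no longer panics; it degrades to one update per second
    assert_eq!(min_update_interval(0), Duration::from_secs(1));
    // values above the cap are clamped to 240
    assert_eq!(min_update_interval(480), Duration::from_secs_f64(1.0 / 240.0));
}
```

The `1..=240` range is illustrative; any bound that keeps the divisor finite and positive works.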
162-172: Stop the capturer when aborting on high drop rate to prevent a dangling capture.

Stopping only the `FrameHandler` can leave the D3D capturer running. Also request `StopCapturing` from the capturer before stopping self.

```diff
-        let _ = ctx.actor_ref().stop_gracefully().await;
+        if let Some(capturer) = self.capturer.upgrade() {
+            let _ = capturer.ask(StopCapturing).await;
+        }
+        let _ = ctx.actor_ref().stop_gracefully().await;
```

apps/desktop/src-tauri/src/recording.rs (1)
560-561: Remove the redundant `?` on a returned `Err`.

In `return Err("Recording not in progress".to_string())?;` the `?` already propagates the error, so the `return` never produces a value. Drop the `?`.

```diff
-        return Err("Recording not in progress".to_string())?;
+        return Err("Recording not in progress".to_string());
```

crates/enc-avfoundation/src/mp4.rs (4)
267-275: Planar copy offset bug (can overflow and corrupt memory).

In the planar path, you copy only `samples * bytes` per plane but advance `offset` by `data.len()`, which may include padding/stride. This can run past `total_data`.

Apply:

```diff
-        if frame.is_planar() {
-            let mut offset = 0;
-            for plane_i in 0..frame.planes() {
-                let data = frame.data(plane_i);
-                block_buf_slice[offset..offset + data.len()]
-                    .copy_from_slice(&data[0..frame.samples() * frame.format().bytes()]);
-                offset += data.len();
-            }
-        } else {
+        if frame.is_planar() {
+            let mut offset = 0;
+            for plane_i in 0..frame.planes() {
+                let data = frame.data(plane_i);
+                let plane_size = frame.samples() * frame.format().bytes();
+                block_buf_slice[offset..offset + plane_size]
+                    .copy_from_slice(&data[0..plane_size]);
+                offset += plane_size;
+            }
+        } else {
```
282-288: Audio PTS mixes timebases and reintroduces `start_time`.

You create `time` in 1/sample-rate and then do `start_time + elapsed_duration + (time - segment_first_timestamp)`, where `segment_first_timestamp` is host-time. That difference is meaningless across timebases and will drift.

Use a purely relative audio timeline:

```diff
-        let time = cm::Time::new(frame.pts().unwrap_or(0), frame.rate() as i32);
-        let pts = self
-            .start_time
-            .add(self.elapsed_duration)
-            .add(time.sub(self.segment_first_timestamp.unwrap()));
+        // audio pts are relative to the first audio frame in 1/sample-rate timebase
+        let time = cm::Time::new(frame.pts().unwrap_or(0), frame.rate() as i32);
+        // encode relative to segment start only
+        let pts = self.elapsed_duration.add(time);
```

To robustly support late audio-before-video after resume, also track:

```rust
// struct fields
audio_segment_first_pts: Option<cm::Time>, // in 1 / sample_rate
```

and set/reset it when (re)starting a segment; compute `pts = elapsed_duration + (audio_pts - audio_segment_first_pts)`.
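The relative-timeline arithmetic can be sketched in plain integers (all names here are hypothetical stand-ins for the `cm::Time` values; a single sample rate is assumed so everything stays in one timebase):

```rust
use std::time::Duration;

// Sketch: keep audio PTS purely in the 1/sample_rate timebase, relative to
// the segment's first audio frame, instead of mixing in a host-clock time.
fn relative_audio_pts(
    elapsed: Duration,      // accumulated duration of previous segments
    frame_pts: i64,         // current frame PTS, in 1/sample_rate units
    segment_first_pts: i64, // first audio PTS of this segment, same units
    rate: u32,              // sample rate
) -> i64 {
    let elapsed_samples = (elapsed.as_secs_f64() * rate as f64).round() as i64;
    elapsed_samples + (frame_pts - segment_first_pts)
}

fn main() {
    // One second of prior segments at 48 kHz, 100 samples into this segment:
    assert_eq!(
        relative_audio_pts(Duration::from_secs(1), 48_100, 48_000, 48_000),
        48_100
    );
    // The first frame of the first segment lands at PTS 0 regardless of
    // where the device clock happened to start:
    assert_eq!(relative_audio_pts(Duration::ZERO, 1_234, 1_234, 48_000), 0);
}
```

Because both operands of the subtraction come from the same device clock, the host-time drift the comment describes cannot enter the result.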
321-325: pause() can panic if no segment has started.
`self.segment_first_timestamp.unwrap()` will panic if pause happens before any video frame (or after a resume before new video). Handle `None`.

Apply:

```diff
-        self.elapsed_duration = self
-            .elapsed_duration
-            .add(time.sub(self.segment_first_timestamp.unwrap()));
-        self.segment_first_timestamp = None;
+        if let Some(seg_start) = self.segment_first_timestamp {
+            self.elapsed_duration = self.elapsed_duration.add(time.sub(seg_start));
+            self.segment_first_timestamp = None;
+        }
```
344-346: End session with the last written source PTS, not absolute host time.

Use the last relative PTS you actually appended, otherwise durations are wrong under the new scheme.
Apply:
```diff
-        self.asset_writer
-            .end_session_at_src_time(self.last_timestamp.unwrap_or(cm::Time::zero()));
+        self.asset_writer
+            .end_session_at_src_time(self.last_timestamp.unwrap_or(cm::Time::zero()));
```

And ensure `self.last_timestamp` is updated with the last written src PTS, not the absolute capture clock. For video, set it to `new_pts`; for audio, set it to `pts + duration * num_samples`.

Additional changes needed outside this hunk:

```rust
// struct: rename for clarity (optional)
last_timestamp: Option<cm::Time>, // stores last written src PTS

// video (after successful append)
self.last_timestamp = Some(new_pts);

// audio (after computing `pts`)
let end_pts = pts.add(cm::Time::new(frame.samples() as i64, frame.rate() as i32));
self.last_timestamp = Some(end_pts);
```

crates/enc-ffmpeg/src/audio/aac.rs (1)
61-69: Sample-rate selection falls back to the minimum supported rate; clamp to max instead.

When `input_config.rate()` exceeds the codec's max, `.first()` picks the smallest rate (e.g., 8k), which is likely unintended. Use the maximum supported rate in that case.

Apply:

```diff
-        let Some(&rate) = rates
-            .iter()
-            .find(|r| **r >= input_config.rate())
-            .or(rates.first())
-        else {
+        let Some(rate) = rates
+            .iter()
+            .copied()
+            .find(|&r| r >= input_config.rate())
+            .or_else(|| rates.last().copied())
+        else {
             return Err(AACEncoderError::RateNotSupported(input_config.rate()));
         };
```

crates/recording/src/capture_pipeline.rs (1)
322-324: Avoid `unwrap()` on timestamp send; prevent a panic if the receiver is dropped.

```diff
-    if let Some(timestamp_tx) = timestamp_tx.take() {
-        timestamp_tx.send(timestamp).unwrap();
-    }
+    if let Some(timestamp_tx) = timestamp_tx.take() {
+        let _ = timestamp_tx.send(timestamp);
+    }
```

crates/recording/src/studio_recording.rs (2)
724-726: Avoid `unwrap()` when creating the D3D device; propagate the error.

```diff
-    #[cfg(windows)]
-    let d3d_device = crate::capture_pipeline::create_d3d_device().unwrap();
+    #[cfg(windows)]
+    let d3d_device = crate::capture_pipeline::create_d3d_device().map_err(|e| {
+        CreateSegmentPipelineError::Media(
+            MediaError::Any(format!("CreateD3DDevice: {e}").into()),
+        )
+    })?;
```
727-738: Avoid `unwrap()` on `create_screen_capture`; let the error bubble up.

```diff
-    let (screen_source, screen_rx) = create_screen_capture(
+    let (screen_source, screen_rx) = create_screen_capture(
         &capture_target,
         !custom_cursor_capture,
         120,
         system_audio.0,
-        start_time.system_time(),
+        start_time.system_time(),
         #[cfg(windows)]
         d3d_device,
-    )
-    .await
-    .unwrap();
+    )
+    .await?;
```
🧹 Nitpick comments (55)
crates/recording/src/pipeline/audio_buffer.rs (1)
65-67: Threshold condition depends on representation

If you keep supporting both ingest forms, gate by representation: for packed you need `frame_size * channels` samples; for planar, `frame_size` per channel. With the interleave-on-ingest change above, the current check remains correct.
crates/cpal-ffmpeg/src/lib.rs (1)
28-36: Dead branch: `format_typ` is always `Packed`

Since `format_typ` is hardcoded to `sample::Type::Packed`, the planar branch never runs. Simplify to a single copy into `data_mut(0)`, or compute `format_typ` from the desired layout if planar support is intended.

```diff
-        if matches!(format_typ, sample::Type::Planar) {
-            for i in 0..config.channels {
-                let plane_size = sample_count * sample_size;
-                let base = (i as usize) * plane_size;
-
-                ffmpeg_frame
-                    .data_mut(i as usize)
-                    .copy_from_slice(&self.bytes()[base..base + plane_size]);
-            }
-        } else {
-            ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
-        }
+        ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
```

crates/timestamp/Cargo.toml (1)
1-20: New crate looks good

The name is kebab-case and target-specific deps are scoped properly. Consider removing the empty `[features]` table to reduce noise.

```diff
-[features]
-
```

crates/scap-ffmpeg/src/cpal.rs (1)
28-36: Same dead planar branch as in cpal-ffmpeg

`format_typ` is fixed to `Packed`; trim the planar block or make the layout configurable.

```diff
-        if matches!(format_typ, sample::Type::Planar) {
-            for i in 0..config.channels {
-                let plane_size = sample_count * sample_size;
-                let base = (i as usize) * plane_size;
-
-                ffmpeg_frame
-                    .data_mut(i as usize)
-                    .copy_from_slice(&self.bytes()[base..base + plane_size]);
-            }
-        } else {
-            ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
-        }
+        ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
```

crates/camera/src/lib.rs (2)
6-7: Consistency with the new timestamp system

This crate now exposes `Duration` for `CapturedFrame.timestamp`. Confirm that it's relative to a well-defined start and that downstream components don't expect `cap_timestamp::Timestamp` here. If alignment is intended, consider migrating to `cap_timestamp` types for uniformity.

193-196: Remove commented-out public fields

Commented fields in public structs add confusion. Delete them, or reintroduce them behind a feature if needed.
```diff
 pub struct CapturedFrame {
     native: NativeCapturedFrame,
-    // pub reference_time: Instant,
     pub timestamp: Duration,
-    // pub capture_begin_time: Option<Instant>,
 }
```

crates/camera/src/macos.rs (1)
108-111: Clean up commented fields; confirm timestamp type consistency.

The removal aligns with the new timing model. Please delete the commented lines to avoid drift and verify that `data.timestamp` and `CapturedFrame::timestamp` are the same `cap_timestamp` type across platforms.

Apply this minimal cleanup:

```diff
-    // reference_time: Instant::now(),
     timestamp: data.timestamp,
-    // capture_begin_time: Some(data.capture_begin_time),
```

crates/camera-directshow/src/lib.rs (4)
24-24: QPC import: consider documenting units or adding the frequency when/if converting.

If this counter will be converted locally later, also cache/import `QueryPerformanceFrequency`; otherwise, add a brief comment that `perf_counter` is raw QPC ticks to prevent misuse.
791-796: Expose QPC as a strongly-typed field or document its semantics.

`perf_counter: i64` is great for cross-domain sync, but it's opaque. Either wrap it (e.g., a `struct PerfCounter(i64);`) or add a doc comment clarifying it's a raw QPC tick captured on receipt.

Apply lightweight docs:

```diff
 pub struct CallbackData<'a> {
     pub sample: &'a IMediaSample,
     pub media_type: &'a AMMediaType,
     pub timestamp: Duration,
-    pub perf_counter: i64,
+    /// Raw QueryPerformanceCounter tick value captured upon sample receipt.
+    /// Convert with QueryPerformanceFrequency if absolute duration is needed.
+    pub perf_counter: i64,
 }
```
1029-1036: Avoid precision loss and handle negative sample times from `IMediaSample`.

`GetTime` returns 100-ns units. Converting via microseconds drops precision, and casting a negative `start_time` to `u64` is risky. Prefer nanos with saturation and mark the unused `end_time` to appease clippy.

Apply:

```diff
-    unsafe { psample.GetTime(&mut start_time, &mut end_time) }?;
+    let mut _end_time = 0;
+    unsafe { psample.GetTime(&mut start_time, &mut _end_time) }?;

-    timestamp: Duration::from_micros(start_time as u64 / 10),
+    timestamp: if start_time >= 0 {
+        Duration::from_nanos((start_time as u64).saturating_mul(100))
+    } else {
+        Duration::ZERO
+    },
     perf_counter,
```

Optional: propagate sample flags (`IsDiscontinuity`, `IsPreroll`, `IsSyncPoint`) via `CallbackData` for downstream sync correction.
806-809: Remove the leftover `first_ref_time`; it's unused after the timing rewrite.

This field appears dead; dropping it will keep the type lean and silence lints.

```diff
-    first_ref_time: RefCell<Option<Instant>>,
```
119-121: Example kept in sync; consider exposing the new perf counter too.

Optionally print `perf_counter` to aid debugging with the new timing model.

```diff
-    // dbg!(frame.reference_time);
     dbg!(frame.timestamp);
+    dbg!(frame.perf_counter);
```
23-24: WinRT interop imports look good; ensure apartment init and avoid unwraps.

Before using WinRT types (`CreateDirect3D11DeviceFromDXGIDevice`, frame pool/session), initialize MTA to avoid sporadic COM/WinRT errors on threads not pre-initialized. Also prefer error mapping over `unwrap()` to surface `StartRunnerError`.

Proposed additions near the start of `Capturer::new`:

```diff
 pub fn new(
     item: GraphicsCaptureItem,
     settings: Settings,
     mut callback: impl FnMut(Frame) -> windows::core::Result<()> + Send + 'static,
     mut closed_callback: impl FnMut() -> windows::core::Result<()> + Send + 'static,
     mut d3d_device: Option<ID3D11Device>,
 ) -> Result<Capturer, NewCapturerError> {
+    // Ensure COM/WinRT MTA is initialized for this thread.
+    windows::initialize_mta().map_err(|_| NewCapturerError::Other(windows::core::Error::OK))?;
```

And replace `unwrap()`s on WinRT calls (device/frame pool/session/event hookup) with `map_err(StartRunnerError::...)` to keep error paths consistent.

Also applies to: 41-43
crates/recording/Cargo.toml (1)
76-76: Avoid duplicating `tracing-subscriber` in deps and dev-deps with divergent features.

Consolidate to one place (prefer dev-deps unless it's used in library code) or align the features to prevent two builds of the crate.

```diff
-[dependencies]
-# ...
-tracing-subscriber = { version = "0.3.19" }
+[dependencies]
+# If only used in tests/examples, drop from dependencies.

 [dev-dependencies]
 tempfile = "3.20.0"
-tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
+tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
```

apps/cli/src/record.rs (1)
61-67: Pass the CLI `--fps` if the builder supports it; otherwise consider dropping the flag.

Currently `self.fps` is unused. If supported, thread it through; if not, remove it to avoid user confusion.

```diff
-    let actor = studio_recording::Actor::builder(path, target_info)
+    let mut builder = studio_recording::Actor::builder(path, target_info)
         .with_system_audio(self.system_audio)
-        .with_custom_cursor(false)
-        .build()
+        .with_custom_cursor(false);
+    // If available:
+    if let Some(fps) = self.fps { builder = builder.with_fps(fps); }
+    let actor = builder.build()
         .await
         .map_err(|e| e.to_string())?;
```

Also, exposing a `stop()` on the returned handle would avoid the tuple-field access (`actor.0.stop()`).

crates/recording/src/cursor.rs (1)
85-85: Prefer naming it `elapsed_ms` and keep it a `Duration` until the last step.

Minor readability: compute once and convert to ms at the use sites.

```diff
-    let elapsed = start_time.instant().elapsed().as_secs_f64() * 1000.0;
+    let elapsed_ms = start_time.instant().elapsed().as_secs_f64() * 1000.0;
```

crates/recording/src/sources/audio_input.rs (1)
92-94: Fix incorrect log/error text: this is a microphone feed, not a camera.

```diff
-        error!("Lost connection with the camera feed");
-        break Err("Lost connection with the camera feed".to_string());
+        error!("Lost connection with the microphone feed");
+        break Err("Lost connection with the microphone feed".to_string());
```

crates/recording/src/feeds/camera.rs (3)
5-7: OS-specific timestamp imports — ensure explicit unsupported-target handling.

If Linux/other targets are not supported, add a `compile_error!` to make this explicit; otherwise provide a fallback `Timestamp` construction.

```diff
 use cap_timestamp::Timestamp;
+#[cfg(not(any(windows, target_os = "macos")))]
+compile_error!("Camera timestamps are only implemented for Windows and macOS.");
```
270-271: Avoid mixing legacy PTS with the new Timestamp-based flow.

Setting PTS from an unrelated clock can conflict with the mixer/encoder that will (presumably) assign PTS from `Timestamp`. Consider leaving PTS unset here.

```diff
-    ff_frame.set_pts(Some(frame.timestamp.as_micros() as i64));
+    // Defer PTS assignment; it will be derived from `Timestamp` downstream.
+    ff_frame.set_pts(None);
```

Verify downstream indeed overwrites/sets PTS from `Timestamp`.
286-298: Per-OS timestamp assignment looks correct; centralize conversion to reduce duplication.
Consider adding a helper (e.g., `cap_timestamp::Timestamp::from_camera_native(..)`) to encapsulate these platform specifics.

crates/recording/src/sources/screen_capture/mod.rs (1)
198-199: Remove the unused `start_time` field and its builder propagation

`start_time` is only declared and cloned (lines 198–199, 238, 406) but never consumed after migrating to `Timestamp`. Drop the field/parameter and its propagation in `crates/recording/src/sources/screen_capture/mod.rs`.

apps/desktop/src-tauri/src/recording.rs (3)
429-432: Copy/paste: the log message references "studio" in the Instant path.

This misleads debugging.

```diff
-    let (handle, actor_done_rx) = builder.build().await.map_err(|e| {
-        error!("Failed to spawn studio recording actor: {e}");
+    let (handle, actor_done_rx) = builder.build().await.map_err(|e| {
+        error!("Failed to spawn instant recording actor: {e}");
         e.to_string()
     })?;
```
354-355: Replace `println!` with tracing.

Keep logging consistent and respect log levels.

```diff
-    println!("spawning actor");
+    tracing::debug!("spawning recording actor");
```
672-679: Duplicate Camera window close.

You close the Camera window twice. Remove one.

```diff
-    if let Some(v) = CapWindowId::Camera.get(&handle) {
-        let _ = v.close();
-    }
     let _ = app.mic_feed.ask(microphone::RemoveInput).await;
     let _ = app.camera_feed.ask(camera::RemoveInput).await;
     if let Some(win) = CapWindowId::Camera.get(&handle) {
         win.close().ok();
     }
```

crates/recording/src/sources/audio_mixer.rs (5)
69-76: Don't `expect` on filter discovery; bubble errors up.

Panicking here brings down the process if a filter is missing. Convert to a proper error and let `build()` return it.

```diff
-    filter_graph.add(
-        &ffmpeg::filter::find("abuffer").expect("Failed to find abuffer filter"),
+    let abuffer = ffmpeg::filter::find("abuffer").ok_or_else(|| {
+        ffmpeg::Error::Bug // or a custom error type if available
+    })?;
+    filter_graph.add(
+        &abuffer,
         &format!("src{i}"),
         &args,
     )
 ...
-    let mut amix = filter_graph.add(
-        &ffmpeg::filter::find("amix").expect("Failed to find amix filter"),
+    let amix_f = ffmpeg::filter::find("amix").ok_or(ffmpeg::Error::Bug)?;
+    let mut amix = filter_graph.add(
+        &amix_f,
         "amix",
         &format!("inputs={}:duration=first:dropout_transition=0", abuffers.len()),
     )?;
 ...
-    let mut aformat = filter_graph.add(
-        &ffmpeg::filter::find("aformat").expect("Failed to find aformat filter"),
+    let aformat_f = ffmpeg::filter::find("aformat").ok_or(ffmpeg::Error::Bug)?;
+    let mut aformat = filter_graph.add(
+        &aformat_f,
         "aformat",
         aformat_args,
     )?;
 ...
-    let mut abuffersink = filter_graph.add(
-        &ffmpeg::filter::find("abuffersink").expect("Failed to find abuffersink filter"),
+    let abuffersink_f = ffmpeg::filter::find("abuffersink").ok_or(ffmpeg::Error::Bug)?;
+    let mut abuffersink = filter_graph.add(
+        &abuffersink_f,
         "sink",
         "",
     )?;
```

Also applies to: 77-85, 88-98
199-201: Use tracing rather than stdout for gap logs.

`print!` is noisy in production. Prefer `tracing::debug!`.

```diff
-    print!("Gap between last buffer frame, inserting {gap:?} of silence");
+    tracing::debug!("Gap in audio source; inserting {gap:?} of silence");
```
301-317: Set frame PTS when feeding filters so alignment is explicit.

You rely on buffer insertion for alignment, but `abuffer` also supports PTS-based timing. Setting `frame.set_pts(Some(...))` aligned to the configured time_base makes alignment more robust.

Would you like a patch calculating PTS from `Timestamp` relative to `self.timestamps` and `INFO.time_base`?
288-322: The error type `Result<(), ()>` loses context.

When `send` fails or sink extraction fails, you return `()`. Propagate a descriptive error (e.g., `String` or a custom error).

```diff
-    fn tick(&mut self, start: Timestamps, now: Timestamp) -> Result<(), ()> {
+    fn tick(&mut self, start: Timestamps, now: Timestamp) -> Result<(), String> {
     ...
-        return Err(());
+        return Err("audio mixer output channel closed".into());
     ...
-    Ok(())
+    Ok(())
```

And adjust the caller accordingly.
352-357: A tight spin with a 5 ms sleep can waste CPU. Consider backoff or blocking waits.

If sources are sparse, waiting on channel readiness (e.g., `recv_timeout`) or increasing the sleep when idle can reduce CPU.

crates/enc-ffmpeg/src/mux/ogg.rs (1)
27-29: Provide a targeted accessor instead of exposing `OpusEncoder`

Downstream crates only call `input_time_base()` on your encoder. Add a dedicated method on `OggFile`, for example:

```rust
pub fn input_time_base(&self) -> /* return type of input_time_base() */ {
    self.encoder.input_time_base()
}
```

and make the existing `encoder()` method private (or `pub(crate)`). This narrows your public API and avoids locking in the full `OpusEncoder` type across crates.

crates/enc-ffmpeg/src/audio/aac.rs (1)
112-118: Guard against a zero `frame_size()`.

Some encoders may report 0 early; avoid a potential tight loop or asking the resampler for 0 samples.

Apply:

```diff
 pub fn queue_frame(&mut self, frame: frame::Audio, output: &mut format::context::Output) {
     self.resampler.add_frame(frame);

-    let frame_size = self.encoder.frame_size() as usize;
+    let frame_size = self.encoder.frame_size() as usize;
+    if frame_size == 0 {
+        return;
+    }

     while let Some(frame) = self.resampler.get_frame(frame_size) {
         self.encoder.send_frame(&frame).unwrap();
```

crates/enc-ffmpeg/src/audio/buffered_resampler.rs (5)
5-5: Fix the typo ("resmaples") in the documentation comment.

```diff
-/// Consumes audio frames, resmaples them, buffers the results,
+/// Consumes audio frames, resamples them, buffers the results,
```
44-46: Remove the unnecessary `return` statement.

The explicit `return` is unnecessary here since it's the last expression in the function.

```diff
-    return remaining_samples;
+    remaining_samples
```
65-66: Consider precision improvements for the PTS calculation.

The current PTS calculation could accumulate rounding errors over time. Consider using integer arithmetic where possible to maintain precision.

```diff
-    let resampled_pts =
-        (pts as f64 * (resampled_frame.rate() as f64 / frame.rate() as f64)) as i64;
+    // Use integer arithmetic to avoid floating point precision issues
+    let resampled_pts = (pts * resampled_frame.rate() as i64) / frame.rate() as i64;
```
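The difference between the two forms is easy to demonstrate in isolation. A sketch with hypothetical helper names, comparing the integer rescale against the f64 round-trip:

```rust
// Integer rescale of a PTS between rates: (pts * out_rate) / in_rate stays
// exact whenever the product divides evenly, while the f64 form introduces
// binary rounding noise before the final truncation.
fn rescale_pts_int(pts: i64, in_rate: i64, out_rate: i64) -> i64 {
    (pts * out_rate) / in_rate
}

fn rescale_pts_f64(pts: i64, in_rate: i64, out_rate: i64) -> i64 {
    (pts as f64 * (out_rate as f64 / in_rate as f64)) as i64
}

fn main() {
    // Exact when the ratio divides evenly: one second of 44.1 kHz audio
    // maps to exactly one second at 48 kHz.
    assert_eq!(rescale_pts_int(44_100, 44_100, 48_000), 48_000);
    // Both forms floor fractional results, but only the f64 path carries
    // representation error into that floor:
    assert_eq!(rescale_pts_int(1, 44_100, 48_000), 1); // floor(1.088…)
    let _ = rescale_pts_f64(1, 44_100, 48_000);
}
```

One caveat worth keeping in mind: `pts * out_rate` can overflow `i64` for very large PTS values, so a widening to `i128` may be warranted if PTS grows unbounded.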
92-248: Consider extracting common logic between the packed and non-packed formats.

The `get_frame_inner` method has significant code duplication between the packed and non-packed format branches (lines 101-169 vs 171-244). The logic for handling PTS, silence insertion, and sample copying is nearly identical.

Consider extracting the common logic into helper methods to reduce duplication and improve maintainability. For example:
- Extract silence insertion logic
- Extract sample copying logic
- Create a generic frame processing loop that handles both formats
This would make the code more maintainable and less prone to bugs when changes are needed.
250-256: Consider documenting the behavior when insufficient samples are available.

The `get_frame` method returns `None` when there aren't enough samples buffered. It would be helpful to document this behavior in the method's doc comment for API clarity.

```diff
+    /// Retrieves a frame with exactly `samples` samples from the buffer.
+    /// Returns None if there aren't enough samples buffered to fulfill the request.
+    /// The returned frame will have its PTS set appropriately based on the consumed samples.
     pub fn get_frame(&mut self, samples: usize) -> Option<ffmpeg::frame::Audio> {
```

crates/recording/src/pipeline/builder.rs (1)
108-112: Public `build` signature changed — ensure downstreams are updated and changelog notes added.

Returning `RecordingPipeline` is a breaking API change for callers of `PipelineBuilder::build`.

I can draft a CHANGELOG entry and a quick fix PR for dependents. Want me to?
crates/recording/examples/recording-cli.rs (3)
1-2: Avoid wildcard import; import only what you use.
Helps with clippy's `wildcard_imports` and keeps the example explicit.

Apply:

```diff
-use cap_recording::{feeds::microphone, screen_capture::ScreenCaptureTarget, *};
+use cap_recording::{
+    feeds::microphone,
+    instant_recording,
+    screen_capture::ScreenCaptureTarget,
+    MicrophoneFeed,
+};
```
4-4: Prepare for mic feed wiring by importing `Arc`.

```diff
-use std::time::Duration;
+use std::{sync::Arc, time::Duration};
```
57-58: Drop the extra sleep; readiness is already awaited.
The prior `ask(...).await` ensures the mic is ready; this sleep is redundant.

```diff
-    tokio::time::sleep(Duration::from_millis(10)).await;
```

crates/timestamp/src/macos.rs (1)
22-27: Cache `TimeBaseInfo` once.

`TimeBaseInfo::new()` is constant per boot. Cache it with `OnceLock` to cut syscalls and stabilize conversions.

```diff
+use std::sync::OnceLock;
+
+fn timebase() -> &'static TimeBaseInfo {
+    static TB: OnceLock<TimeBaseInfo> = OnceLock::new();
+    TB.get_or_init(TimeBaseInfo::new)
+}
 @@
-    let info = TimeBaseInfo::new();
+    let info = *timebase();
 @@
-    let info = TimeBaseInfo::new();
+    let info = *timebase();
 @@
-    let info = TimeBaseInfo::new();
+    let info = *timebase();
```

Also applies to: 40-42, 51-53
crates/recording/src/sources/screen_capture/macos.rs (1)
167-168: Forcing sRGB: double-check downstream color expectations.

Locking capture to sRGB can clamp wide-gamut content. If the downstream assumes display-native color, consider making this configurable or documenting the trade-off.
crates/recording/src/sources/camera.rs (1)
35-37: Clean up the commented-out legacy timing code.

These comments add noise and may confuse future readers.

```diff
-    // first_frame_instant: Instant,
-    // first_frame_timestamp: Duration,
 @@
-    // let relative_timestamp = camera_frame.timestamp - first_frame_timestamp;
 @@
-    // let first_frame_instant = *self.first_frame_instant.get_or_insert(frame.reference_time);
-    // let first_frame_timestamp = *self.first_frame_timestamp.get_or_insert(frame.timestamp);
 @@
-    // let first_frame_instant =
-    //     *self.first_frame_instant.get_or_insert(frame.reference_time);
-    // let first_frame_timestamp =
-    //     *self.first_frame_timestamp.get_or_insert(frame.timestamp);
```

Also applies to: 48-49, 71-75, 100-109
crates/recording/src/capture_pipeline.rs (4)
78-85: Don't silently ignore video queue errors; log and continue.

Both the first-frame path and the loop drop encoder errors. Prefer logging to aid field debugging.

```diff
-    let _ = screen_encoder.queue_video_frame(frame.as_ref());
+    if let Err(err) = screen_encoder.queue_video_frame(frame.as_ref()) {
+        tracing::error!("queue_video_frame(first): {err}");
+    }
 ...
-    Ok((frame, _)) => {
-        let _ = screen_encoder.queue_video_frame(frame.as_ref());
-    }
+    Ok((frame, _)) => {
+        if let Err(err) = screen_encoder.queue_video_frame(frame.as_ref()) {
+            tracing::error!("queue_video_frame(loop): {err}");
+        }
+    }
```

Also applies to: 87-99
180-182: Round audio PTS to the nearest sample to avoid systematic truncation.

Truncation can cause drift over long recordings. Align with the other places where you already round.

```diff
-    let pts = (ts_offset.as_secs_f64() * frame.rate() as f64) as i64;
+    let pts = (ts_offset.as_secs_f64() * frame.rate() as f64).round() as i64;
```
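The bias is visible even on a single frame. A minimal sketch (helper names are illustrative, not from the codebase):

```rust
// Truncating a fractional sample position always rounds toward zero, so the
// error is one-sided and accumulates; .round() keeps it within half a sample.
fn pts_trunc(secs: f64, rate: f64) -> i64 {
    (secs * rate) as i64
}

fn pts_round(secs: f64, rate: f64) -> i64 {
    (secs * rate).round() as i64
}

fn main() {
    let secs = 0.0000125; // 0.6 samples at 48 kHz
    assert_eq!(pts_trunc(secs, 48_000.0), 0); // the sample is dropped
    assert_eq!(pts_round(secs, 48_000.0), 1); // the sample is kept
}
```

With truncation the error never cancels out, which is why long recordings drift; rounding makes the per-frame error symmetric around zero.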
510-512: Round audio PTS on the Windows instant path as well.

Keep PTS computation consistent with the other encoders.

```diff
-    let pts = (ts_offset.as_secs_f64() * frame.rate() as f64) as i64;
+    let pts = (ts_offset.as_secs_f64() * frame.rate() as f64).round() as i64;
```
591-595: Pause/resume is not honored in the Windows software path.

The pause logic is commented out; video keeps flowing during pause, diverging from macOS behavior. Consider honoring `pause_flag` for parity.

crates/recording/src/instant_recording.rs (3)
214-242: Unify the time anchor across capture and encoders (optional).

You pass `SystemTime::now()` to screen capture, while encoders derive `Timestamps::now()` internally. Using a single `Timestamps` created at actor start and threading it to both sides would eliminate any baseline skew, even if small. Not critical because you align on the first video frame, but worth considering for future-proofing.
256-258: The control mailbox size is 1; consider a small buffer.

Back-to-back UI actions (e.g., Pause then Stop) could transiently block the sender. A capacity of 4-8 avoids head-of-line blocking without affecting ordering.

```diff
-    let (ctrl_tx, ctrl_rx) = flume::bounded(1);
+    let (ctrl_tx, ctrl_rx) = flume::bounded(8);
```
102-107: Consider populating audio metadata when present.
`sample_rate: None` in `CompletedRecording` discards useful info when mic/system audio exists. If it's trivial to get from the mixer, consider setting it.

crates/recording/src/studio_recording.rs (3)
313-318: Typo in log message ("shuting").

```diff
-    tracing::info!("pipeline shuting down");
+    tracing::info!("pipeline shutting down");
```
793-818: Factor the repeated "first timestamp capture + PTS calc" into a helper.

The mic, system audio, and camera encoders share near-identical logic. Extract it into a small function to reduce duplication and the risk of desync on future edits.
Also applies to: 848-871, 901-923
935-936: Camera fps integer division loses precision (e.g., 30000/1001 -> 29).

If downstream can handle non-integer FPS, consider storing a rational or f64 (29.97). If you must keep u32, at least round instead of flooring.

```diff
-    fps: (camera_config.frame_rate.0 / camera_config.frame_rate.1) as u32,
+    fps: ((camera_config.frame_rate.0 as f64 / camera_config.frame_rate.1 as f64).round()) as u32,
```

crates/timestamp/src/win.rs (2)
16-21: Consider caching the frequency value for better performance.

The `QueryPerformanceFrequency` call is relatively expensive, and the frequency value doesn't change during program execution. Consider caching it in a static variable.

```diff
+use std::sync::OnceLock;
+
+static PERF_FREQUENCY: OnceLock<i64> = OnceLock::new();
+
+fn get_performance_frequency() -> i64 {
+    *PERF_FREQUENCY.get_or_init(|| {
+        let mut freq = 0;
+        unsafe { QueryPerformanceFrequency(&mut freq).unwrap() };
+        freq
+    })
+}
+
 impl PerformanceCounterTimestamp {
     pub fn duration_since(&self, other: Self) -> Duration {
-        let mut freq = 0;
-        unsafe { QueryPerformanceFrequency(&mut freq).unwrap() };
-
+        let freq = get_performance_frequency();
         Duration::from_secs_f64((self.0 - other.0) as f64 / freq as f64)
     }
```
36-44: Apply frequency caching to arithmetic operations.

The Add and Sub implementations should also use the cached frequency value for consistency and performance.

```diff
 impl Add<Duration> for PerformanceCounterTimestamp {
     type Output = Self;

     fn add(self, rhs: Duration) -> Self::Output {
-        let mut freq = 0;
-        unsafe { QueryPerformanceFrequency(&mut freq) }.unwrap();
+        let freq = get_performance_frequency();
         Self(self.0 + (rhs.as_secs_f64() * freq as f64) as i64)
     }
 }

 impl Sub<Duration> for PerformanceCounterTimestamp {
     type Output = Self;

     fn sub(self, rhs: Duration) -> Self::Output {
-        let mut freq = 0;
-        unsafe { QueryPerformanceFrequency(&mut freq) }.unwrap();
+        let freq = get_performance_frequency();
         Self(self.0 - (rhs.as_secs_f64() * freq as f64) as i64)
     }
 }
```

Also applies to: 46-54
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- Cargo.lock is excluded by !**/*.lock
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (52)

- Cargo.toml (1 hunks)
- apps/cli/src/record.rs (2 hunks)
- apps/desktop/package.json (1 hunks)
- apps/desktop/src-tauri/src/recording.rs (7 hunks)
- crates/camera-directshow/examples/cli.rs (1 hunks)
- crates/camera-directshow/src/lib.rs (4 hunks)
- crates/camera-mediafoundation/Cargo.toml (1 hunks)
- crates/camera-mediafoundation/src/lib.rs (4 hunks)
- crates/camera-windows/examples/cli.rs (1 hunks)
- crates/camera-windows/src/lib.rs (3 hunks)
- crates/camera/src/lib.rs (2 hunks)
- crates/camera/src/macos.rs (1 hunks)
- crates/camera/src/windows.rs (1 hunks)
- crates/cpal-ffmpeg/Cargo.toml (0 hunks)
- crates/cpal-ffmpeg/src/lib.rs (1 hunks)
- crates/enc-avfoundation/Cargo.toml (0 hunks)
- crates/enc-avfoundation/src/mp4.rs (7 hunks)
- crates/enc-ffmpeg/src/audio/aac.rs (5 hunks)
- crates/enc-ffmpeg/src/audio/buffered_resampler.rs (1 hunks)
- crates/enc-ffmpeg/src/audio/mod.rs (1 hunks)
- crates/enc-ffmpeg/src/audio/opus.rs (5 hunks)
- crates/enc-ffmpeg/src/mux/ogg.rs (1 hunks)
- crates/ffmpeg-utils/Cargo.toml (0 hunks)
- crates/ffmpeg-utils/src/lib.rs (0 hunks)
- crates/media-info/Cargo.toml (0 hunks)
- crates/media-info/src/lib.rs (5 hunks)
- crates/recording/Cargo.toml (2 hunks)
- crates/recording/examples/recording-cli.rs (2 hunks)
- crates/recording/src/capture_pipeline.rs (20 hunks)
- crates/recording/src/cursor.rs (3 hunks)
- crates/recording/src/feeds/camera.rs (3 hunks)
- crates/recording/src/feeds/microphone.rs (5 hunks)
- crates/recording/src/instant_recording.rs (12 hunks)
- crates/recording/src/lib.rs (1 hunks)
- crates/recording/src/pipeline/audio_buffer.rs (3 hunks)
- crates/recording/src/pipeline/builder.rs (3 hunks)
- crates/recording/src/pipeline/mod.rs (1 hunks)
- crates/recording/src/sources/audio_input.rs (2 hunks)
- crates/recording/src/sources/audio_mixer.rs (3 hunks)
- crates/recording/src/sources/camera.rs (5 hunks)
- crates/recording/src/sources/mod.rs (0 hunks)
- crates/recording/src/sources/screen_capture/macos.rs (6 hunks)
- crates/recording/src/sources/screen_capture/mod.rs (3 hunks)
- crates/recording/src/sources/screen_capture/windows.rs (6 hunks)
- crates/recording/src/studio_recording.rs (24 hunks)
- crates/scap-direct3d/src/lib.rs (2 hunks)
- crates/scap-ffmpeg/Cargo.toml (0 hunks)
- crates/scap-ffmpeg/src/cpal.rs (1 hunks)
- crates/timestamp/Cargo.toml (1 hunks)
- crates/timestamp/src/lib.rs (1 hunks)
- crates/timestamp/src/macos.rs (1 hunks)
- crates/timestamp/src/win.rs (1 hunks)
💤 Files with no reviewable changes (7)
- crates/recording/src/sources/mod.rs
- crates/ffmpeg-utils/Cargo.toml
- crates/enc-avfoundation/Cargo.toml
- crates/media-info/Cargo.toml
- crates/scap-ffmpeg/Cargo.toml
- crates/cpal-ffmpeg/Cargo.toml
- crates/ffmpeg-utils/src/lib.rs
🧰 Additional context used
📓 Path-based instructions (4)
{crates/*,apps/cli,apps/desktop/src-tauri}/Cargo.toml
📄 CodeRabbit inference engine (AGENTS.md)
Crate package names should be kebab-case
Files:
crates/camera-mediafoundation/Cargo.toml, crates/timestamp/Cargo.toml, crates/recording/Cargo.toml
{crates,apps/cli,apps/desktop/src-tauri}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
{crates,apps/cli,apps/desktop/src-tauri}/**/*.rs: Rust code must be formatted with rustfmt and adhere to workspace clippy lints
Rust module files should use snake_case names
Files:
crates/enc-ffmpeg/src/mux/ogg.rs, crates/camera-directshow/examples/cli.rs, crates/recording/src/pipeline/builder.rs, crates/camera/src/macos.rs, crates/enc-ffmpeg/src/audio/buffered_resampler.rs, crates/enc-ffmpeg/src/audio/mod.rs, crates/recording/src/pipeline/mod.rs, crates/camera/src/lib.rs, crates/scap-direct3d/src/lib.rs, crates/camera-windows/examples/cli.rs, crates/camera-windows/src/lib.rs, crates/timestamp/src/macos.rs, crates/cpal-ffmpeg/src/lib.rs, crates/recording/examples/recording-cli.rs, crates/camera-mediafoundation/src/lib.rs, apps/cli/src/record.rs, crates/enc-ffmpeg/src/audio/aac.rs, crates/recording/src/pipeline/audio_buffer.rs, crates/recording/src/lib.rs, crates/recording/src/cursor.rs, crates/camera/src/windows.rs, crates/recording/src/sources/audio_input.rs, crates/scap-ffmpeg/src/cpal.rs, crates/recording/src/feeds/camera.rs, crates/timestamp/src/lib.rs, crates/timestamp/src/win.rs, crates/recording/src/capture_pipeline.rs, crates/enc-avfoundation/src/mp4.rs, crates/recording/src/feeds/microphone.rs, crates/enc-ffmpeg/src/audio/opus.rs, crates/camera-directshow/src/lib.rs, crates/recording/src/sources/camera.rs, crates/media-info/src/lib.rs, crates/recording/src/studio_recording.rs, crates/recording/src/sources/audio_mixer.rs, apps/desktop/src-tauri/src/recording.rs, crates/recording/src/sources/screen_capture/mod.rs, crates/recording/src/sources/screen_capture/macos.rs, crates/recording/src/sources/screen_capture/windows.rs, crates/recording/src/instant_recording.rs
{crates,apps/cli,apps/desktop/src-tauri}/**/{src,tests}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Rust tests should live in src (inline/unit) or tests (integration) directories
Files:
crates/enc-ffmpeg/src/mux/ogg.rs, crates/recording/src/pipeline/builder.rs, crates/camera/src/macos.rs, crates/enc-ffmpeg/src/audio/buffered_resampler.rs, crates/enc-ffmpeg/src/audio/mod.rs, crates/recording/src/pipeline/mod.rs, crates/camera/src/lib.rs, crates/scap-direct3d/src/lib.rs, crates/camera-windows/src/lib.rs, crates/timestamp/src/macos.rs, crates/cpal-ffmpeg/src/lib.rs, crates/camera-mediafoundation/src/lib.rs, apps/cli/src/record.rs, crates/enc-ffmpeg/src/audio/aac.rs, crates/recording/src/pipeline/audio_buffer.rs, crates/recording/src/lib.rs, crates/recording/src/cursor.rs, crates/camera/src/windows.rs, crates/recording/src/sources/audio_input.rs, crates/scap-ffmpeg/src/cpal.rs, crates/recording/src/feeds/camera.rs, crates/timestamp/src/lib.rs, crates/timestamp/src/win.rs, crates/recording/src/capture_pipeline.rs, crates/enc-avfoundation/src/mp4.rs, crates/recording/src/feeds/microphone.rs, crates/enc-ffmpeg/src/audio/opus.rs, crates/camera-directshow/src/lib.rs, crates/recording/src/sources/camera.rs, crates/media-info/src/lib.rs, crates/recording/src/studio_recording.rs, crates/recording/src/sources/audio_mixer.rs, apps/desktop/src-tauri/src/recording.rs, crates/recording/src/sources/screen_capture/mod.rs, crates/recording/src/sources/screen_capture/macos.rs, crates/recording/src/sources/screen_capture/windows.rs, crates/recording/src/instant_recording.rs
apps/desktop/src-tauri/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
Use tauri_specta on the Rust side for IPC (typed commands/events) and emit events via generated types
Files:
apps/desktop/src-tauri/src/recording.rs
🧬 Code graph analysis (21)

crates/camera-windows/examples/cli.rs (2)
- crates/camera-mediafoundation/src/lib.rs (2): bytes (468-483), len (69-71)
- crates/camera-windows/src/lib.rs (2): bytes (188-199), pixel_format (362-364)

crates/timestamp/src/macos.rs (2)
- crates/timestamp/src/win.rs (6): new (12-14), now (23-27), duration_since (16-21), from_cpal (29-33), add (39-43), sub (49-53)
- crates/timestamp/src/lib.rs (7): now (103-112), duration_since (24-33), from_cpal (35-44), instant (114-116), add (50-59), add (65-74), sub (80-89)

crates/recording/examples/recording-cli.rs (3)
- crates/scap-targets/src/platform/win.rs (2): std (967-967), primary (68-89)
- crates/recording/src/feeds/microphone.rs (3): new (84-94), default (96-99), handle (245-365)
- crates/recording/src/instant_recording.rs (2): new (167-174), builder (154-156)

crates/camera-mediafoundation/src/lib.rs (1)
- crates/enc-mediafoundation/src/video/h264.rs (1): sample (39-41)

apps/cli/src/record.rs (1)
- crates/recording/src/studio_recording.rs (1): builder (129-131)

crates/enc-ffmpeg/src/audio/aac.rs (2)
- crates/enc-ffmpeg/src/audio/buffered_resampler.rs (2): new (19-26), output (48-50)
- crates/media-info/src/lib.rs (2): new (29-45), rate (107-109)

crates/recording/src/pipeline/audio_buffer.rs (1)
- crates/audio/src/lib.rs (1): cast_bytes_to_f32_slice (37-41)

crates/recording/src/sources/audio_input.rs (5)
- crates/recording/src/sources/camera.rs (2): info (28-30), init (20-26)
- crates/recording/src/sources/screen_capture/mod.rs (2): info (422-424), init (275-411)
- crates/timestamp/src/macos.rs (1): from_cpal (29-33)
- crates/timestamp/src/lib.rs (1): from_cpal (35-44)
- crates/timestamp/src/win.rs (1): from_cpal (29-33)

crates/recording/src/feeds/camera.rs (5)
- crates/recording/src/capture_pipeline.rs (2): tokio (154-154), tokio (491-491)
- crates/recording/src/sources/audio_mixer.rs (1): new (30-35)
- crates/timestamp/src/macos.rs (1): new (14-16)
- crates/recording/src/sources/screen_capture/windows.rs (2): new (356-362), new (481-498)
- crates/timestamp/src/win.rs (1): new (12-14)

crates/timestamp/src/lib.rs (2)
- crates/timestamp/src/macos.rs (5): duration_since (22-27), from_cpal (29-33), add (39-44), sub (50-55), now (18-20)
- crates/timestamp/src/win.rs (5): duration_since (16-21), from_cpal (29-33), add (39-43), sub (49-53), now (23-27)

crates/timestamp/src/win.rs (3)
- crates/timestamp/src/macos.rs (6): new (14-16), duration_since (22-27), now (18-20), from_cpal (29-33), add (39-44), sub (50-55)
- crates/recording/src/sources/screen_capture/windows.rs (2): new (356-362), new (481-498)
- crates/timestamp/src/lib.rs (7): duration_since (24-33), now (103-112), from_cpal (35-44), instant (114-116), add (50-59), add (65-74), sub (80-89)

crates/recording/src/capture_pipeline.rs (7)
- crates/recording/src/sources/audio_mixer.rs (2): builder (324-326), new (30-35)
- crates/timestamp/src/macos.rs (2): now (18-20), new (14-16)
- crates/timestamp/src/lib.rs (1): now (103-112)
- crates/recording/src/sources/audio_input.rs (1): init (25-31)
- crates/recording/src/sources/camera.rs (1): init (20-26)
- crates/recording/src/sources/screen_capture/mod.rs (1): init (275-411)
- crates/recording/src/sources/screen_capture/windows.rs (2): new (356-362), new (481-498)

crates/recording/src/feeds/microphone.rs (4)
- crates/media-info/src/lib.rs (1): ffmpeg_sample_format_for (266-276)
- crates/timestamp/src/macos.rs (1): from_cpal (29-33)
- crates/timestamp/src/lib.rs (1): from_cpal (35-44)
- crates/timestamp/src/win.rs (1): from_cpal (29-33)

crates/enc-ffmpeg/src/audio/opus.rs (3)
- crates/enc-ffmpeg/src/audio/buffered_resampler.rs (2): new (19-26), output (48-50)
- crates/media-info/src/lib.rs (2): new (29-45), rate (107-109)
- crates/enc-ffmpeg/src/audio/aac.rs (2): queue_frame (112-122), queue_frame (149-151)

crates/recording/src/sources/camera.rs (2)
- crates/recording/src/sources/audio_input.rs (2): info (33-35), init (25-31)
- crates/recording/src/sources/screen_capture/mod.rs (2): info (422-424), init (275-411)

crates/recording/src/studio_recording.rs (7)
- crates/recording/src/sources/audio_mixer.rs (4): sources (53-75), builder (324-326), new (30-35), build (50-122)
- crates/recording/src/sources/screen_capture/windows.rs (2): new (356-362), new (481-498)
- crates/recording/src/pipeline/builder.rs (3): build (108-164), e (81-81), e (83-83)
- crates/timestamp/src/lib.rs (1): now (103-112)
- crates/recording/src/sources/screen_capture/mod.rs (3): fps (258-260), init (275-411), config (418-420)
- crates/recording/src/sources/audio_input.rs (1): init (25-31)
- crates/recording/src/sources/camera.rs (1): init (20-26)

crates/recording/src/sources/audio_mixer.rs (4)
- crates/recording/src/sources/screen_capture/windows.rs (2): new (356-362), new (481-498)
- crates/timestamp/src/win.rs (3): new (12-14), now (23-27), duration_since (16-21)
- crates/media-info/src/lib.rs (3): new (29-45), new_raw (47-55), rate (107-109)
- crates/timestamp/src/lib.rs (2): now (103-112), duration_since (24-33)

apps/desktop/src-tauri/src/recording.rs (3)
- apps/desktop/src/utils/tauri.ts (4): RecordingMode (420-420), ScreenCaptureTarget (431-431), VideoUploadInfo (454-454), StartRecordingInputs (438-438)
- crates/recording/src/studio_recording.rs (1): builder (129-131)
- crates/recording/src/instant_recording.rs (1): builder (154-156)

crates/recording/src/sources/screen_capture/macos.rs (3)
- crates/recording/src/sources/audio_mixer.rs (1): new (30-35)
- crates/timestamp/src/macos.rs (1): new (14-16)
- crates/recording/src/sources/screen_capture/windows.rs (2): new (356-362), new (481-498)

crates/recording/src/sources/screen_capture/windows.rs (3)
- crates/timestamp/src/win.rs (3): new (12-14), now (23-27), from_cpal (29-33)
- crates/timestamp/src/lib.rs (2): now (103-112), from_cpal (35-44)
- crates/scap-cpal/src/lib.rs (2): create_capturer (17-52), config (71-73)

crates/recording/src/instant_recording.rs (5)
- crates/recording/src/capture_pipeline.rs (1): create_screen_capture (633-657)
- crates/recording/src/sources/audio_mixer.rs (4): sources (53-75), builder (324-326), new (30-35), build (50-122)
- crates/recording/src/studio_recording.rs (11): stop (103-105), pause (107-109), resume (111-113), cancel (115-117), builder (129-131), new (144-153), new (634-649), with_system_audio (155-158), with_mic_feed (160-163), build (175-189), shutdown (308-348)
- apps/desktop/src-tauri/src/recording.rs (5): stop (94-117), pause (73-78), resume (80-85), cancel (119-124), capture_target (59-64)
- crates/recording/src/pipeline/builder.rs (1): build (108-164)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
```diff
 let new_pts = self
     .elapsed_duration
     .add(time.sub(self.segment_first_timestamp.unwrap_or(time)));

 if !self.is_writing {
     self.is_writing = true;
-    self.asset_writer.start_session_at_src_time(time);
+    self.asset_writer.start_session_at_src_time(new_pts);
     self.start_time = time;
```
Start session at relative PTS but track/end with absolute host time — mismatch.
You start the session at new_pts (relative), but elsewhere last_timestamp holds absolute host time. This will produce an incorrect end time and can truncate/extend the file.
Minimal fix: update the “last written src time” using new_pts and use that for ending the session. Example change near this block:
```rust
// after successful append:
self.last_timestamp = Some(new_pts); // track last written src PTS, not absolute host time
```

I've added a concrete end-session fix in a later comment (Lines 344-346).
🤖 Prompt for AI Agents
In crates/enc-avfoundation/src/mp4.rs around lines 209 to 216, you start the
AVAssetWriter session with a relative source time (new_pts) while elsewhere
last_timestamp is tracked as absolute host time, causing a mismatch when ending
the session; fix by treating last_timestamp as the last written source PTS:
after a successful append set self.last_timestamp = Some(new_pts) (i.e., track
last written src time, not host time) and ensure any end_session logic uses this
src PTS value when calling end_session_at_src_time.
@@ -244,7 +246,7 @@
🛠️ Refactor suggestion
Audio timeline can race after resume; also mixes timebases with video segment start.
- After `pause()`, `segment_first_timestamp` is `None`. If audio arrives before video on resume, `unwrap()` in the audio PTS path will panic.
- Audio `time` uses a 1/sample-rate timebase, while `segment_first_timestamp` is in host time; subtracting them is semantically invalid.
Short-term guard (avoid panic):

```diff
- if self.is_paused || !self.is_writing {
+ if self.is_paused || !self.is_writing || self.segment_first_timestamp.is_none() {
      return Ok(());
  }
```

Longer-term: keep an `audio_segment_first_pts: Option<cm::Time>` in the audio timebase and compute `pts = self.elapsed_duration.add(audio_pts - audio_segment_first_pts)`, independent of video clock. See Lines 282-288 for a concrete refactor.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```rust
/// Expects frames with pts values relative to the first frame's pts
/// in the timebase of 1 / sample rate
pub fn queue_audio_frame(&mut self, frame: frame::Audio) -> Result<(), QueueAudioFrameError> {
    if self.is_paused
        || !self.is_writing
        || self.segment_first_timestamp.is_none()
    {
        return Ok(());
    }
```
```diff
 if !audio_input.is_ready_for_more_media_data() {
-    return Err(QueueAudioFrameError::NotReady);
+    return Ok(());
 }
```
💡 Verification agent
🧩 Analysis chain
Swallowing “not ready” now returns Ok — verify upstream requeue.
Returning Ok(()) instead of NotReady changes the contract; ensure callers retry rather than dropping the frame.
Run to grep callers and review handling:
🏁 Script executed:

```shell
#!/bin/bash
rg -nP --type=rust -C3 '\bqueue_audio_frame\s*\('
```

Length of output: 2038
🏁 Script executed:

```shell
#!/bin/bash
# Show context for queue_audio_frame usage in export crate
rg -nC3 'queue_audio_frame' crates/export/src/mp4.rs
# Show context for queue_audio_frame usage in recording crate
rg -nC3 'queue_audio_frame' crates/recording/src/capture_pipeline.rs
```

Length of output: 554
🏁 Script executed:

```shell
#!/bin/bash
# Display the full queue_audio_frame implementation in enc-avfoundation
sed -n '230,300p' crates/enc-avfoundation/src/mp4.rs
# Confirm is_ready_for_more_media_data check location
rg -nC3 'is_ready_for_more_media_data' crates/enc-avfoundation/src/mp4.rs
```

Length of output: 2801
Ensure “not ready” returns NotReady, not Ok
In crates/enc-avfoundation/src/mp4.rs (lines 248–250), returning Ok(()) when !audio_input.is_ready_for_more_media_data() causes callers (e.g. in crates/recording/src/capture_pipeline.rs at line 184) to drop frames rather than retry. Change that branch to Err(QueueAudioFrameError::NotReady) so upstream can requeue on NotReady.
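Whichever contract is chosen, the caller can requeue on `NotReady` instead of dropping the frame; a sketch of such handling (the error type and call shape here are illustrative, not the crate's actual API):

```rust
use std::{thread, time::Duration};

#[derive(Debug, PartialEq)]
enum QueueAudioFrameError {
    NotReady,
}

/// Retries a queueing closure a bounded number of times while it reports
/// NotReady, backing off briefly between attempts.
fn queue_with_retry(
    mut queue: impl FnMut() -> Result<(), QueueAudioFrameError>,
    max_attempts: u32,
) -> Result<(), QueueAudioFrameError> {
    for _ in 0..max_attempts {
        match queue() {
            Err(QueueAudioFrameError::NotReady) => thread::sleep(Duration::from_millis(1)),
            other => return other,
        }
    }
    Err(QueueAudioFrameError::NotReady)
}

fn main() {
    // Simulate an input that is busy for two attempts, then ready.
    let mut attempts = 0;
    let result = queue_with_retry(
        || {
            attempts += 1;
            if attempts < 3 {
                Err(QueueAudioFrameError::NotReady)
            } else {
                Ok(())
            }
        },
        5,
    );
    assert_eq!(result, Ok(()));
    assert_eq!(attempts, 3);
}
```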
🤖 Prompt for AI Agents
In crates/enc-avfoundation/src/mp4.rs around lines 248 to 250, the branch that
currently returns Ok(()) when !audio_input.is_ready_for_more_media_data() should
instead return Err(QueueAudioFrameError::NotReady) so upstream callers can
requeue rather than drop frames; update the return to
Err(QueueAudioFrameError::NotReady) (and ensure QueueAudioFrameError is in scope
or imported) so the NotReady case propagates correctly.
```rust
let resampler = ffmpeg::software::resampler(
    (
        input_config.sample_format,
        input_config.channel_layout(),
        input_config.sample_rate,
    ),
    (
        output_config.sample_format,
        output_config.channel_layout(),
        output_config.sample_rate,
    ),
)
.unwrap();
let resampler = BufferedResampler::new(resampler);
```
Don’t unwrap the resampler creation; propagate ffmpeg errors.
unwrap() can panic for nominal failures (e.g., unsupported format/layout). Return the error instead.
Apply:

```diff
  let resampler = ffmpeg::software::resampler(
      (
          input_config.sample_format,
          input_config.channel_layout(),
          input_config.sample_rate,
      ),
      (
          output_config.sample_format,
          output_config.channel_layout(),
          output_config.sample_rate,
      ),
- )
- .unwrap();
+ )?;
  let resampler = BufferedResampler::new(resampler);
```

📝 Committable suggestion
Suggested change:

```rust
let resampler = ffmpeg::software::resampler(
    (
        input_config.sample_format,
        input_config.channel_layout(),
        input_config.sample_rate,
    ),
    (
        output_config.sample_format,
        output_config.channel_layout(),
        output_config.sample_rate,
    ),
)?;
let resampler = BufferedResampler::new(resampler);
```
🤖 Prompt for AI Agents
In crates/enc-ffmpeg/src/audio/aac.rs around lines 75 to 89, the call to
ffmpeg::software::resampler(...).unwrap() can panic for unsupported
configurations; change it to propagate the ffmpeg error instead of unwrapping.
Replace the unwrap with the ? operator (or map_err to your crate's error type)
and ensure the enclosing function returns Result<..., ffmpeg::Error> or your
crate Result type so the error bubbles up; then pass the successful resampler
into BufferedResampler::new as before.
```rust
while let Some(frame) = self.resampler.flush(self.encoder.frame_size() as usize) {
    self.encoder.send_frame(&frame).unwrap();

    self.process_packets(output);
}
```
Avoid panicking on flush; handle send_frame errors gracefully.
Unwrapping during finalization can crash a recording session. At minimum, handle errors and stop flushing; ideally, propagate a Result (may require trait changes).
Apply minimal handling:

```diff
-    while let Some(frame) = self.resampler.flush(self.encoder.frame_size() as usize) {
-        self.encoder.send_frame(&frame).unwrap();
-
-        self.process_packets(output);
-    }
+    while let Some(frame) = self.resampler.flush(self.encoder.frame_size() as usize) {
+        if let Err(e) = self.encoder.send_frame(&frame) {
+            // TODO: replace with proper logging and/or bubbling the error up
+            eprintln!("[{}] AAC send_frame during flush failed: {e}", self.tag);
+            break;
+        }
+        self.process_packets(output);
+    }
```
+ }If you’re open to evolving the AudioEncoder trait to return Result<(), E>, I can draft that change across the encoders/muxers.
🤖 Prompt for AI Agents
In crates/enc-ffmpeg/src/audio/aac.rs around lines 136–140, the call to
self.encoder.send_frame(&frame).unwrap() during resampler flush can panic;
replace the unwrap with proper error handling: match on the send_frame result,
and if it returns an Err then stop the flush loop and propagate an error (return
a Result from this function) or at minimum log the error and break the loop so
the recording session doesn't crash. If needed, update the AudioEncoder trait
method signature to return Result<(), E> and propagate that Result through
callers so errors are handled rather than unwrapped.
```rust
pub struct MachAbsoluteTimestamp(
    // Nanoseconds
    u64,
);

impl MachAbsoluteTimestamp {
    pub fn new(nanos: u64) -> Self {
        Self(nanos)
    }
```
Fix units: stored value is host ticks, not nanoseconds (API/docs mismatch).
mach::abs_time() and CoreAudio host time are in Mach absolute ticks. The struct comment and new arg name suggest nanoseconds, which is incorrect and leaks into math below.
Apply:

```diff
-#[derive(Clone, Copy, Debug)]
-pub struct MachAbsoluteTimestamp(
-    // Nanoseconds
-    u64,
-);
+#[derive(Clone, Copy, Debug)]
+pub struct MachAbsoluteTimestamp(
+    // Mach absolute host time ticks (from mach_absolute_time / CoreAudio host time)
+    u64,
+);
@@
-    pub fn new(nanos: u64) -> Self {
-        Self(nanos)
+    pub fn new(host_time: u64) -> Self {
+        Self(host_time)
     }
```

📝 Committable suggestion
Suggested change:

```rust
#[derive(Clone, Copy, Debug)]
pub struct MachAbsoluteTimestamp(
    // Mach absolute host time ticks (from mach_absolute_time / CoreAudio host time)
    u64,
);

impl MachAbsoluteTimestamp {
    pub fn new(host_time: u64) -> Self {
        Self(host_time)
    }
    // …rest of impl…
}
```
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 8 to 16, the struct currently
documents and accepts nanoseconds but actually stores Mach absolute host ticks;
update the inline comment to state "Host ticks (Mach absolute time)" instead of
"Nanoseconds", rename the constructor parameter from `nanos` to `ticks` and
update the function signature and any internal uses to reflect ticks, and
audit/adjust downstream math or helper functions that assume nanoseconds so they
convert between ticks and nanos explicitly where needed.
```rust
pub fn duration_since(&self, other: Self) -> Duration {
    let info = TimeBaseInfo::new();
    let freq = info.numer as f64 / info.denom as f64;

    Duration::from_nanos(((self.0 - other.0) as f64 * freq) as u64)
}
```
Correct tick→ns conversion and guard underflow.
Use integer math to avoid precision loss; and saturate the delta to prevent underflow.
```diff
-    pub fn duration_since(&self, other: Self) -> Duration {
-        let info = TimeBaseInfo::new();
-        let freq = info.numer as f64 / info.denom as f64;
-
-        Duration::from_nanos(((self.0 - other.0) as f64 * freq) as u64)
-    }
+    pub fn duration_since(&self, other: Self) -> Duration {
+        let info = TimeBaseInfo::new();
+        let ticks = self.0.saturating_sub(other.0) as u128;
+        let nanos = (ticks * info.numer as u128) / info.denom as u128;
+        Duration::from_nanos(nanos as u64)
+    }
```

📝 Committable suggestion
Suggested change:

```rust
pub fn duration_since(&self, other: Self) -> Duration {
    let info = TimeBaseInfo::new();
    let ticks = self.0.saturating_sub(other.0) as u128;
    let nanos = (ticks * info.numer as u128) / info.denom as u128;
    Duration::from_nanos(nanos as u64)
}
```
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 22 to 27, the tick→ns conversion
uses floating point math and does not guard against underflow; change to integer
math and saturate the delta: compute ticks as self.0.saturating_sub(other.0),
promote operands to a wide integer (e.g. u128) and calculate ns = (ticks as u128
* info.numer as u128) / info.denom as u128, clamp the result to u64::MAX if
necessary, then return Duration::from_nanos(ns as u64); this prevents precision
loss and avoids negative/underflowed durations.
```rust
fn add(self, rhs: Duration) -> Self::Output {
    let info = TimeBaseInfo::new();
    let freq = info.numer as f64 / info.denom as f64;

    Self((self.0 as f64 + rhs.as_nanos() as f64 * freq) as u64)
}
```
Fix ns→tick conversion direction in Add (currently reversed).
To add a Duration to host ticks, convert ns→ticks via denom/numer; also saturate on add.
```diff
-    fn add(self, rhs: Duration) -> Self::Output {
-        let info = TimeBaseInfo::new();
-        let freq = info.numer as f64 / info.denom as f64;
-
-        Self((self.0 as f64 + rhs.as_nanos() as f64 * freq) as u64)
-    }
+    fn add(self, rhs: Duration) -> Self::Output {
+        let info = TimeBaseInfo::new();
+        let ticks_delta =
+            ((rhs.as_nanos() as u128 * info.denom as u128) / info.numer as u128) as u64;
+        Self(self.0.saturating_add(ticks_delta))
+    }
```

📝 Committable suggestion
Suggested change:

```rust
fn add(self, rhs: Duration) -> Self::Output {
    let info = TimeBaseInfo::new();
    let ticks_delta =
        ((rhs.as_nanos() as u128 * info.denom as u128) / info.numer as u128) as u64;
    Self(self.0.saturating_add(ticks_delta))
}
```
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 39–45, the Add impl currently
converts host Duration→ticks using numer/denom (reversed) and does a float add;
instead compute the tick delta as ns * denom / numer using integer math and
saturate on overflow: convert rhs.as_nanos() to u128, compute tick_delta =
(ns_u128 * info.denom as u128) / info.numer as u128, clamp or saturate the
tick_delta to u64::MAX if it exceeds u64, then return
Self(self.0.saturating_add(tick_delta_as_u64)). Use integer ops to avoid
precision loss and ensure saturation on addition.
```rust
fn sub(self, rhs: Duration) -> Self::Output {
    let info = TimeBaseInfo::new();
    let freq = info.numer as f64 / info.denom as f64;

    Self((self.0 as f64 - rhs.as_nanos() as f64 * freq) as u64)
}
```
Fix ns→tick conversion direction in Sub and avoid underflow.
Same as Add; saturate to zero on underflow.
```diff
-    fn sub(self, rhs: Duration) -> Self::Output {
-        let info = TimeBaseInfo::new();
-        let freq = info.numer as f64 / info.denom as f64;
-
-        Self((self.0 as f64 - rhs.as_nanos() as f64 * freq) as u64)
-    }
+    fn sub(self, rhs: Duration) -> Self::Output {
+        let info = TimeBaseInfo::new();
+        let ticks_delta =
+            ((rhs.as_nanos() as u128 * info.denom as u128) / info.numer as u128) as u64;
+        Self(self.0.saturating_sub(ticks_delta))
+    }
```

📝 Committable suggestion
Suggested change:

```rust
fn sub(self, rhs: Duration) -> Self::Output {
    let info = TimeBaseInfo::new();
    let ticks_delta =
        ((rhs.as_nanos() as u128 * info.denom as u128) / info.numer as u128) as u64;
    Self(self.0.saturating_sub(ticks_delta))
}
```
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 50 to 55, the ns→tick conversion
in Sub is using the wrong direction and can underflow; replace the current
multiplication with the exact same ns→tick conversion used in Add to compute
delta_ticks, then if delta_ticks >= self.0 return Self(0) to saturate at zero,
otherwise return Self(self.0 - delta_ticks). Ensure you reuse the same
conversion expression (no new conversion formula) and perform the underflow
check before casting or subtracting.
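Both prompts describe the same integer conversion; a minimal standalone sketch of the ns→tick math follows (the `numer`/`denom` values are hypothetical stand-ins for a real `mach_timebase_info` result, not values from this codebase):

```rust
// Hypothetical timebase values standing in for mach_timebase_info():
// the ticks-to-ns ratio is numer/denom, so ns-to-ticks multiplies by
// denom and divides by numer, in u128 to avoid precision loss/overflow.
fn ns_to_ticks(ns: u128, numer: u32, denom: u32) -> u64 {
    let ticks = ns * denom as u128 / numer as u128;
    u64::try_from(ticks).unwrap_or(u64::MAX) // saturate instead of wrapping
}

fn main() {
    let (numer, denom) = (125u32, 3u32); // assumed, Apple-Silicon-like ratio
    // 1 s = 1_000_000_000 ns -> 24_000_000 ticks with this timebase.
    assert_eq!(ns_to_ticks(1_000_000_000, numer, denom), 24_000_000);
    // Subtraction then saturates at zero rather than underflowing.
    assert_eq!(10_000u64.saturating_sub(ns_to_ticks(1_000_000_000, numer, denom)), 0);
    println!("ok");
}
```

The same integer path serves both `Add` and `Sub`; only the final `saturating_add`/`saturating_sub` differs.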
```rust
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test() {
        let a = MachAbsoluteTimestamp::new(0);

        dbg!(MachAbsoluteTimestamp::now());
        dbg!(a + Duration::from_secs(1));
    }
}
```
🛠️ Refactor suggestion
Strengthen test to assert math correctness.
Current test only dbg!-prints. Add assertions to catch the conversion bugs above.
```diff
     #[test]
-    fn test() {
-        let a = MachAbsoluteTimestamp::new(0);
-
-        dbg!(MachAbsoluteTimestamp::now());
-        dbg!(a + Duration::from_secs(1));
-    }
+    fn add_and_duration_since_roundtrip() {
+        let a = MachAbsoluteTimestamp::now();
+        let b = a + Duration::from_millis(250);
+        let d = b.duration_since(a);
+        assert!(d >= Duration::from_millis(250));
+        assert!(d <= Duration::from_millis(251)); // allow small rounding
+    }
```

📝 Committable suggestion
```rust
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn add_and_duration_since_roundtrip() {
        let a = MachAbsoluteTimestamp::now();
        let b = a + Duration::from_millis(250);
        let d = b.duration_since(a);
        assert!(d >= Duration::from_millis(250));
        assert!(d <= Duration::from_millis(251)); // allow small rounding
    }
}
```
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 58-69, the test currently only
prints values with dbg!; replace those with concrete assertions to verify
timestamp math and conversions. Specifically: assert that
MachAbsoluteTimestamp::now() is >= MachAbsoluteTimestamp::new(0), assert that a
+ Duration::from_secs(1) minus a equals Duration::from_secs(1), and assert that
converting timestamps to/from the underlying representation preserves the value
(i.e., round-trip equality) so any conversion/math bugs are caught.
Actionable comments posted: 29
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (14)
crates/recording/src/pipeline/audio_buffer.rs (1)
52-56: Bug: interleaved slice counts samples instead of bytes (reads 1/4 of data).

You slice `frame.data(0)` by number of samples, not bytes. For f32 audio this drops 3/4 of the buffer. Multiply by `f32::BYTE_SIZE` (or `size_of::<f32>()`).

Apply this diff:

```diff
-                    cast_bytes_to_f32_slice(
-                        &frame.data(0)[0..frame.samples() * frame.channels() as usize],
-                    )
+                    cast_bytes_to_f32_slice(
+                        &frame.data(0)[..frame.samples() as usize
+                            * frame.channels() as usize
+                            * f32::BYTE_SIZE],
+                    )
```

crates/enc-avfoundation/src/mp4.rs (3)
- cast_bytes_to_f32_slice( - &frame.data(0)[0..frame.samples() * frame.channels() as usize], - ) + cast_bytes_to_f32_slice( + &frame.data(0)[..frame.samples() as usize + * frame.channels() as usize + * f32::BYTE_SIZE], + )crates/enc-avfoundation/src/mp4.rs (3)
270-274: Avoid copying padded bytes for planar audio.Use the exact per-plane length (
samples * bytes_per_sample), notdata.len(), to avoid overruns/garbage if FFmpeg ever returns padded planes.Apply this diff:
- for plane_i in 0..frame.planes() { - let data = frame.data(plane_i); - block_buf_slice[offset..offset + data.len()] - .copy_from_slice(&data[0..frame.samples() * frame.format().bytes()]); - offset += data.len(); - } + for plane_i in 0..frame.planes() { + let data = frame.data(plane_i); + let plane_len = frame.samples() * frame.format().bytes(); + block_buf_slice[offset..offset + plane_len] + .copy_from_slice(&data[..plane_len]); + offset += plane_len; + }
282-288: Unify audio PTS normalization with video; drop `start_time` from the formula.

Video uses `elapsed_duration + (t - segment_first_timestamp)`. Audio mixes `start_time` with that delta, which is unnecessary and risks drift. Normalize audio the same way for a single, consistent "writer timeline".

Apply this diff (assumes the `normalized_src_time` helper from a previous comment):

```diff
-        let time = cm::Time::new(frame.pts().unwrap_or(0), frame.rate() as i32);
-
-        let pts = self
-            .start_time
-            .add(self.elapsed_duration)
-            .add(time.sub(self.segment_first_timestamp.unwrap()));
+        let time = cm::Time::new(frame.pts().unwrap_or(0), frame.rate() as i32);
+        let pts = self.normalized_src_time(time);
```

Also update `self.last_src_time` after append as suggested earlier.
321-323: `pause()` can panic before any frame arrives.

`unwrap()` on `segment_first_timestamp` will panic if pause is called before the first frame (or after init). Guard it.

Apply this diff:

```diff
-        self.elapsed_duration = self
-            .elapsed_duration
-            .add(time.sub(self.segment_first_timestamp.unwrap()));
+        if let Some(start) = self.segment_first_timestamp {
+            self.elapsed_duration = self.elapsed_duration.add(time.sub(start));
+        }
```
466-522: Fix unsound mapping: mapped texture is dropped before return (dangling slice) and never Unmap’d.
as_buffercreates a staging texture, maps it, then drops it when the function returns. The returned slice points to freed GPU memory, and the resource remains mapped (leak). Also, the stagingDXGI_FORMATignoresself.pixel_format, and mapping is RW when only READ is needed.Apply this diff to hold the staging texture and context in
FrameBuffer, use the correct format, and Unmap on drop:@@ pub fn as_buffer<'a>(&'a self) -> windows::core::Result<FrameBuffer<'a>> { let texture_desc = D3D11_TEXTURE2D_DESC { Width: self.width, Height: self.height, MipLevels: 1, ArraySize: 1, - Format: DXGI_FORMAT(DirectXPixelFormat::R8G8B8A8UIntNormalized.0), // (self.color_format as i32), + Format: self.pixel_format.as_dxgi(), SampleDesc: DXGI_SAMPLE_DESC { Count: 1, Quality: 0, }, Usage: D3D11_USAGE_STAGING, BindFlags: 0, - CPUAccessFlags: D3D11_CPU_ACCESS_READ.0 as u32 | D3D11_CPU_ACCESS_WRITE.0 as u32, + CPUAccessFlags: D3D11_CPU_ACCESS_READ.0 as u32, MiscFlags: 0, }; @@ unsafe { self.d3d_context.Map( &texture, 0, - D3D11_MAP_READ_WRITE, + D3D11_MAP_READ, 0, Some(&mut mapped_resource), )?; }; - let data = unsafe { - std::slice::from_raw_parts_mut( - mapped_resource.pData.cast(), - (self.height * mapped_resource.RowPitch) as usize, - ) - }; + let data = unsafe { + std::slice::from_raw_parts_mut( + mapped_resource.pData.cast(), + (self.height as usize) * (mapped_resource.RowPitch as usize), + ) + }; Ok(FrameBuffer { data, width: self.width, height: self.height, stride: mapped_resource.RowPitch, pixel_format: self.pixel_format, resource: mapped_resource, + texture, + d3d_context: self.d3d_context.clone(), }) } } @@ pub struct FrameBuffer<'a> { data: &'a mut [u8], width: u32, height: u32, stride: u32, resource: D3D11_MAPPED_SUBRESOURCE, pixel_format: PixelFormat, + texture: ID3D11Texture2D, + d3d_context: ID3D11DeviceContext, }And add this Drop impl (outside the shown range):
impl<'a> Drop for FrameBuffer<'a> { fn drop(&mut self) { unsafe { self.d3d_context.Unmap(&self.texture, 0) }; } }Also applies to: 525-532
23-33: Enable BGRA support in D3D11 device creation.Windows Graphics Capture interop typically requires
`D3D11_CREATE_DEVICE_BGRA_SUPPORT`; without it, `CreateDirect3D11DeviceFromDXGIDevice` can fail. Add the flag at creation.

```diff
@@
     Direct3D11::{
-        D3D11_BIND_RENDER_TARGET, D3D11_BIND_SHADER_RESOURCE, D3D11_BOX,
+        D3D11_BIND_RENDER_TARGET, D3D11_BIND_SHADER_RESOURCE, D3D11_BOX,
         D3D11_CPU_ACCESS_READ, D3D11_CPU_ACCESS_WRITE, D3D11_MAP_READ_WRITE,
         D3D11_MAPPED_SUBRESOURCE, D3D11_SDK_VERSION, D3D11_TEXTURE2D_DESC,
         D3D11_USAGE_DEFAULT, D3D11_USAGE_STAGING, D3D11CreateDevice,
         ID3D11Device, ID3D11DeviceContext, ID3D11Texture2D,
+        D3D11_CREATE_DEVICE_BGRA_SUPPORT, D3D11_MAP_READ,
     },
@@
-        Default::default(),
+        D3D11_CREATE_DEVICE_BGRA_SUPPORT.0 as u32,
         None,
         D3D11_SDK_VERSION,
         Some(&mut d3d_device),
         None,
         None,
     )
```
D3D11_MAP_READ_WRITEwithD3D11_MAP_READ(covered in the previous comment).Also applies to: 167-176
182-187: Avoid double-unwrap; propagate `GetImmediateContext` errors clearly.

The current `transpose().unwrap().unwrap()` panics on error. Prefer explicit error mapping.

```diff
-        let (d3d_device, d3d_context) = d3d_device
-            .map(|d| unsafe { d.GetImmediateContext() }.map(|v| (d, v)))
-            .transpose()
-            .unwrap()
-            .unwrap();
+        let d3d_device = d3d_device.unwrap();
+        let d3d_context = unsafe {
+            d3d_device
+                .GetImmediateContext()
+                .map_err(StartRunnerError::D3DDevice)?
+        };
```
192-210: Replace `unwrap()`s in `new()` with proper error propagation.

Panicking inside a library API is hostile to callers and complicates debugging. Use the existing `StartRunnerError` variants and let `NewCapturerError` convert via `From`.
StartRunnerErrorvariants and letNewCapturerErrorconvert viaFrom.- let direct3d_device = (|| { + let direct3d_device = (|| { let dxgi_device = d3d_device.cast::<IDXGIDevice>()?; let inspectable = unsafe { CreateDirect3D11DeviceFromDXGIDevice(&dxgi_device) }?; inspectable.cast::<IDirect3DDevice>() })() - .unwrap(); - // .map_err(StartRunnerError::Direct3DDevice)?; + .map_err(StartRunnerError::Direct3DDevice)?; @@ - let frame_pool = Direct3D11CaptureFramePool::CreateFreeThreaded( + let frame_pool = Direct3D11CaptureFramePool::CreateFreeThreaded( &direct3d_device, settings.pixel_format.as_directx(), 1, item.Size().unwrap(), ) - .unwrap(); - // .map_err(StartRunnerError::FramePool)?; + .map_err(StartRunnerError::FramePool)?; @@ - let session = frame_pool.CreateCaptureSession(&item).unwrap(); - // .map_err(StartRunnerError::CaptureSession)?; + let session = frame_pool + .CreateCaptureSession(&item) + .map_err(StartRunnerError::CaptureSession)?; @@ - session.SetIsBorderRequired(border_required).unwrap(); + session + .SetIsBorderRequired(border_required) + .map_err(StartRunnerError::CaptureSession)?; } @@ - session - .SetIsCursorCaptureEnabled(cursor_capture_enabled) - .unwrap(); + session + .SetIsCursorCaptureEnabled(cursor_capture_enabled) + .map_err(StartRunnerError::CaptureSession)?; } @@ - session - .SetMinUpdateInterval(min_update_interval.into()) - .unwrap(); + session + .SetMinUpdateInterval(min_update_interval.into()) + .map_err(StartRunnerError::CaptureSession)?; } @@ - frame_pool - .FrameArrived( + frame_pool + .FrameArrived( &TypedEventHandler::<Direct3D11CaptureFramePool, IInspectable>::new({ let d3d_context = d3d_context.clone(); let d3d_device = d3d_device.clone(); @@ - }), - ) - .unwrap(); - // .map_err(StartRunnerError::RegisterFrameArrived)?; + }), + ) + .map_err(StartRunnerError::RegisterFrameArrived)?; @@ - item.Closed( + item.Closed( &TypedEventHandler::<GraphicsCaptureItem, IInspectable>::new(move |_, _| { closed_callback() }), ) - .unwrap(); - // 
.map_err(StartRunnerError::RegisterClosed)?; + .map_err(StartRunnerError::RegisterClosed)?;Also applies to: 200-206, 209-226, 256-317, 320-325
crates/camera-mediafoundation/src/lib.rs (1)
560-577: Use nanosecond precision for MF sample times; keep QPC read, but tighten conversions.

MF times are in 100-ns units. Converting to micros drops precision. Use nanos to preserve full resolution.
```diff
-        let mut perf_counter = 0;
+        let mut perf_counter: i64 = 0;
         unsafe { QueryPerformanceCounter(&mut perf_counter)? };
@@
-        let sample_time = unsafe { sample.GetSampleTime() }?;
+        let sample_time = unsafe { sample.GetSampleTime() }?;
@@
-        (callback)(CallbackData {
-            sample: sample.clone(),
-            timestamp: Duration::from_micros(sample_time as u64 / 10),
-            perf_counter,
-        });
+        (callback)(CallbackData {
+            sample: sample.clone(),
+            // 100-ns units -> ns
+            timestamp: Duration::from_nanos((sample_time as u64) * 100),
+            perf_counter,
+        });
```

crates/recording/src/pipeline/mod.rs (1)
36-49: Set `is_shutdown = true` after shutdown.

Currently subsequent `shutdown()` calls won’t error as intended.
```diff
     pub async fn shutdown(&mut self) -> Result<(), MediaError> {
         if self.is_shutdown {
             return Err(MediaError::ShutdownPipeline);
         };

         trace!("Shutting down pipeline");

         self.control.broadcast(Control::Shutdown).await;

         for (_name, task) in self.task_handles.drain(..) {
             let _ = task.join();
         }

         info!("Pipeline stopped");

         // TODO: Collect shutdown errors?
-        Ok(())
+        self.is_shutdown = true;
+        Ok(())
     }
```

crates/recording/src/feeds/camera.rs (1)
270-298: Missing non-Windows/macOS timestamp initializer (compile break on other targets).
`timestamp` is only set under `#[cfg(windows)]` and `#[cfg(target_os = "macos")]`. Other targets will fail to compile. Add a fallback or gate the whole camera feed.

Apply:
```diff
-    time::Duration,
+    time::{Duration, SystemTime},
```

and
```diff
             let _ = recipient
                 .tell(NewFrame(RawCameraFrame {
                     frame: ff_frame,
+                    #[cfg(all(not(windows), not(target_os = "macos")))]
+                    timestamp: Timestamp::SystemTime(SystemTime::now()),
                     #[cfg(windows)]
                     timestamp: Timestamp::PerformanceCounter(PerformanceCounterTimestamp::new(
                         frame.native().perf_counter,
                     )),
                     #[cfg(target_os = "macos")]
                     timestamp: Timestamp::MachAbsoluteTime(
                         cap_timestamp::MachAbsoluteTimestamp::new(
                             cidre::cm::Clock::convert_host_time_to_sys_units(
                                 frame.native().sample_buf().pts(),
                             ),
                         ),
                     ),
                 }))
```

crates/recording/src/sources/camera.rs (1)
66-79: Avoid blocking on shutdown: use try_send when draining.
Using blocking send here can deadlock if the consumer is stopping. Drop frames instead.

Apply:
```diff
-        for frame in frames {
-            // let first_frame_instant = *self.first_frame_instant.get_or_insert(frame.reference_time);
-            // let first_frame_timestamp = *self.first_frame_timestamp.get_or_insert(frame.timestamp);
-
-            if let Err(error) = self.process_frame(frame) {
-                eprintln!("{error}");
-                break;
-            }
-        }
+        for frame in frames {
+            if let Err(e) = self.output.try_send((frame.frame, frame.timestamp)) {
+                tracing::debug!("Dropping camera frame during shutdown: {e}");
+                break;
+            }
+        }
```

crates/recording/src/capture_pipeline.rs (2)
524-536: Fix deadlock: send first video timestamp in software fallback too.

In the software (ffmpeg) branch we never send
first_frame_tx, soaudio_encodingcan block forever waiting onfirst_frame_rx. Hoist the sender outside the match and use the incomingtimestampfromsource.1on the first frame.builder.spawn_task("screen_encoder", move |ready| { - match screen_encoder { + let mut first_frame_tx_opt = Some(first_frame_tx); + match screen_encoder { either::Left((mut encoder, mut muxer)) => { use windows::Win32::Media::MediaFoundation; cap_mediafoundation_utils::thread_init(); let _ = ready.send(Ok(())); - - let mut first_frame_tx = Some(first_frame_tx); + // first_frame_tx_opt used in both branches while let Ok(e) = encoder.get_event() { match e { MediaFoundation::METransformNeedInput => { use cap_timestamp::PerformanceCounterTimestamp; let Ok((frame, _)) = source.1.recv() else { break; }; let frame_time = frame .inner() .SystemRelativeTime() .map_err(|e| format!("Frame Time: {e}"))?; let timestamp = Timestamp::PerformanceCounter( PerformanceCounterTimestamp::new(frame_time.Duration), ); - if let Some(first_frame_tx) = first_frame_tx.take() { + if let Some(first_frame_tx) = first_frame_tx_opt.take() { let _ = first_frame_tx.send(timestamp); } @@ either::Right(mut encoder) => { let output = output.clone(); let _ = ready.send(Ok(())); - while let Ok((frame, _unix_time)) = source.1.recv() { + while let Ok((frame, timestamp)) = source.1.recv() { + if let Some(first_frame_tx) = first_frame_tx_opt.take() { + let _ = first_frame_tx.send(timestamp); + } let Ok(mut output) = output.lock() else { continue; };Also applies to: 586-603
322-324: Don’t unwrap in background task when sending timestamp.

Unwrap can crash the encoder task. Ignore send failure instead.
```diff
-                    timestamp_tx.send(timestamp).unwrap();
+                    let _ = timestamp_tx.send(timestamp);
```
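The tolerate-failed-sends pattern the last two comments recommend can be exercised with a bounded std channel (a generic sketch using `std::sync::mpsc`, not this crate's channel types):

```rust
use std::sync::mpsc;

// A bounded channel whose receiver may have stopped draining: sends are
// attempted with try_send and failures are counted, never unwrapped.
fn main() {
    let (tx, rx) = mpsc::sync_channel::<u32>(1);
    let mut dropped = 0;
    for frame in 0..3 {
        if tx.try_send(frame).is_err() {
            dropped += 1; // channel full or closed; drop instead of panicking
        }
    }
    drop(tx);
    assert_eq!(rx.try_iter().count(), 1); // only the first frame fit
    assert_eq!(dropped, 2);
    println!("ok");
}
```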
🧹 Nitpick comments (63)
crates/recording/src/pipeline/audio_buffer.rs (3)
81-83: Replace magic number 4 with `f32::BYTE_SIZE` for correctness and clarity.

Avoid hardcoding the sample width.
Apply this diff:
```diff
-            self.frame.data_mut(channel)[index * 4..(index + 1) * 4]
+            self.frame.data_mut(channel)[index * f32::BYTE_SIZE..(index + 1) * f32::BYTE_SIZE]
                 .copy_from_slice(&byte.to_ne_bytes());
-            self.frame.data_mut(0)[index * 4..(index + 1) * 4]
+            self.frame.data_mut(0)[index * f32::BYTE_SIZE..(index + 1) * f32::BYTE_SIZE]
                 .copy_from_slice(&byte.to_ne_bytes());
```

Also applies to: 90-92
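The byte-versus-sample accounting behind these fixes is easy to sanity-check in isolation (a sketch; `used_byte_len` is a hypothetical helper, not the crate's API):

```rust
const BYTES_PER_F32: usize = std::mem::size_of::<f32>();

// Bytes actually occupied by `samples` interleaved frames of `channels`
// f32 samples each; slicing by sample count alone under-reads by 4x.
fn used_byte_len(samples: usize, channels: usize) -> usize {
    samples * channels * BYTES_PER_F32
}

fn main() {
    // 1024 stereo f32 samples occupy 8192 bytes, not 2048.
    assert_eq!(used_byte_len(1024, 2), 8192);
    println!("ok");
}
```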
47-51: Prefer bounding planar bytes to the used samples for symmetry and safety.
`frame.data(channel)` typically matches `nb_samples * bytes_per_sample`, but slicing to the exact byte count avoids surprises if upstream ever changes.

Apply this diff:
```diff
-            self.data[channel].extend(unsafe { cast_bytes_to_f32_slice(frame.data(channel)) });
+            let bytes = &frame.data(channel)[..frame.samples() as usize * f32::BYTE_SIZE];
+            self.data[channel].extend(unsafe { cast_bytes_to_f32_slice(bytes) });
```
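For the planar/packed distinction these buffer comments keep returning to, a minimal de-interleave sketch (not the crate's implementation; the function name and tuple-free API are assumptions for illustration):

```rust
// Split packed (interleaved) samples into one Vec per channel, the
// layout planar consumers expect.
fn deinterleave(data: &[f32], channels: usize) -> Vec<Vec<f32>> {
    let mut planes = vec![Vec::with_capacity(data.len() / channels); channels];
    for chunk in data.chunks_exact(channels) {
        for (ch, &sample) in chunk.iter().enumerate() {
            planes[ch].push(sample);
        }
    }
    planes
}

fn main() {
    let interleaved = [0.0f32, 1.0, 0.5, 1.5]; // L R L R
    let planes = deinterleave(&interleaved, 2);
    assert_eq!(planes[0], vec![0.0, 0.5]); // left plane
    assert_eq!(planes[1], vec![1.0, 1.5]); // right plane
    println!("ok");
}
```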
75-93: Optional perf: avoid per-sampleto_ne_bytes()+ small copies.Batch-copy drained samples to the frame buffer (e.g., collect drained
f32s into a temporary slice and copy via a casted&mut [u8]), which reduces loop overhead in this hot path.I can propose a bytemuck-based or
align_to-based version if you want to keep itno_std-friendly.crates/enc-avfoundation/src/mp4.rs (1)
248-251: Drop unused `QueueAudioFrameError::NotReady` variant
The `NotReady` variant is only declared at crates/enc-avfoundation/src/mp4.rs:59 and never referenced elsewhere; remove it to eliminate dead code and align the API with current behavior.
crates/scap-direct3d/src/lib.rs (4)
381-416:stop()should not panic and should coordinate with the FrameArrived loop.
self.session.Close().unwrap()can panic. Also,stop_flagis never stored inCapturer, so the check in the callback is dead code.
- Return a
windows::core::Result<()>(or map toStopCapturerError) and propagateClose()errors.- Either remove
stop_flagfrom the callback or store it onCapturerand set it here before closing.
256-317: Consider re-creating the frame pool on size changes.You read
frame.ContentSize()but don’t react when it changes. For Windows Graphics Capture, the frame pool should be re-created when size changes to avoid invalid/misaligned buffers.Would you like a small helper to detect size changes and recreate the pool safely?
560-584: Prune obsolete error variants.
FailedToInitializeWinRT,DispatchQueue,PostMessageFailed,ThreadJoinFailed, etc., appear unused after removing the dispatcher/message pump. Keeping them increases maintenance and can trigger clippy warnings.Remove unused variants or gate them behind feature flags if you plan to restore that path.
Also applies to: 354-362, 369-377
228-254: Minor: reuse the preallocated crop texture without cloning every frame.
crop_data.clone()AddRefs the COM texture each frame. It’s cheap but unnecessary. Capture the tuple by move into the closure and reference it.- let frame = if let Some((cropped_texture, crop)) = crop_data.clone() { + let frame = if let Some((ref cropped_texture, crop)) = crop_data {crates/enc-ffmpeg/src/mux/ogg.rs (1)
27-29: Avoid exposing the full encoder; provide a narrow accessor instead.Returning
&OpusEncoderleaks internals and can create borrow ergonomics issues for callers (holding&self.encoderprevents subsequent&mut selfcalls onOggFile). If the goal is to expose the time base, add a pass-through and keep the encoder private.Apply this non-breaking addition (you can later deprecate
encoder()if unused):+use cap_media_info::FFRational; + impl OggFile { @@ - pub fn encoder(&self) -> &OpusEncoder { - &self.encoder - } + /// Time base expected for input audio frames sent into the encoder. + pub fn input_time_base(&self) -> FFRational { + self.encoder.input_time_base() + }crates/enc-ffmpeg/src/audio/opus.rs (1)
114-117: Document or rename `input_time_base`

Add a doc comment explaining it returns the encoder’s time base (the PTS base for input audio frames, i.e. `self.encoder.time_base()`), or rename to `encoder_time_base`/`sample_time_base` to avoid ambiguity.
crates/enc-ffmpeg/src/audio/buffered_resampler.rs (4)
5-5: Fix typo in documentation comment

There’s a typo in the documentation: "resmaples" should be "resamples".
```diff
-/// Consumes audio frames, resmaples them, buffers the results,
+/// Consumes audio frames, resamples them, buffers the results,
```
28-46: Simplify return statement and potential inefficiency in `remaining_samples` calculation

Line 45 has an unnecessary explicit return statement that can be simplified:

```diff
-    return remaining_samples;
+    remaining_samples
```

For the calculation issue, the current logic seems incorrect: line 42 updates `pts`, but this should happen before calculating the gap on line 40:

```diff
         for buffer in self.buffer.iter().skip(1) {
-            // fill in gap
-            remaining_samples += (buffer.1 - pts) as usize;
+            // fill in gap between current pts position and next buffer's pts
+            remaining_samples += (buffer.1 - pts).max(0) as usize;
+            pts = buffer.1;
             remaining_samples += buffer.0.samples();
             pts += buffer.0.samples() as i64;
         }
```
72-72: Redundant comparison in while loop condition

The `while let Some(_)` pattern ignores the value, making the condition effectively a boolean check.

```diff
-        while let Some(_) = self.resampler.delay() {
+        while self.resampler.delay().is_some() {
```
59-59: Unsafe unwrap() calls that could panicMultiple
unwrap()calls are used on ffmpeg operations that could potentially fail, which would cause the application to panic. While the comment acknowledges this is intentional, it's better to handle errors gracefully in production code.Consider propagating errors instead of panicking:
- Line 59:
frame.pts().unwrap()- handle missing PTS- Line 63:
self.resampler.run(&frame, &mut resampled_frame).unwrap()- handle resampling failure- Line 78:
self.resampler.flush(&mut resampled_frame).unwrap()- handle flush failureYou could modify the function signatures to return
Resulttypes and propagate errors upward.Also applies to: 63-63, 78-78
apps/desktop/package.json (1)
44-44: Align Tauri plugin versions
- In apps/desktop/package.json, most plugins and @tauri-apps/api sit around 2.3.x–2.5.x, but plugin-updater is at ^2.9.0 and plugin-process is pinned to 2.3.0. To avoid potential runtime incompatibilities, unify all @tauri-apps/plugin-* and @tauri-apps/api to the same minor version.
crates/camera-directshow/examples/cli.rs (1)
118-121: Prefer tracing over dbg! in examples for consistent logging.You already initialize
tracing_subscriber; considertracing::debug!to integrate with the rest of the app’s logs.Apply:
- // dbg!(frame.media_type.subtype_str()); - // dbg!(frame.reference_time); - dbg!(frame.timestamp); + // debug!(?frame.media_type.subtype_str()); + // debug!(?frame.reference_time); + tracing::debug!(?frame.timestamp, "frame");crates/cpal-ffmpeg/src/lib.rs (1)
28-39: Dead planar branch: `format_typ` is always `Packed`.

`let format_typ = sample::Type::Packed;` means the planar path never runs. Simplify to avoid misleading code.

```diff
-        if matches!(format_typ, sample::Type::Planar) {
-            for i in 0..config.channels {
-                let plane_size = sample_count * sample_size;
-                let base = (i as usize) * plane_size;
-
-                ffmpeg_frame
-                    .data_mut(i as usize)
-                    .copy_from_slice(&self.bytes()[base..base + plane_size]);
-            }
-        } else {
-            ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
-        }
+        ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
```

crates/scap-ffmpeg/src/cpal.rs (1)
28-39: Same dead planar branch as in cpal-ffmpeg.

`format_typ` is hard-coded to `Packed`, so the planar code never executes.

```diff
-        if matches!(format_typ, sample::Type::Planar) {
-            for i in 0..config.channels {
-                let plane_size = sample_count * sample_size;
-                let base = (i as usize) * plane_size;
-
-                ffmpeg_frame
-                    .data_mut(i as usize)
-                    .copy_from_slice(&self.bytes()[base..base + plane_size]);
-            }
-        } else {
-            ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
-        }
+        ffmpeg_frame.data_mut(0).copy_from_slice(self.bytes());
```

crates/camera-mediafoundation/Cargo.toml (1)
18-20: Windows timing features enabled — good alignment with perf-counter usage.Looks correct. Ensure any unused Windows features don’t regress build times in non-consumer crates.
crates/camera-directshow/src/lib.rs (2)
791-796: Callback now carries `perf_counter` — good, but consider documenting units.

Clarify that `perf_counter` is a raw QPC tick, not time; downstream code must pair with frequency for wall/relative time.

Add doc comment:
/// perf_counter: Raw QueryPerformanceCounter ticks captured at receive time.
998-1000: Minor: Propagate QPC failure without breaking capture loop.

If `QueryPerformanceCounter` fails, `Receive` returns an error and may disrupt the graph. Safer to log and continue with `perf_counter = 0`.

```diff
-            let mut perf_counter = 0;
-            unsafe { QueryPerformanceCounter(&mut perf_counter)? };
+            let mut perf_counter = 0;
+            if let Err(e) = unsafe { QueryPerformanceCounter(&mut perf_counter) } {
+                tracing::warn!("QPC failed in Receive: {e:?}");
+                perf_counter = 0;
+            }
```

crates/timestamp/src/macos.rs (1)
62-68: Expand the test to catch the ratio bug.A simple assert prevents regressions.
```diff
     #[test]
     fn test() {
-        let a = MachAbsoluteTimestamp::new(0);
-
-        dbg!(MachAbsoluteTimestamp::now());
-        dbg!(a + Duration::from_secs(1));
+        let now = MachAbsoluteTimestamp::now();
+        let later = now + Duration::from_millis(500);
+        let dur = later.duration_since(now);
+        assert!(dur.as_millis() >= 500 && dur.as_millis() <= 550, "unexpected duration: {dur:?}");
     }
```

crates/media-info/src/lib.rs (2)
107-109: Minor: clarify intent of `rate()` cast

Returning `i32` is fine given typical sample rates, but the silent cast can surprise. Consider documenting the bounds or returning `u32` and casting only at FFI boundaries.
118-150: wrap_frame: add lightweight sanity checks and tighten the de-interleave loopSmall safety/perf wins without changing behavior.
```diff
 pub fn wrap_frame(&self, data: &[u8]) -> frame::Audio {
     let sample_size = self.sample_size();
     let interleaved_chunk_size = sample_size * self.channels;
+    debug_assert!(self.channels > 0, "channels must be > 0");
+    debug_assert!(
+        data.len() % sample_size == 0,
+        "audio buffer length must be a multiple of sample size"
+    );
     let samples = data.len() / interleaved_chunk_size;
     let mut frame = frame::Audio::new(self.sample_format, samples, self.channel_layout());
     frame.set_rate(self.sample_rate);

-    if self.channels == 0 {
-        unreachable!()
-    } else if self.channels == 1 || frame.is_packed() {
+    if self.channels == 1 || frame.is_packed() {
         frame.data_mut(0)[0..data.len()].copy_from_slice(data)
     } else {
         // cpal always returns interleaved data…
-        for (chunk_index, interleaved_chunk) in data.chunks(interleaved_chunk_size).enumerate()
-        {
+        for (chunk_index, interleaved_chunk) in data.chunks_exact(interleaved_chunk_size).enumerate() {
             let start = chunk_index * sample_size;
             let end = start + sample_size;

             for channel in 0..self.channels {
                 let channel_start = channel * sample_size;
                 let channel_end = channel_start + sample_size;

                 frame.data_mut(channel)[start..end]
                     .copy_from_slice(&interleaved_chunk[channel_start..channel_end]);
             }
         }
     }
```

crates/timestamp/src/win.rs (1)
23-27: Type clarity in QPC/QPF locals

Initialize counters/frequencies as `i64` for readability and to avoid any inference surprises.

```diff
-        let mut value = 0;
+        let mut value: i64 = 0;
```

crates/camera-mediafoundation/src/lib.rs (1)
230-236: Replace println! with tracing to follow existing logging style

Keeps logging consistent and controllable.

```diff
-    println!("Initializing engine...");
+    tracing::info!("Initializing engine...");
@@
-    println!("Engine initialized.");
+    tracing::info!("Engine initialized.");
```

Also applies to: 247-249
crates/recording/src/sources/audio_mixer.rs (5)
crates/recording/src/sources/audio_mixer.rs (5)
295-299: Preserve per-frame timing by setting PTS before feeding abuffer.Without PTS, abuffer assumes contiguous timing based on arrival order. Compute PTS from source.buffer_last/known sample counts to improve alignment, especially across jittery inputs.
Example approach (conceptual; may require minor API adjustments):
```diff
-            for buffer in source.buffer.drain(..) {
-                let _ = self.abuffers[i].source().add(&buffer.0);
+            for (mut frame, ts) in source.buffer.drain(..) {
+                // pts in samples relative to the stream start
+                let pts = ts
+                    .duration_since(self.timestamps)
+                    .as_secs_f64() * frame.rate() as f64;
+                frame.set_pts(Some(pts as i64));
+                let _ = self.abuffers[i].source().add(&frame);
             }
```
81-84: Consider amix duration=longest to avoid early stream truncation.duration=first stops when input 0 ends; in multi-source capture, a late-joining or longer-running source may be cut. duration=longest often matches mixing expectations.
```diff
-            &format!(
-                "inputs={}:duration=first:dropout_transition=0",
-                abuffers.len()
-            ),
+            &format!(
+                "inputs={}:duration=longest:dropout_transition=0",
+                abuffers.len()
+            ),
```
199-201: Replace print! with tracing to avoid stdout noise.Use tracing::{debug,trace} instead of print!, and add a newline if you keep stdout.
```diff
-            print!("Gap between last buffer frame, inserting {gap:?} of silence");
+            debug!("Inserting silence to cover gap: {gap:?}");
```
125-138: samples_out can overflow usize on long sessions; prefer u64.At 48 kHz, 24-hour sessions exceed 4.1B samples; usize may be 32-bit on some targets. Use u64 for counters used in time math.
```diff
-    samples_out: usize,
+    samples_out: u64,
```

and cast `filtered.samples()` to u64 where incremented and used.
335-357: Run-loop timing: consider tick pacing based on sink backpressure.Fixed 5 ms sleep may under/over-feed the graph. Optionally adapt sleep to frames produced or to BUFFER_TIMEOUT.
crates/camera/src/macos.rs (1)
106-111: Removal of Instant-based fields aligns with the new cross-platform timestamp model.Looks good; reduces ambiguity between wall/monotonic clocks. Consider deleting the commented lines to avoid confusion.
crates/camera-windows/examples/cli.rs (1)
35-40: Expanded dbg! is helpful for manual inspection.No functional concerns. Consider feature-gating verbose output behind a CLI flag if the example grows.
crates/camera/src/windows.rs (1)
75-81: Remove dead commented fields in CapturedFrame construction.These comments add noise and can drift from reality. Delete them to keep intent clear.
```diff
-            // reference_time: frame.reference_time,
-            // capture_begin_time: frame.capture_begin_time,
```
15-25: Optional: add a backward-compat alias.If external callers still use Pipeline, a type alias avoids churn.
```rust
// elsewhere in this module (outside the struct block)
pub type Pipeline = RecordingPipeline;
```
191-196: Clean up commented fields in CapturedFrame.Remove stale commented members to avoid confusion.
```diff
 pub struct CapturedFrame {
     native: NativeCapturedFrame,
-    // pub reference_time: Instant,
     pub timestamp: Duration,
-    // pub capture_begin_time: Option<Instant>,
 }
```
38-39: Consolidate tracing-subscriber to a single dependency.It’s listed in both deps and dev-deps; enable features in deps and drop the dev entry to avoid duplication.
```diff
-tracing-subscriber = { version = "0.3.19" }
+tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
```

```diff
 [dev-dependencies]
 tempfile = "3.20.0"
-tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
```

Also applies to: 76-76
crates/camera-windows/src/lib.rs (1)
9-10: Remove unused import and commented remnants; keep the API crisp. `Instant` is now unused; also drop the commented fields in `Frame` construction.
```diff
-use std::{
-    ffi::{OsStr, OsString},
-    fmt::{Debug, Display},
-    ops::Deref,
-    time::{Duration, Instant},
-};
+use std::{
+    ffi::{OsStr, OsString},
+    fmt::{Debug, Display},
+    ops::Deref,
+    time::Duration,
+};
```

```diff
-    perf_counter: data.perf_counter,
-    // capture_begin_time: Some(data.capture_begin_time),
+    perf_counter: data.perf_counter,
```

```diff
-    perf_counter: data.perf_counter,
-    // capture_begin_time: None,
+    perf_counter: data.perf_counter,
```

Also applies to: 181-184, 102-103, 132-133
apps/cli/src/record.rs (1)
61-66: Preserve error context and avoid panic on stop.
`map_err(|e| e.to_string())` erases context; add a prefix for clarity. `actor.0.stop().await.unwrap()` can panic; handle it gracefully. Apply:
```diff
-    .await
-    .map_err(|e| e.to_string())?;
+    .await
+    .map_err(|e| format!("failed to build studio recording actor: {e}"))?;
```
Outside this hunk (line 75), consider:
```rust
if let Err(e) = actor.0.stop().await {
    eprintln!("Failed to stop recording: {e}");
}
```
Also applies to: 75-75
crates/recording/src/feeds/camera.rs (2)
2-7: Scope imports and gate platform-specific ones.
- Wildcard import `cap_camera_ffmpeg::*` is broad; prefer explicit items to reduce namespace collisions.
- Timestamp imports are good; keep them behind platform cfgs if only used conditionally.
16-16: Minor: group time imports. Consider importing `SystemTime` alongside `Duration` if you add a non-Windows/macOS fallback below.
crates/recording/src/sources/screen_capture/mod.rs (1)
279-281: Updated init signature: propagate changes to call sites. Confirm the Windows/macOS implementations and all callers pass the new `(…, Timestamp)` senders. Add brief docs on the timestamp semantics for these channels.
crates/recording/src/feeds/microphone.rs (2)
20-26: Use the carried timestamp downstream; avoid recomputing later. You've added `timestamp: Timestamp` to `MicrophoneSamples`, which is good. In `audio_input.rs`, we still recompute the timestamp from `InputCallbackInfo`. Prefer using `samples.timestamp` to keep a single source of truth and avoid drift/mismatch.
96-99: Rename method to avoid confusing `Default` semantics.
`MicrophoneFeed::default()` returns a `(String, Device, SupportedStreamConfig)`, not `Self`. Consider renaming to `default_device()` (or similar) to avoid confusion with the standard `Default` trait.

```diff
-    pub fn default() -> Option<(String, Device, SupportedStreamConfig)> {
+    pub fn default_device() -> Option<(String, Device, SupportedStreamConfig)> {
         let host = cpal::default_host();
         host.default_input_device().and_then(get_usable_device)
     }
```

Update call sites within this file accordingly.
crates/recording/src/sources/screen_capture/macos.rs (2)
93-96: Copy loop is fine; consider bounds-checked slices. The manual slice math is correct. If you want to tighten safety, you could derive per-plane slices using `chunks_exact` to avoid index math.

```diff
-for i in 0..frame.planes() {
-    frame.data_mut(i).copy_from_slice(
-        &slice[i * data_bytes_size as usize..(i + 1) * data_bytes_size as usize],
-    );
-}
+for (i, plane) in frame.data_mut_plane_iter().enumerate() {
+    let start = i * data_bytes_size as usize;
+    let end = start + data_bytes_size as usize;
+    plane.copy_from_slice(&slice[start..end]);
+}
```
98-99: Mirror video-path error handling for the audio send. Currently the audio send ignores errors. Mirror the video path and warn if the pipeline is unreachable.
```diff
-let _ = audio_tx.send((frame, timestamp));
+if audio_tx.send((frame, timestamp)).is_err() {
+    warn!("Pipeline is unreachable (audio)");
+}
```
crates/recording/src/sources/audio_input.rs (2)
38-41: Use the carried timestamp from MicrophoneSamples instead of recomputing. Keeps a single source of truth and avoids potential discrepancies.
```diff
-let timestamp = Timestamp::from_cpal(samples.info.timestamp().capture);
-
-let frame = self.audio_info.wrap_frame(&samples.data);
+let timestamp = samples.timestamp;
+let frame = self.audio_info.wrap_frame(&samples.data);
```

If `Timestamp` isn't `Copy`, change to `let timestamp = samples.timestamp.clone();`. Confirm `Timestamp: Copy` (or `Clone`).
92-94: Fix log message: this is the microphone feed, not the camera. Minor but user-facing.
```diff
-error!("Lost connection with the camera feed");
-break Err("Lost connection with the camera feed".to_string());
+error!("Lost connection with the microphone feed");
+break Err("Lost connection with the microphone feed".to_string());
```
crates/recording/src/sources/camera.rs (1)
109-112: Use tracing for errors instead of eprintln!.
Keeps logging consistent and structured.

```diff
-eprintln!("{error}");
+error!("{error}");
```
crates/recording/examples/recording-cli.rs (4)
21-23: Drop leftover temp-dir code.
`/tmp/bruh` is unused; keep the example minimal.

```diff
-let _ = std::fs::remove_dir_all("/tmp/bruh");
-let _ = std::fs::create_dir("/tmp/bruh");
```
39-56: Handle missing default mic gracefully (avoid panic) and avoid spinning up an unused feed.
Unwrapping `default()` will panic on machines without an input device. Either wire the mic into the builder or skip setting it.

```diff
-mic_feed
-    .ask(microphone::SetInput {
-        label:
-            // MicrophoneFeed::list()
-            //     .into_iter()
-            //     .find(|(k, _)| k.contains("Focusrite"))
-            MicrophoneFeed::default()
-                .map(|v| v.0)
-                .unwrap(),
-    })
-    .await
-    .unwrap()
-    .await
-    .unwrap();
+if let Some((label, _, _)) = MicrophoneFeed::default() {
+    mic_feed
+        .ask(microphone::SetInput { label })
+        .await
+        .unwrap()
+        .await
+        .unwrap();
+} else {
+    tracing::warn!("No default input device found; skipping mic selection");
+}
```
57-57: Remove the 10ms sleep or await readiness instead.
Sleep is unnecessary (and racy) if not waiting for the mic; prefer awaiting the builder's ready signal.

```diff
-tokio::time::sleep(Duration::from_millis(10)).await;
```
65-69: Optionally wire the mic feed into the recording.
If you intend to record mic audio, enable the builder hook.

```diff
-// .with_mic_feed(std::sync::Arc::new(
-//     mic_feed.ask(microphone::Lock).await.unwrap(),
-// ))
+.with_mic_feed(std::sync::Arc::new(
+    mic_feed.ask(microphone::Lock).await.unwrap(),
+))
```
crates/recording/src/lib.rs (1)
9-9: Consider also re-exporting the FeedLock types.
Saves downstream crates from importing `feeds::*` just for the locks.

```diff
-pub use feeds::{camera::CameraFeed, microphone::MicrophoneFeed};
+pub use feeds::{
+    camera::{CameraFeed, CameraFeedLock},
+    microphone::{MicrophoneFeed, MicrophoneFeedLock},
+};
```
crates/recording/src/sources/screen_capture/windows.rs (1)
129-136: Consider handling `SystemRelativeTime` errors more gracefully. Silently returning on error without logging could make debugging difficult. Consider logging the error before returning.
```diff
-let Ok(timestamp) = msg.frame.inner().SystemRelativeTime() else {
-    return;
-};
+let Ok(timestamp) = msg.frame.inner().SystemRelativeTime() else {
+    warn!("Failed to get SystemRelativeTime from frame");
+    return;
+};
```
crates/timestamp/src/lib.rs (1)
102-112: Consider extracting timestamp collection for better testability. The `now()` method makes multiple system calls that could have slightly different times. Consider collecting all timestamps atomically or documenting the potential time skew.

```diff
 pub fn now() -> Self {
+    // Note: There may be microsecond-level differences between these timestamps
+    // as they are collected sequentially rather than atomically
     Self {
         instant: Instant::now(),
         system_time: SystemTime::now(),
         #[cfg(windows)]
         performance_counter: PerformanceCounterTimestamp::now(),
         #[cfg(target_os = "macos")]
         mach_absolute_time: MachAbsoluteTimestamp::now(),
     }
 }
```
crates/recording/src/capture_pipeline.rs (2)
180-182: Use rounding for audio PTS to reduce cumulative drift. Truncation biases timestamps low; rounding matches neighboring code and typical encoder expectations.
```diff
-let pts = (ts_offset.as_secs_f64() * frame.rate() as f64) as i64;
+let pts = (ts_offset.as_secs_f64() * frame.rate() as f64).round() as i64;
 frame.set_pts(Some(pts));
```
Also applies to: 510-512
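The difference matters because truncation always rounds toward zero. A standalone sketch (the sample rate and offset are illustrative values, not taken from the PR):

```rust
fn main() {
    let rate = 48_000.0_f64; // Hz, illustrative

    // An offset just under a whole second of samples.
    let secs = 0.999_999_9_f64;

    // Truncation biases the PTS low by up to one full tick...
    let truncated = (secs * rate) as i64;
    // ...while rounding keeps the error within half a tick.
    let rounded = (secs * rate).round() as i64;

    assert_eq!(truncated, 47_999);
    assert_eq!(rounded, 48_000);
    println!("truncated={truncated} rounded={rounded}");
}
```

Each individual PTS is off by less than one tick either way; the drift concern is that a consistent downward bias accumulates visibly over long recordings.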
173-179: Minor: prefactor the checked subtraction for readability. Keeps the "skip negative" path clear.
```diff
-let Some(ts_offset) = ts_offset.checked_sub(screen_first_offset) else {
-    continue;
-};
+let Some(ts_offset) = ts_offset.checked_sub(screen_first_offset) else { continue };
```
Also applies to: 503-509
crates/recording/src/studio_recording.rs (3)
811-812: Standardize PTS rounding across mic/system/camera. System audio already uses `.round()`; align mic and camera to minimize off-by-one jitter.

```diff
-frame.set_pts(Some((elapsed.as_secs_f64() * rate) as i64));
+frame.set_pts(Some((elapsed.as_secs_f64() * rate).round() as i64));
```
Also applies to: 917-919
313-314: Fix log typo.“shuting” → “shutting”.
```diff
-tracing::info!("pipeline shuting down");
+tracing::info!("pipeline shutting down");
```
935-936: Avoid integer truncation when deriving camera FPS. Use floating-point rounding to get 30 from 30000/1001, etc.
```diff
-fps: (camera_config.frame_rate.0 / camera_config.frame_rate.1) as u32,
+fps: ((camera_config.frame_rate.0 as f64 / camera_config.frame_rate.1 as f64).round()) as u32,
```
crates/recording/src/instant_recording.rs (2)
246-251: Avoid unnecessary clones of the screen source/receiver. Ownership is available; pass them directly.
```diff
-(screen_source.clone(), screen_rx.clone()),
+(screen_source, screen_rx),
```
214-241: Consider using `cap_timestamp::Timestamps` for `start_time` too. Everything else has moved to `Timestamps`; passing `SystemTime::now()` here is fine, but if you later need cross-clock diffs in instant mode, prefer `Timestamps::now().system_time()` and keep the `Timestamps` handy.
```rust
error!("Failed to spawn studio recording actor: {e}");
e.to_string()
```
Fix error message referring to wrong recording type
The error message says "Failed to spawn studio recording actor" but this is in the instant recording branch.
- error!("Failed to spawn studio recording actor: {e}");
+ error!("Failed to spawn instant recording actor: {e}");
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/recording.rs around lines 430 to 431, the error
message and returned string incorrectly reference "studio recording" while this
is the instant recording branch; update the log and the returned string to say
"instant recording" (or a more specific instant-recording identifier used in
this module) so both the error! invocation and the returned e.to_string()
wrapper/context reflect "Failed to spawn instant recording actor: {e}" (or
equivalent) to match the branch semantics.
```diff
 #[derive(Debug)]
 pub struct CapturedFrame {
     native: NativeCapturedFrame,
-    pub reference_time: Instant,
+    // pub reference_time: Instant,
     pub timestamp: Duration,
-    pub capture_begin_time: Option<Instant>,
+    // pub capture_begin_time: Option<Instant>,
 }
```
💡 Verification agent
🧩 Analysis chain
Breaking change: verify no call sites still use removed fields.
Search and update any consumers referencing reference_time or capture_begin_time.
🏁 Script executed:
#!/bin/bash
# Audit for removed fields usage
rg -nP --type=rust 'reference_time|capture_begin_time'

Length of output: 1669
Update consumer crates to remove or adapt all reference_time/capture_begin_time usage
Active references in crates/camera-avfoundation/src/lib.rs (capture_begin_time at lines 77, 116, 119, 132, 138) and commented examples in:
- crates/camera-directshow/examples/cli.rs:119
- crates/recording/src/sources/camera.rs:71,105
- crates/camera/src/macos.rs:108,110
- crates/camera/src/windows.rs:76–77
- crates/camera-windows/src/lib.rs:102,132

must be updated to use the new `timestamp` field.
```diff
 let new_pts = self
     .elapsed_duration
     .add(time.sub(self.segment_first_timestamp.unwrap_or(time)));

 if !self.is_writing {
     self.is_writing = true;
-    self.asset_writer.start_session_at_src_time(time);
+    self.asset_writer.start_session_at_src_time(new_pts);
     self.start_time = time;
```
End-of-session time uses the wrong timeline; track and use normalized src time.
You start the session at normalized new_pts, but finish() ends it at the original (unnormalized) last_timestamp. This mismatch can truncate or extend the session, especially with pauses. Record the last normalized src time and use it in finish().
Apply this diff to fix finish():
- self.asset_writer
- .end_session_at_src_time(self.last_timestamp.unwrap_or(cm::Time::zero()));
+ self.asset_writer
+ .end_session_at_src_time(self.last_src_time.unwrap_or(cm::Time::zero()));

Add a field and update it when queuing frames:
// struct MP4Encoder
last_src_time: Option<cm::Time>, // initialize to None in init()
// after computing `new_pts` in queue_video_frame(...)
self.last_src_time = Some(new_pts);
// after computing `pts` in queue_audio_frame(...)
if self.last_src_time.map_or(true, |prev| pts > prev) {
self.last_src_time = Some(pts);
}

Optionally, factor normalization into a helper to keep both paths consistent:
impl MP4Encoder {
#[inline]
fn normalized_src_time(&self, t: cm::Time) -> cm::Time {
self.elapsed_duration.add(t.sub(self.segment_first_timestamp.unwrap_or(t)))
}
}

🤖 Prompt for AI Agents
In crates/enc-avfoundation/src/mp4.rs around lines 209 to 216, the session is
started with a normalized src time (new_pts) but finish() uses the original
unnormalized last_timestamp; add a new field last_src_time: Option<cm::Time>
initialized to None in MP4Encoder, set last_src_time = Some(new_pts) after
computing new_pts in queue_video_frame, and when computing pts in
queue_audio_frame update last_src_time only if pts is greater than the previous
value; change finish() to use last_src_time (if present) as the end-of-session
src time; optionally add a small helper normalized_src_time(&self, t: cm::Time)
-> cm::Time and call it from both queue_video_frame and queue_audio_frame so
normalization logic is shared and consistent.
```rust
    return Ok(());
}
```
🛠️ Refactor suggestion
Audio can panic after resume if it arrives before video.
After resume(), segment_first_timestamp is None. If audio arrives before the next video frame, unwrap() later on the same code path will panic. Add a guard.
Apply this diff in queue_audio_frame:
- if self.is_paused || !self.is_writing {
+ if self.is_paused || !self.is_writing {
return Ok(());
}
+ if self.segment_first_timestamp.is_none() {
+ // Wait for the segment baseline (first video frame) to arrive.
+ return Ok(());
+ }

Alternative (better sync resilience): adopt the first audio frame as the segment baseline when video hasn't arrived yet, and normalize like video:
- set `self.segment_first_timestamp = Some(time)` when `None`
- compute `pts` via the same `normalized_src_time(time)` helper.
If you want this option, I can provide a concrete patch.
🤖 Prompt for AI Agents
In crates/enc-avfoundation/src/mp4.rs around lines 240-242, after the early
return for paused/not-writing, add a guard to handle the case where
self.segment_first_timestamp is None to avoid later unwrap() panics when audio
arrives before video; specifically, if segment_first_timestamp.is_none() return
Ok(()) (or optionally adopt the first audio frame as the segment baseline by
setting self.segment_first_timestamp = Some(time) and computing pts via
normalized_src_time(time) to normalize audio like video).
```rust
    while let Some(frame) = self.resampler.flush(self.encoder.frame_size() as usize) {
        self.encoder.send_frame(&frame).unwrap();

        self.process_packets(output);
    }
    self.process_packets(output);
}
```
💡 Verification agent
🧩 Analysis chain
Verify flush behavior matches encoder requirements
The flush implementation repeatedly calls resampler.flush() with the encoder's frame size. Ensure this properly handles any remaining samples that don't fill a complete frame.
🏁 Script executed:
#!/bin/bash
# Check if there are any tests that verify the flush behavior with partial frames
rg -n "flush.*frame_size|partial.*flush" --type rust -C3

Length of output: 1083
Add flush behavior tests for partial frames
No existing tests cover resampler.flush() in finish() (see crates/enc-ffmpeg/src/audio/aac.rs and opus.rs). Add tests exercising non-full-buffer flush to verify the encoder handles remaining samples correctly.
```rust
impl Add<Duration> for MachAbsoluteTimestamp {
    type Output = Self;

    fn add(self, rhs: Duration) -> Self::Output {
        let info = TimeBaseInfo::new();
        let freq = info.numer as f64 / info.denom as f64;

        Self((self.0 as f64 + rhs.as_nanos() as f64 * freq) as u64)
    }
}
```
Add(Duration) uses the wrong ratio; should convert ns→ticks with denom/numer.
Currently uses numer/denom which is ticks→ns. This makes additions wrong in magnitude.
- fn add(self, rhs: Duration) -> Self::Output {
- let info = TimeBaseInfo::new();
- let freq = info.numer as f64 / info.denom as f64;
-
- Self((self.0 as f64 + rhs.as_nanos() as f64 * freq) as u64)
- }
+ fn add(self, rhs: Duration) -> Self::Output {
+ let info = TimeBaseInfo::new();
+ let ticks = (rhs.as_nanos() as u128)
+ .saturating_mul(info.denom as u128)
+ .checked_div(info.numer as u128)
+ .unwrap_or(0);
+ Self(self.0.saturating_add(ticks.min(u64::MAX as u128) as u64))
+ }
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 36 to 45, the Add<Duration> impl
converts nanoseconds to ticks using info.numer/info.denom (ticks→ns) which is
inverted; change the conversion to multiply the duration (ns) by (denom as f64 /
numer as f64) to get ticks, i.e. compute freq = info.denom as f64 / info.numer
as f64 (or explicitly ns_to_ticks = denom/numer) and use that when adding to
self.0 (cast as needed to f64 and back to u64) so the magnitude of the addition
is correct.
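To see why the ratio direction matters, the conversion can be sanity-checked with plain integer arithmetic. A minimal sketch assuming a hypothetical timebase of numer = 125, denom = 3 (constants chosen for illustration, not read from the system):

```rust
// Hypothetical mach_timebase_info values: 1 tick = NUMER/DENOM nanoseconds.
const NUMER: u128 = 125;
const DENOM: u128 = 3;

// ticks -> nanoseconds multiplies by numer/denom ...
fn ticks_to_ns(ticks: u64) -> u64 {
    ((ticks as u128).saturating_mul(NUMER) / DENOM).min(u64::MAX as u128) as u64
}

// ... while nanoseconds -> ticks must use the inverse ratio, denom/numer.
fn ns_to_ticks(ns: u64) -> u64 {
    ((ns as u128).saturating_mul(DENOM) / NUMER).min(u64::MAX as u128) as u64
}

fn main() {
    let one_second_ns: u64 = 1_000_000_000;
    let ticks = ns_to_ticks(one_second_ns);
    assert_eq!(ticks, 24_000_000); // 1e9 * 3 / 125
    assert_eq!(ticks_to_ns(ticks), one_second_ns); // exact round-trip for these values
    println!("1 s = {ticks} ticks");
}
```

Using numer/denom in both directions, as the original code does, would inflate an added `Duration` by (numer/denom)^2 relative to the correct tick count.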
```rust
impl Sub<Duration> for MachAbsoluteTimestamp {
    type Output = Self;

    fn sub(self, rhs: Duration) -> Self::Output {
        let info = TimeBaseInfo::new();
        let freq = info.numer as f64 / info.denom as f64;

        Self((self.0 as f64 - rhs.as_nanos() as f64 * freq) as u64)
    }
}
```
🛠️ Refactor suggestion
Sub(Duration) has the same ratio issue; fix and saturate.
Mirror the corrected add() logic for subtraction.
- fn sub(self, rhs: Duration) -> Self::Output {
- let info = TimeBaseInfo::new();
- let freq = info.numer as f64 / info.denom as f64;
-
- Self((self.0 as f64 - rhs.as_nanos() as f64 * freq) as u64)
- }
+ fn sub(self, rhs: Duration) -> Self::Output {
+ let info = TimeBaseInfo::new();
+ let ticks = (rhs.as_nanos() as u128)
+ .saturating_mul(info.denom as u128)
+ .checked_div(info.numer as u128)
+ .unwrap_or(0);
+ Self(self.0.saturating_sub(ticks.min(u64::MAX as u128) as u64))
+ }
🤖 Prompt for AI Agents
In crates/timestamp/src/macos.rs around lines 47-56, the Sub<Duration>
implementation uses a floating-point ratio and casts that leads to
precision/overflow issues; mirror the corrected add() logic by computing the
tick delta using integer arithmetic (e.g. use rhs.as_nanos() * info.numer /
info.denom with a wider integer type like u128), convert that to u64 safely,
then perform a saturating subtraction from self.0
(self.0.saturating_sub(tick_delta_as_u64)) and return Self with the result so
the subtraction is precise and clamps at zero rather than wrapping.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
♻️ Duplicate comments (7)
crates/timestamp/src/macos.rs (5)
62-73: Make the test assert math correctness instead of dbg!-printing. Catches the add/convert bugs above.
```diff
 #[test]
-fn test() {
-    let a = MachAbsoluteTimestamp::new(0);
-
-    dbg!(MachAbsoluteTimestamp::now());
-    dbg!(a + Duration::from_secs(1));
-}
+fn add_and_duration_since_roundtrip() {
+    let a = MachAbsoluteTimestamp::now();
+    let b = a + Duration::from_millis(250);
+    let d = b.duration_since(a);
+    assert!(d >= Duration::from_millis(250));
+    assert!(d <= Duration::from_millis(251));
+}
```
8-16: Fix units and constructor: the inner value is Mach host ticks, not nanoseconds. The field stores mach_absolute_time/CoreAudio host ticks. Rename the ctor param and fix the comment to avoid misuse in callers.
Run to spot any callers assuming nanoseconds:
```shell
rg -nC2 'MachAbsoluteTimestamp::new\('
```

Apply:
```diff
-#[derive(Clone, Copy, Debug)]
-pub struct MachAbsoluteTimestamp(
-    // Nanoseconds
-    u64,
-);
+#[derive(Clone, Copy, Debug)]
+pub struct MachAbsoluteTimestamp(
+    // Mach absolute time ticks (CoreAudio host time / mach_absolute_time units)
+    u64,
+);
@@
-    pub fn new(nanos: u64) -> Self {
-        Self(nanos)
+    pub fn new(host_time_ticks: u64) -> Self {
+        Self(host_time_ticks)
     }
```
22-31: Use integer math for the tick→ns conversion; avoid precision loss and under/overflow. The current float path can round and miscompute long spans.
```diff
-    pub fn duration_since(&self, other: Self) -> Duration {
-        let info = TimeBaseInfo::new();
-        let freq = info.numer as f64 / info.denom as f64;
-
-        let Some(diff) = self.0.checked_sub(other.0) else {
-            return Duration::ZERO;
-        };
-
-        Duration::from_nanos((diff as f64 * freq) as u64)
-    }
+    pub fn duration_since(&self, other: Self) -> Duration {
+        let info = TimeBaseInfo::new();
+        let ticks = self.0.saturating_sub(other.0) as u128;
+        let nanos = ticks
+            .saturating_mul(info.numer as u128)
+            .checked_div(info.denom as u128)
+            .unwrap_or(0);
+        Duration::from_nanos(nanos.min(u64::MAX as u128) as u64)
+    }
```
43-48: Add(Duration) uses the wrong conversion (should be ns→ticks via denom/numer) and loses precision. This inflates additions; also avoid floats and saturate.
```diff
-    fn add(self, rhs: Duration) -> Self::Output {
-        let info = TimeBaseInfo::new();
-        let freq = info.numer as f64 / info.denom as f64;
-
-        Self((self.0 as f64 + rhs.as_nanos() as f64 * freq) as u64)
-    }
+    fn add(self, rhs: Duration) -> Self::Output {
+        let info = TimeBaseInfo::new();
+        let ticks = (rhs.as_nanos() as u128)
+            .saturating_mul(info.denom as u128)
+            .checked_div(info.numer as u128)
+            .unwrap_or(0);
+        Self(self.0.saturating_add(ticks.min(u64::MAX as u128) as u64))
+    }
```
54-59: Sub(Duration) uses the wrong conversion and can underflow; mirror Add with saturation.

```diff
-    fn sub(self, rhs: Duration) -> Self::Output {
-        let info = TimeBaseInfo::new();
-        let freq = info.numer as f64 / info.denom as f64;
-
-        Self((self.0 as f64 - rhs.as_nanos() as f64 * freq) as u64)
-    }
+    fn sub(self, rhs: Duration) -> Self::Output {
+        let info = TimeBaseInfo::new();
+        let ticks = (rhs.as_nanos() as u128)
+            .saturating_mul(info.denom as u128)
+            .checked_div(info.numer as u128)
+            .unwrap_or(0);
+        Self(self.0.saturating_sub(ticks.min(u64::MAX as u128) as u64))
+    }
```
crates/recording/src/studio_recording.rs (2)
811-811: Round mic PTS to the nearest tick (match system audio). System audio uses `.round()`; mic truncates, risking drift.
```diff
-frame.set_pts(Some((elapsed.as_secs_f64() * rate) as i64));
+frame.set_pts(Some((elapsed.as_secs_f64() * rate).round() as i64));
```
919-919: Round camera PTS to the nearest tick. Keep the PTS policy consistent across streams.
```diff
-frame.set_pts(Some((elapsed.as_secs_f64() * rate) as i64));
+frame.set_pts(Some((elapsed.as_secs_f64() * rate).round() as i64));
```
🧹 Nitpick comments (8)
crates/timestamp/src/macos.rs (2)
8-11: Optional: derive standard comparison/hash traits for ergonomics. Helps with ordering and map/set keys; zero risk.
```diff
-#[derive(Clone, Copy, Debug)]
+#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
 pub struct MachAbsoluteTimestamp(
```
22-31: Optional: cache TimeBaseInfo once. The timebase is constant; avoid repeated syscalls/FFI.
```rust
use std::sync::OnceLock;

fn timebase() -> &'static TimeBaseInfo {
    static TB: OnceLock<TimeBaseInfo> = OnceLock::new();
    TB.get_or_init(TimeBaseInfo::new)
}
```

Then replace `TimeBaseInfo::new()` with `*timebase()` or references accordingly.

Also applies to: 43-48, 54-59
crates/recording/src/studio_recording.rs (6)
910-912: Remove stray dbg! calls. Avoid noisy stderr in production; use tracing if needed.
```diff
-dbg!(&timestamp_tx);
 ...
-dbg!(timestamp);
```
Also applies to: 916-916
468-471: Use tracing instead of println!. Prefer structured logging for consistency.
```diff
-println!("recording successfully stopped");
+info!("recording successfully stopped");
```
313-313: Fix typo in log message ("shuting" → "shutting").

```diff
-tracing::info!("pipeline shuting down");
+tracing::info!("pipeline shutting down");
```
543-545: Avoid unwraps when deriving relative paths. These can panic if invariants break; at minimum add context.
```diff
-let make_relative = |path: &PathBuf| {
-    RelativePathBuf::from_path(path.strip_prefix(&actor.recording_dir).unwrap()).unwrap()
-};
+let make_relative = |path: &PathBuf| {
+    let rel = path
+        .strip_prefix(&actor.recording_dir)
+        .expect("segment path must be inside recording_dir");
+    RelativePathBuf::from_path(rel)
+        .expect("failed to convert segment path to RelativePathBuf")
+};
```

If you prefer, we can propagate an error instead of panicking.
809-814: DRY: factor PTS computation into a helper. The PTS math is duplicated across mic/system audio/camera; centralize it to avoid divergence.
Example helper (add once near the top of this module):
```rust
fn pts_from(timestamp: Timestamp, first: Timestamp, origin: &Timestamps, tb_num: i32, tb_den: i32) -> i64 {
    let rate = tb_den as f64 / tb_num as f64;
    let elapsed = timestamp.duration_since(*origin) - first.duration_since(*origin);
    (elapsed.as_secs_f64() * rate).round() as i64
}
```

Then replace each set_pts call with:

```rust
frame.set_pts(Some(pts_from(timestamp, *first_timestamp, &start_time, time_base.numerator(), time_base.denominator())));
```

Also applies to: 862-865, 917-921
455-458: Make the minimum segment duration configurable. Hardcoding 1 s makes testing and UX tweaks harder.
```diff
-tokio::time::sleep_until(
-    (segment_start_instant + Duration::from_secs(1)).into(),
-)
+tokio::time::sleep_until((segment_start_instant + MIN_SEGMENT_DURATION).into())
```

Add near the top:
const MIN_SEGMENT_DURATION: Duration = Duration::from_secs(1);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- Cargo.toml (2 hunks)
- crates/recording/src/studio_recording.rs (24 hunks)
- crates/timestamp/src/macos.rs (1 hunks)
- crates/timestamp/src/win.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/timestamp/src/win.rs
🧰 Additional context used
📓 Path-based instructions (2)
{crates,apps/cli,apps/desktop/src-tauri}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
{crates,apps/cli,apps/desktop/src-tauri}/**/*.rs:
- Rust code must be formatted with rustfmt and adhere to workspace clippy lints
- Rust module files should use snake_case names
Rust module files should use snake_case names
Files:
- crates/timestamp/src/macos.rs
- crates/recording/src/studio_recording.rs
{crates,apps/cli,apps/desktop/src-tauri}/**/{src,tests}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Rust tests should live in src (inline/unit) or tests (integration) directories
Files:
- crates/timestamp/src/macos.rs
- crates/recording/src/studio_recording.rs
🧠 Learnings (1)
📚 Learning: 2025-09-08T16:48:20.727Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-08T16:48:20.727Z
Learning: Applies to {crates,apps/cli,apps/desktop/src-tauri}/**/*.rs : Rust code must be formatted with rustfmt and adhere to workspace clippy lints
Applied to files:
Cargo.toml
🧬 Code graph analysis (2)
crates/timestamp/src/macos.rs (2)
crates/timestamp/src/win.rs (6)
new (26-28), now (35-39), duration_since (30-33), from_cpal (41-45), add (51-54), sub (60-63)
crates/timestamp/src/lib.rs (7)
now (103-112), duration_since (24-33), from_cpal (35-44), instant (114-116), add (50-59), add (65-74), sub (80-89)
crates/recording/src/studio_recording.rs (6)
crates/recording/src/sources/screen_capture/windows.rs (2)
- new (356-362), new (481-498)

crates/recording/src/pipeline/builder.rs (3)
- build (108-164), e (81-81), e (83-83)

crates/timestamp/src/lib.rs (1)
- now (103-112)

crates/recording/src/sources/screen_capture/mod.rs (2)
- init (275-411), config (418-420)

crates/recording/src/sources/audio_input.rs (1)
- init (25-31)

crates/recording/src/sources/camera.rs (1)
- init (20-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (5)
crates/timestamp/src/macos.rs (2)
18-21: LGTM: now() correctly reads mach absolute ticks.
33-37: LGTM: from_cpal() maps to CoreAudio host time ticks.

Cargo.toml (2)
70-70: Good: deny unchecked duration subtraction across the workspace.

This aligns with the new timestamp semantics and prevents underflow/panic bugs with time math. LGTM.
9-9: Ensure a single-source `cpal` and pin deterministically.

File: Cargo.toml (workspace.dependencies)

- Add a `[patch.crates-io]` override for `cpal` pointing at the fork so all transitive resolves use the same crate (prevents duplicate `cpal` types at link-time).
- Pin `rev` to the full 40-char commit SHA (replace short `3cc779a7b4ca`).
- Add a TODO comment with the upstream PR/issue to track switching back to upstream.
Suggested diff:
```diff
 [workspace.dependencies]
-cpal = { git = "https://github.com/CapSoftware/cpal", rev = "3cc779a7b4ca" }
+cpal = { git = "https://github.com/CapSoftware/cpal", rev = "3cc779a7b4ca" } # TODO: switch back to upstream once fix lands

 [patch.crates-io]
+cpal = { git = "https://github.com/CapSoftware/cpal", rev = "3cc779a7b4ca" }
```

Verification: sandbox ripgrep was inconclusive. Run locally to confirm no stray `cpal` specs:

```shell
# show all cpal entries for manual review
find . -type f -name Cargo.toml -not -path '*/target/*' -print0 | xargs -0 grep -n -I -C2 -e 'cpal\s*=' || true
# show any crates that specify a versioned cpal (not workspace)
find . -type f -name Cargo.toml -not -path '*/target/*' -print0 | xargs -0 grep -n -I -C2 -E -e 'cpal\s*=\s*"[0-9]+\.[0-9]+\.[0-9]+"' || true
```

crates/recording/src/studio_recording.rs (1)
553-586: Confirm start_time Option semantics.

Mapping uses try_recv() so start_time may be None — verify downstream consumers tolerate a missing start_time without breaking PTS/sync. Audit PTS sites that compute set_pts (notably crates/recording/src/studio_recording.rs — set_pts at ~lines 811, 864, 919 — and crates/recording/src/capture_pipeline.rs — set_pts at ~lines 181, 511) and either ensure they use pipeline.start_time/first_timestamp (not the optional metadata) or add a fallback/wait-for-first-timestamp before exposing the segment.
Actionable comments posted: 1
♻️ Duplicate comments (2)
crates/timestamp/src/lib.rs (2)
24-33: Avoid panics in duration_since; use saturating for Instant/SystemTime.

Instant::duration_since panics if self < start.instant; SystemTime::duration_since(...).unwrap() panics on clock skew. Use saturating variants.
```diff
-            Self::Instant(instant) => instant.duration_since(start.instant),
-            Self::SystemTime(time) => time.duration_since(start.system_time).unwrap(),
+            Self::Instant(instant) => instant.saturating_duration_since(start.instant),
+            Self::SystemTime(time) => time.saturating_duration_since(start.system_time),
```

Note: Ensure platform arms also clamp negatives (e.g., Windows perf counter) to avoid Duration construction panics downstream.
35-44: from_cpal returns nothing on non-Windows/macOS targets; gate or provide fallback.

Current impl won't compile on Linux/others. Add cfg gating and a portable fallback.
```diff
-    pub fn from_cpal(instant: cpal::StreamInstant) -> Self {
+    #[cfg(any(windows, target_os = "macos"))]
+    pub fn from_cpal(instant: cpal::StreamInstant) -> Self {
         #[cfg(windows)]
         {
             Self::PerformanceCounter(PerformanceCounterTimestamp::from_cpal(instant))
         }
         #[cfg(target_os = "macos")]
         {
             Self::MachAbsoluteTime(MachAbsoluteTimestamp::from_cpal(instant))
         }
-    }
+    }
```

Add this fallback (outside the range above):
```rust
#[cfg(not(any(windows, target_os = "macos")))]
pub fn from_cpal(_instant: cpal::StreamInstant) -> Self {
    // Best-effort portable timestamp
    Self::Instant(Instant::now())
}
```

Optional: feature-gate this API with `#[cfg(feature = "cpal")]` to avoid forcing a hard cpal dependency for consumers.
🧹 Nitpick comments (2)
crates/timestamp/src/lib.rs (2)
62-75: Deduplicate Add impls by delegating.

Minor cleanup: have the owning impl delegate to the ref impl.
```diff
     fn add(self, rhs: Duration) -> Self::Output {
-        match self {
-            Timestamp::Instant(i) => Timestamp::Instant(i + rhs),
-            Timestamp::SystemTime(t) => Timestamp::SystemTime(t + rhs),
-            #[cfg(windows)]
-            Timestamp::PerformanceCounter(c) => Timestamp::PerformanceCounter(c + rhs),
-            #[cfg(target_os = "macos")]
-            Timestamp::MachAbsoluteTime(c) => Timestamp::MachAbsoluteTime(c + rhs),
-        }
+        (&self).add(rhs)
     }
```
122-125: Add Windows accessor for symmetry (optional).

Expose performance counter like the macOS accessor to keep the API consistent.
```rust
#[cfg(windows)]
pub fn performance_counter(&self) -> PerformanceCounterTimestamp {
    self.performance_counter
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- crates/timestamp/src/lib.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
{crates,apps/cli,apps/desktop/src-tauri}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
- Rust code must be formatted with rustfmt and adhere to workspace clippy lints
- Rust module files should use snake_case names
Files:
crates/timestamp/src/lib.rs
{crates,apps/cli,apps/desktop/src-tauri}/**/{src,tests}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Rust tests should live in src (inline/unit) or tests (integration) directories
Files:
crates/timestamp/src/lib.rs
🧬 Code graph analysis (1)
crates/timestamp/src/lib.rs (2)
crates/timestamp/src/win.rs (5)
- duration_since (30-33), from_cpal (41-45), add (51-54), sub (60-63), now (35-39)

crates/timestamp/src/macos.rs (5)
- duration_since (22-31), from_cpal (33-37), add (43-48), sub (54-59), now (18-20)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Clippy
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
```rust
impl std::ops::Sub<Duration> for Timestamp {
    type Output = Timestamp;

    fn sub(self, rhs: Duration) -> Self::Output {
        match self {
            Timestamp::Instant(i) => Timestamp::Instant(i.checked_sub(rhs).unwrap()),
            Timestamp::SystemTime(t) => Timestamp::SystemTime(t - rhs),
            #[cfg(windows)]
            Timestamp::PerformanceCounter(c) => Timestamp::PerformanceCounter(c - rhs),
            #[cfg(target_os = "macos")]
            Timestamp::MachAbsoluteTime(c) => Timestamp::MachAbsoluteTime(c - rhs),
        }
    }
}
```
Saturate Instant subtraction instead of unwrap.
`checked_sub` returns `None` on underflow, so the `.unwrap()` panics. Saturate instead.
```diff
-            Timestamp::Instant(i) => Timestamp::Instant(i.checked_sub(rhs).unwrap()),
+            Timestamp::Instant(i) => Timestamp::Instant(i.checked_sub(rhs).unwrap_or(i)),
```

If your MSRV supports it, prefer i.saturating_sub(rhs).
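As a quick sanity check of the non-panicking pattern, here is a small standalone sketch (the `rewind` helper is hypothetical, introduced only for illustration):

```rust
use std::time::{Duration, Instant};

// Subtract with an explicit fallback instead of panicking on underflow.
fn rewind(t: Instant, by: Duration) -> Instant {
    t.checked_sub(by).unwrap_or(t)
}

fn main() {
    let now = Instant::now();
    // Subtracting zero is always representable.
    assert_eq!(rewind(now, Duration::ZERO), now);
    // Duration::MAX underflows any realistic Instant, so we fall back to `now`.
    assert_eq!(rewind(now, Duration::MAX), now);
}
```

The fallback value is a policy choice; clamping to the original instant keeps timestamps monotonic instead of crashing the recording pipeline.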
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 impl std::ops::Sub<Duration> for Timestamp {
     type Output = Timestamp;

     fn sub(self, rhs: Duration) -> Self::Output {
         match self {
-            Timestamp::Instant(i) => Timestamp::Instant(i.checked_sub(rhs).unwrap()),
+            Timestamp::Instant(i) => Timestamp::Instant(i.checked_sub(rhs).unwrap_or(i)),
             Timestamp::SystemTime(t) => Timestamp::SystemTime(t - rhs),
             #[cfg(windows)]
             Timestamp::PerformanceCounter(c) => Timestamp::PerformanceCounter(c - rhs),
             #[cfg(target_os = "macos")]
             Timestamp::MachAbsoluteTime(c) => Timestamp::MachAbsoluteTime(c - rhs),
         }
     }
 }
```
🤖 Prompt for AI Agents
In crates/timestamp/src/lib.rs around lines 77 to 90, the Instant branch uses
i.checked_sub(rhs).unwrap() which will panic on underflow; replace that unwrap
with a saturating subtraction to clamp to zero instead (e.g., use
i.saturating_sub(rhs) if MSRV supports it) or handle the Option returned by
checked_sub by mapping None to Duration::ZERO or the minimum Instant, then
construct Timestamp::Instant with the saturated result so subtraction never
panics.
Introduces `cap-timestamp`, a crate now used by capture and recording code to standardise how we deal with timestamps across platforms.

The main addition is the `Timestamp` enum, which can hold one of a few options for timestamps, including Rust's `Instant` and `SystemTime`, or OS-specific timestamps like Windows' performance counter or macOS's `mach_absolute_time`.

Two `Timestamp` instances can't be used together directly, but can instead be compared relative to a `Timestamps` instance - the main point of the crate: not for manipulating current clock time (we have `SystemTime` for that), but for manipulating relative, cross-platform, high-resolution timestamps. `Timestamps` is a struct that holds a value for each variant of `Timestamp`, so that a `Timestamp::Instant` and a `Timestamp::MachAbsoluteTime` can have a common base to compare each other against.

`Timestamp` is now the primary method of sync management during recording, and the audio encoders + mixer have been rewritten to properly assign frame timestamps. I'd like to do another rewrite of the mixer to make it simpler, but it's working great at the moment from my testing.

Testing
Create studio and instant recordings that contain microphone, camera, and system audio. We want to throw as much as possible at this new system to make sure things stay in sync.
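As a rough illustration of the `Timestamps` common-base idea described in this PR, here is a hypothetical simplification (not the crate's actual API; only the std-clock variants are shown, omitting the OS-specific ones):

```rust
use std::time::{Duration, Instant, SystemTime};

/// Simplified stand-in for the crate's cross-platform timestamp.
enum Timestamp {
    Instant(Instant),
    SystemTime(SystemTime),
}

/// A common base: one reading of each clock, captured together,
/// so any two `Timestamp` variants can be compared against it.
struct Timestamps {
    instant: Instant,
    system_time: SystemTime,
}

impl Timestamps {
    fn now() -> Self {
        Self {
            instant: Instant::now(),
            system_time: SystemTime::now(),
        }
    }
}

impl Timestamp {
    /// Elapsed time since the common base, regardless of variant,
    /// clamped to zero rather than panicking on clock skew.
    fn duration_since(&self, start: &Timestamps) -> Duration {
        match self {
            Timestamp::Instant(i) => i.saturating_duration_since(start.instant),
            Timestamp::SystemTime(t) => t
                .duration_since(start.system_time)
                .unwrap_or(Duration::ZERO),
        }
    }
}

fn main() {
    let base = Timestamps::now();
    let a = Timestamp::Instant(Instant::now());
    let b = Timestamp::SystemTime(SystemTime::now());
    // Different variants become comparable relative to the same base.
    assert!(a.duration_since(&base) < Duration::from_secs(5));
    assert!(b.duration_since(&base) < Duration::from_secs(5));
}
```

The real crate adds platform variants (Windows performance counter, `mach_absolute_time`) with a base reading for each, which is what lets a cpal-sourced audio timestamp and a screen-capture timestamp share one timeline.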
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores