fix: ai transcription loading state #991
Conversation
Walkthrough
Server-side status retrieval now re-fetches the video row from the DB when transcription hasn't started, deriving transcription and AI metadata (aiProcessing, aiTitle, summary, chapters) from the refreshed row. The Share page tightens polling/loading: polling runs only when aiGenerationEnabled && (aiProcessing || noAiData); loading visibility was simplified and decoupled from the AI flag. The transcribe helper's signature now uses the typed Video.VideoId.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
participant C as Client
participant S as get-status action
participant DB as Database
C->>S: Request video status(videoId)
alt Transcription not started
S->>DB: Fetch latest video row by videoId
alt Row found
S->>S: Derive transcriptionStatus, aiProcessing, aiTitle, summary, chapters from DB metadata
S-->>C: Return status from updated DB state
else Row not found
S->>S: Fallback to original in-memory metadata path
S-->>C: Return status from original path
end
else Transcription in progress/complete
S->>S: Use existing logic
S-->>C: Return status
end
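To make the first diagram concrete, here is a minimal sketch of the re-fetch branch, assuming drizzle-style `db()` and `videos` helpers like those referenced later in this review; the import paths and the `VideoMetadata` shape are assumptions, not code from the PR.

```ts
"use server";

import { eq } from "drizzle-orm";
// Import paths assumed; the repo's db() and videos schema live in packages/database.
import { db } from "@cap/database";
import { videos } from "@cap/database/schema";

// Assumed shape of the metadata JSON column (the real VideoMetadata type is in the repo).
type VideoMetadata = {
  aiProcessing?: boolean;
  aiTitle?: string | null;
  summary?: string | null;
  chapters?: { title: string; start: number }[] | null;
};

export async function getFreshStatus(videoId: string) {
  // Re-read the row so the response reflects aiProcessing flips made by the background job.
  const rows = await db()
    .select({
      transcriptionStatus: videos.transcriptionStatus,
      metadata: videos.metadata,
    })
    .from(videos)
    .where(eq(videos.id, videoId))
    .limit(1);

  const row = rows[0];
  if (!row) return null; // caller falls back to the original in-memory path

  const meta = (row.metadata as VideoMetadata) ?? {};
  return {
    transcriptionStatus: row.transcriptionStatus ?? null,
    aiProcessing: meta.aiProcessing ?? false,
    aiTitle: meta.aiTitle ?? null,
    summary: meta.summary ?? null,
    chapters: meta.chapters ?? null,
  };
}
```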
sequenceDiagram
autonumber
participant U as Share.tsx
participant T as Poll Timer
participant A as get-status API
note over U: Transcription PENDING/PROCESSING
T-->>U: tick
U->>A: fetch status
A-->>U: status + aiData
U->>U: Show loading (independent of aiGenerationEnabled)
note over U: Transcription COMPLETE
alt aiGenerationEnabled && (aiData.processing || noAiData)
T-->>U: continue polling
U->>U: Show loading (if aiData.processing)
else Otherwise
T-->>U: stop polling
U->>U: Hide loading
end
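A hedged sketch of the polling gate from the second diagram, using TanStack Query v5 as the web-app guidelines require; the hook name, query key, and `getVideoStatus` import are illustrative rather than the PR's actual code.

```ts
"use client";

import { useQuery } from "@tanstack/react-query";
// Stand-in for the get-status Server Action; the real import path differs.
import { getVideoStatus } from "@/actions/videos/get-status";

export function useVideoStatus(videoId: string, aiGenerationEnabled: boolean) {
  return useQuery({
    queryKey: ["video-status", videoId],
    queryFn: () => getVideoStatus(videoId),
    refetchInterval: (query) => {
      const status = query.state.data;
      if (!status) return 2000; // no data yet: keep polling

      const transcribing =
        status.transcriptionStatus !== "COMPLETE" &&
        status.transcriptionStatus !== "ERROR";
      if (transcribing) return 2000; // always poll while transcription runs

      const noAiData = !status.aiTitle && !status.summary && !status.chapters;
      // After transcription, poll only while AI work is enabled and unfinished.
      return aiGenerationEnabled && (status.aiProcessing || noAiData) ? 2000 : false;
    },
  });
}
```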
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/web/actions/videos/get-status.ts (1)
216-236: After kicking off AI, optimistically set aiProcessing and tighten the refresh query. Small race: the immediate SELECT may read the row before generateAiMetadata flips aiProcessing. Set it up front and limit the SELECT.
Apply:
- (async () => {
+ (async () => {
    try {
      console.log(
        `[Get Status] Starting AI metadata generation for video ${videoId}`,
      );
      await generateAiMetadata(videoId, video.ownerId);
      console.log(
        `[Get Status] AI metadata generation completed for video ${videoId}`,
      );
    } catch (error) {
      console.error(
        `[Get Status] Error generating AI metadata for video ${videoId}:`,
        error,
      );
@@
    }
  })();

- const updatedVideo = await db()
+ // Optimistically reflect processing in DB so clients see it immediately
+ await db()
+   .update(videos)
+   .set({
+     metadata: {
+       ...metadata,
+       aiProcessing: true,
+     },
+   })
+   .where(eq(videos.id, videoId));
+
+ const updatedVideo = await db()
    .select()
    .from(videos)
-   .where(eq(videos.id, videoId));
+   .where(eq(videos.id, videoId))
+   .limit(1);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- apps/web/actions/videos/get-status.ts (1 hunks)
- apps/web/app/s/[videoId]/Share.tsx (1 hunks)
- apps/web/lib/transcribe.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration
Files:
apps/web/app/s/[videoId]/Share.tsx, apps/web/lib/transcribe.ts, apps/web/actions/videos/get-status.ts
{apps/web,packages/ui}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'
Files:
apps/web/app/s/[videoId]/Share.tsx, apps/web/lib/transcribe.ts, apps/web/actions/videos/get-status.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
Files:
apps/web/app/s/[videoId]/Share.tsx, apps/web/lib/transcribe.ts, apps/web/actions/videos/get-status.ts
apps/web/actions/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/actions/**/*.ts: All Groq/OpenAI calls must be implemented in Next.js Server Actions under apps/web/actions
Place Server Actions for the web app under apps/web/actions and mark them with 'use server'
Files:
apps/web/actions/videos/get-status.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (2)
apps/web/lib/transcribe.ts (1)
4-4: Type-only domain import is correct. Prevents bundling runtime code and keeps the module side-effect free (a brief sketch of the pattern follows these comments).
apps/web/app/s/[videoId]/Share.tsx (1)
203-204: LGTM: Loading only when aiProcessing is true. Removes misleading loader when AI data is absent post-transcription.
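Returning to the transcribe.ts note above, the type-only import pattern looks roughly like this; the package specifier is an assumption, since the review only confirms the type lives in packages/web-domain/src/Video.ts.

```ts
// Type-only import: erased at build time, so the domain package adds no runtime weight.
import type { Video } from "@cap/web-domain"; // specifier assumed

export async function transcribeVideo(
  videoId: Video.VideoId, // branded id rather than a plain string
  userId: string,
  aiGenerationEnabled = false,
) {
  // ...
}
```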
  videoId: Video.VideoId,
  userId: string,
  aiGenerationEnabled = false,
): Promise<TranscribeResult> {
💡 Verification agent
🧩 Analysis chain
Signature now uses Video.VideoId—verify all call sites.
Change looks good; ensure no callers still pass plain string ids.
Run to spot call sites:
🏁 Script executed:
#!/bin/bash
rg -nP --type=ts --type=tsx -C2 '\btranscribeVideo\s*\('
Length of output: 27
🏁 Script executed:
#!/bin/bash
# Search for transcribeVideo call sites in TS and TSX files using glob patterns
rg -nP -C2 '\btranscribeVideo\s*\(' --glob '*.ts' --glob '*.tsx'
Length of output: 1326
🏁 Script executed:
#!/bin/bash
# Locate the Video namespace and VideoId type alias
rg -nP --type=ts 'export (namespace|type) Video' -nC3
Length of output: 422
Enforce Video.VideoId on all transcribeVideo calls
Call sites still passing plain strings; update them to cast or type-annotate as Video.VideoId (see the sketch after this list):
- apps/web/app/embed/[videoId]/page.tsx:250 – transcribeVideo(video.id, …)
- apps/web/app/s/[videoId]/page.tsx:437 – transcribeVideo(videoId, …) (route param is string)
- apps/web/actions/videos/get-status.ts:52 – transcribeVideo(videoId, …)
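A hedged illustration of the kind of change the list above asks for; the import specifiers and helper name are assumptions, and the cast mirrors the suggestion in the prompt below.

```ts
import type { Video } from "@cap/web-domain"; // specifier assumed
import { transcribeVideo } from "@/lib/transcribe"; // alias path assumed

// Illustrative only: brand the plain-string id (e.g. a route param) before calling the helper.
async function kickOffTranscription(rawVideoId: string, userId: string) {
  const videoId = rawVideoId as Video.VideoId;
  return transcribeVideo(videoId, userId, /* aiGenerationEnabled */ true);
}
```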
🤖 Prompt for AI Agents
apps/web/lib/transcribe.ts around lines 16-19: the transcribeVideo API must
enforce Video.VideoId at call sites; update the three callers to pass a properly
typed Video.VideoId instead of plain string — in
apps/web/app/embed/[videoId]/page.tsx line ~250 change transcribeVideo(video.id,
...) to transcribeVideo(video.id as Video.VideoId, ...) or ensure video.id is
typed Video.VideoId; in apps/web/app/s/[videoId]/page.tsx line ~437 cast the
route param before calling (e.g. const id = params.videoId as Video.VideoId) and
call transcribeVideo(id, ...); in apps/web/actions/videos/get-status.ts line ~52
cast the videoId variable to Video.VideoId when calling transcribeVideo(videoId
as Video.VideoId, ...); ensure Video is imported where required.
Actionable comments posted: 0
♻️ Duplicate comments (1)
apps/web/lib/transcribe.ts (1)
16-19: Signature hardened to Video.VideoId—ensure callers are updated. This enforces correct branding; verify all call sites now pass a Video.VideoId.
Re-run:
#!/bin/bash
# Verify all transcribeVideo call sites and their argument types
rg -nP -C2 '\btranscribeVideo\s*\(' --glob '*.ts' --glob '*.tsx'
🧹 Nitpick comments (2)
apps/web/actions/videos/get-status.ts (1)
216-236: Select only needed fields and limit(1) to reduce I/O. No need to pull full rows here.
Apply within this block:
- const updatedVideo = await db()
-   .select()
-   .from(videos)
-   .where(eq(videos.id, videoId));
- if (updatedVideo.length > 0 && updatedVideo[0]) {
-   const updatedMetadata =
-     (updatedVideo[0].metadata as VideoMetadata) || {};
+ const updatedRows = await db()
+   .select({
+     transcriptionStatus: videos.transcriptionStatus,
+     metadata: videos.metadata,
+   })
+   .from(videos)
+   .where(eq(videos.id, videoId))
+   .limit(1);
+ if (updatedRows.length > 0) {
+   const updatedMetadata = (updatedRows[0].metadata as VideoMetadata) || {};
    return {
-     transcriptionStatus:
-       (updatedVideo[0].transcriptionStatus as
+     transcriptionStatus:
+       (updatedRows[0].transcriptionStatus as
        | "PROCESSING"
        | "COMPLETE"
        | "ERROR") || null,
      aiProcessing: updatedMetadata.aiProcessing || false,
      aiTitle: updatedMetadata.aiTitle || null,
      summary: updatedMetadata.summary || null,
      chapters: updatedMetadata.chapters || null,
    };
  }

apps/web/app/s/[videoId]/Share.tsx (1)
205-211: Gate debug logs for production. Avoid noisy logs in the client bundle.
- console.log({
-   aiLoading,
-   aiData,
-   transcriptionStatus,
-   aiGenerationEnabled,
- });
+ if (process.env.NODE_ENV !== "production") {
+   // eslint-disable-next-line no-console
+   console.log({ aiLoading, aiData, transcriptionStatus, aiGenerationEnabled });
+ }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- apps/web/actions/videos/get-status.ts (1 hunks)
- apps/web/app/s/[videoId]/Share.tsx (2 hunks)
- apps/web/lib/transcribe.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration
Files:
apps/web/lib/transcribe.ts, apps/web/app/s/[videoId]/Share.tsx, apps/web/actions/videos/get-status.ts
{apps/web,packages/ui}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'
Files:
apps/web/lib/transcribe.ts, apps/web/app/s/[videoId]/Share.tsx, apps/web/actions/videos/get-status.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
Files:
apps/web/lib/transcribe.ts, apps/web/app/s/[videoId]/Share.tsx, apps/web/actions/videos/get-status.ts
apps/web/actions/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/actions/**/*.ts: All Groq/OpenAI calls must be implemented in Next.js Server Actions under apps/web/actions
Place Server Actions for the web app under apps/web/actions and mark them with 'use server'
Files:
apps/web/actions/videos/get-status.ts
🧬 Code graph analysis (2)
apps/web/lib/transcribe.ts (1)
packages/web-domain/src/Video.ts (3)
Video (13-35), VideoId (9-9), VideoId (10-10)
apps/web/actions/videos/get-status.ts (2)
packages/database/index.ts (1)
db (30-35)
packages/database/schema.ts (1)
videos (231-277)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (3)
apps/web/lib/transcribe.ts (1)
4-4: Type-only import is correct. Good move to avoid bundling the domain module at runtime.
apps/web/actions/videos/get-status.ts (1)
216-236: DB re-fetch after kicking off AI: LGTM. This aligns the response with near-real-time metadata without blocking on the job.
apps/web/app/s/[videoId]/Share.tsx (1)
183-198: Loading behavior matches the intended lifecycle. Shows spinner during transcription; only shows AI spinner when enabled and processing. Nice.
Actionable comments posted: 0
🧹 Nitpick comments (2)
apps/web/actions/videos/get-status.ts (1)
224-239: Remove unreachable guard and simplify indexing. Inside the length > 0 branch, row cannot be falsy. Simplify with destructuring and a single null check.
- if (updatedVideo.length > 0) {
-   const row = updatedVideo[0];
-   if (!row) {
-     return {
-       transcriptionStatus:
-         (video.transcriptionStatus as
-         | "PROCESSING"
-         | "COMPLETE"
-         | "ERROR") || null,
-       aiProcessing: metadata.aiProcessing || false,
-       aiTitle: metadata.aiTitle || null,
-       summary: metadata.summary || null,
-       chapters: metadata.chapters || null,
-       // generationError: metadata.generationError || null,
-     };
-   }
-   const updatedMetadata = (row.metadata as VideoMetadata) || {};
+ const [row] = updatedVideo;
+ if (!row) {
+   return {
+     transcriptionStatus:
+       (video.transcriptionStatus as "PROCESSING" | "COMPLETE" | "ERROR") ||
+       null,
+     aiProcessing: metadata.aiProcessing || false,
+     aiTitle: metadata.aiTitle || null,
+     summary: metadata.summary || null,
+     chapters: metadata.chapters || null,
+     // generationError: metadata.generationError || null,
+   };
+ }
+ const updatedMetadata = (row.metadata as VideoMetadata) || {};

apps/web/app/s/[videoId]/Share.tsx (1)
207-213: Gate debug logs behind NODE_ENV or remove before release. Avoid noisy console output in production.
- console.log({
-   aiLoading,
-   aiData,
-   transcriptionStatus,
-   aiGenerationEnabled,
- });
+ if (process.env.NODE_ENV !== "production") {
+   console.log({
+     aiLoading,
+     aiData,
+     transcriptionStatus,
+     aiGenerationEnabled,
+   });
+ }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/web/actions/videos/get-status.ts (1 hunks)
- apps/web/app/s/[videoId]/Share.tsx (3 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
apps/web/actions/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/actions/**/*.ts: All Groq/OpenAI calls must be implemented in Next.js Server Actions under apps/web/actions
Place Server Actions for the web app under apps/web/actions and mark them with 'use server'
Files:
apps/web/actions/videos/get-status.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration
Files:
apps/web/actions/videos/get-status.ts, apps/web/app/s/[videoId]/Share.tsx
{apps/web,packages/ui}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
{apps/web,packages/ui}/**/*.{ts,tsx}: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'
Files:
apps/web/actions/videos/get-status.ts, apps/web/app/s/[videoId]/Share.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
Files:
apps/web/actions/videos/get-status.ts, apps/web/app/s/[videoId]/Share.tsx
🧬 Code graph analysis (1)
apps/web/actions/videos/get-status.ts (2)
packages/database/index.ts (1)
db (30-35)
packages/database/schema.ts (1)
videos (231-277)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (4)
apps/web/actions/videos/get-status.ts (1)
216-224: Good: immediate DB re-fetch after enqueue. This reduces stale reads and better reflects aiProcessing flips from the worker. Nicely scoped select with limit(1).
apps/web/app/s/[videoId]/Share.tsx (3)
117-125: Tighter polling condition looks right. Gating polling to aiGenerationEnabled && (aiProcessing || noAiData) aligns with the lifecycle and avoids unnecessary polling.
185-185: Comment clarifies intent. Decoupling loading from the AI flag during transcription is clear and matches the UX goal.
196-200: Loading logic for COMPLETE state is correct. Only showing the AI skeleton when aiGenerationEnabled && processing prevents spinners for videos without audio (a consolidated sketch follows).
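Taken together, the reviewed conditions amount to roughly this rule; it is a paraphrase for illustration, not a verbatim excerpt from Share.tsx.

```ts
// Paraphrased loading rule; variable names follow the review comments above.
function showAiSkeleton(
  transcriptionStatus: "PROCESSING" | "COMPLETE" | "ERROR" | null,
  aiGenerationEnabled: boolean,
  aiData: { processing: boolean } | null | undefined,
): boolean {
  const transcribing =
    transcriptionStatus !== "COMPLETE" && transcriptionStatus !== "ERROR";
  if (transcribing) return true; // always show the skeleton while transcription runs
  // After transcription, show it only when AI generation is enabled and still in flight.
  return aiGenerationEnabled && Boolean(aiData?.processing);
}
```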
This PR improves the transcription loading state: previously, videos without any audio would show a loading indicator indefinitely.
When AI generation is kicked off, the share page keeps polling and shows the loading state while aiProcessing is set.
When AI finishes and the worker writes summary/chapters and sets aiProcessing back to false, aiLoading turns false and the content appears.
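The worker side is not part of this PR; purely to illustrate the lifecycle described above, the finalization step could look roughly like the sketch below, where every name apart from the videos.metadata column and the aiProcessing flag is hypothetical.

```ts
import { eq } from "drizzle-orm";
import { db } from "@cap/database"; // path assumed
import { videos } from "@cap/database/schema"; // path assumed

// Hypothetical tail end of generateAiMetadata: persist the results and clear the flag,
// which is what lets the client's aiLoading flip to false on the next poll.
async function finishAiMetadata(
  videoId: string,
  existingMetadata: Record<string, unknown>,
  result: { aiTitle: string; summary: string; chapters: unknown[] },
) {
  await db()
    .update(videos)
    .set({
      metadata: {
        ...existingMetadata,
        aiTitle: result.aiTitle,
        summary: result.summary,
        chapters: result.chapters,
        aiProcessing: false, // signals clients to stop polling and render the content
      },
    })
    .where(eq(videos.id, videoId));
}
```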
Summary by CodeRabbit
- Bug Fixes
- Performance
- UI/UX
- Refactor