- React 19 is not fully stable and was causing browser crashes
- Next.js 15.5.2 officially supports React 18
- Disabled React strict mode temporarily during debugging
- Added Playwright test configuration for automated testing

This resolves compatibility issues between React 19 and Next.js 15.
- Added playwright.config.ts with browser configuration
- Created test suite for debugging application issues
- Updated .gitignore to exclude .next build artifacts and test results
Major Changes:
- Added OpenAI SDK integration alongside existing Gemini support
- Story Planning: Now uses GPT-4o (default) or Gemini 2.5 Flash
- Image Generation: Now uses DALL-E 3 (default) or Gemini Image Preview
- Provider selection via environment variables (PLANNER_PROVIDER, RENDERER_PROVIDER)

Backend Updates:
- renderer.service.ts: Added OpenAI DALL-E 3 integration with dual-provider support
- planner.service.ts: Added OpenAI GPT-4o integration with dual-provider support
- config.ts: Updated to support both OpenAI and Gemini providers
- .env.example: Comprehensive documentation for both AI providers

Frontend Updates:
- index.tsx: Updated UI to show "GPT-4o" and "DALL-E 3" badges
- Replaced references to "Gemini 2.5" and "Nano Banana" with OpenAI equivalents

Environment Variables:
- OPENAI_API_KEY: OpenAI API key
- OPENAI_PLANNER_MODEL: Story planning model (default: gpt-4o)
- OPENAI_IMAGE_MODEL: Image generation model (default: dall-e-3)
- PLANNER_PROVIDER: 'openai' or 'gemini' (default: openai)
- RENDERER_PROVIDER: 'openai' or 'gemini' (default: openai)

The system now defaults to OpenAI but maintains full backward compatibility with Gemini.
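The env-driven provider selection described above can be sketched roughly as follows. The `resolveProvider` helper is illustrative only (the actual services read the variables directly); variable names come from the PR description.

```typescript
// Illustrative helper (not the actual service code): normalize and validate
// the PLANNER_PROVIDER / RENDERER_PROVIDER env vars described in the PR.
type Provider = 'openai' | 'gemini';

function resolveProvider(raw: string | undefined, fallback: Provider = 'openai'): Provider {
  const v = (raw ?? fallback).trim().toLowerCase();
  if (v === 'openai' || v === 'gemini') return v;
  throw new Error(`Unsupported provider: ${raw}`);
}

// Example values as they might come from the environment
const env: Record<string, string | undefined> = {
  PLANNER_PROVIDER: undefined,   // unset -> falls back to the default
  RENDERER_PROVIDER: ' Gemini ', // sloppy casing/whitespace still resolves
};

const plannerProvider = resolveProvider(env.PLANNER_PROVIDER);   // "openai"
const rendererProvider = resolveProvider(env.RENDERER_PROVIDER); // "gemini"
```

Normalizing and failing fast on unknown values avoids silently routing to the wrong provider on a typo.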
…08-07
- Changed image generation model from dall-e-3 to gpt-image-1
- Changed story planning model from gpt-4o to gpt-5-mini-2025-08-07
- Updated .env.example with correct model names
- Updated frontend badges to show 'GPT-5 Mini' and 'GPT-Image-1'
- Updated renderer and planner service default configurations
- Complete system architecture overview
- Detailed explanation of how MangaFusion works end-to-end
- AI models configuration and usage (OpenAI GPT-5-Mini, GPT-Image-1)
- Setup and installation instructions
- Configuration reference for all environment variables
- Usage guide with step-by-step workflow
- Complete API reference
- Technical implementation details
- Troubleshooting guide
- Development notes and project structure

This documentation covers everything from setup to production deployment.
- Fix EventSource memory leaks in pages/index.tsx and pages/episodes/[id].tsx
  - Added proper cleanup with useRef pattern
  - Close EventSource on component unmount and error events
  - Prevent multiple simultaneous EventSource connections
- Add missing @Injectable decorators to all 8 backend services
  - PlannerService, RendererService, StorageService, TTSService
  - EventsService, QueueService, PrismaService, EpisodesService
  - Enables proper dependency injection in NestJS
- Fix TypeScript build errors in backend
  - Install missing @types/multer package
  - Fix renderer.imageModel references (use provider-specific model)
  - Add explicit type annotations for Prisma query results
  - Fix null vs undefined in PanelDialogue character field
  - Add proper error type checking (unknown to Error)
  - Add optional chaining for response.data array access

All backend TypeScript errors resolved - build now succeeds.
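The useRef cleanup pattern referenced above can be sketched framework-free. `FakeEventSource` is a stub standing in for the browser API, and the `ref` object mimics a React `useRef`; this is a minimal sketch of the pattern, not the actual page code.

```typescript
// Stub standing in for the browser's EventSource
class FakeEventSource {
  closed = false;
  constructor(public url: string) {}
  close(): void { this.closed = true; }
}

// Mimics useRef<EventSource | null>: close any previous connection before
// opening a new one, so connections never pile up.
const ref: { current: FakeEventSource | null } = { current: null };

function connect(url: string): FakeEventSource {
  ref.current?.close();
  ref.current = new FakeEventSource(url);
  return ref.current;
}

function cleanup(): void {
  // Mirrors the useEffect cleanup: close on unmount and clear the ref
  ref.current?.close();
  ref.current = null;
}

const first = connect('/api/events/1');
const second = connect('/api/events/2'); // first is closed automatically
cleanup(); // second is closed on "unmount"
```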
Walkthrough

Adds multi-provider AI support (OpenAI + Gemini) for planning and rendering, introduces Playwright E2E tests and config, hardens SSE/error handling and filename sanitization, and applies NestJS dependency-injection changes across backend services.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant PlannerService
    participant OpenAI
    participant Gemini
    Note over PlannerService: generateOutline(seed)
    Client->>PlannerService: generateOutline(seed)
    PlannerService->>PlannerService: read config.provider
    alt provider == "openai"
        PlannerService->>PlannerService: generateOutlineOpenAI(seed)
        PlannerService->>OpenAI: send prompt + schema
        OpenAI->>PlannerService: response
        PlannerService->>PlannerService: extractJson() -> PlannerOutput
    else provider == "gemini"
        PlannerService->>PlannerService: generateOutlineGemini(seed)
        PlannerService->>Gemini: send prompt
        Gemini->>PlannerService: response
        PlannerService->>PlannerService: extractJson() -> PlannerOutput
    end
    PlannerService->>Client: return PlannerOutput
```
```mermaid
sequenceDiagram
    participant Client
    participant RendererService
    participant Config
    participant OpenAI
    participant Gemini
    participant Storage
    Note over RendererService: generatePage(request, seed)
    Client->>RendererService: generatePage(request, seed)
    RendererService->>Config: read renderer.provider
    alt provider == "openai"
        RendererService->>RendererService: generatePageOpenAI(...)
        RendererService->>OpenAI: image generation request
        OpenAI->>RendererService: image (b64 or url)
    else provider == "gemini"
        RendererService->>RendererService: generatePageGemini(...)
        RendererService->>Gemini: image generation request
        Gemini->>RendererService: image (b64 or url)
    end
    opt storage enabled
        RendererService->>Storage: upload image
        Storage->>RendererService: public URL
    end
    RendererService->>Client: return { imageUrl, seed }
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~65 minutes

Areas to focus review on:
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/src/renderer/renderer.service.ts (1)
115-271: Seed parameter likely ineffective for image generation; consider graceful fileData fallback

The review raises concerns about non-deterministic rendering and strict error handling, but the underlying issues differ slightly from the suggestions:

- The `seed` field is documented in Vertex AI's generationConfig schema, but it's primarily supported for text generation. All observable examples of gemini-2.5-flash-image generation show responses extracted from `inlineData` (base64), not `fileData`. Passing `seed` to the image model's generationConfig may not improve reproducibility for this particular model; verify whether gemini-2.5-flash-image-preview even honors the seed parameter for image generation before adding it.
- Generated image content consistently returns as `inlineData`, not `fileData`. However, rather than throwing an error if `fileData` ever appears, consider logging a warning and continuing to extract from it gracefully, since you may want to support that format in the future.
- The `padStart` inconsistency is valid: the general error fallback uses `padStart(4, '0')` while character- and style-image fallbacks use 2 digits. Standardize to 2 for consistency.
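For reference, the two fallback shapes side by side (a plain illustration of the inconsistency, not project code):

```typescript
// padStart(4, '0') and padStart(2, '0') produce differently shaped fallback names
const n = 7;
const fourDigit = String(n).padStart(4, '0'); // "0007"
const twoDigit = String(n).padStart(2, '0');  // "07"
```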
🧹 Nitpick comments (13)
backend/src/pages/pages.controller.ts (1)
60-62: Apply consistent type-safe error handling pattern.

Now that line 41 uses the type-safe `error instanceof Error ? error.message : 'default'` pattern, consider applying the same approach to all other catch blocks in this file for consistency. Apply this pattern to standardize error handling:

```diff
   } catch (e: any) {
-    return { error: e?.message || String(e) };
+    return { error: e instanceof Error ? e.message : 'Operation failed' };
   }
```

Customize the fallback message for each endpoint as appropriate (e.g., 'Failed to save overlays', 'Failed to regenerate page', etc.).
Also applies to: 72-74, 82-84, 92-94, 102-104, 112-114, 122-124
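A standalone sketch of the pattern the comment recommends. The `toMessage` helper is hypothetical (the PR inlines the ternary at each catch site); it just centralizes the same check.

```typescript
// Hypothetical helper centralizing the type-safe error-message pattern
function toMessage(e: unknown, fallback = 'Operation failed'): string {
  return e instanceof Error ? e.message : fallback;
}

// Non-Error throws no longer stringify to "[object Object]"
const fromError = toMessage(new Error('boom'));  // "boom"
const fromObject = toMessage({ code: 500 });     // "Operation failed"
const fromString = toMessage('oops');            // "Operation failed"
```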
DOCUMENTATION.md (1)
1-857: Consider addressing markdown linting issues for improved documentation quality.

Static analysis identified several minor markdown linting issues:
- Bare URLs that should be formatted as links (lines 370, 405-407, 468, 853)
- Fenced code blocks without language specifiers (lines 41, 104, 158, 173, 202, 787)
- Using emphasis instead of proper headings (lines 388, 395, 859)
- Redundant "PNG image" phrasing (line 265)
These are optional improvements for documentation polish.
tests/firefox-debug.spec.ts (1)
1-33: Consider improving test assertions and avoiding anti-patterns.

This test has several issues that limit its effectiveness:

- No assertions: The test only logs output but doesn't assert expected behavior. Consider adding `expect()` assertions.
- Arbitrary timeout: Line 19 uses `waitForTimeout(3000)`, which is an anti-pattern. Use proper wait conditions like `waitForSelector()` or `waitForLoadState()`.
- Hardcoded URL: Line 18 hardcodes 'http://localhost:3000'. Use `baseURL` from Playwright config or the `page` fixture.
- Manual browser launch: Lines 4-5 manually launch Firefox instead of using Playwright's test fixtures with browser configuration.
Example improvement:
```diff
-import { test, expect, firefox } from '@playwright/test';
+import { test, expect } from '@playwright/test';

-test('test with firefox', async () => {
-  const browser = await firefox.launch();
-  const page = await browser.newPage();
+test('test with firefox', async ({ page }) => {
   const errors: string[] = [];

   page.on('pageerror', error => {
     errors.push(error.message);
     console.log('PAGE ERROR:', error.message);
   });

   page.on('console', msg => {
     console.log(`CONSOLE [${msg.type()}]:`, msg.text());
   });

   try {
-    await page.goto('http://localhost:3000', { waitUntil: 'domcontentloaded', timeout: 30000 });
-    await page.waitForTimeout(3000);
+    await page.goto('/', { waitUntil: 'domcontentloaded' });
+    await page.waitForLoadState('networkidle');

     const title = await page.title();
     console.log('Page title:', title);
+    expect(title).toBeTruthy();

     const content = await page.textContent('h1');
     console.log('H1 content:', content);
+    expect(content).toBeTruthy();
+    expect(errors).toHaveLength(0);

     console.log('✅ Firefox test passed!');
   } catch (error) {
     console.log('❌ Firefox test failed:', error);
+    throw error;
-  } finally {
-    await browser.close();
   }
 });
```

Configure Firefox in `playwright.config.ts`:

```ts
projects: [
  {
    name: 'firefox',
    use: { ...devices['Desktop Firefox'] },
  },
]
```

tests/app.spec.ts (3)
4-27: Consider adding assertions and fixing event listener timing.

Issues with this test:

- Console listener attached too late: Lines 18-22 attach the console listener AFTER navigation (line 5), so errors during page load won't be captured. Move listener setup before `page.goto()`.
- No assertions: The test logs output but doesn't assert expected behavior. Add assertions for title, body content, or absence of errors.
- Screenshot directory: Line 11 assumes 'tests/screenshots/' exists. Consider using a relative path or creating the directory.

```diff
 test('should load the home page', async ({ page }) => {
+  // Set up console listener before navigation
+  page.on('console', msg => {
+    if (msg.type() === 'error') {
+      console.log('Browser console error:', msg.text());
+    }
+  });
+
   await page.goto('/');

   // Wait for the page to be fully loaded
   await page.waitForLoadState('networkidle');

   // Take a screenshot for debugging
-  await page.screenshot({ path: 'tests/screenshots/homepage.png', fullPage: true });
+  await page.screenshot({ path: 'homepage.png', fullPage: true });

   // Check if page loaded without errors
   const title = await page.title();
   console.log('Page title:', title);
+  expect(title).toContain('MangaFusion');

-  // Log any console errors
-  page.on('console', msg => {
-    if (msg.type() === 'error') {
-      console.log('Browser console error:', msg.text());
-    }
-  });

   // Check for any error messages in the page
   const bodyText = await page.textContent('body');
   console.log('Page contains:', bodyText?.substring(0, 500));
+  expect(bodyText).toBeTruthy();
 });
```
29-57: Add assertions to make error detection test meaningful.

The test collects errors but doesn't assert on them. Consider adding an assertion to fail the test if errors are found (or explicitly document that this is a monitoring test).

```diff
   // Log all errors found
   if (errors.length > 0) {
     console.log('\n=== ERRORS FOUND ===');
     errors.forEach((error, index) => {
       console.log(`${index + 1}. ${error}`);
     });
     console.log('===================\n');
   } else {
     console.log('No errors found!');
   }
+
+  // Assert no errors found (or document this as monitoring only)
+  expect(errors).toHaveLength(0);
 });
```
59-82: Add assertion for failed network requests.

Similar to the error detection test, this test logs failed requests but doesn't assert on them.

```diff
   if (failedRequests.length > 0) {
     console.log('\n=== FAILED REQUESTS ===');
     failedRequests.forEach((req, index) => {
       console.log(`${index + 1}. ${req.url} - Status: ${req.status}`);
     });
     console.log('======================\n');
   }
+
+  // Filter out expected failures (like 404s for favicons) if any
+  const unexpectedFailures = failedRequests.filter(
+    req => !req.url.includes('favicon') && req.status !== 404
+  );
+  expect(unexpectedFailures).toHaveLength(0);
 });
```

tests/debug.spec.ts (1)
1-67: Consider improving test with assertions and proper wait conditions.

Similar to the other test files, this debug test has opportunities for improvement:

- No assertions: The test collects and logs errors but doesn't fail if errors are found.
- Arbitrary timeout: Line 38 uses `waitForTimeout(2000)`. Use proper wait conditions.
- Hardcoded URL: Line 32 hardcodes the URL. Use baseURL from config or page fixtures.
- Error swallowing: The try/catch (lines 31-50) logs the error but doesn't re-throw it or assert.

```diff
 test('capture errors before crash', async ({ page }) => {
   const errors: string[] = [];
   const consoleMessages: Array<{ type: string; text: string }> = [];

   // Capture page errors
   page.on('pageerror', error => {
     const errorMsg = `PAGE ERROR: ${error.message}\n${error.stack}`;
     errors.push(errorMsg);
     console.log(errorMsg);
   });

   // Capture console messages
   page.on('console', msg => {
     const msgText = `CONSOLE [${msg.type()}]: ${msg.text()}`;
     consoleMessages.push({ type: msg.type(), text: msg.text() });
     console.log(msgText);
   });

   // Capture failed requests
   page.on('response', response => {
     if (!response.ok()) {
       const failMsg = `FAILED REQUEST: ${response.url()} - Status: ${response.status()}`;
       console.log(failMsg);
       errors.push(failMsg);
     }
   });

-  // Try to navigate without waiting for networkidle
-  try {
-    await page.goto('http://localhost:3000', {
-      waitUntil: 'domcontentloaded',
-      timeout: 30000
-    });
+  await page.goto('/', {
+    waitUntil: 'domcontentloaded',
+  });

-    // Wait a bit to see if errors appear
-    await page.waitForTimeout(2000);
+  // Wait for React to render
+  await page.waitForSelector('#__next', { state: 'attached' });

-    // Try to get page content
-    const title = await page.title();
-    console.log('Page title:', title);
+  // Try to get page content
+  const title = await page.title();
+  console.log('Page title:', title);
+  expect(title).toBeTruthy();

-    // Check if React rendered
-    const reactRoot = await page.$('#__next');
-    console.log('React root found:', reactRoot !== null);
-
-  } catch (error) {
-    console.log('Navigation error:', error);
-  }
+  // Check if React rendered
+  const reactRoot = await page.$('#__next');
+  console.log('React root found:', reactRoot !== null);
+  expect(reactRoot).not.toBeNull();

   // Print summary
   console.log('\n=== ERROR SUMMARY ===');
   console.log(`Total errors: ${errors.length}`);
   console.log(`Total console messages: ${consoleMessages.length}`);

   if (errors.length > 0) {
     console.log('\nErrors:');
     errors.forEach((err, i) => console.log(`${i + 1}. ${err}`));
   }

   if (consoleMessages.length > 0) {
     console.log('\nConsole messages:');
     consoleMessages.forEach((msg, i) => console.log(`${i + 1}. [${msg.type}] ${msg.text}`));
   }

   console.log('====================\n');
+
+  // Assert no critical errors
+  expect(errors).toHaveLength(0);
 });
```

backend/src/episodes/episodes.service.ts (1)
402-446: Narration panels use `character: undefined` while other paths use `null`

In `stubOutline`, the first narration panel is now emitted as:

```ts
dialogues.push({
  panel_number: p,
  character: undefined,
  text: `The city never sleeps...`,
  type: 'narration' as const,
});
```

While `parseDialogueLines` uses `character: null` for narration lines that don't match the `name: text` pattern. Functionally this is fine (callers likely treat falsy/non-string characters the same), but for consistency it may be cleaner to standardize on one convention (e.g., always `null` for narration) across all planner/outline sources.

If you want to align them, changing `undefined` to `null` here is enough.
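If you do standardize, a tiny normalization helper (hypothetical, for illustration; field names follow the snippet above) is one way to enforce the convention at the boundary:

```typescript
// Hypothetical helper: standardize all dialogue sources on character: null
interface PanelDialogue {
  panel_number: number;
  character?: string | null;
  text: string;
  type: 'narration' | 'speech';
}

function normalizeDialogue(d: PanelDialogue): PanelDialogue {
  // Map undefined -> null so planner, stub, and parsed sources agree
  return { ...d, character: d.character ?? null };
}

const narration = normalizeDialogue({
  panel_number: 1,
  character: undefined,
  text: 'The city never sleeps...',
  type: 'narration',
}); // narration.character is null

const speech = normalizeDialogue({
  panel_number: 2,
  character: 'Aya',
  text: 'Hey!',
  type: 'speech',
}); // speech.character stays "Aya"
```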
3-22: Planner multi-provider routing is well-structured; consider normalizing provider casing

The refactor cleanly introduces:

- Per-provider configuration (`geminiApiKey`, `openaiApiKey`, `geminiModel`, `openaiModel`) and a `provider` flag.
- Provider-based routing in `generateOutline` to `generateOutlineOpenAI` vs `generateOutlineGemini`.
- Provider-specific implementations that both:
  - Build the same schema/prompt,
  - Call their respective SDKs,
  - Run through a shared `extractJson` helper and enforce 10-page output.

That keeps the public API unchanged while adding OpenAI support and better error reporting.

One minor robustness tweak: `provider` is currently read as-is:

```ts
private readonly provider = process.env.PLANNER_PROVIDER || 'openai';
…
if (this.provider === 'openai') { … } else { … }
```

If someone sets `PLANNER_PROVIDER=OpenAI` or adds whitespace, this will silently route to the Gemini branch and then fail on missing `GEMINI_API_KEY`. You could defensively normalize:

```ts
private readonly provider = (process.env.PLANNER_PROVIDER || 'openai').toLowerCase();
```

and optionally validate against an allowed set (`openai` | `gemini`) to give clearer errors when misconfigured.

Also applies to: 24-31, 32-133, 135-221, 223-247
backend/src/renderer/renderer.service.ts (4)
1-37: DI wiring and client getters look consistent with planner service

The Injectable decoration, API key fields, and `geminiClient`/`openaiClient` getters are clean and consistent with the planner side. If you find yourself adding more services that need these clients, consider a shared helper/service to centralize client construction and API key checks, but this is fine as-is.
40-48: Provider routing silently treats any non-`openai` value as Gemini

`generatePage` falls back to `generatePageGemini` for any `config.provider` other than the string `'openai'`. If `RENDERER_PROVIDER` is mis-typed (e.g. `'opena1'`), requests will quietly hit Gemini instead of failing fast.

Consider making the routing explicit and throwing on unknown providers, e.g.:

```diff
-    if (this.config.provider === 'openai') {
-      return this.generatePageOpenAI(request, seed);
-    } else {
-      return this.generatePageGemini(request, seed);
-    }
+    if (this.config.provider === 'openai') {
+      return this.generatePageOpenAI(request, seed);
+    }
+    if (this.config.provider === 'gemini') {
+      return this.generatePageGemini(request, seed);
+    }
+    throw new Error(`Unsupported renderer provider: ${this.config.provider}`);
```
343-349: Character routing mirrors page routing

The `generateCharacter` router cleanly mirrors the page routing logic, so the provider choice is consistent across both APIs. Once you decide how strict you want to be about unknown `provider` values in `generatePage`, consider applying the same pattern here for symmetry.
399-413: Gemini character flow: consider data URL trade-offs for storage-disabled behavior

The Gemini character flow is well-structured and consistent with the page path. Two considerations:

- When `this.storage.enabled` is `false`, you return a placeholder instead of the actual generated image (unlike the OpenAI character path, which returns the OpenAI URL). Data URLs are acceptable for small, one-off images but increase request size, prevent caching, and don't scale well. For local/dev use, a data URL works; for production, the recommended approach for ephemeral images is uploading to Cloud Storage and returning a time-limited signed URL instead.
- The inlineData parsing logic is nearly identical to the page path; if you introduce support for `fileData` or other formats later, consider extracting a shared helper to keep the two in sync.

If you choose to use data URLs for dev scenarios, the suggested change above remains valid; just be aware of the trade-offs when persistent storage is unavailable in production.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `backend/package-lock.json` is excluded by `!**/package-lock.json`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (22)
- `.gitignore` (1 hunks)
- `DOCUMENTATION.md` (1 hunks)
- `backend/.env.example` (1 hunks)
- `backend/package.json` (1 hunks)
- `backend/src/episodes/episodes.service.ts` (7 hunks)
- `backend/src/events/events.service.ts` (1 hunks)
- `backend/src/pages/pages.controller.ts` (1 hunks)
- `backend/src/planner/planner.service.ts` (2 hunks)
- `backend/src/prisma/prisma.service.ts` (1 hunks)
- `backend/src/queue/queue.service.ts` (1 hunks)
- `backend/src/renderer/config.ts` (1 hunks)
- `backend/src/renderer/renderer.service.ts` (4 hunks)
- `backend/src/storage/storage.service.ts` (1 hunks)
- `backend/src/tts/tts.service.ts` (5 hunks)
- `next.config.js` (1 hunks)
- `package.json` (1 hunks)
- `pages/episodes/[id].tsx` (3 hunks)
- `pages/index.tsx` (7 hunks)
- `playwright.config.ts` (1 hunks)
- `tests/app.spec.ts` (1 hunks)
- `tests/debug.spec.ts` (1 hunks)
- `tests/firefox-debug.spec.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
- backend/src/storage/storage.service.ts (2)
  - backend/src/renderer/renderer.service.ts (1)
    - `Injectable` (21-445)
  - backend/src/tts/tts.service.ts (1)
    - `Injectable` (10-193)
- backend/src/planner/planner.service.ts (3)
  - backend/src/episodes/episodes.service.ts (1)
    - `Injectable` (10-632)
  - backend/src/renderer/renderer.service.ts (1)
    - `Injectable` (21-445)
  - backend/src/episodes/types.ts (2)
    - `EpisodeSeed` (48-56)
    - `PlannerOutput` (42-46)
- backend/src/renderer/renderer.service.ts (1)
  - backend/src/renderer/config.ts (1)
    - `getRendererConfig` (1-10)
🪛 dotenv-linter (4.0.0)
backend/.env.example
[warning] 18-18: [UnorderedKey] The OPENAI_IMAGE_MODEL key should go before the OPENAI_PLANNER_MODEL key
(UnorderedKey)
🪛 LanguageTool
DOCUMENTATION.md
[style] ~265-~265: This phrase is redundant (‘G’ stands for ‘graphic’). Use simply “PNG”.
Context: ... specifications - Output: 1024x1792 PNG image - Quality: HD for pages, Standard f...
(ACRONYM_TAUTOLOGY)
🪛 markdownlint-cli2 (0.18.1)
DOCUMENTATION.md
41-41: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
104-104: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
158-158: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
173-173: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
202-202: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
370-370: Bare URL used
(MD034, no-bare-urls)
388-388: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
395-395: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
405-405: Bare URL used
(MD034, no-bare-urls)
406-406: Bare URL used
(MD034, no-bare-urls)
407-407: Bare URL used
(MD034, no-bare-urls)
468-468: Bare URL used
(MD034, no-bare-urls)
787-787: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
853-853: Bare URL used
(MD034, no-bare-urls)
859-859: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🔇 Additional comments (21)
backend/src/pages/pages.controller.ts (1)
39-42: LGTM: Type-safe error handling applied.

The `instanceof Error` check before accessing `error.message` is a best practice that ensures type safety and provides a clear fallback message.
4-4: LGTM! Proper NestJS DI integration.

The `@Injectable()` decorator correctly enables dependency injection for QueueService, aligning with NestJS best practices and the DI pattern applied across backend services in this PR.
4-4: LGTM! Enables DI for PrismaService.

The `@Injectable()` decorator is required since PrismaService is injected into EpisodesService and other services. This change properly registers the service with NestJS's DI container.
4-4: LGTM! Required for dependency injection.

The `@Injectable()` decorator is necessary since StorageService is injected into RendererService and TTSService constructors. This properly registers the service with NestJS's DI system.
13-13: LGTM! Enables DI for EventsService.

The `@Injectable()` decorator is required since EventsService is injected into EpisodesService for emitting episode and page lifecycle events. This change enables proper DI resolution.
10-10: LGTM! Adds DI support.

The `@Injectable()` decorator enables TTSService to be properly registered with NestJS's dependency injection container, aligning with the DI pattern applied across backend services in this PR.
64-64: Excellent type-safe error handling!

The error handling improvements correctly use `error instanceof Error ? error.message : String(error)` to safely format error messages. This prevents "[object Object]" messages and gracefully handles non-Error throws.

Also applies to: 137-137, 161-161, 190-190
next.config.js (1)
3-3: Verify the necessity of disabling React Strict Mode.

The codebase shows proper EventSource cleanup patterns. EventSource connections in `pages/index.tsx` and `pages/episodes/[id].tsx` are correctly closed in useEffect cleanup functions. The useEffect patterns appear sound and should work correctly with Strict Mode enabled.

Disabling Strict Mode is generally not recommended unless there's a specific, documented reason. Consider:

- Re-enabling `reactStrictMode: true` and testing locally to confirm there are no actual issues
- If issues arise, fix the specific problems rather than disabling the check
- Document the reason if disabling is truly necessary
If this was disabled due to observed issues during testing, please share those details so the underlying problems can be addressed properly.
.gitignore (1)
6-8: LGTM!

Appropriate ignore patterns for Next.js build output and Playwright test artifacts.
package.json (2)
29-29: LGTM! Playwright test infrastructure added.

The addition of Playwright dependencies supports the new E2E testing infrastructure introduced in this PR.
Also applies to: 35-35
23-24: React downgrade from 19.1.1 to 18.3.1 is safe and intentional.

Verification confirms:
- Intentional downgrade: Git history shows commit "Fix: Downgrade React from 19.1.1 to 18.3.1 for stability"
- No React 19 features in use: Codebase contains no React 19–specific hooks (useOptimistic, useFormStatus, useActionState), React.use() API, or 'use' directives
- Full compatibility: Project uses Pages Router only (no App Router configuration), which is fully backward compatible with React 18 in Next.js 15; @types/react is correctly aligned to ^18.2.66
All concerns raised have been verified as non-issues. The downgrade is safe to proceed.
pages/episodes/[id].tsx (3)
42-42: Excellent EventSource lifecycle management!

The addition of `eventSourceRef` and proper cleanup before creating a new EventSource prevents memory leaks and multiple concurrent connections. This matches the best practice pattern for managing Server-Sent Events.

Also applies to: 101-107
129-135: LGTM! Proper error handling to prevent reconnection attempts.

Closing the EventSource on error prevents automatic reconnection attempts, which is the correct behavior for this use case.
137-142: LGTM! Proper cleanup on unmount.

The cleanup function ensures the EventSource is closed and the ref is cleared when the component unmounts or the effect re-runs, preventing resource leaks.
backend/package.json (1)
23-23: OpenAI package version is current and secure; no action needed.

The latest stable version of the openai package is 6.9.0, which matches the version specified in package.json. No known security vulnerabilities were found for this package.
playwright.config.ts (1)
1-39: Playwright setup looks solid

Config values (testDir, baseURL, CI-specific retries/workers, webServer, Chromium project) are coherent and should work well for local + CI E2E runs.
pages/index.tsx (1)
375-402: Feature badges and copy updates

The updated badges/descriptions (GPT-5 Mini / GPT-Image-1) are consistent and purely presentational; no behavioral impact from these changes.
backend/.env.example (1)
4-27: Provider and model env docs align with backend config

The added PLANNER_PROVIDER/RENDERER_PROVIDER and model envs (OPENAI_PLANNER_MODEL, OPENAI_IMAGE_MODEL, PLANNER_MODEL, RENDERER_IMAGE_MODEL) match how PlannerService and `getRendererConfig()` read configuration, so this example will help users wire providers correctly.
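For reference, a minimal `.env` sketch wiring both providers (placeholder values; the variable names are the ones documented in the example file):

```
PLANNER_PROVIDER=openai
RENDERER_PROVIDER=gemini
OPENAI_API_KEY=your-openai-key
OPENAI_PLANNER_MODEL=gpt-5-mini-2025-08-07
OPENAI_IMAGE_MODEL=gpt-image-1
GEMINI_API_KEY=your-gemini-key
RENDERER_IMAGE_MODEL=gemini-2.5-flash-image-preview
```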
backend/src/episodes/episodes.service.ts (2)
10-23: EpisodesService DI + rendererModel propagation look consistent
- Adding `@Injectable()` (Line 10) correctly registers `EpisodesService` for NestJS DI given the injected services in the constructor.
- `planEpisode` now derives `rendererModel` from `getRendererConfig()` and persists it both in Prisma (Line 52) and the in-memory Episode (Lines 77-78, 102).
- `getEpisode` mirrors this via `rendererModel: e.rendererModel ?? undefined` (Line 310).

This keeps the episode's renderer model visible to callers without changing how `RendererService` itself selects a provider, and doesn't introduce behavioral regressions.

Also applies to: 41-53, 64-80, 95-103, 310-319
295-307: Overlays normalization for DB pages is reasonable

The new mapping for `overlays`:

```ts
overlays: Array.isArray((p as any).overlays)
  ? (p as any).overlays
  : (p as any).overlays?.items ?? undefined
```

handles both legacy "array-only" and newer `{ items, dialogueOverride }` shapes cleanly, so consumers always see either an array or `undefined`. This is a pragmatic compatibility shim given the evolving overlays schema.
1-10: Renderer config matches env example and renderer usageThe struct returned by
getRendererConfig()(provider + separate OpenAI/Gemini model names) lines up with.env.exampleand the new provider-routing inRendererService. Defaults (openai,gpt-image-1,gemini-2.5-flash-image-preview) are sensible.
- **Model**: `gpt-5-mini-2025-08-07`
- **Purpose**: Generate 10-page manga outlines
🧩 Analysis chain
Verify the AI model identifiers are correct and available.
The documentation references several AI models:
- `gpt-5-mini-2025-08-07` (line 254)
- `gpt-image-1` (line 262)
- `gemini-2.5-flash` (line 285)
- `gemini-2.5-flash-image-preview` (line 286)
Since my knowledge is from March 2025 and this PR is from November 2025, please verify these model names are correct and currently available from the respective providers.
Also applies to: 262-263, 285-286
🌐 Web query:
What are the current OpenAI GPT models available for text generation in November 2025?
💡 Result:
Short answer — as of November 15, 2025, OpenAI’s main GPT-family text-generation models available via the API (and in ChatGPT/Products) include:
- GPT-5 series: gpt-5 (and smaller variants gpt-5‑mini, gpt-5‑nano) and the new GPT‑5.1 family (gpt-5.1, gpt-5.1-chat-latest, gpt-5.1-codex, etc.). [1][2]
- GPT-4.1 family: gpt-4.1 and its smaller variants (gpt-4.1-mini, gpt-4.1-nano). [2][5]
- GPT-4o family: gpt-4o (and snapshots/variants such as gpt-4o‑mini, audio/realtime/search preview variants used in products). [5][6]
- GPT-4 / GPT-4-turbo (legacy / API snapshots still available in the API). [6]
- GPT-3.5 family: gpt-3.5-turbo (and 16k / other snapshot variants). [6]
- “o” series (OpenAI O models): o3 (and o3-pro), o4‑mini / o4‑mini variants, o1 / o1‑mini — lower-latency / cost-optimized options used for many text tasks. [5][7]
- Open-weight releases for self‑hosting: gpt-oss-120b and gpt-oss-20b (text-only open‑weight models). [4]
Sources: OpenAI product announcements and pricing/release notes (GPT‑5/GPT‑5.1, model/pricing pages, and ChatGPT release notes). [1][2][4][5]
If you want, I can produce the exact API model IDs and example API calls for any specific model you plan to use.
🌐 Web query:
What are the current OpenAI image generation models available in November 2025?
💡 Result:
As of November 15, 2025, OpenAI’s publicly available image-generation models include:
- DALL·E 3 — current flagship DALL·E model (API + ChatGPT). [1][3]
- DALL·E 2 — older DALL·E generation still available for backward compatibility. [1][4]
- GPT Image / GPT-4o (omnimodal) image generation — image generation integrated into ChatGPT and offered via the GPT Image API (autoregressive, multimodal image capability). [2][5]
Sources: OpenAI DALL·E 3 API and docs; OpenAI release notes about image generation in GPT-4o; reporting on ChatGPT image/GPT Image rollout. [1][3][2][5]
🌐 Web query:
What are the current Google Gemini models available in November 2025?
💡 Result:
As of November 15, 2025, the Gemini lineup in active use (cloud/consumer/Vertex AI) includes multiple families and variants. Major available models are:
- Gemini 2.5 family: Gemini 2.5 Pro (most capable), Gemini 2.5 Flash, Gemini 2.5 Flash‑Lite (and image/video variants). [1][2]
- Gemini 2.0 family: Gemini 2.0 Flash (default/fast), Gemini 2.0 Flash‑Lite, and Gemini 2.0 Pro / experimental “thinking” variants (still offered in some surfaces). [1][3]
- Earlier 1.5 family still present in some places: Gemini 1.5 Pro and Gemini 1.5 Flash/Nano (on‑device/lightweight) though superseded by 2.x for most new features. [3][4]
Sources: news coverage and model listings from Google/press summaries. [1][2][3][4]
References
- The Verge — coverage of Gemini 2.5 / 2.0 updates.
- TechRadar — Oracle/Google Cloud Gemini offering (lists 2.5 availability).
- Tom’s Guide — Gemini model-family overview (1.0 → 2.5).
- Wikipedia — Gemini (language model) version table and status.
🌐 Web query:
Is gpt-image-1 a valid OpenAI model identifier in November 2025?
💡 Result:
Yes — as of November 15, 2025, gpt-image-1 is a valid OpenAI model identifier (OpenAI's image-generation model). [1][2]
Sources:
[1] OpenAI — Introducing our latest image generation model in the API (Apr 23, 2025).
[2] TechCrunch — "OpenAI makes its upgraded image generator available to developers" (Apr 23, 2025).
🌐 Web query:
What is the exact model identifier for Gemini 2.5 Flash image generation?
💡 Result:
The model identifier is gemini-2.5-flash-image (preview alias: gemini-2.5-flash-image-preview). [1][2]
Sources:
[1] Google AI (Gemini models) — model code: gemini-2.5-flash-image.
[2] Firebase / Vertex AI docs — lists gemini-2.5-flash-image and gemini-2.5-flash-image-preview.
🌐 Web query:
Do OpenAI GPT models use date suffixes like 2025-08-07 in their API model identifiers?
💡 Result:
Short answer: No — OpenAI’s official API model identifiers do not use date suffixes. Official docs and pages list models with names like gpt-4o, gpt-5, gpt-5-mini, gpt-4o-mini, etc., not with YYYY‑MM‑DD suffixes. [1][2]
Why you might see date-suffixed names: some third-party directories or gateway services (not OpenAI) sometimes append dates to a model alias (e.g., "gpt-5-mini-2025-08-07") for their own indexing or release-date tagging — those are provider-specific labels, not the official OpenAI model IDs. [3][4]
Sources:
- OpenAI GPT-5 System Card / model naming examples. [1]
- OpenAI API/pricing and model pages showing model IDs (gpt-5, gpt-5-mini, gpt-4o, etc.). [2]
- Third-party listings that append dates to model aliases (example: Glama). [3]
- Community model index showing date-tagged entries (models.dev). [4]
Remove the date suffix from the gpt-5-mini model identifier.
The model identifier on line 254 should be gpt-5-mini, not gpt-5-mini-2025-08-07. OpenAI's official API model identifiers do not use date suffixes; date-suffixed names are provider-specific labels used by third-party services, not official OpenAI model IDs.
The other models verified correctly: gpt-image-1 is a valid OpenAI image-generation model as of November 2025, and gemini-2.5-flash-image-preview is a valid preview alias for Google's Gemini 2.5 Flash image model.
🤖 Prompt for AI Agents
In DOCUMENTATION.md around lines 254 to 255 the model identifier is incorrectly
listed as "gpt-5-mini-2025-08-07"; replace that string with the official OpenAI
model id "gpt-5-mini" (remove the date suffix) so the document uses the correct
vendor-neutral model identifier.
Critical Fixes:
- Fix OpenAI image generation to handle b64_json response format
  - Update generatePageOpenAI to accept both b64_json and url formats
  - Update generateCharacterOpenAI to handle both response formats
  - Add response_format: 'b64_json' parameter for consistent behavior
  - Fix log message to use actual model name instead of hardcoded "DALL-E 3"
- Sanitize assetFilename to prevent path traversal attacks
  - Add sanitization in generateCharacterOpenAI (renderer.service.ts:405)
  - Add sanitization in generateCharacterGemini (renderer.service.ts:457)
  - Prevent malicious filenames like "../../../etc/passwd"
- Fix SSE fallback timeout to prevent double execution and memory leaks
  - Add planningTimeoutRef to track timeout in pages/index.tsx
  - Clear timeout when planning completes successfully
  - Clear timeout on component unmount to prevent post-unmount state updates
  - Prevents continueAfterPlanning from running twice for same episode
- Update OpenAI model identifier to correct API format
  - Change from 'gpt-5-mini-2025-08-07' to 'gpt-5-mini'
  - OpenAI API doesn't support date-suffixed model names
  - Updated in backend/.env.example and planner.service.ts

All changes verified - backend builds successfully with zero TypeScript errors.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pages/index.tsx (1)
30-50: Add cleanup to prevent post-unmount state updates.

The fetch lacks an abort mechanism, so if the component unmounts before the fetch completes, `setHeroTitle` and `setHeroSubtitle` will trigger React warnings.

Apply this diff to add a mounted flag:

```diff
 useEffect(() => {
+  let mounted = true;
   // Load content from words.md
   fetch('/words.md')
     .then(response => response.text())
     .then(content => {
+      if (!mounted) return;
       const lines = content.split('\n').filter(line => line.trim());
       if (lines.length > 0) {
         // Extract title from first line (remove # and trim)
         const title = lines[0].replace(/^#+\s*/, '').trim();
         setHeroTitle(title);
         // Extract subtitle from second line if it exists
         if (lines.length > 1) {
           setHeroSubtitle(lines[1].trim());
         }
       }
     })
     .catch(error => {
       console.log('Could not load words.md, using default text');
     });
+  return () => { mounted = false; };
 }, []);
```

backend/src/renderer/renderer.service.ts (1)
275-275: Inconsistent zero-padding for page numbers.

Line 275 uses `padStart(4, '0')` while the rest of the file (lines 96, 116, 245, 268) uses `padStart(2, '0')`. This inconsistency could cause confusion when debugging or searching logs.

Apply this diff:

```diff
-    const padded = String(request.pageNumber).padStart(4, '0');
+    const padded = String(request.pageNumber).padStart(2, '0');
```
🧹 Nitpick comments (6)
pages/index.tsx (1)
96-103: Set ref to null after closing for consistency.

For consistency with the other cleanup sites (lines 118, 135, 143), set `eventSourceRef.current = null` after closing it.

Apply this diff:

```diff
 // Close any existing EventSource and timeout before creating new ones
 if (eventSourceRef.current) {
   eventSourceRef.current.close();
+  eventSourceRef.current = null;
 }
 if (planningTimeoutRef.current) {
   clearTimeout(planningTimeoutRef.current);
   planningTimeoutRef.current = null;
 }
```

backend/src/planner/planner.service.ts (3)
14-22: Consider caching client instances to avoid repeated instantiation.

Both `geminiClient` and `openaiClient` getters create new client instances on every access. If these getters are called frequently (e.g., multiple outline generations in quick succession), this could add unnecessary overhead.

Apply this refactor to cache client instances:

```diff
+  private _geminiClient?: GoogleGenerativeAI;
+  private _openaiClient?: OpenAI;
+
   private get geminiClient() {
     if (!this.geminiApiKey) throw new Error('GEMINI_API_KEY not set');
-    return new GoogleGenerativeAI(this.geminiApiKey);
+    if (!this._geminiClient) {
+      this._geminiClient = new GoogleGenerativeAI(this.geminiApiKey);
+    }
+    return this._geminiClient;
   }

   private get openaiClient() {
     if (!this.openaiApiKey) throw new Error('OPENAI_API_KEY not set');
-    return new OpenAI({ apiKey: this.openaiApiKey });
+    if (!this._openaiClient) {
+      this._openaiClient = new OpenAI({ apiKey: this.openaiApiKey });
+    }
+    return this._openaiClient;
   }
```
32-33: Redundant API key validation.

Line 33 validates `OPENAI_API_KEY` explicitly, but the `openaiClient` getter (line 110) already throws the same error if the key is missing. This validation is redundant and can be removed.

Apply this diff:

```diff
 private async generateOutlineOpenAI(seed: EpisodeSeed): Promise<PlannerOutput> {
-  if (!this.openaiApiKey) throw new Error('Planner unavailable: OPENAI_API_KEY not set');
-
   const system = [
```
135-137: Redundant API key validation.

Similar to the OpenAI path, line 136 validates `GEMINI_API_KEY` explicitly, but the `geminiClient` getter (line 211) already performs this check. This validation is redundant.

Apply this diff:

```diff
 private async generateOutlineGemini(seed: EpisodeSeed): Promise<PlannerOutput> {
-  if (!this.geminiApiKey) throw new Error('Planner unavailable: GEMINI_API_KEY not set');
-
   const system = [
```

backend/src/renderer/renderer.service.ts (2)
99-106: Consider returning data URL when storage is disabled and b64_json is available.

When storage is disabled (lines 103-106), the code returns a placeholder URL even though `imageBuffer` contains the actual generated image from `b64_json`. This discards the generated image unnecessarily.

Apply this diff to preserve the generated image as a data URL when storage is unavailable:

```diff
 if (this.storage.enabled) {
   finalImageUrl = await this.storage.uploadImage(imageBuffer, filename, 'image/png');
   console.log(`Image uploaded to storage: ${finalImageUrl}`);
 } else {
-  console.warn('Storage not configured, using fallback placeholder');
+  console.warn('Storage not configured, returning data URL');
   const shortBeat = encodeURIComponent(request.outline.beat.slice(0, 40));
-  finalImageUrl = `https://placehold.co/1024x1536/FFA500/000000?text=STORAGE+DISABLED%0APAGE+${padded}%0A${shortBeat}`;
+  const base64Image = imageBuffer.toString('base64');
+  finalImageUrl = `data:image/png;base64,${base64Image}`;
 }
```
343-344: Image generation models typically struggle with rendering readable text.

Lines 343-344 instruct the model to "INCLUDE speech bubbles with readable text" and specify formatting. However, most current image generation models (including DALL-E 3, gpt-image-1, and Gemini image models) have difficulty rendering accurate, readable text within images. This requirement may result in gibberish or poorly formed text in speech bubbles.
Consider either:
- Documenting this as a known limitation
- Implementing text overlay in post-processing rather than relying on AI-generated text
- Adjusting expectations in the prompt to accept stylized/approximated text
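One possible shape for the post-processing option: render the dialogue as an SVG overlay and composite it onto the generated page with an image library such as `sharp`, instead of asking the image model to draw readable text. This is a hypothetical sketch, not code from the PR; the `Bubble` type and its geometry fields are assumptions, and real dialogue would need XML escaping.

```typescript
// Hypothetical sketch: build an SVG speech-bubble overlay so dialogue can be
// composited onto the generated page in post-processing, rather than relying on
// the image model's typography. Bubble geometry is an assumption.
interface Bubble {
  text: string;
  x: number; // top-left corner, in page pixels
  y: number;
  width: number;
  height: number;
}

function buildDialogueOverlay(pageWidth: number, pageHeight: number, bubbles: Bubble[]): string {
  const shapes = bubbles
    .map(
      (b) =>
        `<rect x="${b.x}" y="${b.y}" width="${b.width}" height="${b.height}" rx="12" fill="white" stroke="black"/>` +
        `<text x="${b.x + b.width / 2}" y="${b.y + b.height / 2}" text-anchor="middle" dominant-baseline="middle" font-family="sans-serif" font-size="16">${b.text}</text>`,
    )
    .join('');
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${pageWidth}" height="${pageHeight}">${shapes}</svg>`;
}
```

Compositing the returned SVG over the PNG buffer (e.g. with `sharp(imageBuffer).composite([{ input: Buffer.from(svg) }])`) would keep text crisp regardless of the model's output.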
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/.env.example (1 hunks)
- backend/src/planner/planner.service.ts (2 hunks)
- backend/src/renderer/renderer.service.ts (5 hunks)
- pages/index.tsx (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
backend/src/planner/planner.service.ts (1)
backend/src/episodes/types.ts (2)
- `EpisodeSeed` (48-56)
- `PlannerOutput` (42-46)
backend/src/renderer/renderer.service.ts (4)
backend/src/planner/planner.service.ts (1)
- `Injectable` (6-247)

backend/src/storage/storage.service.ts (1)
- `Injectable` (4-201)

backend/src/episodes/episodes.service.ts (1)
- `Injectable` (10-632)

backend/src/renderer/config.ts (1)
- `getRendererConfig` (1-10)
🪛 dotenv-linter (4.0.0)
backend/.env.example
[warning] 18-18: [UnorderedKey] The OPENAI_IMAGE_MODEL key should go before the OPENAI_PLANNER_MODEL key
(UnorderedKey)
🔇 Additional comments (6)
pages/index.tsx (2)
2-2: Excellent EventSource and timeout cleanup implementation!

The ref-based tracking and cleanup properly addresses the concerns from the previous review about duplicate `continueAfterPlanning` calls and post-unmount state updates:

- `clearTimeout(planningTimeoutRef.current)` on `planning_complete` ensures the fallback doesn't run if SSE succeeds
- Closing the EventSource before the timeout fires prevents late-arriving `planning_complete` events
- The cleanup effect guarantees both resources are released on unmount
The flow is now safe: either SSE completes and clears the timeout, or the timeout fires and closes SSE, but never both.
Also applies to: 23-24, 52-64, 96-103, 107-107, 116-123, 132-137, 139-147
403-403: LGTM! UI updates reflect the new OpenAI provider integration.

The badge and description changes accurately reflect the dual-provider architecture introduced in this PR, highlighting GPT-5 Mini for planning and GPT-Image-1 for image generation.
Also applies to: 406-406, 415-415
backend/src/planner/planner.service.ts (1)
223-246: LGTM! Robust JSON extraction with appropriate fallbacks.

The multi-stage parsing approach (direct parse → bracket extraction → code fence) handles common AI response patterns effectively. The silent catch blocks are appropriate here since each failure triggers the next fallback strategy.
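A minimal sketch of that fallback chain (illustrative only, not the project's actual `planner.service.ts` code):

```typescript
// Illustrative multi-stage JSON extraction: direct parse, then the outermost
// brace span, then a fenced code block, returning null when all strategies fail.
function extractJson(raw: string): unknown {
  // 1. Direct parse of the whole response
  try {
    return JSON.parse(raw);
  } catch {}

  // 2. Outermost { ... } span (handles prose before/after the JSON)
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start !== -1 && end > start) {
    try {
      return JSON.parse(raw.slice(start, end + 1));
    } catch {}
  }

  // 3. Fenced code block; the pattern is built dynamically to avoid literal
  // backtick runs inside this example.
  const fence = new RegExp('`{3}(?:json)?\\s*([\\s\\S]*?)`{3}').exec(raw);
  if (fence) {
    try {
      return JSON.parse(fence[1]);
    } catch {}
  }
  return null;
}
```

Each stage only runs when the previous one throws, which is why the silent catches are acceptable here.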
backend/src/renderer/renderer.service.ts (3)
405-407: LGTM! Filename sanitization prevents path traversal.

The sanitization of `assetFilename` using `replace(/[^a-zA-Z0-9._-]/g, '_')` properly prevents path traversal attacks and ensures safe storage paths. This addresses the concern from the previous review.
457-459: LGTM! Consistent filename sanitization.

The Gemini character generation path also properly sanitizes `assetFilename`, maintaining consistency with the OpenAI path and preventing path traversal vulnerabilities.
67-67: OpenAI DALL-E 3 limitation, not an implementation error.

DALL-E 3 only supports three sizes: 1024x1024, 1792x1024, or 1024x1792. The intended 1024x1536 (2:3 ratio) is unsupported. The code correctly uses 1024x1792 as the closest available alternative for the vertical manga format.
The misalignment between the API size (1024x1792, a roughly 4:7 portrait ratio) and the prompts/specs (which reference 1024x1536, which is 2:3) creates a documentation/expectation gap: the generated image dimensions won't match what the prompts request. Consider updating the prompts to align expectations with what DALL-E 3 can actually produce, or clarifying that 1024x1792 was selected as the closest practical alternative.
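One way to make that size selection explicit rather than implicit is a small helper that snaps a requested page size to the nearest supported DALL-E 3 size by aspect ratio. This is a hypothetical sketch, not code from the PR; the three supported sizes are the ones quoted in the comment above.

```typescript
// Hypothetical helper: pick the supported DALL-E 3 size whose aspect ratio is
// closest to the requested page dimensions (e.g. the 1024x1536 manga spec).
const DALLE3_SIZES: { w: number; h: number }[] = [
  { w: 1024, h: 1024 },
  { w: 1792, h: 1024 },
  { w: 1024, h: 1792 },
];

function closestDalle3Size(targetW: number, targetH: number): string {
  const targetRatio = targetW / targetH;
  let best = DALLE3_SIZES[0];
  let bestDiff = Infinity;
  for (const size of DALLE3_SIZES) {
    const diff = Math.abs(size.w / size.h - targetRatio);
    if (diff < bestDiff) {
      bestDiff = diff;
      best = size;
    }
  }
  return `${best.w}x${best.h}`;
}
```

For the 2:3 manga spec this resolves to 1024x1792, matching the choice the review describes.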
- Increase prompt limit from 4,000 to 32,000 characters for gpt-image-1
- gpt-image-1 supports up to 32k chars vs DALL-E 3's 4k limit
- Prevents unnecessary truncation of detailed manga generation prompts
- Applied to both generatePageOpenAI and generateCharacterOpenAI methods
- Updated comments to clarify model-specific limits
Actionable comments posted: 0
♻️ Duplicate comments (1)
backend/src/renderer/renderer.service.ts (1)
50-123: OpenAI page generation path looks solid and fixes the earlier b64/url issue

This implementation now correctly requests `b64_json`, handles both `b64_json` and `url`, uploads the resulting `Buffer` to storage, and has clear error placeholders, so the previous problem where `gpt-image-1` produced no usable URL is resolved. As a follow-up, you might want a small helper that derives the `prompt.slice(...)` limit from `this.config.openaiModel` so switching to a model with a smaller window does not unexpectedly start failing.
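A sketch of the suggested helper, with character limits taken from the PR's own commit message (about 32k for gpt-image-1, 4k for DALL-E 3); `promptLimitFor` and `truncatePrompt` are illustrative names, not existing project code.

```typescript
// Hypothetical helper: derive the prompt character limit from the configured
// model instead of hardcoding one number, so a model switch cannot silently
// exceed the smaller window. Limits come from the PR's commit message.
function promptLimitFor(model: string): number {
  if (model.startsWith('gpt-image-1')) return 32000;
  if (model.startsWith('dall-e')) return 4000;
  return 4000; // conservative default for unrecognized models
}

function truncatePrompt(prompt: string, model: string): string {
  return prompt.slice(0, promptLimitFor(model));
}
```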
🧹 Nitpick comments (4)
backend/src/renderer/renderer.service.ts (4)
3-37: Client initialization could be memoized

`geminiClient` and `openaiClient` currently construct new SDK clients on every access. This is acceptable, but you might want to memoize them in private fields (lazy-init on first use) so you can reuse connections and centralize options like timeouts or custom transports.
39-48: Prefer nullish coalescing for seed handling

If `0` is ever treated as a valid seed, `const seed = request.seed || ...` will override it with a random value. Using `const seed = request.seed ?? Math.floor(Math.random() * 1_000_000);` would avoid that surprise.
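A tiny illustration of the difference, using a hypothetical `pickSeed` helper rather than the project's actual code:

```typescript
// `||` falls back on any falsy value, so a deliberate seed of 0 is lost;
// `??` falls back only on null/undefined, so 0 survives.
function pickSeed(requested: number | undefined, fallback: number): { withOr: number; withNullish: number } {
  return {
    withOr: requested || fallback, // 0 is falsy, so the fallback wins
    withNullish: requested ?? fallback, // 0 is preserved
  };
}
```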
361-422: Consider logging OpenAI character generation failures and revisiting storage-disabled behavior

This path correctly supports both `b64_json` and `url` responses and sanitizes `assetFilename`, but the `catch` block silently returns a placeholder without logging the underlying error, which will make production debugging harder; adding a `console.error` or Nest logger call there would help. Also, when storage is disabled you currently discard a successfully generated image and return a placeholder; if you ever want character previews to work without Supabase, you might later prefer returning a data URL derived from the `Buffer` instead.
405-408: Filename sanitization is a good hardening; guard against empty results

The new `assetFilename` sanitization for both OpenAI and Gemini character flows nicely closes off path traversal and odd keys. One small edge case: if the original filename is empty or composed only of stripped characters, `sanitizedFilename` becomes an empty string and you end up uploading to the `characters/` prefix itself; consider falling back to a generated default (e.g., based on `request.name` plus an extension) when `sanitizedFilename.length === 0`.

Also applies to: 457-459
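A sketch of the sanitization pattern plus the suggested empty-result guard. The regex matches the one quoted in the review; the static `'asset.png'` default is an assumption standing in for a name derived from `request.name`.

```typescript
// Strip anything outside a safe character class so inputs like
// "../../../etc/passwd" cannot escape the characters/ storage prefix,
// and fall back to a default when sanitization strips everything.
function sanitizeAssetFilename(name: string, fallback = 'asset.png'): string {
  const sanitized = name.replace(/[^a-zA-Z0-9._-]/g, '_');
  // An all-stripped or empty input would otherwise target the bare prefix.
  return sanitized.length > 0 ? sanitized : fallback;
}
```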
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/src/renderer/renderer.service.ts (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/src/renderer/renderer.service.ts (4)
backend/src/planner/planner.service.ts (1)
- `Injectable` (6-247)

backend/src/episodes/episodes.service.ts (1)
- `Injectable` (10-632)

backend/src/storage/storage.service.ts (1)
- `Injectable` (4-201)

backend/src/renderer/config.ts (1)
- `getRendererConfig` (1-10)
🔇 Additional comments (3)
backend/src/renderer/renderer.service.ts (3)
125-145: Gemini provider wiring is consistent with renderer config

Using `this.config.geminiModel` in both the log message and `getGenerativeModel` keeps runtime behavior aligned with configuration, and the explicit `GEMINI_API_KEY` guard matches the OpenAI path. This looks correct.
353-359: Character provider routing mirrors page routing as expected

The provider switch in `generateCharacter` mirrors `generatePage` and keeps the public API unchanged while enabling OpenAI-based character rendering, which aligns with how `EpisodesService` calls this service.
424-472: Gemini character generation path matches the OpenAI flow and storage contract

The Gemini character path reuses the same prompt shape, extracts inline image data correctly, applies the same sanitized filename convention, and respects `this.storage.enabled` with reasonable placeholders. This keeps behavior consistent across providers.
Implemented 4 critical improvements from code review:

1. Singleton Redis client (#2): Refactored QueueEventsBridgeService to use a singleton Redis publisher pattern, preventing connection overhead from creating new connections for every worker event emission.
2. Defensive null checks (#3): Added null checks in worker after Supabase storage upload and getPublicUrl calls to prevent runtime errors when storage operations return no data.
3. Character job error handling (#4): Enhanced character job processing to emit character_done and character_failed events for real-time updates, matching the consistency of page job event handling.
4. Parallel export downloads (#6): Refactored PDF export to download all page images in parallel using Promise.all(), improving performance from ~30s to ~5s for 10-page episodes (5-10x speedup).

All changes tested with successful TypeScript build.
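The parallel-download pattern from item 4 can be sketched as follows; `fetchPage` and the URL list are hypothetical stand-ins for the real storage download call. Sequential awaits cost the sum of per-page latencies, while `Promise.all` costs roughly the slowest single download, which is where the reported 5-10x speedup comes from.

```typescript
// Download all page images concurrently. Promise.all preserves input order,
// which matters when the pages are assembled into a PDF afterwards.
async function downloadAllPages(
  urls: string[],
  fetchPage: (url: string) => Promise<Buffer>,
): Promise<Buffer[]> {
  return Promise.all(urls.map((url) => fetchPage(url)));
}
```

Note that `Promise.all` rejects on the first failure; if partial exports should survive a single bad page, `Promise.allSettled` is the alternative.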
PR Type
Enhancement, Bug fix
Description
Add OpenAI support alongside Gemini with provider selection via environment variables
Implement dual-provider architecture for story planning and image generation
Add `@Injectable()` decorators to NestJS services for proper dependency injection
Downgrade React from 19.1.1 to 18.3.1 for stability and disable strict mode
Add Playwright testing infrastructure with browser configuration and test suites
Improve error handling with proper type checking for Error instances
Add comprehensive documentation covering architecture, setup, and usage
Fix EventSource cleanup in React components to prevent memory leaks
Diagram Walkthrough
```mermaid
flowchart LR
  A["Story Input"] --> B["Planner Service"]
  B --> C{"Provider Selection"}
  C -->|OpenAI| D["GPT-5-Mini"]
  C -->|Gemini| E["Gemini 2.5 Flash"]
  D --> F["Episode Outline"]
  E --> F
  F --> G["Renderer Service"]
  G --> H{"Provider Selection"}
  H -->|OpenAI| I["GPT-Image-1"]
  H -->|Gemini| J["Gemini Image Preview"]
  I --> K["Manga Pages"]
  J --> K
  K --> L["Storage & Database"]
```

File Walkthrough
2 files
- Add OpenAI GPT-5-Mini support with dual-provider routing
- Add OpenAI GPT-Image-1 support with provider routing

2 files
- Update renderer config for dual-provider support
- Disable React strict mode for stability

8 files
- Add @Injectable decorator and fix renderer model selection
- Add @Injectable decorator to EventsService
- Add @Injectable decorator and improve error handling
- Add @Injectable decorator to PrismaService
- Add @Injectable decorator to QueueService
- Add @Injectable decorator to StorageService
- Fix EventSource cleanup and update UI badges to OpenAI
- Fix EventSource cleanup and prevent memory leaks

1 file
- Improve error handling with proper Error type checking

2 files
- Document OpenAI and Gemini provider configuration options
- Add comprehensive documentation for architecture and setup

2 files
- Add OpenAI SDK and multer types dependencies
- Downgrade React to 18.3.1 and add Playwright testing

4 files
- Add Playwright test configuration with Chrome browser
- Add Playwright test suite for application debugging
- Add error capture test for debugging browser crashes
- Add Firefox browser debugging test

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores