Bug Fix: Graceful Backend Servers Shutdown #890
rohan-pandeyy wants to merge 6 commits into AOSSIE-Org:main from
Conversation
Introduces a new /shutdown endpoint to the backend for graceful shutdown, including stopping the sync microservice. Refactors sync microservice management to track the process globally, support graceful and forced shutdown via HTTP and process signals, and update subprocess handling for cross-platform compatibility. Updates the main backend entry to include the shutdown route and allow configurable host binding. Updates frontend and utility code to support the new shutdown flow.
📝 Walkthrough

Adds coordinated shutdown support: new /shutdown endpoints in backend and sync services, frontend changes to trigger shutdown on app close (Tauri-aware), Tauri capability additions to allow killing processes, and host/config tweaks for service bindings.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant Tauri as Tauri API
    participant Backend as Backend /shutdown
    participant Sync as Sync /shutdown
    participant Shell as Shell (tauri shell/OS plugin)
    User->>Frontend: Close window
    Frontend->>Tauri: onCloseRequested (Tauri only)
    Frontend->>Frontend: triggerShutdown() [idempotent]
    alt Graceful shutdown
        Frontend->>Backend: POST /shutdown (with timeout)
        Backend-->>Frontend: ShutdownResponse
        Frontend->>Sync: POST /shutdown (with timeout)
        Sync-->>Frontend: ShutdownResponse
    else Fallback / failures
        Frontend->>Shell: execute pkill/taskkill for server & sync
        Shell-->>Frontend: exec result
    end
    Frontend->>User: exit (app closes)
    note right of Backend: background task sleeps then exits process
    note right of Sync: background task sleeps then exits process
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks: ❌ Failed checks (2 warnings)
✅ Passed checks (3 passed)
Actionable comments posted: 1
🧹 Nitpick comments (6)
sync-microservice/main.py (1)
50-60: Security improvement: default binding to localhost.

The environment-driven host configuration with a `127.0.0.1` default is a solid security enhancement. Consider also treating `::1` (IPv6 localhost) as safe in the validation check for environments that prefer IPv6.

```diff
- if host not in ("127.0.0.1", "localhost"):
+ if host not in ("127.0.0.1", "localhost", "::1"):
```

backend/main.py (1)
60-63: Consider stopping the sync microservice in the lifespan finally block.

The `finally` block only shuts down the executor. If the application exits without going through the `/shutdown` endpoint (e.g., SIGTERM directly), the sync microservice may become orphaned. Consider adding `microservice_util_stop_sync_service()` to the cleanup.

🔎 Proposed enhancement

```diff
+from app.utils.microservice import microservice_util_start_sync_service, microservice_util_stop_sync_service
     try:
         yield
     finally:
         app.state.executor.shutdown(wait=True)
+        microservice_util_stop_sync_service(timeout=3.0)
```

sync-microservice/app/routes/shutdown.py (1)
35-70: Consider adding basic access control to the shutdown endpoint.

The `/shutdown` endpoint has no authentication. While binding to localhost mitigates external exposure, any local process can trigger shutdown. For defense-in-depth, consider adding a simple bearer token or checking for a shared secret from an environment variable.
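For illustration only, a minimal sketch of the kind of shared-secret check this comment suggests, written as a standard FastAPI dependency; the header name and the `PICTOPY_SHUTDOWN_TOKEN` variable are assumptions, not part of the PR:

```python
import os
import secrets

from fastapi import APIRouter, Depends, Header, HTTPException

router = APIRouter()

# Hypothetical shared secret, e.g. generated at startup and handed to the frontend.
SHUTDOWN_TOKEN = os.getenv("PICTOPY_SHUTDOWN_TOKEN", "")

def verify_shutdown_token(x_shutdown_token: str = Header(default="")) -> None:
    # Reject the request unless the caller presents the expected secret.
    if not SHUTDOWN_TOKEN or not secrets.compare_digest(x_shutdown_token, SHUTDOWN_TOKEN):
        raise HTTPException(status_code=403, detail="Forbidden")

@router.post("/shutdown")
async def shutdown(_: None = Depends(verify_shutdown_token)):
    ...
```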
backend/app/utils/microservice.py (3)

40-100: Return value doesn't reflect actual success/failure.

The function always returns `True` regardless of whether the shutdown succeeded or encountered errors. Consider returning `False` when force-kill is needed or exceptions occur, so callers can react appropriately.

🔎 Proposed fix

```diff
+    success = True
+
     # Wait for process to exit if we have a reference
     if _sync_process is not None:
         try:
             # ...
         except subprocess.TimeoutExpired:
             logger.warning(
                 f"Sync microservice did not exit within {wait_timeout}s, force killing..."
             )
             _force_kill_sync_process()
+            success = False

     # Clean up log threads
     cleanup_log_threads()

     logger.info("Sync microservice stopped")
-    return True
+    return success
```
59-59: Move `import time` to the top of the file.

The `time` module import inside the function works but is unconventional. Moving it to the top-level imports improves readability and follows Python conventions.
126-134: Consider importing signal at module level.

The `signal` module is imported inside the function. While functional, top-level imports are preferred for consistency and to avoid repeated import overhead if this function is called multiple times.

```diff
+import signal  # at top of file

 def _force_kill_sync_process():
     # ...
     else:
         # Unix/Linux/macOS - kill process group
-        import signal
         try:
             os.killpg(os.getpgid(pid), signal.SIGKILL)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- backend/app/routes/shutdown.py
- backend/app/utils/microservice.py
- backend/main.py
- docs/backend/backend_python/openapi.json
- frontend/src/main.tsx
- frontend/src/utils/serverUtils.ts
- sync-microservice/app/routes/shutdown.py
- sync-microservice/main.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-01T20:44:19.146Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/utils/microservice.py:53-62
Timestamp: 2025-10-01T20:44:19.146Z
Learning: In backend/app/utils/microservice.py, the stream_logs function uses print() instead of logger to forward sync-microservice output because the sync-microservice already formats its logs using its own logging setup (get_sync_logger and LoggerWriter). Using logger would result in double-prefixing.
Applied to files:
backend/app/utils/microservice.py
🧬 Code graph analysis (7)
frontend/src/main.tsx (4)
- frontend/src/utils/serverUtils.ts (2)
  - triggerShutdown (46-55)
  - startServer (26-43)
- frontend/src/utils/tauriUtils.ts (1)
  - isTauriEnvironment (15-38)
- frontend/src/utils/isProd.ts (1)
  - isProd (3-3)
- frontend/__mocks__/isProd.ts (1)
  - isProd (1-1)

sync-microservice/app/routes/shutdown.py (3)
- sync-microservice/app/utils/watcher.py (1)
  - watcher_util_stop_folder_watcher (300-328)
- sync-microservice/app/logging/setup_logging.py (1)
  - get_sync_logger (205-217)
- backend/app/routes/shutdown.py (4)
  - ShutdownResponse (15-20)
  - _delayed_shutdown (23-34)
  - shutdown (38-83)
  - _handle_shutdown_exception (69-73)

backend/main.py (2)
- backend/app/routes/shutdown.py (1)
  - shutdown (38-83)
- sync-microservice/app/routes/shutdown.py (1)
  - shutdown (36-70)

frontend/src/utils/serverUtils.ts (1)
- frontend/src/config/Backend.ts (1)
  - BACKEND_URL (1-1)

backend/app/routes/shutdown.py (2)
- backend/app/utils/microservice.py (1)
  - microservice_util_stop_sync_service (40-100)
- sync-microservice/app/routes/shutdown.py (4)
  - ShutdownResponse (14-18)
  - _delayed_shutdown (21-32)
  - shutdown (36-70)
  - _handle_shutdown_exception (57-61)

backend/app/utils/microservice.py (2)
- backend/app/logging/setup_logging.py (2)
  - setup_logging (134-196)
  - get_logger (199-209)
- sync-microservice/app/logging/setup_logging.py (2)
  - setup_logging (134-189)
  - get_logger (192-202)

sync-microservice/main.py (4)
- backend/main.py (1)
  - lifespan (45-63)
- sync-microservice/app/core/lifespan.py (1)
  - lifespan (19-57)
- backend/app/routes/shutdown.py (1)
  - shutdown (38-83)
- sync-microservice/app/routes/shutdown.py (1)
  - shutdown (36-70)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Backend Tests
- GitHub Check: Tauri Build Check (ubuntu-22.04)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
🔇 Additional comments (22)
sync-microservice/main.py (1)
1-6: LGTM on imports and router addition.

The `os` import for environment variable access and the `shutdown` router integration are correctly implemented, enabling the new graceful shutdown capability.

backend/main.py (3)
29-29: LGTM on shutdown router import.

The shutdown router is correctly imported for the new graceful shutdown functionality.

136-136: LGTM on router registration.

The shutdown router is registered without a prefix, making the endpoint accessible at `/shutdown`, which aligns with the frontend integration in `serverUtils.ts`.

146-156: Consistent host configuration with sync-microservice.

The host binding logic mirrors the sync-microservice implementation. The same suggestion applies: consider including `::1` for IPv6 localhost support.

sync-microservice/app/routes/shutdown.py (1)
14-18: LGTM on response model.

Clean Pydantic model for the shutdown response. Note that this differs from the backend's `ShutdownResponse`, which includes an additional `sync_stopped` field; this is appropriate since this microservice doesn't manage other services.

frontend/src/main.tsx (3)
6-6: LGTM on import change.

The switch from `stopServer` to `triggerShutdown` aligns with the new fire-and-forget shutdown pattern that's appropriate for window-close scenarios.

11-26: Window close listener only triggers in production.

The `onCloseListener` is defined here but only invoked when `isProd()` is true (line 51). In development mode, window close won't trigger shutdown. This is likely intentional per the comment on lines 48-49, but verify this matches the expected developer workflow.

46-58: LGTM on production-only initialization.

The conditional startup logic clearly separates production and development modes. The `.catch()` on `startServer()` prevents unhandled promise rejections.

backend/app/routes/shutdown.py (4)
1-12: LGTM on imports and setup.

The imports are well-organized. Using `functools.partial` for the executor call is the correct approach for passing arguments.

15-20: Response model appropriately extends the sync-microservice's model.

The additional `sync_stopped` field provides valuable feedback to the frontend about whether the sync service was stopped cleanly.

23-34: LGTM on delayed shutdown implementation.

The 0.5s delay (vs 0.1s in the sync-microservice) is appropriate here since the backend has more work to coordinate before terminating.

57-66: Good use of executor to avoid blocking.

Running `microservice_util_stop_sync_service` in an executor prevents blocking the event loop during the synchronous HTTP call and process wait operations. The 3.0s timeout is reasonable for graceful shutdown.
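As a rough illustration of the pattern praised here (offloading a blocking stop call to an executor from an async endpoint), here is a minimal sketch; the helper name `stop_sync_service_blocking` is hypothetical and not the PR's actual function:

```python
import asyncio
import functools
from concurrent.futures import ThreadPoolExecutor

def stop_sync_service_blocking(timeout: float) -> bool:
    # Placeholder for a blocking HTTP call plus process wait (hypothetical helper).
    ...
    return True

async def shutdown_endpoint(executor: ThreadPoolExecutor) -> dict:
    loop = asyncio.get_running_loop()
    # Offload the blocking work so the event loop stays responsive
    # while the sync service is being stopped.
    sync_stopped = await loop.run_in_executor(
        executor, functools.partial(stop_sync_service_blocking, timeout=3.0)
    )
    return {"sync_stopped": sync_stopped}
```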
frontend/src/utils/serverUtils.ts (5)

5-10: LGTM on constants and platform detection.

The timeout constants are well-named, and the `isWindows` detection via `userAgent` is a reasonable approach for the frontend.

14-14: Good: Health check now uses a dedicated endpoint.

Changing from the root URL to `/health` is more semantically correct and aligns with the backend's health endpoint definition.

45-55: LGTM on fire-and-forget shutdown trigger.

The `keepalive: true` option is crucial for ensuring the request completes even when the window is closing. This is the correct pattern for shutdown-on-close scenarios.

123-125: Empty string argument to Command.create may not work as intended.

The second argument to `Command.create` is an empty string, but this represents the command arguments. If the kill commands need a PID or process name, this won't work. Verify the Tauri sidecar/command configuration handles finding the correct process.

```ts
await Command.create(commandName, '').execute();
```

118-130: The `killProcessWindows` and `killProcessUnix` commands are already defined in the Tauri configuration.

Both commands are properly configured in `frontend/src-tauri/capabilities/migrated.json` with appropriate platform-specific implementations:

- `killProcessUnix`: uses bash with `pkill -f PictoPy_Server`
- `killProcessWindows`: uses PowerShell with `taskkill /t /f /im PictoPy_Server.exe`

No action needed.
backend/app/utils/microservice.py (5)
18-22: LGTM on global state for process tracking.

The global `_sync_process` reference and `SYNC_SERVICE_URL` constant enable coordinated shutdown across the application lifecycle.
103-140: Robust force-kill implementation.

The platform-specific handling is well-implemented:

- Windows: `taskkill /F /T` kills the process tree
- Unix: `os.killpg` with `SIGKILL` terminates the process group
- Fallback to `_sync_process.kill()` if process group operations fail

The exception handling for `ProcessLookupError` and `PermissionError` covers common edge cases.
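For reference, a minimal sketch of the platform-specific force-kill pattern described above, under the assumption that the child was spawned in its own process group (this is an illustration, not the PR's exact code):

```python
import os
import signal
import subprocess
import sys

def force_kill(proc: subprocess.Popen) -> None:
    """Best-effort kill of a child process and its process tree/group."""
    if sys.platform == "win32":
        # /T kills the whole process tree, /F forces termination.
        subprocess.run(
            ["taskkill", "/F", "/T", "/PID", str(proc.pid)],
            capture_output=True,
        )
    else:
        try:
            # Kill the whole process group created at spawn time.
            os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        except (ProcessLookupError, PermissionError):
            # Fall back to killing just the child if the group lookup fails.
            proc.kill()
```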
262-275: Good: Process group creation for clean termination.

Using `CREATE_NEW_PROCESS_GROUP` on Windows and `start_new_session` on Unix ensures the sync microservice runs in its own process group, enabling clean termination of the entire process tree during shutdown.
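A minimal sketch of spawning a child in its own process group along these lines; the flags are standard `subprocess` options, and the function name is an assumption rather than the PR's API:

```python
import subprocess
import sys

def spawn_in_own_group(cmd: list[str]) -> subprocess.Popen:
    if sys.platform == "win32":
        # New process group so the child (and its children) can later be
        # terminated together, e.g. via taskkill /T.
        return subprocess.Popen(
            cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
        )
    # start_new_session=True puts the child in a new session/process group,
    # so os.killpg can terminate the whole tree during shutdown.
    return subprocess.Popen(cmd, start_new_session=True)
```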
297-299: LGTM on process reference storage.

Storing the process reference in the global variable enables the shutdown logic to properly track and terminate the sync microservice.

467-481: Consistent process group handling in dev mode.

The same process group creation pattern is applied in `_start_fastapi_service`, ensuring consistent shutdown behavior between frozen and development modes.
Enhanced backend and sync microservice shutdown routes to handle Windows-specific process termination and log cleanup. Added @tauri-apps/plugin-os as a dependency, updated the frontend to use @tauri-apps/plugin-os for platform detection, and improved shutdown logic to prevent race conditions.
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/app/routes/shutdown.py (1)
1-14: Minor: Consider consolidating duplicate imports from the same module.

Lines 7 and 9 both import from `app.utils.microservice`. Consider combining them:

```diff
-from app.utils.microservice import cleanup_log_threads
-from pydantic import BaseModel
-from app.utils.microservice import microservice_util_stop_sync_service
+from pydantic import BaseModel
+from app.utils.microservice import cleanup_log_threads, microservice_util_stop_sync_service
```

frontend/src/utils/serverUtils.ts (1)
49-82: Potential race condition: `shutdownInProgress` resets before force kill completes.

In the fallback paths (lines 69-71 and 77-79), `forceKillServer()` is called but `shutdownInProgress` is reset immediately after, not after the force kill completes. This could allow a second shutdown attempt while the force kill is still in progress.

For a truly fire-and-forget shutdown during window close, this may be acceptable since the window is closing anyway. However, if this function could be called in other contexts where the window remains open, consider:

🔎 Proposed fix

```diff
   .then((response) => {
     if (response.ok) {
       console.log('Shutdown request sent successfully');
+      shutdownInProgress = false;
     } else {
       console.error(
         `Shutdown request failed with status: ${response.status}`,
       );
-      forceKillServer().catch((err) =>
-        console.error('Fallback force kill failed:', err),
-      );
+      forceKillServer()
+        .catch((err) => console.error('Fallback force kill failed:', err))
+        .finally(() => { shutdownInProgress = false; });
+      return; // Don't reset below
     }
-    shutdownInProgress = false; // Reset on success or handled failure
   })
   .catch((error) => {
     console.error('Shutdown request failed:', error);
-    forceKillServer().catch((err) =>
-      console.error('Fallback force kill failed:', err),
-    );
-    shutdownInProgress = false; // Reset on error
+    forceKillServer()
+      .catch((err) => console.error('Fallback force kill failed:', err))
+      .finally(() => { shutdownInProgress = false; });
   });
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (4)
- backend/app/routes/shutdown.py
- frontend/package.json
- frontend/src/utils/serverUtils.ts
- sync-microservice/app/routes/shutdown.py
🧰 Additional context used
🧬 Code graph analysis (2)
sync-microservice/app/routes/shutdown.py (3)
- sync-microservice/app/utils/watcher.py (1)
  - watcher_util_stop_folder_watcher (300-328)
- sync-microservice/app/logging/setup_logging.py (1)
  - get_sync_logger (205-217)
- backend/app/routes/shutdown.py (4)
  - ShutdownResponse (17-22)
  - _delayed_shutdown (25-42)
  - shutdown (46-96)
  - _handle_shutdown_exception (77-81)

frontend/src/utils/serverUtils.ts (1)
- frontend/src/config/Backend.ts (1)
  - BACKEND_URL (1-1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Backend Tests
🔇 Additional comments (13)
frontend/package.json (1)
46-46: LGTM!

The `@tauri-apps/plugin-os` dependency addition is appropriate for the platform-detection needs in the shutdown logic. The version range is consistent with other Tauri plugins in the project.

sync-microservice/app/routes/shutdown.py (4)
1-10: LGTM!

Imports are appropriate and well-organized. The logger setup follows the project's logging conventions.

15-19: LGTM!

The response model is appropriately scoped for the sync microservice's shutdown response. Unlike the backend's `ShutdownResponse`, it doesn't need the `sync_stopped` field, since this IS the sync service.

22-35: Note: `os._exit(0)` bypasses lifespan cleanup on Windows.

While `os._exit(0)` avoids the SIGTERM reliability issues on Windows, it also bypasses Python's normal cleanup (finally blocks, atexit handlers). This means the lifespan context's cleanup in `lifespan.py` won't execute.

This is mitigated by the explicit `watcher_util_stop_folder_watcher()` call before scheduling shutdown, which handles the critical cleanup. Just be aware that any future cleanup logic added to lifespan won't run on Windows shutdown.
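A tiny standalone illustration of the cleanup-skipping behaviour described here (not project code):

```python
import atexit
import os

atexit.register(lambda: print("atexit handler ran"))

try:
    # sys.exit() or a normal return would run the finally block and atexit
    # handlers; os._exit() terminates the interpreter immediately and skips both.
    os._exit(0)
finally:
    print("finally block ran")  # never printed
```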
38-73: LGTM!

The shutdown endpoint correctly:
- Stops the watcher synchronously with proper error handling
- Schedules the delayed shutdown as a background task
- Attaches an exception callback to avoid unhandled task exceptions
- Returns the response before termination occurs
backend/app/routes/shutdown.py (2)
25-42: LGTM with the same caveat as sync-microservice.

The implementation correctly handles platform differences. The longer 0.5s delay is appropriate since the backend coordinates the sync service shutdown first. The explicit `cleanup_log_threads()` call before `os._exit(0)` on Windows is good defensive programming.

45-96: Well-structured shutdown orchestration.

Good implementation choices:

- Using `run_in_executor` prevents blocking the event loop during sync service stop
- The 3.0s timeout for the sync service leaves headroom within the frontend's 5s overall timeout
- The response clearly communicates the sync service stop outcome
- The exception callback properly handles background task failures
frontend/src/utils/serverUtils.ts (6)
3-14: LGTM!

Good additions:
- Platform detection via Tauri plugin
- Shutdown constants for consistent timeout handling
- `shutdownInProgress` flag to prevent race conditions between concurrent shutdown calls
16-28: LGTM!

Using the `/health` endpoint is more appropriate for health checks than the root URL, as it's semantically correct and typically more lightweight.
84-124: LGTM!

Robust multi-phase shutdown:
- Early exit if server not running
- Attempt graceful HTTP shutdown
- Recheck after delay
- Force kill as last resort
- Proper guard flag management in all paths
126-154: LGTM!

Well-implemented graceful shutdown attempt:
- Proper timeout handling with AbortController
- Defensive response body consumption
- Clear boolean return for caller's decision logic
- Specific AbortError handling for timeout scenario
170-183: LGTM!

Clean utility functions. The 2-second delay in `restartServer` provides adequate time for cleanup before starting fresh.
156-168: Tauri shell commands are properly configured.

The `killProcessWindows` and `killProcessUnix` commands are defined in `frontend/src-tauri/capabilities/migrated.json`. Unix uses `pkill -f PictoPy_Server` and Windows uses `taskkill /t /f /im PictoPy_Server.exe`, so the implementation is complete and platform-appropriate.
…n reliability

- Add [P] trick to pkill commands to prevent pkill from matching itself
- Add _kill_sync_by_name() function with multiple fallback methods
- Add _final_kill_sync() for last-resort sync termination
- Change from SIGTERM to os._exit(0) for reliable uvicorn termination
- Add sync killing to forceKillServer on both Unix and Windows
- Replace platform() with navigator.userAgent to fix Linux init race
- Add waitForTauriReady() helper for Linux Tauri initialization
- Run forceKillServer in parallel with HTTP shutdown (belt-and-suspenders)
Actionable comments posted: 2
Fix all issues with AI Agents 🤖
In @backend/app/utils/microservice.py:
- Around line 117-121: The final unconditional call to _kill_sync_by_name() and
the constant True return should be changed: make the call to
_kill_sync_by_name() conditional only if previous shutdown attempts failed
(track a success flag through the graceful shutdown sequence and call
_kill_sync_by_name() when that flag is False) and update the function return to
reflect the actual success state (return the boolean success flag instead of
always True, or return None only if callers don’t use the result); also update
the logger call (logger.info("Sync microservice stopped")) to run only when
success is True and log an error/warning when shutdown ultimately failed.
In @frontend/src/utils/serverUtils.ts:
- Around line 11-13: Replace the browser-based navigator check in isWindows with
Tauri's OS API: import the platform function from '@tauri-apps/plugin-os',
change isWindows to async (returning Promise<boolean>), call await platform()
and return whether it equals 'windows' (or use the enum/value provided by the
plugin), and update any callers to await isWindows(). Ensure you remove the
navigator.userAgent usage and add the new import (e.g., import { platform } from
'@tauri-apps/plugin-os').
🧹 Nitpick comments (5)
frontend/src-tauri/capabilities/migrated.json (2)
119-119: Consider graceful termination before force-kill.

The command uses `pkill -9` (SIGKILL), which immediately terminates processes without allowing cleanup. Given the PR introduces graceful HTTP shutdown endpoints, consider using `pkill -15` (SIGTERM) first, then falling back to `pkill -9` if needed:

```diff
- "args": ["-c", "pkill -9 -f '[P]ictoPy_Server'; pkill -9 -f '[P]ictoPy_Sync'"],
+ "args": ["-c", "pkill -15 -f '[P]ictoPy_Server' || pkill -9 -f '[P]ictoPy_Server'; pkill -15 -f '[P]ictoPy_Sync' || pkill -9 -f '[P]ictoPy_Sync'"],
```

Additionally, the semicolon (`;`) runs the second command regardless of the first command's success. Consider using `&&` if you want the second kill to run only if the first succeeds, or keep `;` if both should always run.
130-130: Consider graceful termination on Windows.

The Windows command uses `taskkill /f`, which forces immediate termination. For consistency with the graceful shutdown approach in the PR, consider attempting termination without `/f` first:

```diff
- "args": ["taskkill /t /f /im PictoPy_Server.exe; taskkill /t /f /im PictoPy_Sync.exe"],
+ "args": ["taskkill /t /im PictoPy_Server.exe 2>$null; if ($LASTEXITCODE -ne 0) { taskkill /t /f /im PictoPy_Server.exe }; taskkill /t /im PictoPy_Sync.exe 2>$null; if ($LASTEXITCODE -ne 0) { taskkill /t /f /im PictoPy_Sync.exe }"],
```

This attempts graceful termination first, then falls back to force kill if needed.
backend/app/utils/microservice.py (3)
22-22: Consider environment-based host configuration.

The `SYNC_SERVICE_URL` is hardcoded to `localhost:8001`. The PR introduces `PICTOPY_HOST` environment variable support (default 127.0.0.1) for the servers. Consider using the same environment variable here for consistency:

```diff
-SYNC_SERVICE_URL = "http://localhost:8001"
+SYNC_SERVICE_URL = f"http://{os.getenv('PICTOPY_HOST', '127.0.0.1')}:8001"
```

This ensures the backend client connects to the same host the sync service binds to.
124-162: Consider graceful termination before SIGKILL.

The function uses `pkill -9` (SIGKILL) and `killall -9`, which forcefully terminate processes without allowing cleanup. For better hygiene, consider attempting `SIGTERM` first:

```diff
     else:
-        # Kill by name on Linux/macOS using pkill
-        # Use [P] trick so pkill doesn't match itself
+        # Try SIGTERM first, then SIGKILL
         result = subprocess.run(
-            ["pkill", "-9", "-f", "[P]ictoPy_Sync"],
+            ["pkill", "-15", "-f", "[P]ictoPy_Sync"],
             capture_output=True,
             timeout=5,
         )
-        if result.returncode == 0:
-            logger.info("Killed PictoPy_Sync by name using pkill")
-        else:
+        if result.returncode != 0:
+            # Fallback to SIGKILL
+            result = subprocess.run(
+                ["pkill", "-9", "-f", "[P]ictoPy_Sync"],
+                capture_output=True,
+                timeout=5,
+            )
+        if result.returncode == 0:
+            logger.info("Killed PictoPy_Sync by name using pkill")
+        else:
             # Fallback: use killall (matches by process name, not cmdline)
             result = subprocess.run(
-                ["killall", "-9", "PictoPy_Sync"],
+                ["killall", "-15", "PictoPy_Sync"],
                 capture_output=True,
                 timeout=5,
             )
-            if result.returncode == 0:
+            if result.returncode != 0:
+                subprocess.run(["killall", "-9", "PictoPy_Sync"], capture_output=True, timeout=5)
+            if result.returncode == 0 or subprocess.run(["killall", "-9", "PictoPy_Sync"], capture_output=True, timeout=5).returncode == 0:
                 logger.info("Killed PictoPy_Sync by name using killall")
```

This is consistent with the graceful shutdown philosophy of the PR.
194-214: Simplify nested error handling.

The deeply nested try-except blocks (lines 194-214) make the control flow difficult to follow. Consider refactoring to use a list of kill methods and iterate through them:

🔎 Proposed refactor for cleaner error handling

```python
# Unix/Linux/macOS - try multiple kill methods
import signal

kill_methods = [
    ("process group", lambda: os.killpg(os.getpgid(pid), signal.SIGKILL)),
    ("direct kill", lambda: _sync_process.kill()),
    ("os.kill", lambda: os.kill(pid, signal.SIGKILL)),
]

success = False
for method_name, kill_func in kill_methods:
    try:
        kill_func()
        logger.info(f"Successfully killed using {method_name}")
        success = True
        break
    except (ProcessLookupError, PermissionError, OSError, Exception) as e:
        logger.warning(f"{method_name} failed: {e}")

if not success:
    logger.warning("All direct kill methods failed, trying by name...")
    _kill_sync_by_name()
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/app/routes/shutdown.py
- backend/app/utils/microservice.py
- frontend/src-tauri/capabilities/migrated.json
- frontend/src/utils/serverUtils.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/routes/shutdown.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-01T20:44:19.146Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/utils/microservice.py:53-62
Timestamp: 2025-10-01T20:44:19.146Z
Learning: In backend/app/utils/microservice.py, the stream_logs function uses print() instead of logger to forward sync-microservice output because the sync-microservice already formats its logs using its own logging setup (get_sync_logger and LoggerWriter). Using logger would result in double-prefixing.
Applied to files:
backend/app/utils/microservice.py
🧬 Code graph analysis (2)
frontend/src/utils/serverUtils.ts (1)
- frontend/src/config/Backend.ts (1)
  - BACKEND_URL (1-1)

backend/app/utils/microservice.py (2)
- backend/app/logging/setup_logging.py (2)
  - setup_logging (134-196)
  - get_logger (199-209)
- sync-microservice/app/logging/setup_logging.py (2)
  - setup_logging (134-189)
  - get_logger (192-202)
🔇 Additional comments (13)
frontend/src/utils/serverUtils.ts (9)
5-9: LGTM: Shutdown coordination setup.

The timeout constants and the `shutdownInProgress` flag properly prevent race conditions between concurrent shutdown attempts.

15-38: LGTM: Robust Tauri readiness check.

The polling approach with timeout and the additional 50ms delay for plugin initialization is a solid solution for the Linux initialization race issues mentioned in the PR objectives.

40-52: LGTM: Health endpoint check.

Using the `/health` endpoint instead of the root URL is a best practice for health checks.

54-78: LGTM: Startup with Tauri readiness check.

The addition of `waitForTauriReady()` before server startup properly addresses the Linux initialization race issues mentioned in the PR objectives.

115-155: LGTM: Robust graceful shutdown with fallback.

The shutdown flow properly attempts graceful termination, waits and rechecks, then falls back to force kill if needed. The comprehensive error handling ensures `shutdownInProgress` is always reset.

157-185: LGTM: Proper timeout handling with AbortController.

The use of `AbortController` for request timeout and the handling of empty response bodies (line 171) is correct and robust.

187-199: LGTM: Platform-specific force kill.

The implementation correctly delegates to platform-specific Tauri commands defined in the capabilities file.

201-214: LGTM: Cleaner restart flow.

The `sleep()` utility and its use in `restartServer` is cleaner and more maintainable than the previous `setTimeout`-based approach.

80-113: Both concerns are not applicable to this implementation.

Keepalive is supported: the code uses the native webview fetch (no `@tauri-apps/plugin-http` import), which supports the `keepalive` flag. Tauri's HTTP plugin doesn't document keepalive support, but this code doesn't use it.

Parallel execution is intentional: the immediate `forceKillServer()` call (line 91) is documented as a belt-and-suspenders approach for the fire-and-forget shutdown triggered by window close. There's a separate `attemptGracefulShutdown()` function with proper timeout/abort control for other scenarios. The `shutdownInProgress` flag prevents race conditions. This design is sound.

Likely an incorrect or invalid review comment.
backend/app/utils/microservice.py (4)
254-263: LGTM: Process reference getter.

Simple and correct encapsulation of the global process reference.

352-390: LGTM: Process group creation for robust shutdown.

The process group setup (CREATE_NEW_PROCESS_GROUP on Windows, start_new_session on Unix) and global process tracking properly enable the coordinated shutdown flow described in the PR objectives.

557-594: LGTM: Consistent process group handling.

The process group creation and global tracking is consistent with the frozen service startup, ensuring uniform shutdown behavior across development and production modes.

233-252: LGTM: Dual health check strategy.

The function properly checks both the process reference and the HTTP health endpoint, providing a robust running-state check. The `/api/v1/health` endpoint is correctly configured and the timeout handling is appropriate.
```ts
const isWindows = (): boolean => {
  return navigator.userAgent.toLowerCase().includes('windows');
};
```
Replace browser-based platform detection with Tauri's platform API.
The navigator.userAgent detection is browser-based and unreliable in a Tauri desktop application. The PR added @tauri-apps/plugin-os (version 2.3.2) which provides proper platform detection.
🔎 Proposed fix using @tauri-apps/plugin-os
```diff
+import { platform } from '@tauri-apps/plugin-os';
+
 const isWindows = (): boolean => {
-  return navigator.userAgent.toLowerCase().includes('windows');
+  return platform() === 'windows';
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-const isWindows = (): boolean => {
-  return navigator.userAgent.toLowerCase().includes('windows');
-};
+import { platform } from '@tauri-apps/plugin-os';
+
+const isWindows = async (): Promise<boolean> => {
+  return (await platform()) === 'windows';
+};
```
🤖 Prompt for AI Agents
In @frontend/src/utils/serverUtils.ts around lines 11-13, Replace the
browser-based navigator check in isWindows with Tauri's OS API: import the
platform function from '@tauri-apps/plugin-os', change isWindows to async
(returning Promise<boolean>), call await platform() and return whether it equals
'windows' (or use the enum/value provided by the plugin), and update any callers
to await isWindows(). Ensure you remove the navigator.userAgent usage and add
the new import (e.g., import { platform } from '@tauri-apps/plugin-os').
- Services are now independent (backend no longer spawns sync)
- Backend port changed to 52123, sync to 52124
- Updated serverUtils.ts to start/shutdown both services separately
- Removed microservice.py (no longer needed)
- Simplified shutdown.py (each service handles its own shutdown)
- Added kill commands with [P] trick to prevent pkill self-match
- Added waitForTauriReady() to fix Linux initialization race
Actionable comments posted: 9
Fix all issues with AI Agents 🤖
In @backend/app/routes/shutdown.py:
- Around line 19-28: The delayed shutdown currently calls os._exit(0) in
_delayed_shutdown which forcefully terminates the process and bypasses
FastAPI/uvicorn lifespan cleanup (e.g., the finally block in main.py that runs
app.state.executor.shutdown(wait=True)); replace os._exit(0) with a graceful
termination such as sending SIGTERM to the current PID via os.kill(os.getpid(),
signal.SIGTERM) or calling sys.exit(0) so that uvicorn and the application's
lifespan handlers run and app.state.executor.shutdown(wait=True) executes.
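As a minimal sketch of what the suggested change could look like (a delayed task that signals the current process instead of calling os._exit; this illustrates the prompt's suggestion and is not the PR's code):

```python
import asyncio
import os
import signal

async def _delayed_shutdown(delay: float = 0.5) -> None:
    # Give the HTTP response time to be sent before terminating.
    await asyncio.sleep(delay)
    # SIGTERM lets uvicorn run its shutdown sequence, so the lifespan
    # finally block (e.g. executor.shutdown(wait=True)) still executes.
    os.kill(os.getpid(), signal.SIGTERM)
```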
In @backend/main.py:
- Around line 144-158: The code computes host from PICTOPY_HOST and logs a
warning but then hardcodes "0.0.0.0" when building the Config; update the Config
instantiation (symbol Config) to use the computed host variable (host) instead
of the string literal "0.0.0.0" so the effective bind address respects the
PICTOPY_HOST fallback to "127.0.0.1" and the conditional warning remains
accurate; ensure you do not change the default assignment of host and keep the
same log behavior.
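A rough sketch of the fix this prompt describes, assuming the service is started via uvicorn's Config/Server API as the comment implies (the port value is taken from the PR's description of the new backend port):

```python
import os

import uvicorn

def run(app) -> None:
    host = os.getenv("PICTOPY_HOST", "127.0.0.1")
    if host not in ("127.0.0.1", "localhost"):
        print(f"Warning: binding to non-localhost host {host}")
    # Use the computed host instead of a hardcoded "0.0.0.0".
    config = uvicorn.Config(app, host=host, port=52123)
    uvicorn.Server(config).run()
```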
In @docs/backend/backend_rust/api.md:
- Around line 18-19: The example uses resourcesFolderPath but the console.log
incorrectly references serverPath; update the console.log to log
resourcesFolderPath (e.g., console.log("Resources folder path:",
resourcesFolderPath)) so the variable names match and avoid a ReferenceError
when running the snippet.
In @frontend/src/utils/serverUtils.ts:
- Around line 119-172: The triggerShutdown function currently calls
forceKillServer() immediately, which prevents graceful shutdown because it kills
services before sending POST shutdown requests; update triggerShutdown to
attempt graceful shutdown first by sending the POSTs to BACKEND_URL and
SYNC_MICROSERVICE_URL, then only call forceKillServer() as a fallback after a
short timeout or if both fetches fail; alternatively replace triggerShutdown's
body to call stopServer() (which already implements graceful→force-kill) so you
reuse that correct flow and remove the premature forceKillServer() invocation.
In @scripts/setup.js:
- Around line 28-33: The chmodSync call on bashScript can throw if permissions
are insufficient; wrap the fs.chmodSync(bashScript, 0o755) in its own try-catch
(after the existing accessSync catch) and handle errors gracefully—catch the
error, log a clear message including bashScript and the caught error, and then
either exit process with a non-zero code (process.exit(1)) or otherwise surface
the failure so the setup script doesn't crash unhandled.
- Around line 19-44: The change removed macOS handling and now exits for darwin
platforms; restore macOS support by adding a branch for os.platform() ===
'darwin' (parallel to the Debian branch) that sets command = bashScript and args
= [], ensures bashScript is executable using fs.accessSync/fs.chmodSync (same
logic as the Debian block), and avoids the process.exit(1) path for macOS, or
explicitly document the removal of macOS support in the PR/README and provide a
migration note if dropping support is intentional.
In @scripts/setup.sh:
- Around line 102-111: The script currently runs cd sync-microservice without
checking whether the directory change succeeded, which can cause the venv
creation and pip installs to run in the wrong location; update the block around
the cd sync-microservice (the lines that create .sync-env with python -m venv
and install requirements.txt) to verify the cd succeeded and exit with an error
if it didn’t (e.g., replace the bare cd with a guarded cd or an if ! cd ...;
then echo an error and exit 1; fi), mirroring the error-handling pattern used
for the frontend setup so subsequent commands only run when the directory
exists.
In @sync-microservice/app/core/lifespan.py:
- Around line 37-57: The async lifespan context is using blocking time.sleep in
the retry loop which will freeze the event loop; replace the blocking call by
importing asyncio and using await asyncio.sleep(retry_interval) inside the while
loop in the lifespan function (where db_check_database_connection() is polled
and the retry/warning logic resides) so the coroutine yields during waits.
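A small sketch of the non-blocking retry loop the prompt asks for, assuming a database check helper like the `db_check_database_connection()` referenced in the review:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def wait_for_database(check, connection_timeout: float = 60, retry_interval: float = 5) -> bool:
    """Poll `check()` until it returns True or the timeout elapses."""
    waited = 0.0
    while waited < connection_timeout:
        if check():
            return True
        logger.warning("Database not ready, retrying in %ss...", retry_interval)
        # await asyncio.sleep() yields control to the event loop,
        # unlike time.sleep(), which would block it.
        await asyncio.sleep(retry_interval)
        waited += retry_interval
    return False
```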
In @sync-microservice/main.py:
- Around line 50-64: The code reads PICTOPY_HOST into the local variable host
and warns when it is not localhost, but then ignores it by hardcoding
host="0.0.0.0" in the Config; change the Config creation to use the computed
host variable (the one assigned from os.getenv("PICTOPY_HOST", "127.0.0.1"))
instead of the literal "0.0.0.0", so update the Config(...) call (the Config
instantiation near the end of the snippet) to set host=host and keep other
settings the same, preserving the existing logger.warning behavior.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/setup.sh (1)
79-84: Remove dead code and an undefined variable reference.

The `OS_TYPE` variable is never defined in this script, making both conditional branches unreachable. Since macOS support was removed and the script now only supports Debian-based Linux (exiting at line 29 for other OSes), these OS-specific checks are unnecessary.

🔎 Proposed fix
```diff
 if ! command -v node &> /dev/null; then
     echo -e "${RED}Node.js is not installed. Installing...${NC}"
-    if [ "$OS_TYPE" = "Linux" ]; then
-        curl -fsSL https://deb.nodesource.com/setup_18.x | -E bash -
-        apt-get install -y nodejs
-    elif [ "$OS_TYPE" = "Darwin" ]; then
-        brew install node
-    fi
+    curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
+    sudo apt-get install -y nodejs
 else
     echo -e "${GREEN}Node.js is installed. Version: $(node --version)${NC}"
     echo -e "${GREEN}npm version: $(npm --version)${NC}"
```

Note: Added a `sudo` prefix to match the Debian package installation pattern used elsewhere in the script.
♻️ Duplicate comments (2)
sync-microservice/app/routes/shutdown.py (1)
20-29: Inconsistency: Code still uses `os._exit(0)` contrary to the PR description.

The PR summary explicitly states: "Replaced os._exit(0) with os.kill(pid, SIGTERM) in the sync service so Uvicorn can run shutdown handlers and close connections." However, this code still uses `os._exit(0)` at line 29, which immediately terminates the process without running shutdown handlers or allowing graceful cleanup.

Additionally, the past review comment flagged that SIGTERM behaves differently on Windows and may not trigger Uvicorn's graceful shutdown handlers reliably. This concern remains unaddressed.

🔎 Recommended approach for graceful shutdown

If the goal is truly graceful shutdown with cleanup:

```diff
+import signal
+
 async def _delayed_shutdown(delay: float = 0.1):
     """
     Shutdown the server after a short delay to allow the response to be sent.

     Args:
         delay: Seconds to wait before signaling shutdown (kept minimal)
     """
     await asyncio.sleep(delay)
     logger.info("Exiting sync microservice...")
-    os._exit(0)
+    # Send SIGTERM to allow Uvicorn to run shutdown handlers
+    os.kill(os.getpid(), signal.SIGTERM)
```

However, note that on Windows, SIGTERM is mapped to `TerminateProcess`, which may not reliably trigger cleanup. Consider platform-specific handling or testing on Windows.

Based on learnings from past review and PR description.
frontend/src/utils/serverUtils.ts (1)
11-13: Replace browser-based platform detection with Tauri's platform API.

The `navigator.userAgent` detection is browser-based and unreliable in a Tauri desktop application. The PR added `@tauri-apps/plugin-os`, which provides proper platform detection.

🔎 Proposed fix using @tauri-apps/plugin-os

```diff
+import { platform } from '@tauri-apps/plugin-os';
+
-const isWindows = (): boolean => {
-  return navigator.userAgent.toLowerCase().includes('windows');
+const isWindows = async (): Promise<boolean> => {
+  return (await platform()) === 'windows';
 };
```

Then update callers to await:

```diff
- const commandName = isWindows() ? 'killProcessWindows' : 'killProcessUnix';
+ const commandName = (await isWindows()) ? 'killProcessWindows' : 'killProcessUnix';
```

Based on learnings from past review.
🧹 Nitpick comments (11)
docs/Manual_Setup_Guide.md (2)
122-122: Capitalize "macOS" correctly.

The operating system name should be capitalized as "macOS" (not "MacOS") for consistency with Apple's branding and technical documentation standards.
🔎 Proposed fix for capitalization
```diff
- Bash(Linux/MacOS)
+ Bash(Linux/macOS)
```

Apply the same change to line 136.
Also applies to: 136-136
108-167: Consider documenting the PICTOPY_HOST environment variable.

The PR objectives mention that security hardening introduced a
PICTOPY_HOSTenvironment variable (default 127.0.0.1) that can be overridden. This information may be useful to document in the setup guide so users understand the security posture and how to override it if needed (e.g., for development scenarios requiring external access).Consider adding a brief note in the backend and sync-microservice sections explaining the
PICTOPY_HOSTvariable, its default behavior, and how to override it if required.docs/Script_Setup_Guide.md (1)
8-8: Minor grammar refinement on the note about Step 8.

Consider rephrasing "please follow that step also" to "please also follow that step" for a more formal tone in technical documentation.
🔎 Suggested improvement
- > Note that the Step No. 8 given below is not mentioned in the video, please follow that step also. + > Note that the Step No. 8 given below is not mentioned in the video, please also follow that step.scripts/setup.sh (1)
93-100: Consider adding error handling for consistency.For consistency with the frontend setup pattern (line 116) and the suggested fix for sync-microservice, consider adding error handling to the backend directory change as well.
🔎 Optional improvement
# ---- Set up the backend ---- echo -e "${YELLOW}Setting up backend...${NC}" -cd ../backend +cd ../backend || { echo -e "${RED}backend directory not found${NC}"; exit 1; } python -m venv .env source .env/bin/activate pip install --upgrade pip pip install -r requirements.txt deactivate cd ..frontend/src/hooks/useFolderOperations.tsx (1)
101-107: Consider the asymmetric user feedback for AI tagging operations.Success notification is now disabled for enabling AI tagging (line 104:
showSuccess: false) but remains active for disabling AI tagging (lines 120-121). This creates inconsistent feedback patterns for related operations.If this is intentional (e.g., the tagging status is immediately visible elsewhere in the UI), the asymmetry is acceptable. Otherwise, consider either:
- Showing success for both operations, or
- Hiding success for both if the status change is self-evident
frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx (1)
125-148: Grid layout mismatch: 3 columns defined but only 2 buttons present.

The grid is configured with
md:grid-cols-3but now contains only 2 buttons after removing the Restart Server button. This may cause uneven spacing on medium+ screens.🔎 Suggested fix
- <div className="grid grid-cols-1 gap-4 md:grid-cols-3"> + <div className="grid grid-cols-1 gap-4 md:grid-cols-2">frontend/src/components/OnboardingSteps/ThemeSelectionStep.tsx (1)
42-46: MissingisEditingin useEffect dependency array.The
useEffectreferencesisEditingbut has an empty dependency array. IfisEditingchanges after mount, the effect won't re-evaluate. This appears intentional (run once on mount), but could cause subtle bugs if the editing state changes while this component is mounted.Consider either adding
isEditingto the dependency array or adding a comment explaining the intentional omission:useEffect(() => { + // Intentionally run only on mount - isEditing is checked but not a dependency if (localStorage.getItem('themeChosen') === 'true' && !isEditing) { dispatch(markCompleted(stepIndex)); } - }, []); + }, [dispatch, stepIndex]); // Or add isEditing if re-evaluation is desiredsync-microservice/app/core/lifespan.py (1)
32-35: Consider making retry parameters configurable.The hardcoded
connection_timeout = 60andretry_interval = 5values are reasonable defaults, but making them configurable via environment variables would improve flexibility for different deployment scenarios.frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx (1)
44-52: Same missing dependency issue as ThemeSelectionStep.

The
useEffectreferencesisEditingbut has an empty dependency array. This is the same pattern asThemeSelectionStep.tsx. If this is intentional, consider adding a comment to document the design decision.frontend/src/components/OnboardingSteps/FolderSetupStep.tsx (2)
38-42: Consider adding `isEditing` to the dependency array.

The
useEffectreadsisEditingbut omits it from the dependency array. IfisEditingchanges after mount (e.g., user navigates back then forward), the effect won't re-evaluate, potentially causing stale behavior.🔎 Proposed fix
useEffect(() => { if (localStorage.getItem('folderChosen') === 'true' && !isEditing) { dispatch(markCompleted(stepIndex)); } - }, []); + }, [isEditing, dispatch, stepIndex]);
14-14: Merge duplicate imports from the same module.

The imports from
@/features/onboardingSliceare split across two statements.🔎 Proposed fix
-import { markCompleted, previousStep } from '@/features/onboardingSlice'; +import { markCompleted, previousStep, setIsEditing } from '@/features/onboardingSlice'; import { AppFeatures } from '@/components/OnboardingSteps/AppFeatures'; import { useFolder } from '@/hooks/useFolder'; import { useEffect, useState } from 'react'; -import { setIsEditing } from '@/features/onboardingSlice';Also applies to: 18-18
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
backend/app/config/settings.pybackend/app/routes/shutdown.pybackend/app/utils/microservice.pybackend/main.pydocs/Manual_Setup_Guide.mddocs/Script_Setup_Guide.mddocs/backend/backend_python/openapi.jsondocs/backend/backend_rust/api.mdfrontend/src-tauri/capabilities/migrated.jsonfrontend/src-tauri/src/main.rsfrontend/src-tauri/src/services/mod.rsfrontend/src-tauri/tauri.conf.jsonfrontend/src/components/OnboardingSteps/AvatarSelectionStep.tsxfrontend/src/components/OnboardingSteps/FolderSetupStep.tsxfrontend/src/components/OnboardingSteps/ServerCheck.tsxfrontend/src/components/OnboardingSteps/ThemeSelectionStep.tsxfrontend/src/config/Backend.tsfrontend/src/features/onboardingSlice.tsfrontend/src/hooks/useFolderOperations.tsxfrontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsxfrontend/src/utils/serverUtils.tsscripts/setup.jsscripts/setup.ps1scripts/setup.shsync-microservice/README.mdsync-microservice/app/config/settings.pysync-microservice/app/core/lifespan.pysync-microservice/app/routes/folders.pysync-microservice/app/routes/health.pysync-microservice/app/routes/shutdown.pysync-microservice/app/routes/watcher.pysync-microservice/main.py
💤 Files with no reviewable changes (1)
- backend/app/utils/microservice.py
✅ Files skipped from review due to trivial changes (2)
- sync-microservice/README.md
- docs/backend/backend_python/openapi.json
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-31T17:00:50.132Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 570
File: backend/app/database/connection.py:16-24
Timestamp: 2025-10-31T17:00:50.132Z
Learning: In PictoPy backend, the user prefers not to use database connection retry logic or extended busy timeouts in the centralized get_db_connection() context manager, even though the app has concurrent access patterns via ProcessPoolExecutor and FastAPI.
Applied to files:
sync-microservice/app/core/lifespan.pyfrontend/src/components/OnboardingSteps/ServerCheck.tsx
📚 Learning: 2025-10-15T07:13:34.502Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/database/images.py:115-115
Timestamp: 2025-10-15T07:13:34.502Z
Learning: In the PictoPy backend and sync-microservice, exception logging using `logger.error(f"...")` without stack traces (i.e., without `exc_info=True` or `logger.exception()`) is acceptable for console logging, as this is a deliberate design decision made with the maintainer for console-only logs.
Applied to files:
sync-microservice/app/core/lifespan.py
🧬 Code graph analysis (11)
frontend/src-tauri/src/main.rs (1)
frontend/src-tauri/src/services/mod.rs (1)
get_resources_folder_path(5-11)
backend/app/config/settings.py (1)
frontend/src/config/Backend.ts (1)
SYNC_MICROSERVICE_URL(2-2)
frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx (1)
frontend/src/app/store.ts (1)
RootState(22-22)
frontend/src/components/OnboardingSteps/ThemeSelectionStep.tsx (2)
frontend/src/app/store.ts (1)
RootState(22-22)frontend/src/features/onboardingSlice.ts (2)
setIsEditing(33-35)previousStep(48-55)
sync-microservice/app/core/lifespan.py (1)
sync-microservice/app/database/folders.py (1)
db_check_database_connection(47-72)
backend/app/routes/shutdown.py (1)
sync-microservice/app/routes/shutdown.py (4)
ShutdownResponse(13-17)_delayed_shutdown(20-29)shutdown(33-67)_handle_shutdown_exception(54-58)
sync-microservice/app/config/settings.py (1)
frontend/src/config/Backend.ts (1)
SYNC_MICROSERVICE_URL(2-2)
sync-microservice/app/routes/shutdown.py (3)
sync-microservice/app/utils/watcher.py (1)
watcher_util_stop_folder_watcher(300-328)sync-microservice/app/logging/setup_logging.py (1)
get_sync_logger(205-217)backend/app/routes/shutdown.py (4)
ShutdownResponse(12-16)_delayed_shutdown(19-28)shutdown(32-58)_handle_shutdown_exception(45-49)
sync-microservice/main.py (2)
sync-microservice/app/core/lifespan.py (1)
lifespan(20-77)sync-microservice/app/routes/shutdown.py (1)
shutdown(33-67)
backend/main.py (2)
backend/app/routes/shutdown.py (1)
shutdown(32-58)sync-microservice/app/routes/shutdown.py (1)
shutdown(33-67)
frontend/src/components/OnboardingSteps/FolderSetupStep.tsx (2)
frontend/src/app/store.ts (1)
RootState(22-22)frontend/src/features/onboardingSlice.ts (2)
setIsEditing(33-35)previousStep(48-55)
🪛 LanguageTool
docs/Script_Setup_Guide.md
[style] ~8-~8: The adverb ‘also’ is commonly used to connect clauses and isn’t usually used at the end of a phrase or before a conjunction. Consider replacing it with a more formal alternative.
Context: ...d in the video, please follow that step also. ### Prerequisites: - [NodeJS](https:...
(ALSO_AS_WELL)
docs/Manual_Setup_Guide.md
[uncategorized] ~122-~122: The operating system from Apple is written “macOS”.
Context: ...cies. Create one using: Bash(Linux/MacOS) ``` python3 -m venv .sync-env...
(MAC_OS)
[uncategorized] ~136-~136: The operating system from Apple is written “macOS”.
Context: ... Virtual Environment:** Bash(Linux/MacOS) ``` source .sync-env/bin/acti...
(MAC_OS)
🪛 markdownlint-cli2 (0.18.1)
docs/Script_Setup_Guide.md
60-60: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
66-66: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
74-74: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
80-80: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
100-100: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Shellcheck (0.11.0)
scripts/setup.sh
[warning] 104-104: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🔇 Additional comments (34)
docs/Manual_Setup_Guide.md (3)
81-81: PowerShell activation script name corrected.The capitalization change from
activate.ps1toActivate.ps1is correct for PowerShell on Windows. This is the actual script name in virtual environments.
101-101: Backend port migration reflected consistently.The port updates (8000 → 52123) are correctly reflected in both the command and the server URL. This aligns with the PR objectives for the backend port migration.
Also applies to: 104-104
108-167: Well-structured new Sync-Microservice setup section.The new section mirrors the backend setup structure effectively, providing parallel Windows/Linux/macOS guidance. The port 52124 and virtual environment naming (
.sync-env) are consistent with the PR changes.Verify that all steps in this new section correspond to the actual sync-microservice implementation (virtual environment location, dependencies file name, port, startup command syntax). You may want to confirm by running through the setup steps on a test environment or reviewing the sync-microservice directory structure.
docs/Script_Setup_Guide.md (4)
53-54: Port specification correctly updated to reflect new backend server port.Backend startup commands now specify
--port 52123consistently across Windows and Linux, aligning with the PR's port reassignment for the backend microservice.Also applies to: 62-62
65-83: New Step 8 (Sync-Microservice startup) properly documents dual-server architecture.The new section clearly separates backend and sync-microservice startup with platform-specific instructions (Windows PowerShell and Linux bash) and the correct port (52124). This effectively communicates the architectural change from a single microservice to independent servers, as outlined in the PR objectives.
53-53: Windows activation script filename capitalization is corrected.The change from
.env\Scripts\activate.ps1to.env\Scripts\Activate.ps1matches the actual PowerShell virtual environment activation script name on Windows systems.
85-85: Step numbering updated consistently after introduction of Step 8.The subsequent steps are correctly renumbered (Frontend app is now step 9, pre-commit setup is now step 10), maintaining logical flow and clarity.
Also applies to: 94-94
frontend/src-tauri/tauri.conf.json (1)
65-65: LGTM! Port migration aligns with PR objectives.The CSP update correctly reflects the new port assignments (backend → 52123, sync microservice → 52124) and maintains localhost-only restrictions, which aligns well with the security hardening goals mentioned in the PR. Both HTTP and WebSocket protocols are properly included for each service.
scripts/setup.ps1 (2)
146-147: LGTM! Improved user feedback.The added success message provides clear feedback that the backend setup completed successfully, consistent with the frontend setup messaging pattern.
153-173: Code structure is valid and follows the established setup pattern correctly.The sync-microservice setup block properly mirrors the backend approach with isolated virtual environment (
.sync-env), dependency installation, and error handling. Both thesync-microservicedirectory andrequirements.txtexist with appropriate FastAPI-based dependencies (FastAPI, uvicorn, Pydantic, etc.).frontend/src/components/OnboardingSteps/ServerCheck.tsx (2)
25-25: Extended retry window accommodates new process startup.Increasing retries from 10 to 60 (1 minute total) aligns with the PR's new process group management, which may take longer to initialize. This provides better resilience for slower systems during onboarding.
35-35: Consistent extended retry for sync microservice.Matching the 60-retry pattern ensures both services have equal time to start, which is appropriate given they're both launched independently in this PR's new architecture.
sync-microservice/app/routes/health.py (1)
6-6: No action required — tags are properly specified at router mounting.The
tagsparameter was moved from router initialization to theinclude_router()call inmain.py(line 41:app.include_router(health.router, tags=["Health"])). This is standard FastAPI practice and ensures the health endpoint is correctly grouped under the "Health" tag in OpenAPI documentation. All other routers follow the same pattern.Likely an incorrect or invalid review comment.
backend/app/config/settings.py (1)
5-5: All sync microservice references are correctly updated to port 52124.Verification confirms no stale references to old ports or API paths remain. The SYNC_MICROSERVICE_URL is properly centralized in settings and correctly referenced in API.py for the watcher restart endpoint.
sync-microservice/app/routes/watcher.py (1)
14-14: Router prefix is correctly applied at mount time.The watcher router is properly mounted at line 42 of
sync-microservice/main.pywithprefix="/watcher"andtags=["Watcher"]. All endpoints (/status,/restart,/stop,/start) are accessible at the correct paths (/watcher/status,/watcher/restart, etc.), and OpenAPI documentation correctly groups them under the "Watcher" tag.frontend/src/config/Backend.ts (1)
1-2: LGTM! Port updates align with PR objectives. The port changes from 8000→52123 and 8001→52124 are consistent with the backend configuration updates and align with the PR's infrastructure changes.
sync-microservice/app/config/settings.py (1)
3-4: LGTM! Configuration changes are consistent across services. The port updates match the frontend configuration and maintain consistency across the application.
frontend/src/utils/serverUtils.ts (3)
18-38: LGTM! Tauri readiness check prevents initialization race conditions. The waitForTauriReady function properly addresses the Linux initialization race mentioned in the PR objectives by polling for Tauri internals with a timeout.
174-214: LGTM! stopServer implements proper graceful→force-kill fallback flow. This function correctly attempts graceful shutdown first via attemptGracefulShutdown(), waits and rechecks server status, and only falls back to forceKillServer() if graceful shutdown fails. The shutdownInProgress flag properly prevents concurrent shutdown attempts.
216-253: LGTM! Well-implemented graceful shutdown with timeout handling. The use of AbortController with timeout and Promise.allSettled to coordinate shutdown of both services is solid. The error handling properly distinguishes abort timeouts from other errors.
sync-microservice/app/routes/folders.py (1)
10-10: No breaking change — the endpoint remains accessible at /folders/status. The router prefix was removed from the APIRouter() definition in folders.py, but the /folders prefix is applied at the app registration level in sync-microservice/main.py (line 43: app.include_router(folders.router, prefix="/folders", ...)). This pattern is consistent across all route modules in the service. Frontend and backend clients already reference /folders/status and will continue to work without changes.
frontend/src-tauri/src/services/mod.rs (1)
4-11: LGTM! The function rename from get_server_path to get_resources_folder_path with the updated path resolution to "resources" is clean and aligns with the PR's refactoring goals. Error handling is properly preserved.
frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx (1)
36-42: LGTM on localStorage-based state initialization. Initializing name and selectedAvatar from localStorage with fallback to empty strings is a clean approach for preserving user input across component re-renders and page refreshes.
sync-microservice/main.py (1)
41-44: LGTM on router mounting. The router inclusions are well-organized with appropriate prefixes and tags. The new shutdown router integration aligns with the PR's graceful shutdown objectives.
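As a sketch of that mounting pattern (the tag names for some routers are assumptions, not verified against the file):
from fastapi import FastAPI
from app.routes import folders, health, shutdown, watcher

app = FastAPI()

# Prefixes and tags live at the mount site, so the route modules stay prefix-free.
app.include_router(health.router, tags=["Health"])
app.include_router(watcher.router, prefix="/watcher", tags=["Watcher"])
app.include_router(folders.router, prefix="/folders", tags=["Folders"])
app.include_router(shutdown.router, tags=["Shutdown"])  # exposes /shutdown at the root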
frontend/src-tauri/src/main.rs (1)
18-27: Clarify the purpose of the differing paths in setup block vs exposed command. The setup block validates "resources/backend" (line 21) while get_resources_folder_path resolves "resources". The setup block logs this path but doesn't use it further. Add a comment to clarify whether the setup intentionally validates a specific subdirectory or if the paths should be aligned.
backend/app/routes/shutdown.py (1)
31-58: Endpoint structure is well-designed. The pattern of scheduling shutdown after the response is sent (via asyncio.create_task) with a done callback for exception handling is correct and consistent with the sync-microservice implementation. The docstring clearly documents the responsibility split between frontend and backend.
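A condensed sketch of that scheduling pattern (simplified and assumption-laden; the actual route adds its documented docstring and the platform handling discussed in later comments):
import asyncio
import logging
import os
import signal

from fastapi import APIRouter
from pydantic import BaseModel

logger = logging.getLogger(__name__)
router = APIRouter()


class ShutdownResponse(BaseModel):
    success: bool
    message: str


async def _delayed_shutdown(delay: float = 0.5) -> None:
    # Let the HTTP response flush before the process goes away
    # (Unix path shown; Windows handling differs, see later comments).
    await asyncio.sleep(delay)
    os.kill(os.getpid(), signal.SIGTERM)


def _log_failure(task: asyncio.Task) -> None:
    if not task.cancelled() and task.exception() is not None:
        logger.error("Shutdown task failed: %s", task.exception())


@router.post("/shutdown", response_model=ShutdownResponse)
async def shutdown() -> ShutdownResponse:
    # Schedule the exit after the response is sent; the callback surfaces errors.
    task = asyncio.create_task(_delayed_shutdown())
    task.add_done_callback(_log_failure)
    return ShutdownResponse(success=True, message="Shutting down")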
frontend/src/features/onboardingSlice.ts (1)
12-12: LGTM! The isEditing state addition follows Redux Toolkit conventions and integrates cleanly with the existing slice structure. The reducer is straightforward and the type is correctly defined.
Also applies to: 21-21, 33-35
backend/main.py (1)
28-28: Router integration and port update look correct. The shutdown router is properly imported and included. The server URL and port are consistently updated to 52123 as per the PR objectives.
Also applies to: 74-74, 134-134
docs/backend/backend_rust/api.md (1)
5-8: Documentation updates align with API rename. The function name change from get_server_path to get_resources_folder_path and the description update are consistent with the codebase changes.
Also applies to: 26-26
frontend/src/components/OnboardingSteps/FolderSetupStep.tsx (1)
65-71: Editing flow logic looks correct. Setting isEditing to true before previousStep() ensures that when returning to a previously-completed step, it renders instead of being auto-skipped. The conditional rendering at line 70 correctly respects this flag.
frontend/src-tauri/capabilities/migrated.json (4)
95-111: LGTM: Clear naming for backend service. The rename from StartServerUnix/Windows to StartBackendUnix/Windows improves clarity now that backend and sync are independent services.
113-134: LGTM: Sync service spawn permissions added. The new StartSyncServiceUnix and StartSyncServiceWindows permissions correctly enable independent spawning of the sync service, consistent with the PR's architectural change.
141-141: The [P] pattern and fallback behavior are correctly implemented. The pattern matching works as expected: '[P]ictoPy_Server' matches the process name PictoPy_Server but not the pkill command itself. The kill commands are properly invoked as a fallback: stopServer() first attempts graceful HTTP shutdown, and only calls forceKillServer() (which executes the pkill commands) after the timeout expires and services remain running.
152-152: PowerShell semicolon chaining works as stated, but kill commands are not fallback-only. The semicolon-separated commands in PowerShell do execute correctly and sequentially regardless of the first command's exit status. However, the code reveals these kill commands are invoked unconditionally (with a "belt-and-suspenders" defensive pattern), not only after graceful shutdown fails. The forceKillServer() function is called:
- Always when stopServer() is invoked
- Again as a fallback if graceful shutdown times out
- Again in the error handler if graceful shutdown encounters an error
The /t /f flags appropriately terminate process trees with forced termination.
Likely an incorrect or invalid review comment.
const resourcesFolderPath = await invoke("get_resources_folder_path");
console.log("Resources folder path:", serverPath);
Variable name mismatch in example.
The variable is renamed to resourcesFolderPath on line 18, but line 19 still references the old serverPath variable, which would cause a ReferenceError if copied.
🔎 Proposed fix
const resourcesFolderPath = await invoke("get_resources_folder_path");
-console.log("Resources folder path:", serverPath);
+console.log("Resources folder path:", resourcesFolderPath);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const resourcesFolderPath = await invoke("get_resources_folder_path"); | |
| console.log("Resources folder path:", serverPath); | |
| const resourcesFolderPath = await invoke("get_resources_folder_path"); | |
| console.log("Resources folder path:", resourcesFolderPath); |
🤖 Prompt for AI Agents
In @docs/backend/backend_rust/api.md around lines 18-19, The example uses
resourcesFolderPath but the console.log incorrectly references serverPath;
update the console.log to log resourcesFolderPath (e.g., console.log("Resources
folder path:", resourcesFolderPath)) so the variable names match and avoid a
ReferenceError when running the snippet.
| } else if (os.platform() === 'linux') { | ||
| // Check if it's Debian-based Linux | ||
| const debianVersionPath = '/etc/debian_version'; | ||
| if (fs.existsSync(debianVersionPath)) { | ||
| // On Debian-based Linux, use the bash script | ||
| command = bashScript; | ||
| args = []; | ||
|
|
||
| // Ensure the bash script is executable; if not, set the execute permission. | ||
| try { | ||
| fs.accessSync(bashScript, fs.constants.X_OK); | ||
| } catch (err) { | ||
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | ||
| fs.chmodSync(bashScript, 0o755); | ||
| } | ||
| } else { | ||
| console.error('Unsupported Linux distribution. This setup script only supports Debian-based Linux distributions.'); | ||
| console.error('Please install system dependencies manually and run the individual setup commands.'); | ||
| process.exit(1); | ||
| } | ||
| } else { | ||
| console.error(`Unsupported operating system: ${os.platform()}`); | ||
| console.error('This setup script only supports Windows and Debian-based Linux distributions.'); | ||
| console.error('Please install system dependencies manually and run the individual setup commands.'); | ||
| process.exit(1); | ||
| } |
Breaking change: macOS support removed.
The refactor replaces the previous Linux/macOS universal branch with Debian-specific logic, causing the script to exit with an error on macOS (lines 39-44). This breaks existing macOS workflows unless platform support was intentionally deprecated.
If macOS is no longer supported, document this breaking change in the PR description, README, and consider a migration guide. Otherwise, restore macOS support by adding a darwin platform branch.
🔎 Proposed fix to restore macOS support
} else if (os.platform() === 'linux') {
// Check if it's Debian-based Linux
const debianVersionPath = '/etc/debian_version';
if (fs.existsSync(debianVersionPath)) {
// On Debian-based Linux, use the bash script
command = bashScript;
args = [];
// Ensure the bash script is executable; if not, set the execute permission.
try {
fs.accessSync(bashScript, fs.constants.X_OK);
} catch (err) {
console.log(`File ${bashScript} is not executable. Setting execute permission...`);
fs.chmodSync(bashScript, 0o755);
}
} else {
console.error('Unsupported Linux distribution. This setup script only supports Debian-based Linux distributions.');
console.error('Please install system dependencies manually and run the individual setup commands.');
process.exit(1);
}
+} else if (os.platform() === 'darwin') {
+ // On macOS, use the bash script
+ command = bashScript;
+ args = [];
+
+ // Ensure the bash script is executable; if not, set the execute permission.
+ try {
+ fs.accessSync(bashScript, fs.constants.X_OK);
+ } catch (err) {
+ console.log(`File ${bashScript} is not executable. Setting execute permission...`);
+ fs.chmodSync(bashScript, 0o755);
+ }
} else {
console.error(`Unsupported operating system: ${os.platform()}`);
- console.error('This setup script only supports Windows and Debian-based Linux distributions.');
+ console.error('This setup script only supports Windows, Debian-based Linux, and macOS.');
console.error('Please install system dependencies manually and run the individual setup commands.');
process.exit(1);
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| } else if (os.platform() === 'linux') { | |
| // Check if it's Debian-based Linux | |
| const debianVersionPath = '/etc/debian_version'; | |
| if (fs.existsSync(debianVersionPath)) { | |
| // On Debian-based Linux, use the bash script | |
| command = bashScript; | |
| args = []; | |
| // Ensure the bash script is executable; if not, set the execute permission. | |
| try { | |
| fs.accessSync(bashScript, fs.constants.X_OK); | |
| } catch (err) { | |
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | |
| fs.chmodSync(bashScript, 0o755); | |
| } | |
| } else { | |
| console.error('Unsupported Linux distribution. This setup script only supports Debian-based Linux distributions.'); | |
| console.error('Please install system dependencies manually and run the individual setup commands.'); | |
| process.exit(1); | |
| } | |
| } else { | |
| console.error(`Unsupported operating system: ${os.platform()}`); | |
| console.error('This setup script only supports Windows and Debian-based Linux distributions.'); | |
| console.error('Please install system dependencies manually and run the individual setup commands.'); | |
| process.exit(1); | |
| } | |
| } else if (os.platform() === 'linux') { | |
| // Check if it's Debian-based Linux | |
| const debianVersionPath = '/etc/debian_version'; | |
| if (fs.existsSync(debianVersionPath)) { | |
| // On Debian-based Linux, use the bash script | |
| command = bashScript; | |
| args = []; | |
| // Ensure the bash script is executable; if not, set the execute permission. | |
| try { | |
| fs.accessSync(bashScript, fs.constants.X_OK); | |
| } catch (err) { | |
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | |
| fs.chmodSync(bashScript, 0o755); | |
| } | |
| } else { | |
| console.error('Unsupported Linux distribution. This setup script only supports Debian-based Linux distributions.'); | |
| console.error('Please install system dependencies manually and run the individual setup commands.'); | |
| process.exit(1); | |
| } | |
| } else if (os.platform() === 'darwin') { | |
| // On macOS, use the bash script | |
| command = bashScript; | |
| args = []; | |
| // Ensure the bash script is executable; if not, set the execute permission. | |
| try { | |
| fs.accessSync(bashScript, fs.constants.X_OK); | |
| } catch (err) { | |
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | |
| fs.chmodSync(bashScript, 0o755); | |
| } | |
| } else { | |
| console.error(`Unsupported operating system: ${os.platform()}`); | |
| console.error('This setup script only supports Windows, Debian-based Linux, and macOS.'); | |
| console.error('Please install system dependencies manually and run the individual setup commands.'); | |
| process.exit(1); | |
| } |
🤖 Prompt for AI Agents
In @scripts/setup.js around lines 19-44, The change removed macOS handling and
now exits for darwin platforms; restore macOS support by adding a branch for
os.platform() === 'darwin' (parallel to the Debian branch) that sets command =
bashScript and args = [], ensures bashScript is executable using
fs.accessSync/fs.chmodSync (same logic as the Debian block), and avoids the
process.exit(1) path for macOS, or explicitly document the removal of macOS
support in the PR/README and provide a migration note if dropping support is
intentional.
| try { | ||
| fs.accessSync(bashScript, fs.constants.X_OK); | ||
| } catch (err) { | ||
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | ||
| fs.chmodSync(bashScript, 0o755); | ||
| } |
Handle potential chmodSync failure.
The try-catch wraps accessSync but not chmodSync (line 32), which can throw if the process lacks permission to modify file attributes. An unhandled exception here would crash the setup script.
🔎 Proposed fix to wrap chmodSync in error handling
// Ensure the bash script is executable; if not, set the execute permission.
try {
fs.accessSync(bashScript, fs.constants.X_OK);
} catch (err) {
console.log(`File ${bashScript} is not executable. Setting execute permission...`);
- fs.chmodSync(bashScript, 0o755);
+ try {
+ fs.chmodSync(bashScript, 0o755);
+ } catch (chmodErr) {
+ console.error(`Failed to set execute permission on ${bashScript}: ${chmodErr.message}`);
+ console.error('Please set execute permission manually: chmod +x scripts/setup.sh');
+ process.exit(1);
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| try { | |
| fs.accessSync(bashScript, fs.constants.X_OK); | |
| } catch (err) { | |
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | |
| fs.chmodSync(bashScript, 0o755); | |
| } | |
| try { | |
| fs.accessSync(bashScript, fs.constants.X_OK); | |
| } catch (err) { | |
| console.log(`File ${bashScript} is not executable. Setting execute permission...`); | |
| try { | |
| fs.chmodSync(bashScript, 0o755); | |
| } catch (chmodErr) { | |
| console.error(`Failed to set execute permission on ${bashScript}: ${chmodErr.message}`); | |
| console.error('Please set execute permission manually: chmod +x scripts/setup.sh'); | |
| process.exit(1); | |
| } | |
| } |
🤖 Prompt for AI Agents
In @scripts/setup.js around lines 28-33, The chmodSync call on bashScript can
throw if permissions are insufficient; wrap the fs.chmodSync(bashScript, 0o755)
in its own try-catch (after the existing accessSync catch) and handle errors
gracefully—catch the error, log a clear message including bashScript and the
caught error, and then either exit process with a non-zero code
(process.exit(1)) or otherwise surface the failure so the setup script doesn't
crash unhandled.
| # ---- Set up the sync-microservice ---- | ||
| echo -e "${YELLOW}Setting up sync-microservice...${NC}" | ||
| cd sync-microservice | ||
| python -m venv .sync-env | ||
| source .sync-env/bin/activate | ||
| pip install --upgrade pip | ||
| pip install -r requirements.txt | ||
| deactivate | ||
| cd .. | ||
|
|
Add error handling for directory change.
The cd sync-microservice command lacks error handling. If the directory doesn't exist, subsequent commands would execute in the wrong directory, leading to silent failures.
🔎 Proposed fix to match frontend pattern
# ---- Set up the sync-microservice ----
echo -e "${YELLOW}Setting up sync-microservice...${NC}"
-cd sync-microservice
+cd sync-microservice || { echo -e "${RED}sync-microservice directory not found${NC}"; exit 1; }
python -m venv .sync-env
source .sync-env/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
deactivate
cd ..
Based on static analysis hints (Shellcheck SC2164).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # ---- Set up the sync-microservice ---- | |
| echo -e "${YELLOW}Setting up sync-microservice...${NC}" | |
| cd sync-microservice | |
| python -m venv .sync-env | |
| source .sync-env/bin/activate | |
| pip install --upgrade pip | |
| pip install -r requirements.txt | |
| deactivate | |
| cd .. | |
| # ---- Set up the sync-microservice ---- | |
| echo -e "${YELLOW}Setting up sync-microservice...${NC}" | |
| cd sync-microservice || { echo -e "${RED}sync-microservice directory not found${NC}"; exit 1; } | |
| python -m venv .sync-env | |
| source .sync-env/bin/activate | |
| pip install --upgrade pip | |
| pip install -r requirements.txt | |
| deactivate | |
| cd .. | |
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 104-104: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🤖 Prompt for AI Agents
In @scripts/setup.sh around lines 102-111, The script currently runs cd
sync-microservice without checking whether the directory change succeeded, which
can cause the venv creation and pip installs to run in the wrong location;
update the block around the cd sync-microservice (the lines that create
.sync-env with python -m venv and install requirements.txt) to verify the cd
succeeded and exit with an error if it didn’t (e.g., replace the bare cd with a
guarded cd or an if ! cd ...; then echo an error and exit 1; fi), mirroring the
error-handling pattern used for the frontend setup so subsequent commands only
run when the directory exists.
| while True: | ||
| attempt += 1 | ||
| elapsed_time = time.time() - start_time | ||
|
|
||
| if db_check_database_connection(): | ||
| logger.info(f"Database connection successful on attempt {attempt}") | ||
| break | ||
|
|
||
| logger.info("Database connection successful") | ||
| if elapsed_time >= connection_timeout: | ||
| logger.error( | ||
| f"Failed to connect to PictoPy database after {attempt} attempts over {elapsed_time:.1f} seconds" | ||
| ) | ||
| logger.error( | ||
| "Make sure the main PictoPy backend is set up and the database exists" | ||
| ) | ||
| raise RuntimeError("Database connection failed after multiple attempts") | ||
|
|
||
| logger.warning( | ||
| f"Database connection attempt {attempt} failed. Retrying in {retry_interval} seconds... ({elapsed_time:.1f}s elapsed)" | ||
| ) | ||
| time.sleep(retry_interval) |
Blocking time.sleep() in async context will freeze the event loop.
The lifespan function is an async context manager, but time.sleep(retry_interval) at Line 57 is a blocking call that will freeze the entire event loop during retries. Use asyncio.sleep() instead.
🔎 Proposed fix
Add the import at the top:
import asyncio
Then replace the blocking sleep:
- time.sleep(retry_interval)
+ await asyncio.sleep(retry_interval)🤖 Prompt for AI Agents
In @sync-microservice/app/core/lifespan.py around lines 37-57, The async
lifespan context is using blocking time.sleep in the retry loop which will
freeze the event loop; replace the blocking call by importing asyncio and using
await asyncio.sleep(retry_interval) inside the while loop in the lifespan
function (where db_check_database_connection() is polled and the retry/warning
logic resides) so the coroutine yields during waits.
… os._exit on Windows), fix shutdown order
Actionable comments posted: 0
🧹 Nitpick comments (1)
backend/main.py (1)
73-75: OpenAPI servers configuration hardcodes localhost while runtime host is configurable. The servers list hardcodes http://localhost:52123, but the runtime host binding now uses PICTOPY_HOST (defaulting to 127.0.0.1). While localhost and 127.0.0.1 typically resolve the same, this could cause confusion if someone sets PICTOPY_HOST=0.0.0.0 and expects the OpenAPI docs to reflect the actual binding.
🔎 Proposed fix to use dynamic host in OpenAPI config
+# Read host at module level for OpenAPI config
+_openapi_host = os.getenv("PICTOPY_HOST", "127.0.0.1")
+
 # Create FastAPI app
 app = FastAPI(
     lifespan=lifespan,
     title="PictoPy",
     description="The API calls to PictoPy are done via HTTP requests. This backend is built using FastAPI.",
     contact={
         "name": "PictoPy Postman Collection",
         "url": "https://www.postman.com/aossie-pictopy/pictopy/overview",
     },
     servers=[
-        {"url": "http://localhost:52123", "description": "Local Development server"}
+        {"url": f"http://{_openapi_host}:52123", "description": "Local Development server"}
     ],
 )
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
backend/app/routes/shutdown.pybackend/main.pyfrontend/src/utils/serverUtils.tssync-microservice/app/routes/shutdown.pysync-microservice/main.py
🚧 Files skipped from review as they are similar to previous changes (1)
- sync-microservice/app/routes/shutdown.py
🧰 Additional context used
🧬 Code graph analysis (3)
sync-microservice/main.py (4)
backend/main.py (1)
lifespan(44-61)sync-microservice/app/core/lifespan.py (1)
lifespan(20-77)backend/app/routes/shutdown.py (1)
shutdown(40-66)sync-microservice/app/routes/shutdown.py (1)
shutdown(41-75)
backend/main.py (2)
backend/app/routes/shutdown.py (1)
shutdown(40-66)sync-microservice/app/routes/shutdown.py (1)
shutdown(41-75)
frontend/src/utils/serverUtils.ts (1)
frontend/src/config/Backend.ts (2)
BACKEND_URL(1-1)SYNC_MICROSERVICE_URL(2-2)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Backend Tests
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
🔇 Additional comments (11)
sync-microservice/main.py (2)
50-64: LGTM! Host configuration now correctly uses the environment variable. The PICTOPY_HOST environment variable is properly read (defaulting to 127.0.0.1) and the host variable is correctly passed to the Config at Line 60. The security warning for non-local bindings is appropriate.
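A minimal sketch of the binding logic under review, assuming the same defaults (the real main.py wires this into its existing Config/Server setup):
import logging
import os

import uvicorn

logger = logging.getLogger(__name__)

host = os.getenv("PICTOPY_HOST", "127.0.0.1")  # secure default: loopback only
if host not in ("127.0.0.1", "localhost"):
    logger.warning("Binding to %s exposes the service beyond localhost", host)

config = uvicorn.Config("main:app", host=host, port=52124)
uvicorn.Server(config).run()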
6-6: Shutdown router integration looks correct. The shutdown router is properly imported and registered without a prefix, exposing the /shutdown endpoint at the root level consistently with the backend.
Also applies to: 44-44
backend/app/routes/shutdown.py (3)
52-61: Exception handling for background task is well-implemented. The callback pattern using add_done_callback properly captures and logs any exceptions from the background shutdown task without blocking the response.
14-18: Response model is clean and appropriate. The ShutdownResponse model provides a clear contract for the shutdown endpoint response.
31-36: Windows shutdown still bypasses cleanup due to the os._exit() limitation. The Unix path correctly uses SIGTERM to allow cleanup handlers to run. On Windows, os._exit(0) is used with a comment explaining that SIGTERM doesn't work reliably with uvicorn subprocesses. This means the lifespan's finally block (e.g., app.state.executor.shutdown(wait=True)) will not execute on Windows.
However, sys.exit(0) won't resolve this since _delayed_shutdown is a background async task: calling sys.exit() from a background task only raises SystemExit in that task and won't exit the process. The current approach is the practical choice given that graceful Windows shutdown via signals is not reliable. The existing comment adequately documents this Windows limitation.
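In outline, the behaviour described above amounts to something like this (a sketch, not the file's exact contents):
import asyncio
import os
import signal
import sys


async def _delayed_shutdown(delay: float = 0.5) -> None:
    # Background task: the /shutdown response is already on its way out.
    await asyncio.sleep(delay)
    if sys.platform == "win32":
        # SIGTERM delivery is unreliable for uvicorn on Windows, so exit hard;
        # note this skips lifespan cleanup such as executor shutdown.
        os._exit(0)
    else:
        # SIGTERM lets uvicorn run its shutdown handlers on Unix.
        os.kill(os.getpid(), signal.SIGTERM)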
backend/main.py (2)
144-158: LGTM! Host configuration correctly uses the environment variable. The PICTOPY_HOST environment variable is properly read and applied to the Config. The security warning for non-local bindings is consistent with the sync-microservice implementation.
28-28: Shutdown router integration is correct. The shutdown router is properly imported and registered, exposing the /shutdown endpoint.
Also applies to: 134-134
frontend/src/utils/serverUtils.ts (4)
18-38: Tauri readiness check is well-implemented. The polling approach with timeout ensures the server startup waits for Tauri internals to be initialized, preventing race conditions during app initialization.
175-215: stopServer implements a proper graceful-then-force shutdown flow. The implementation correctly:
- Checks if services are already stopped
- Attempts graceful shutdown first
- Rechecks after a delay
- Falls back to force kill only if needed
- Handles errors with force kill as last resort
This is the correct pattern for reliable shutdown.
217-254: Graceful shutdown implementation is solid. The AbortController with timeout prevents hanging on unresponsive services. Using Promise.allSettled ensures both shutdown requests are attempted even if one fails. The success check (line 240) returns true if either service responds successfully.
69-76: Good addition of Tauri readiness check before starting services. Waiting for Tauri to be fully initialized prevents race conditions that could occur if invoke is called before Tauri internals are ready.
Actionable comments posted: 1
🤖 Fix all issues with AI Agents
In @docs/stylesheets/extra.css:
- Line 318: The CSS rule has an invalid leading backslash causing the selector
"\ .md-footer-meta__inner.md-grid" to be ignored; open the stylesheet and remove
the backslash so the selector reads ".md-footer-meta__inner.md-grid", then save
and re-run the linter/formatter (Biome) to ensure the rule block is parsed and
the footer layout is restored.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- .github/ISSUE_TEMPLATE/feature.yml
- .github/workflows/pr-check-tests.yml
- CODE_OF_CONDUCT.md
- docs/Manual_Setup_Guide.md
- docs/backend/backend_python/openapi.json
- docs/frontend/screenshots.md
- docs/frontend/state-management.md
- docs/stylesheets/extra.css
- frontend/src-tauri/capabilities/migrated.json
💤 Files with no reviewable changes (3)
- docs/Manual_Setup_Guide.md
- .github/workflows/pr-check-tests.yml
- .github/ISSUE_TEMPLATE/feature.yml
✅ Files skipped from review due to trivial changes (2)
- docs/frontend/state-management.md
- docs/frontend/screenshots.md
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/src-tauri/capabilities/migrated.json
🧰 Additional context used
🪛 Biome (2.1.2)
docs/stylesheets/extra.css
[error] 317-318: Unknown type selector is not allowed.
See MDN web docs for more details.
Consider replacing the unknown type selector with valid one.
(lint/correctness/noUnknownTypeSelector)
🪛 Checkov (3.2.334)
docs/backend/backend_python/openapi.json
[high] 1-2760: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
[high] 1-2760: Ensure that security operations is not empty.
(CKV_OPENAPI_5)
🪛 LanguageTool
CODE_OF_CONDUCT.md
[style] ~32-~32: Try using a synonym here to strengthen your wording.
Context: ...ind - Trolling, insulting or derogatory comments, and personal or political attacks - Pu...
(COMMENT_REMARK)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Backend Tests
🔇 Additional comments (3)
docs/backend/backend_python/openapi.json (1)
1-2760: Verify OpenAPI spec is up-to-date and includes security definitions. This file contains only formatting changes (converting multi-line arrays to single-line format). However, Checkov has flagged that the OpenAPI spec lacks a global security field and has empty security operations.
Given that this PR introduces security-hardening changes (default host binding to 127.0.0.1, new shutdown endpoints), the OpenAPI spec should document these security controls. Additionally, the PR objectives mention new /shutdown endpoints in both backend and sync services, but these are not visible in the provided excerpt.
Please verify:
- Are the newly added /shutdown endpoints documented in the full OpenAPI spec?
- Should a global security field be added to enforce authorization/authentication requirements?
- Are the Checkov warnings (CKV_OPENAPI_4, CKV_OPENAPI_5) newly introduced by this PR, or pre-existing?
docs/stylesheets/extra.css (2)
14-19: LGTM: cosmetic formatting changes. The formatting adjustments have no semantic impact on the CSS.
46-47: LGTM: multi-selector line splits. The multi-line selector formatting is valid CSS and improves readability.
Also applies to: 194-195, 212-217
\ .md-footer-meta__inner.md-grid {
Critical: Invalid CSS syntax with leading backslash.
The backslash before the selector is invalid CSS syntax and will cause the entire rule block (lines 318-325) to be ignored by browsers, breaking the footer layout.
🔎 Proposed fix
-\ .md-footer-meta__inner.md-grid {
+.md-footer-meta__inner.md-grid {
Based on static analysis hints, this is flagged by Biome as an unknown type selector error.
🧰 Tools
🪛 Biome (2.1.2)
[error] 317-318: Unknown type selector is not allowed.
See MDN web docs for more details.
Consider replacing the unknown type selector with valid one.
(lint/correctness/noUnknownTypeSelector)
🤖 Prompt for AI Agents
In @docs/stylesheets/extra.css at line 318, The CSS rule has an invalid leading
backslash causing the selector "\ .md-footer-meta__inner.md-grid" to be ignored;
open the stylesheet and remove the backslash so the selector reads
".md-footer-meta__inner.md-grid", then save and re-run the linter/formatter
(Biome) to ensure the rule block is parsed and the footer layout is restored.
closes #870
Description
This PR significantly refactors the application's startup and shutdown lifecycle to ensure reliability, prevent orphaned processes, and close some critical security vulnerabilities regarding network exposure.
Key Changes
1. Robust Graceful Shutdown
- Refactored microservice.py to launch the sync service in a new process group (CREATE_NEW_PROCESS_GROUP on Windows, setsid on Unix). This guarantees that a force-kill command terminates the entire process tree, leaving no orphans. A sketch of this is shown below.
- Replaced os._exit(0) with os.kill(pid, SIGTERM) in the sync service to allow Uvicorn to perform proper cleanup (close connections, run shutdown handlers).
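A sketch of what that process-group handling can look like (function and variable names are illustrative, not the actual microservice.py):
import os
import signal
import subprocess
import sys


def start_sync_service(cmd: list[str]) -> subprocess.Popen:
    # No --reload wrapper, so popen.pid is the real server process; the new
    # process group lets a later signal reach the whole tree.
    if sys.platform == "win32":
        return subprocess.Popen(cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
    return subprocess.Popen(cmd, preexec_fn=os.setsid)


def stop_sync_service(proc: subprocess.Popen, timeout: float = 5.0) -> None:
    # Ask the whole group to exit, then force-kill if it does not comply.
    if sys.platform == "win32":
        proc.send_signal(signal.CTRL_BREAK_EVENT)
    else:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()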
2. Security Hardening
- Changed the default host binding from 0.0.0.0 (All Interfaces) to 127.0.0.1 (Localhost). This prevents the application from being accessible to other devices on the network by default.
- The host is configurable via the PICTOPY_HOST environment variable and defaults to 127.0.0.1 (Secure).
- It can be overridden (PICTOPY_HOST=0.0.0.0) for specific testing scenarios (VMs/Containers), with a startup warning logged if an insecure binding is detected.
Note
Hot-Reload Disabled for Sync Service: The --reload flag has been intentionally removed from the sync-microservice startup command. This is critical to ensure the PID tracked by the backend is the actual server process, enabling reliable termination. Hot-reloading wrappers interfere with signal propagation. Developers working on sync-microservice must restart the backend manually to apply changes.
Testing
Verified that no orphaned python.exe processes remain after window close.
Graceful_Shutdown_Updated.mp4
Graceful_Shutdown_Debian.mp4
Note
The ShutdownResponse does still exist, not for the onClose listener in frontend, but for the "Restart Server" button in the settings page.
Summary by CodeRabbit
New Features
Dependencies
Chores