
Bug Fix: Graceful Backend Servers Shutdown#890

Open
rohan-pandeyy wants to merge 6 commits into AOSSIE-Org:main from rohan-pandeyy:fix/shut-server-and-sync-on-close

Conversation


@rohan-pandeyy rohan-pandeyy commented Dec 30, 2025

closes #870

Description

This PR significantly refactors the application's startup and shutdown lifecycle to ensure reliability, prevent orphaned processes, and close some critical security vulnerabilities around network exposure.

Key Changes

1. Robust Graceful Shutdown

  • Process Group Isolation: Modified microservice.py to launch the sync service in a new process group (CREATE_NEW_PROCESS_GROUP on Windows, setsid on Unix). This guarantees that a force-kill command terminates the entire process tree, leaving no orphans.
  • Graceful Signal Handling:
    • Replaced os._exit(0) with os.kill(pid, SIGTERM) in the sync service to allow Uvicorn to perform proper cleanup (close connections, run shutdown handlers).
    • Backend now attempts a graceful HTTP shutdown of the sync service before resorting to termination.
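A minimal cross-platform sketch of the launch logic described above (the helper name is illustrative; the actual code lives in microservice.py):

```python
import subprocess
import sys


def launch_in_new_process_group(cmd):
    """Start cmd in its own process group so a later force-kill can
    terminate the whole process tree, not just the direct child."""
    if sys.platform == "win32":
        # Windows: detach the child into a new process group
        return subprocess.Popen(
            cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
        )
    # Unix: start_new_session=True runs setsid() in the child
    return subprocess.Popen(cmd, start_new_session=True)


proc = launch_in_new_process_group(
    [sys.executable, "-c", "print('sync service up')"]
)
proc.wait(timeout=10)
```

Because the child leads its own group/session, `taskkill /T` on Windows or `os.killpg` on Unix reaches every descendant it spawned.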

2. Security Hardening

  • Localhost Binding (Critical): Changed the default host binding for both the Backend and Sync Microservice from 0.0.0.0 (All Interfaces) to 127.0.0.1 (Localhost). This prevents the application from being accessible to other devices on the network by default.
  • Flexible Deployment Support: Introduced the PICTOPY_HOST environment variable.
    • Defaults to 127.0.0.1 (Secure).
    • Can be overridden (e.g., PICTOPY_HOST=0.0.0.0) for specific testing scenarios (VMs/Containers), with a startup warning logged if an insecure binding is detected.
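The binding logic can be sketched as follows (the function name is illustrative; this version also folds in the reviewer's suggestion to treat IPv6 `::1` as a safe loopback):

```python
import logging
import os

logger = logging.getLogger("pictopy")


def resolve_host() -> str:
    """Read the bind host from PICTOPY_HOST, defaulting to localhost,
    and warn when a non-loopback (insecure) binding is requested."""
    host = os.getenv("PICTOPY_HOST", "127.0.0.1")
    if host not in ("127.0.0.1", "localhost", "::1"):
        logger.warning(
            "Binding to %s exposes the service to other devices on the "
            "network; use 127.0.0.1 unless external access is required.",
            host,
        )
    return host
```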

Note

Hot-Reload Disabled for Sync Service: The --reload flag has been intentionally removed from the sync-microservice startup command. This is critical to ensure the PID tracked by the backend is the actual server process, enabling reliable termination. Hot-reloading wrappers interfere with signal propagation. Developers working on sync-microservice must restart the backend manually to apply changes.

Testing

  • Manual Verification: Verified app startup and shutdown on Windows. Confirmed no "orphaned" python.exe processes remain after window close.
Graceful_Shutdown_Updated.mp4
  • Verified app startup and shutdown on Debian 👇
Graceful_Shutdown_Debian.mp4

Note

The ShutdownResponse model still exists, not for the frontend's onCloseListener, but for the "Restart Server" button on the settings page.

Summary by CodeRabbit

  • New Features

    • Added graceful shutdown endpoints for backend and sync services; frontend now triggers coordinated shutdown and improved startup handling.
    • Configurable host binding via environment variable for services.
    • Enhanced shutdown flow with force‑kill fallback for reliability; frontend restart/stop controls exposed.
  • Dependencies

    • Added @tauri-apps/plugin-os.
  • Chores

    • Updated Tauri shell permissions and minor docs/CI formatting refinements.


Introduces a new /shutdown endpoint to the backend for graceful shutdown, including stopping the sync microservice. Refactors sync microservice management to track the process globally, support graceful and forced shutdown via HTTP and process signals, and updates subprocess handling for cross-platform compatibility. Updates main backend entry to include the shutdown route and allow configurable host binding. Frontend and utility code updated to support new shutdown flow.
@github-actions

⚠️ No issue was linked in the PR description.
Please make sure to link an issue (e.g., 'Fixes #issue_number')


coderabbitai bot commented Dec 30, 2025

📝 Walkthrough

Walkthrough

Adds coordinated shutdown support: new /shutdown endpoints in backend and sync services, frontend changes to trigger shutdown on app close (Tauri-aware), tauri capability additions to allow killing processes, and host/config tweaks for service bindings.

Changes

Cohort / File(s) Summary
Backend Shutdown API
backend/app/routes/shutdown.py
New FastAPI POST /shutdown with ShutdownResponse model; schedules _delayed_shutdown() background task that exits the process (os._exit on Windows, SIGTERM on Unix); uses logger and background-task exception callback.
Backend Integration
backend/main.py
Imports & includes shutdown_router in app (included twice in diff); introduces PICTOPY_HOST env var (default 127.0.0.1) and warning for non-local hosts; OpenAPI/Uvicorn host set from env.
Sync Service Shutdown API
sync-microservice/app/routes/shutdown.py
Mirror of backend shutdown: ShutdownResponse, attempt to stop folder watcher, schedules _delayed_shutdown() via asyncio.create_task(), logs errors and status.
Sync Service Integration
sync-microservice/main.py
Registers shutdown router; adds PICTOPY_HOST env handling and warning; uses env-derived host for Uvicorn config.
Frontend: window-close & startup
frontend/src/main.tsx
Replaces earlier stop flow with triggerShutdown on close (Tauri-gated); only starts backend in production Tauri mode; removes early BrowserWarning conditional rendering.
Frontend: server utilities
frontend/src/utils/serverUtils.ts
Large refactor: waitForTauriReady(), triggerShutdown() (idempotent fire-and-forget), attemptGracefulShutdown() with AbortController, stopServer() (graceful→force-kill fallback), forceKillServer() (platform-specific), sleep(), updated startServer()/restartServer(); exported APIs updated. Review for correctness, timeouts, and concurrency guards.
Tauri capabilities & deps
frontend/src-tauri/capabilities/migrated.json, frontend/package.json
Adds shell:allow-execute entries to kill backend/sync on Unix/Windows and adds @tauri-apps/plugin-os ^2.3.2. Check permission names and commands for platform safety.
Docs / CI / Misc
docs/*, .github/*, CODE_OF_CONDUCT.md, docs/stylesheets/extra.css
Mostly formatting/typography/whitespace changes in docs, small workflow template whitespace edits, and Code of Conduct reflow. No behavioral change.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Frontend
    participant Tauri as Tauri API
    participant Backend as Backend /shutdown
    participant Sync as Sync /shutdown
    participant Shell as Shell (tauri shell/OS plugin)

    User->>Frontend: Close window
    Frontend->>Tauri: onCloseRequested (Tauri only)
    Frontend->>Frontend: triggerShutdown() [idempotent]
    alt Graceful shutdown
      Frontend->>Backend: POST /shutdown (with timeout)
      Backend-->>Frontend: ShutdownResponse
      Frontend->>Sync: POST /shutdown (with timeout)
      Sync-->>Frontend: ShutdownResponse
    else Fallback / failures
      Frontend->>Shell: execute pkill/taskkill for server & sync
      Shell-->>Frontend: exec result
    end
    Frontend->>User: exit (app closes)
    note right of Backend: background task sleeps then exits process
    note right of Sync: background task sleeps then exits process

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested labels

frontend

Suggested reviewers

  • rahulharpal1603

Poem

🐰 I hopped in code with nimble paws,

When windows close, I mind the laws.
I whisper "shutdown" — gentle, neat,
No ghost processes left on the street.
Hooray! Both server and sync find sleep.

Pre-merge checks

❌ Failed checks (2 warnings)
  • Out of Scope Changes check (⚠️ Warning): Several out-of-scope changes are present: formatting-only edits to documentation files (CODE_OF_CONDUCT.md, screenshots.md, state-management.md, extra.css), GitHub workflow/template changes, and OpenAPI JSON reformatting unrelated to shutdown functionality. Resolution: remove formatting-only and documentation changes unrelated to graceful shutdown; separate them into a dedicated formatting/documentation PR or commit to keep this PR focused on the shutdown fix.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 65.22%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title 'Bug Fix: Graceful Backend Servers Shutdown' accurately reflects the main change in the PR, which implements graceful shutdown mechanisms for backend and sync services. It is concise and directly addresses the primary objective.
  • Linked Issues check (✅ Passed): The PR successfully addresses the core objective of #870: preventing orphaned backend and sync processes after app closure. Changes include graceful HTTP shutdown endpoints, process group management for reliable termination, and host binding security hardening.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (6)
sync-microservice/main.py (1)

50-60: Security improvement: default binding to localhost.

The environment-driven host configuration with 127.0.0.1 default is a solid security enhancement. Consider also treating ::1 (IPv6 localhost) as safe in the validation check for environments that prefer IPv6.

-    if host not in ("127.0.0.1", "localhost"):
+    if host not in ("127.0.0.1", "localhost", "::1"):
backend/main.py (1)

60-63: Consider stopping sync microservice in lifespan finally block.

The finally block only shuts down the executor. If the application exits without going through the /shutdown endpoint (e.g., SIGTERM directly), the sync microservice may become orphaned. Consider adding microservice_util_stop_sync_service() to the cleanup.

🔎 Proposed enhancement
+from app.utils.microservice import microservice_util_start_sync_service, microservice_util_stop_sync_service

     try:
         yield
     finally:
         app.state.executor.shutdown(wait=True)
+        microservice_util_stop_sync_service(timeout=3.0)
sync-microservice/app/routes/shutdown.py (1)

35-70: Consider adding basic access control to shutdown endpoint.

The /shutdown endpoint has no authentication. While binding to localhost mitigates external exposure, any local process can trigger shutdown. For defense-in-depth, consider adding a simple bearer token or checking for a shared secret from an environment variable.
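One possible shape for such a guard (a sketch only; PICTOPY_SHUTDOWN_TOKEN and the function name are hypothetical, not part of this PR):

```python
import os
from typing import Optional


def shutdown_authorized(auth_header: Optional[str]) -> bool:
    """Permit shutdown only when the caller presents the shared secret
    from the environment; with no token configured, fall back to the
    existing localhost-only trust model."""
    token = os.getenv("PICTOPY_SHUTDOWN_TOKEN")
    if not token:
        return True  # no secret configured: rely on loopback binding
    return auth_header == f"Bearer {token}"
```

A FastAPI route could call this with the incoming `Authorization` header and return 403 on failure.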

backend/app/utils/microservice.py (3)

40-100: Return value doesn't reflect actual success/failure.

The function always returns True regardless of whether the shutdown succeeded or encountered errors. Consider returning False when force-kill is needed or exceptions occur, so callers can react appropriately.

🔎 Proposed fix
+    success = True
+
     # Wait for process to exit if we have a reference
     if _sync_process is not None:
         try:
             # ...
         except subprocess.TimeoutExpired:
             logger.warning(
                 f"Sync microservice did not exit within {wait_timeout}s, force killing..."
             )
             _force_kill_sync_process()
+            success = False

     # Clean up log threads
     cleanup_log_threads()

     logger.info("Sync microservice stopped")
-    return True
+    return success

59-59: Move import time to top of file.

The time module import inside the function works but is unconventional. Moving it to the top-level imports improves readability and follows Python conventions.


126-134: Consider importing signal at module level.

The signal module is imported inside the function. While functional, top-level imports are preferred for consistency and to avoid repeated import overhead if this function is called multiple times.

+import signal  # at top of file
 
 def _force_kill_sync_process():
     # ...
     else:
         # Unix/Linux/macOS - kill process group
-        import signal
         try:
             os.killpg(os.getpgid(pid), signal.SIGKILL)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a91a07d and 3a1a192.

📒 Files selected for processing (8)
  • backend/app/routes/shutdown.py
  • backend/app/utils/microservice.py
  • backend/main.py
  • docs/backend/backend_python/openapi.json
  • frontend/src/main.tsx
  • frontend/src/utils/serverUtils.ts
  • sync-microservice/app/routes/shutdown.py
  • sync-microservice/main.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-01T20:44:19.146Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/utils/microservice.py:53-62
Timestamp: 2025-10-01T20:44:19.146Z
Learning: In backend/app/utils/microservice.py, the stream_logs function uses print() instead of logger to forward sync-microservice output because the sync-microservice already formats its logs using its own logging setup (get_sync_logger and LoggerWriter). Using logger would result in double-prefixing.

Applied to files:

  • backend/app/utils/microservice.py
🧬 Code graph analysis (7)
frontend/src/main.tsx (4)
frontend/src/utils/serverUtils.ts (2)
  • triggerShutdown (46-55)
  • startServer (26-43)
frontend/src/utils/tauriUtils.ts (1)
  • isTauriEnvironment (15-38)
frontend/src/utils/isProd.ts (1)
  • isProd (3-3)
frontend/__mocks__/isProd.ts (1)
  • isProd (1-1)
sync-microservice/app/routes/shutdown.py (3)
sync-microservice/app/utils/watcher.py (1)
  • watcher_util_stop_folder_watcher (300-328)
sync-microservice/app/logging/setup_logging.py (1)
  • get_sync_logger (205-217)
backend/app/routes/shutdown.py (4)
  • ShutdownResponse (15-20)
  • _delayed_shutdown (23-34)
  • shutdown (38-83)
  • _handle_shutdown_exception (69-73)
backend/main.py (2)
backend/app/routes/shutdown.py (1)
  • shutdown (38-83)
sync-microservice/app/routes/shutdown.py (1)
  • shutdown (36-70)
frontend/src/utils/serverUtils.ts (1)
frontend/src/config/Backend.ts (1)
  • BACKEND_URL (1-1)
backend/app/routes/shutdown.py (2)
backend/app/utils/microservice.py (1)
  • microservice_util_stop_sync_service (40-100)
sync-microservice/app/routes/shutdown.py (4)
  • ShutdownResponse (14-18)
  • _delayed_shutdown (21-32)
  • shutdown (36-70)
  • _handle_shutdown_exception (57-61)
backend/app/utils/microservice.py (2)
backend/app/logging/setup_logging.py (2)
  • setup_logging (134-196)
  • get_logger (199-209)
sync-microservice/app/logging/setup_logging.py (2)
  • setup_logging (134-189)
  • get_logger (192-202)
sync-microservice/main.py (4)
backend/main.py (1)
  • lifespan (45-63)
sync-microservice/app/core/lifespan.py (1)
  • lifespan (19-57)
backend/app/routes/shutdown.py (1)
  • shutdown (38-83)
sync-microservice/app/routes/shutdown.py (1)
  • shutdown (36-70)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Backend Tests
  • GitHub Check: Tauri Build Check (ubuntu-22.04)
  • GitHub Check: Tauri Build Check (windows-latest)
  • GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
🔇 Additional comments (22)
sync-microservice/main.py (1)

1-6: LGTM on imports and router addition.

The os import for environment variable access and the shutdown router integration are correctly implemented, enabling the new graceful shutdown capability.

backend/main.py (3)

29-29: LGTM on shutdown router import.

The shutdown router is correctly imported for the new graceful shutdown functionality.


136-136: LGTM on router registration.

The shutdown router is registered without a prefix, making the endpoint accessible at /shutdown which aligns with the frontend integration in serverUtils.ts.


146-156: Consistent host configuration with sync-microservice.

The host binding logic mirrors the sync-microservice implementation. Same suggestion applies: consider including ::1 for IPv6 localhost support.

sync-microservice/app/routes/shutdown.py (1)

14-18: LGTM on response model.

Clean Pydantic model for the shutdown response. Note that this differs from the backend's ShutdownResponse which includes an additional sync_stopped field - this is appropriate since this microservice doesn't manage other services.

frontend/src/main.tsx (3)

6-6: LGTM on import change.

The switch from stopServer to triggerShutdown aligns with the new fire-and-forget shutdown pattern that's appropriate for window close scenarios.


11-26: Window close listener only triggers in production.

The onCloseListener is defined here but only invoked when isProd() is true (line 51). In development mode, window close won't trigger shutdown. This is likely intentional per the comment on line 48-49, but verify this matches the expected developer workflow.


46-58: LGTM on production-only initialization.

The conditional startup logic clearly separates production and development modes. The .catch() on startServer() prevents unhandled promise rejections.

backend/app/routes/shutdown.py (4)

1-12: LGTM on imports and setup.

The imports are well-organized. Using functools.partial for the executor call is the correct approach for passing arguments.


15-20: Response model appropriately extends sync-microservice's model.

The additional sync_stopped field provides valuable feedback to the frontend about whether the sync service was stopped cleanly.


23-34: LGTM on delayed shutdown implementation.

The 0.5s delay (vs 0.1s in sync-microservice) is appropriate here since the backend has more work to coordinate before terminating.


57-66: Good use of executor to avoid blocking.

Running microservice_util_stop_sync_service in an executor prevents blocking the event loop during the synchronous HTTP call and process wait operations. The 3.0s timeout is reasonable for graceful shutdown.

frontend/src/utils/serverUtils.ts (5)

5-10: LGTM on constants and platform detection.

The timeout constants are well-named and the isWindows detection via userAgent is a reasonable approach for the frontend.


14-14: Good: Health check now uses dedicated endpoint.

Changing from root URL to /health is more semantically correct and aligns with the backend's health endpoint definition.


45-55: LGTM on fire-and-forget shutdown trigger.

The keepalive: true option is crucial for ensuring the request completes even when the window is closing. This is the correct pattern for shutdown-on-close scenarios.


123-125: Empty string argument to Command.create may not work as intended.

The second argument to Command.create is an empty string, but this represents the command arguments. If the kill commands need a PID or process name, this won't work. Verify the Tauri sidecar/command configuration handles finding the correct process.

await Command.create(commandName, '').execute();

118-130: The killProcessWindows and killProcessUnix commands are already defined in the Tauri configuration.

Both commands are properly configured in frontend/src-tauri/capabilities/migrated.json with appropriate platform-specific implementations:

  • killProcessUnix: Uses bash with pkill -f PictoPy_Server
  • killProcessWindows: Uses PowerShell with taskkill /t /f /im PictoPy_Server.exe

No action needed.

backend/app/utils/microservice.py (5)

18-22: LGTM on global state for process tracking.

The global _sync_process reference and SYNC_SERVICE_URL constant enable coordinated shutdown across the application lifecycle.


103-140: Robust force-kill implementation.

The platform-specific handling is well-implemented:

  • Windows: taskkill /F /T kills the process tree
  • Unix: os.killpg with SIGKILL terminates the process group
  • Fallback to _sync_process.kill() if process group operations fail

The exception handling for ProcessLookupError and PermissionError covers common edge cases.


262-275: Good: Process group creation for clean termination.

Using CREATE_NEW_PROCESS_GROUP on Windows and start_new_session on Unix ensures the sync microservice runs in its own process group, enabling clean termination of the entire process tree during shutdown.


297-299: LGTM on process reference storage.

Storing the process reference in the global variable enables the shutdown logic to properly track and terminate the sync microservice.


467-481: Consistent process group handling in dev mode.

The same process group creation pattern is applied in _start_fastapi_service, ensuring consistent shutdown behavior between frozen and development modes.

@github-actions github-actions bot added the backend, bug ("Something isn't working"), and build labels on Dec 30, 2025
Enhanced backend and sync microservice shutdown routes to handle Windows-specific process termination and log cleanup. Added @tauri-apps/plugin-os as a dependency and updated frontend to use @tauri-apps/plugin-os for platform detection, improved shutdown logic to prevent race conditions.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
backend/app/routes/shutdown.py (1)

1-14: Minor: Consider consolidating duplicate import from same module.

Lines 7 and 9 both import from app.utils.microservice. Consider combining them:

-from app.utils.microservice import cleanup_log_threads
-from pydantic import BaseModel
-from app.utils.microservice import microservice_util_stop_sync_service
+from pydantic import BaseModel
+from app.utils.microservice import cleanup_log_threads, microservice_util_stop_sync_service
frontend/src/utils/serverUtils.ts (1)

49-82: Potential race condition: shutdownInProgress resets before force kill completes.

In the fallback paths (lines 69-71 and 77-79), forceKillServer() is called but shutdownInProgress is reset immediately after, not after the force kill completes. This could allow a second shutdown attempt while force kill is still in progress.

For a truly fire-and-forget shutdown during window close, this may be acceptable since the window is closing anyway. However, if this function could be called in other contexts where the window remains open, consider:

🔎 Proposed fix
     .then((response) => {
       if (response.ok) {
         console.log('Shutdown request sent successfully');
+        shutdownInProgress = false;
       } else {
         console.error(
           `Shutdown request failed with status: ${response.status}`,
         );
-        forceKillServer().catch((err) =>
-          console.error('Fallback force kill failed:', err),
-        );
+        forceKillServer()
+          .catch((err) => console.error('Fallback force kill failed:', err))
+          .finally(() => { shutdownInProgress = false; });
+        return; // Don't reset below
       }
-      shutdownInProgress = false; // Reset on success or handled failure
     })
     .catch((error) => {
       console.error('Shutdown request failed:', error);
-      forceKillServer().catch((err) =>
-        console.error('Fallback force kill failed:', err),
-      );
-      shutdownInProgress = false; // Reset on error
+      forceKillServer()
+        .catch((err) => console.error('Fallback force kill failed:', err))
+        .finally(() => { shutdownInProgress = false; });
     });
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a1a192 and d2355b3.

⛔ Files ignored due to path filters (1)
  • frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (4)
  • backend/app/routes/shutdown.py
  • frontend/package.json
  • frontend/src/utils/serverUtils.ts
  • sync-microservice/app/routes/shutdown.py
🧰 Additional context used
🧬 Code graph analysis (2)
sync-microservice/app/routes/shutdown.py (3)
sync-microservice/app/utils/watcher.py (1)
  • watcher_util_stop_folder_watcher (300-328)
sync-microservice/app/logging/setup_logging.py (1)
  • get_sync_logger (205-217)
backend/app/routes/shutdown.py (4)
  • ShutdownResponse (17-22)
  • _delayed_shutdown (25-42)
  • shutdown (46-96)
  • _handle_shutdown_exception (77-81)
frontend/src/utils/serverUtils.ts (1)
frontend/src/config/Backend.ts (1)
  • BACKEND_URL (1-1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Tauri Build Check (ubuntu-22.04)
  • GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
  • GitHub Check: Tauri Build Check (windows-latest)
  • GitHub Check: Backend Tests
🔇 Additional comments (13)
frontend/package.json (1)

46-46: LGTM!

The @tauri-apps/plugin-os dependency addition is appropriate for the platform detection needs in the shutdown logic. The version range is consistent with other Tauri plugins in the project.

sync-microservice/app/routes/shutdown.py (4)

1-10: LGTM!

Imports are appropriate and well-organized. The logger setup follows the project's logging conventions.


15-19: LGTM!

The response model is appropriately scoped for the sync microservice's shutdown response. Unlike the backend's ShutdownResponse, it doesn't need the sync_stopped field since this IS the sync service.


22-35: Note: os._exit(0) bypasses lifespan cleanup on Windows.

While os._exit(0) avoids the SIGTERM reliability issues on Windows, it also bypasses Python's normal cleanup (finally blocks, atexit handlers). This means the lifespan context's cleanup in lifespan.py won't execute.

This is mitigated by the explicit watcher_util_stop_folder_watcher() call before scheduling shutdown, which handles the critical cleanup. Just be aware that any future cleanup logic added to lifespan won't run on Windows shutdown.
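The caveat can be demonstrated in isolation (a self-contained repro, not project code): an atexit handler fires on a normal exit but is skipped by os._exit.

```python
import subprocess
import sys

# Child A exits normally via SystemExit: the atexit handler runs.
a = subprocess.run(
    [sys.executable, "-c",
     "import atexit; atexit.register(lambda: print('cleanup ran')); "
     "raise SystemExit(0)"],
    capture_output=True, text=True,
)

# Child B calls os._exit(0): the interpreter terminates immediately,
# skipping atexit handlers, finally blocks, and lifespan cleanup.
b = subprocess.run(
    [sys.executable, "-c",
     "import atexit, os; atexit.register(lambda: print('cleanup ran')); "
     "os._exit(0)"],
    capture_output=True, text=True,
)
```

Child A's stdout contains the cleanup message; child B's stdout is empty, which is why the explicit watcher stop before scheduling shutdown matters.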


38-73: LGTM!

The shutdown endpoint correctly:

  1. Stops the watcher synchronously with proper error handling
  2. Schedules the delayed shutdown as a background task
  3. Attaches an exception callback to avoid unhandled task exceptions
  4. Returns the response before termination occurs
backend/app/routes/shutdown.py (2)

25-42: LGTM with same caveat as sync-microservice.

The implementation correctly handles platform differences. The longer 0.5s delay is appropriate since the backend coordinates the sync service shutdown first. The explicit cleanup_log_threads() call before os._exit(0) on Windows is good defensive programming.


45-96: Well-structured shutdown orchestration.

Good implementation choices:

  • Using run_in_executor prevents blocking the event loop during sync service stop
  • The 3.0s timeout for sync service leaves headroom within the frontend's 5s overall timeout
  • The response clearly communicates the sync service stop outcome
  • Exception callback properly handles background task failures
frontend/src/utils/serverUtils.ts (6)

3-14: LGTM!

Good additions:

  • Platform detection via Tauri plugin
  • Shutdown constants for consistent timeout handling
  • shutdownInProgress flag to prevent race conditions between concurrent shutdown calls

16-28: LGTM!

Using /health endpoint is more appropriate for health checks than the root URL, as it's semantically correct and typically more lightweight.


84-124: LGTM!

Robust multi-phase shutdown:

  1. Early exit if server not running
  2. Attempt graceful HTTP shutdown
  3. Recheck after delay
  4. Force kill as last resort
  5. Proper guard flag management in all paths

126-154: LGTM!

Well-implemented graceful shutdown attempt:

  • Proper timeout handling with AbortController
  • Defensive response body consumption
  • Clear boolean return for caller's decision logic
  • Specific AbortError handling for timeout scenario

170-183: LGTM!

Clean utility functions. The 2-second delay in restartServer provides adequate time for cleanup before starting fresh.


156-168: Tauri shell commands are properly configured.

The killProcessWindows and killProcessUnix commands are defined in frontend/src-tauri/capabilities/migrated.json. Unix uses pkill -f PictoPy_Server and Windows uses taskkill /t /f /im PictoPy_Server.exe, so the implementation is complete and platform-appropriate.

…n reliability

- Add [P] trick to pkill commands to prevent pkill from matching itself
- Add _kill_sync_by_name() function with multiple fallback methods
- Add _final_kill_sync() for last-resort sync termination
- Change from SIGTERM to os._exit(0) for reliable uvicorn termination
- Add sync killing to forceKillServer on both Unix and Windows
- Replace platform() with navigator.userAgent to fix Linux init race
- Add waitForTauriReady() helper for Linux Tauri initialization
- Run forceKillServer in parallel with HTTP shutdown (belt-and-suspenders)
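The [P] trick above works because the bracketed character class matches the plain process name but not the pkill command's own literal pattern text. A quick regex check illustrates this:

```python
import re

# Same pattern the pkill command uses: [P] is a character class
# matching a literal 'P', so targets still match...
pattern = re.compile(r"[P]ictoPy_Sync")
assert pattern.search("PictoPy_Sync --port 8001") is not None

# ...but pkill's own command line contains the literal text
# "[P]ictoPy_Sync", where 'P' is followed by ']', so pkill -f
# never matches (and kills) itself.
assert pattern.search("pkill -9 -f [P]ictoPy_Sync") is None
```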

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Fix all issues with AI Agents 🤖
In @backend/app/utils/microservice.py:
- Around line 117-121: The final unconditional call to _kill_sync_by_name() and
the constant True return should be changed: make the call to
_kill_sync_by_name() conditional only if previous shutdown attempts failed
(track a success flag through the graceful shutdown sequence and call
_kill_sync_by_name() when that flag is False) and update the function return to
reflect the actual success state (return the boolean success flag instead of
always True, or return None only if callers don’t use the result); also update
the logger call (logger.info("Sync microservice stopped")) to run only when
success is True and log an error/ warning when shutdown ultimately failed.

In @frontend/src/utils/serverUtils.ts:
- Around line 11-13: Replace the browser-based navigator check in isWindows with
Tauri's OS API: import the platform function from '@tauri-apps/plugin-os',
change isWindows to async (returning Promise<boolean>), call await platform()
and return whether it equals 'windows' (or use the enum/value provided by the
plugin), and update any callers to await isWindows(). Ensure you remove the
navigator.userAgent usage and add the new import (e.g., import { platform } from
'@tauri-apps/plugin-os').
🧹 Nitpick comments (5)
frontend/src-tauri/capabilities/migrated.json (2)

119-119: Consider graceful termination before force-kill.

The command uses pkill -9 (SIGKILL) which immediately terminates processes without allowing cleanup. Given the PR introduces graceful HTTP shutdown endpoints, consider using pkill -15 (SIGTERM) first, then falling back to pkill -9 if needed:

-          "args": ["-c", "pkill -9 -f '[P]ictoPy_Server'; pkill -9 -f '[P]ictoPy_Sync'"],
+          "args": ["-c", "pkill -15 -f '[P]ictoPy_Server' || pkill -9 -f '[P]ictoPy_Server'; pkill -15 -f '[P]ictoPy_Sync' || pkill -9 -f '[P]ictoPy_Sync'"],

Additionally, the semicolon (;) runs the second command regardless of the first command's success. Consider using && if you want the second kill to run only if the first succeeds, or keep ; if both should always run.


130-130: Consider graceful termination on Windows.

The Windows command uses taskkill /f which forces immediate termination. For consistency with the graceful shutdown approach in the PR, consider attempting termination without /f first:

-          "args": ["taskkill /t /f /im PictoPy_Server.exe; taskkill /t /f /im PictoPy_Sync.exe"],
+          "args": ["taskkill /t /im PictoPy_Server.exe 2>$null; if ($LASTEXITCODE -ne 0) { taskkill /t /f /im PictoPy_Server.exe }; taskkill /t /im PictoPy_Sync.exe 2>$null; if ($LASTEXITCODE -ne 0) { taskkill /t /f /im PictoPy_Sync.exe }"],

This attempts graceful termination first, then falls back to force kill if needed.

backend/app/utils/microservice.py (3)

22-22: Consider environment-based host configuration.

The SYNC_SERVICE_URL is hardcoded to localhost:8001. The PR introduces PICTOPY_HOST environment variable support (default 127.0.0.1) for the servers. Consider using the same environment variable here for consistency:

-SYNC_SERVICE_URL = "http://localhost:8001"
+SYNC_SERVICE_URL = f"http://{os.getenv('PICTOPY_HOST', '127.0.0.1')}:8001"

This ensures the backend client connects to the same host the sync service binds to.


124-162: Consider graceful termination before SIGKILL.

The function uses pkill -9 (SIGKILL) and killall -9, which forcefully terminate processes without allowing cleanup. For better hygiene, consider attempting SIGTERM first:

         else:
-            # Kill by name on Linux/macOS using pkill
-            # Use [P] trick so pkill doesn't match itself
+            # Try SIGTERM first, then SIGKILL
             result = subprocess.run(
-                ["pkill", "-9", "-f", "[P]ictoPy_Sync"],
+                ["pkill", "-15", "-f", "[P]ictoPy_Sync"],
                 capture_output=True,
                 timeout=5,
             )
-            if result.returncode == 0:
-                logger.info("Killed PictoPy_Sync by name using pkill")
-            else:
+            if result.returncode != 0:
+                # Fallback to SIGKILL
+                result = subprocess.run(
+                    ["pkill", "-9", "-f", "[P]ictoPy_Sync"],
+                    capture_output=True,
+                    timeout=5,
+                )
+            if result.returncode == 0:
+                logger.info("Killed PictoPy_Sync by name using pkill")
+            else:
                 # Fallback: use killall (matches by process name, not cmdline)
                 result = subprocess.run(
-                    ["killall", "-9", "PictoPy_Sync"],
+                    ["killall", "-15", "PictoPy_Sync"],
                     capture_output=True,
                     timeout=5,
                 )
-                if result.returncode == 0:
+                if result.returncode != 0:
+                    # Final fallback: SIGKILL via killall
+                    result = subprocess.run(
+                        ["killall", "-9", "PictoPy_Sync"],
+                        capture_output=True,
+                        timeout=5,
+                    )
+                if result.returncode == 0:
                     logger.info("Killed PictoPy_Sync by name using killall")

This is consistent with the graceful shutdown philosophy of the PR.


194-214: Simplify nested error handling.

The deeply nested try-except blocks (lines 194-214) make the control flow difficult to follow. Consider refactoring to use a list of kill methods and iterate through them:

🔎 Proposed refactor for cleaner error handling
# Unix/Linux/macOS - try multiple kill methods
import signal

kill_methods = [
    ("process group", lambda: os.killpg(os.getpgid(pid), signal.SIGKILL)),
    ("direct kill", lambda: _sync_process.kill()),
    ("os.kill", lambda: os.kill(pid, signal.SIGKILL)),
]

success = False
for method_name, kill_func in kill_methods:
    try:
        kill_func()
        logger.info(f"Successfully killed using {method_name}")
        success = True
        break
    except Exception as e:  # ProcessLookupError, PermissionError, OSError, ...
        logger.warning(f"{method_name} failed: {e}")

if not success:
    logger.warning("All direct kill methods failed, trying by name...")
    _kill_sync_by_name()
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2355b3 and f1b73ce.

📒 Files selected for processing (4)
  • backend/app/routes/shutdown.py
  • backend/app/utils/microservice.py
  • frontend/src-tauri/capabilities/migrated.json
  • frontend/src/utils/serverUtils.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/app/routes/shutdown.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-01T20:44:19.146Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/utils/microservice.py:53-62
Timestamp: 2025-10-01T20:44:19.146Z
Learning: In backend/app/utils/microservice.py, the stream_logs function uses print() instead of logger to forward sync-microservice output because the sync-microservice already formats its logs using its own logging setup (get_sync_logger and LoggerWriter). Using logger would result in double-prefixing.

Applied to files:

  • backend/app/utils/microservice.py
🧬 Code graph analysis (2)
frontend/src/utils/serverUtils.ts (1)
frontend/src/config/Backend.ts (1)
  • BACKEND_URL (1-1)
backend/app/utils/microservice.py (2)
backend/app/logging/setup_logging.py (2)
  • setup_logging (134-196)
  • get_logger (199-209)
sync-microservice/app/logging/setup_logging.py (2)
  • setup_logging (134-189)
  • get_logger (192-202)
🔇 Additional comments (13)
frontend/src/utils/serverUtils.ts (9)

5-9: LGTM: Shutdown coordination setup.

The timeout constants and the shutdownInProgress flag properly prevent race conditions between concurrent shutdown attempts.


15-38: LGTM: Robust Tauri readiness check.

The polling approach with timeout and the additional 50ms delay for plugin initialization is a solid solution for the Linux initialization race issues mentioned in the PR objectives.
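The same poll-with-timeout-plus-settle-delay pattern can be sketched in Python (names here are hypothetical, not the frontend's actual API):

```python
import time
from typing import Callable

def wait_for_ready(check: Callable[[], bool], timeout_s: float = 5.0,
                   interval_s: float = 0.05, settle_s: float = 0.05) -> bool:
    """Poll `check()` until it succeeds or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            time.sleep(settle_s)  # brief settle delay for late-initializing plugins
            return True
        time.sleep(interval_s)
    return False

# A check that only passes on its third invocation.
calls = {"n": 0}
def flaky_check() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_ready(flaky_check, timeout_s=1.0))  # True
```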


40-52: LGTM: Health endpoint check.

Using the /health endpoint instead of the root URL is a best practice for health checks.


54-78: LGTM: Startup with Tauri readiness check.

The addition of waitForTauriReady() before server startup properly addresses the Linux initialization race issues mentioned in the PR objectives.


115-155: LGTM: Robust graceful shutdown with fallback.

The shutdown flow properly attempts graceful termination, waits and rechecks, then falls back to force kill if needed. The comprehensive error handling ensures shutdownInProgress is always reset.
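The graceful-then-force cascade described here can be sketched generically in Python; this is an assumption-laden illustration, not the project's actual shutdown code:

```python
import subprocess
import sys
from typing import Callable

def shutdown_service(request_shutdown: Callable[[], bool],
                     proc: subprocess.Popen,
                     wait_s: float = 5.0) -> str:
    """Graceful-first shutdown: HTTP request, then SIGTERM, then SIGKILL."""
    if request_shutdown():
        try:
            proc.wait(timeout=wait_s)
            return "graceful"
        except subprocess.TimeoutExpired:
            pass  # fall through to termination
    proc.terminate()  # SIGTERM (TerminateProcess on Windows)
    try:
        proc.wait(timeout=wait_s)
        return "terminated"
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL as a last resort
        proc.wait()
        return "killed"

# Example: the graceful request fails, so the child gets SIGTERM'd.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
print(shutdown_service(lambda: False, child, wait_s=5.0))  # terminated
```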


157-185: LGTM: Proper timeout handling with AbortController.

The use of AbortController for request timeout and the handling of empty response bodies (line 171) is correct and robust.


187-199: LGTM: Platform-specific force kill.

The implementation correctly delegates to platform-specific Tauri commands defined in the capabilities file.


201-214: LGTM: Cleaner restart flow.

The sleep() utility and its use in restartServer is cleaner and more maintainable than the previous setTimeout-based approach.


80-113: Both concerns are not applicable to this implementation.

  1. Keepalive is supported: The code uses native webview fetch (no @tauri-apps/plugin-http import), which supports the keepalive flag. Tauri's HTTP plugin doesn't document keepalive support, but this code doesn't use it.

  2. Parallel execution is intentional: The immediate forceKillServer() call (line 91) is documented as a belt-and-suspenders approach for the fire-and-forget shutdown triggered by window close. There's a separate attemptGracefulShutdown() function with proper timeout/abort control for other scenarios. The shutdownInProgress flag prevents race conditions. This design is sound.

Likely an incorrect or invalid review comment.

backend/app/utils/microservice.py (4)

254-263: LGTM: Process reference getter.

Simple and correct encapsulation of the global process reference.


352-390: LGTM: Process group creation for robust shutdown.

The process group setup (CREATE_NEW_PROCESS_GROUP on Windows, start_new_session on Unix) and global process tracking properly enable the coordinated shutdown flow described in the PR objectives.
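A minimal sketch of this process-group technique, assuming the platform split described above (helper names are illustrative):

```python
import os
import signal
import subprocess
import sys

def spawn_in_new_group(cmd: list) -> subprocess.Popen:
    """Launch a child in its own process group so the whole tree can be killed."""
    if sys.platform == "win32":
        return subprocess.Popen(
            cmd, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
        )
    # start_new_session=True runs setsid() in the child (POSIX).
    return subprocess.Popen(cmd, start_new_session=True)

def kill_group(proc: subprocess.Popen) -> None:
    """Signal the child's entire process group, not just the child."""
    if sys.platform == "win32":
        proc.send_signal(signal.CTRL_BREAK_EVENT)  # delivered to the whole group
    else:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)

proc = spawn_in_new_group([sys.executable, "-c", "import time; time.sleep(30)"])
kill_group(proc)
proc.wait(timeout=5)
print(proc.returncode)  # negative signal number on POSIX, e.g. -15
```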


557-594: LGTM: Consistent process group handling.

The process group creation and global tracking is consistent with the frozen service startup, ensuring uniform shutdown behavior across development and production modes.


233-252: LGTM: Dual health check strategy.

The function properly checks both the process reference and HTTP health endpoint, providing a robust running state check. The /api/v1/health endpoint is correctly configured and the timeout handling is appropriate.
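The dual-check idea reduces to a short sketch (stub names, not the real microservice client):

```python
import subprocess
import sys
from typing import Callable, Optional

def is_service_running(proc: Optional[subprocess.Popen],
                       http_health_ok: Callable[[], bool]) -> bool:
    """Running means: the tracked process is alive AND the health endpoint answers."""
    process_alive = proc is not None and proc.poll() is None
    # Short-circuit: skip the HTTP round-trip if the process is already gone.
    return process_alive and http_health_ok()

# Example with a live child and stubbed health responses.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(10)"])
print(is_service_running(child, lambda: True))   # True
print(is_service_running(child, lambda: False))  # False: process up, endpoint down
child.kill(); child.wait()
print(is_service_running(child, lambda: True))   # False: process gone
```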

Comment on lines +11 to +13
const isWindows = (): boolean => {
return navigator.userAgent.toLowerCase().includes('windows');
};

⚠️ Potential issue | 🟡 Minor

Replace browser-based platform detection with Tauri's platform API.

The navigator.userAgent detection is browser-based and unreliable in a Tauri desktop application. The PR added @tauri-apps/plugin-os (version 2.3.2) which provides proper platform detection.

🔎 Proposed fix using @tauri-apps/plugin-os
+import { platform } from '@tauri-apps/plugin-os';
+
 const isWindows = (): boolean => {
-  return navigator.userAgent.toLowerCase().includes('windows');
+  return platform() === 'windows';
 };
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-const isWindows = (): boolean => {
-  return navigator.userAgent.toLowerCase().includes('windows');
-};
+import { platform } from '@tauri-apps/plugin-os';
+
+const isWindows = async (): Promise<boolean> => {
+  return (await platform()) === 'windows';
+};
🤖 Prompt for AI Agents
In @frontend/src/utils/serverUtils.ts around lines 11-13, Replace the
browser-based navigator check in isWindows with Tauri's OS API: import the
platform function from '@tauri-apps/plugin-os', change isWindows to async
(returning Promise<boolean>), call await platform() and return whether it equals
'windows' (or use the enum/value provided by the plugin), and update any callers
to await isWindows(). Ensure you remove the navigator.userAgent usage and add
the new import (e.g., import { platform } from '@tauri-apps/plugin-os').

- Services are now independent (backend no longer spawns sync)
- Backend port changed to 52123, sync to 52124
- Updated serverUtils.ts to start/shutdown both services separately
- Removed microservice.py (no longer needed)
- Simplified shutdown.py (each service handles its own shutdown)
- Added kill commands with [P] trick to prevent pkill self-match
- Added waitForTauriReady() to fix Linux initialization race
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Fix all issues with AI Agents 🤖
In @backend/app/routes/shutdown.py:
- Around line 19-28: The delayed shutdown currently calls os._exit(0) in
_delayed_shutdown which forcefully terminates the process and bypasses
FastAPI/uvicorn lifespan cleanup (e.g., the finally block in main.py that runs
app.state.executor.shutdown(wait=True)); replace os._exit(0) with a graceful
termination such as sending SIGTERM to the current PID via os.kill(os.getpid(),
signal.SIGTERM) or calling sys.exit(0) so that uvicorn and the application's
lifespan handlers run and app.state.executor.shutdown(wait=True) executes.

In @backend/main.py:
- Around line 144-158: The code computes host from PICTOPY_HOST and logs a
warning but then hardcodes "0.0.0.0" when building the Config; update the Config
instantiation (symbol Config) to use the computed host variable (host) instead
of the string literal "0.0.0.0" so the effective bind address respects the
PICTOPY_HOST fallback to "127.0.0.1" and the conditional warning remains
accurate; ensure you do not change the default assignment of host and keep the
same log behavior.

In @docs/backend/backend_rust/api.md:
- Around line 18-19: The example uses resourcesFolderPath but the console.log
incorrectly references serverPath; update the console.log to log
resourcesFolderPath (e.g., console.log("Resources folder path:",
resourcesFolderPath)) so the variable names match and avoid a ReferenceError
when running the snippet.

In @frontend/src/utils/serverUtils.ts:
- Around line 119-172: The triggerShutdown function currently calls
forceKillServer() immediately, which prevents graceful shutdown because it kills
services before sending POST shutdown requests; update triggerShutdown to
attempt graceful shutdown first by sending the POSTs to BACKEND_URL and
SYNC_MICROSERVICE_URL, then only call forceKillServer() as a fallback after a
short timeout or if both fetches fail; alternatively replace triggerShutdown's
body to call stopServer() (which already implements graceful→force-kill) so you
reuse that correct flow and remove the premature forceKillServer() invocation.

In @scripts/setup.js:
- Around line 28-33: The chmodSync call on bashScript can throw if permissions
are insufficient; wrap the fs.chmodSync(bashScript, 0o755) in its own try-catch
(after the existing accessSync catch) and handle errors gracefully—catch the
error, log a clear message including bashScript and the caught error, and then
either exit process with a non-zero code (process.exit(1)) or otherwise surface
the failure so the setup script doesn't crash unhandled.
- Around line 19-44: The change removed macOS handling and now exits for darwin
platforms; restore macOS support by adding a branch for os.platform() ===
'darwin' (parallel to the Debian branch) that sets command = bashScript and args
= [], ensures bashScript is executable using fs.accessSync/fs.chmodSync (same
logic as the Debian block), and avoids the process.exit(1) path for macOS, or
explicitly document the removal of macOS support in the PR/README and provide a
migration note if dropping support is intentional.

In @scripts/setup.sh:
- Around line 102-111: The script currently runs cd sync-microservice without
checking whether the directory change succeeded, which can cause the venv
creation and pip installs to run in the wrong location; update the block around
the cd sync-microservice (the lines that create .sync-env with python -m venv
and install requirements.txt) to verify the cd succeeded and exit with an error
if it didn’t (e.g., replace the bare cd with a guarded cd or an if ! cd ...;
then echo an error and exit 1; fi), mirroring the error-handling pattern used
for the frontend setup so subsequent commands only run when the directory
exists.

In @sync-microservice/app/core/lifespan.py:
- Around line 37-57: The async lifespan context is using blocking time.sleep in
the retry loop which will freeze the event loop; replace the blocking call by
importing asyncio and using await asyncio.sleep(retry_interval) inside the while
loop in the lifespan function (where db_check_database_connection() is polled
and the retry/warning logic resides) so the coroutine yields during waits.
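A sketch of the non-blocking retry loop (function and variable names hypothetical; a synchronous DB check is pushed off-thread so the loop stays responsive):

```python
import asyncio
from typing import Callable

async def wait_for_database(check_connection: Callable[[], bool],
                            timeout_s: float = 60.0,
                            retry_interval: float = 5.0) -> bool:
    """Poll a synchronous connection check without freezing the event loop."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout_s
    while loop.time() < deadline:
        # Run the blocking check in a worker thread.
        if await asyncio.to_thread(check_connection):
            return True
        # asyncio.sleep yields to the loop; time.sleep here would block it.
        await asyncio.sleep(retry_interval)
    return False

attempts = {"n": 0}
def fake_check() -> bool:
    attempts["n"] += 1
    return attempts["n"] >= 2

print(asyncio.run(wait_for_database(fake_check, timeout_s=2.0, retry_interval=0.1)))  # True
```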

In @sync-microservice/main.py:
- Around line 50-64: The code reads PICTOPY_HOST into the local variable host
and warns when it is not localhost, but then ignores it by hardcoding
host="0.0.0.0" in the Config; change the Config creation to use the computed
host variable (the one assigned from os.getenv("PICTOPY_HOST", "127.0.0.1"))
instead of the literal "0.0.0.0", so update the Config(...) call (the Config
instantiation near the end of the snippet) to set host=host and keep other
settings the same, preserving the existing logger.warning behavior.
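A minimal sketch of the intended host resolution, assuming the PICTOPY_HOST convention from the PR description (logger name and helper are illustrative):

```python
import logging
import os

logger = logging.getLogger("pictopy")

def resolve_bind_host() -> str:
    """Read PICTOPY_HOST, defaulting to the secure localhost binding."""
    host = os.getenv("PICTOPY_HOST", "127.0.0.1")
    if host not in ("127.0.0.1", "localhost"):
        logger.warning("Insecure binding %s: service reachable from the network", host)
    return host

# The fix amounts to threading this value through to the server config,
# e.g. uvicorn.Config(app, host=resolve_bind_host(), port=52124),
# rather than hardcoding host="0.0.0.0".
os.environ.pop("PICTOPY_HOST", None)
print(resolve_bind_host())  # 127.0.0.1
```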

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/setup.sh (1)

79-84: Remove dead code and undefined variable reference.

The OS_TYPE variable is never defined in this script, making both conditional branches unreachable. Since macOS support was removed and the script now only supports Debian-based Linux (exiting at line 29 for other OSes), these OS-specific checks are unnecessary.

🔎 Proposed fix
 if ! command -v node &> /dev/null; then
     echo -e "${RED}Node.js is not installed. Installing...${NC}"
-    if [ "$OS_TYPE" = "Linux" ]; then
-        curl -fsSL https://deb.nodesource.com/setup_18.x | -E bash -
-        apt-get install -y nodejs
-    elif [ "$OS_TYPE" = "Darwin" ]; then
-        brew install node
-    fi
+    curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
+    sudo apt-get install -y nodejs
 else
     echo -e "${GREEN}Node.js is installed. Version: $(node --version)${NC}"
     echo -e "${GREEN}npm version: $(npm --version)${NC}"

Note: Added sudo prefix to match the Debian package installation pattern used elsewhere in the script.

♻️ Duplicate comments (2)
sync-microservice/app/routes/shutdown.py (1)

20-29: Inconsistency: Code still uses os._exit(0) contrary to PR description.

The PR summary explicitly states: "Replaced os._exit(0) with os.kill(pid, SIGTERM) in the sync service so Uvicorn can run shutdown handlers and close connections." However, this code still uses os._exit(0) at line 29, which immediately terminates the process without running shutdown handlers or allowing graceful cleanup.

Additionally, the past review comment flagged that SIGTERM behaves differently on Windows and may not trigger Uvicorn's graceful shutdown handlers reliably. This concern remains unaddressed.

🔎 Recommended approach for graceful shutdown

If the goal is truly graceful shutdown with cleanup:

+import signal
+
 async def _delayed_shutdown(delay: float = 0.1):
     """
     Shutdown the server after a short delay to allow the response to be sent.
 
     Args:
         delay: Seconds to wait before signaling shutdown (kept minimal)
     """
     await asyncio.sleep(delay)
     logger.info("Exiting sync microservice...")
-    os._exit(0)
+    # Send SIGTERM to allow Uvicorn to run shutdown handlers
+    os.kill(os.getpid(), signal.SIGTERM)

However, note that on Windows, SIGTERM is mapped to TerminateProcess which may not reliably trigger cleanup. Consider platform-specific handling or testing on Windows.

Based on learnings from past review and PR description.

frontend/src/utils/serverUtils.ts (1)

11-13: Replace browser-based platform detection with Tauri's platform API.

The navigator.userAgent detection is browser-based and unreliable in a Tauri desktop application. The PR added @tauri-apps/plugin-os which provides proper platform detection.

🔎 Proposed fix using @tauri-apps/plugin-os
+import { platform } from '@tauri-apps/plugin-os';
+
-const isWindows = (): boolean => {
-  return navigator.userAgent.toLowerCase().includes('windows');
+const isWindows = async (): Promise<boolean> => {
+  return (await platform()) === 'windows';
 };

Then update callers to await:

-  const commandName = isWindows() ? 'killProcessWindows' : 'killProcessUnix';
+  const commandName = (await isWindows()) ? 'killProcessWindows' : 'killProcessUnix';

Based on learnings from past review.

🧹 Nitpick comments (11)
docs/Manual_Setup_Guide.md (2)

122-122: Capitalize "macOS" correctly.

The operating system name should be capitalized as "macOS" (not "MacOS") for consistency with Apple's branding and technical documentation standards.

🔎 Proposed fix for capitalization
-    Bash(Linux/MacOS)
+    Bash(Linux/macOS)

Apply the same change to line 136.

Also applies to: 136-136


108-167: Consider documenting the PICTOPY_HOST environment variable.

The PR objectives mention that security hardening introduced a PICTOPY_HOST environment variable (default 127.0.0.1) that can be overridden. This information may be useful to document in the setup guide so users understand the security posture and how to override it if needed (e.g., for development scenarios requiring external access).

Consider adding a brief note in the backend and sync-microservice sections explaining the PICTOPY_HOST variable, its default behavior, and how to override it if required.

docs/Script_Setup_Guide.md (1)

8-8: Minor grammar refinement on the note about Step 8.

Consider rephrasing "please follow that step also" to "please also follow that step" for more formal tone in technical documentation.

🔎 Suggested improvement
- > Note that the Step No. 8 given below is not mentioned in the video, please follow that step also.
+ > Note that the Step No. 8 given below is not mentioned in the video, please also follow that step.
scripts/setup.sh (1)

93-100: Consider adding error handling for consistency.

For consistency with the frontend setup pattern (line 116) and the suggested fix for sync-microservice, consider adding error handling to the backend directory change as well.

🔎 Optional improvement
 # ---- Set up the backend ----
 echo -e "${YELLOW}Setting up backend...${NC}"
-cd ../backend
+cd ../backend || { echo -e "${RED}backend directory not found${NC}"; exit 1; }
 python -m venv .env
 source .env/bin/activate
 pip install --upgrade pip
 pip install -r requirements.txt
 deactivate
 cd ..
frontend/src/hooks/useFolderOperations.tsx (1)

101-107: Consider the asymmetric user feedback for AI tagging operations.

Success notification is now disabled for enabling AI tagging (line 104: showSuccess: false) but remains active for disabling AI tagging (lines 120-121). This creates inconsistent feedback patterns for related operations.

If this is intentional (e.g., the tagging status is immediately visible elsewhere in the UI), the asymmetry is acceptable. Otherwise, consider either:

  • Showing success for both operations, or
  • Hiding success for both if the status change is self-evident
frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx (1)

125-148: Grid layout mismatch: 3 columns defined but only 2 buttons present.

The grid is configured with md:grid-cols-3 but now contains only 2 buttons after removing the Restart Server button. This may cause uneven spacing on medium+ screens.

🔎 Suggested fix
-        <div className="grid grid-cols-1 gap-4 md:grid-cols-3">
+        <div className="grid grid-cols-1 gap-4 md:grid-cols-2">
frontend/src/components/OnboardingSteps/ThemeSelectionStep.tsx (1)

42-46: Missing isEditing in useEffect dependency array.

The useEffect references isEditing but has an empty dependency array. If isEditing changes after mount, the effect won't re-evaluate. This appears intentional (run once on mount), but could cause subtle bugs if the editing state changes while this component is mounted.

Consider either adding isEditing to the dependency array or adding a comment explaining the intentional omission:

  useEffect(() => {
+   // Intentionally run only on mount - isEditing is checked but not a dependency
    if (localStorage.getItem('themeChosen') === 'true' && !isEditing) {
      dispatch(markCompleted(stepIndex));
    }
- }, []);
+ }, [dispatch, stepIndex]); // Or add isEditing if re-evaluation is desired
sync-microservice/app/core/lifespan.py (1)

32-35: Consider making retry parameters configurable.

The hardcoded connection_timeout = 60 and retry_interval = 5 values are reasonable defaults, but making them configurable via environment variables would improve flexibility for different deployment scenarios.

frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx (1)

44-52: Same missing dependency issue as ThemeSelectionStep.

The useEffect references isEditing but has an empty dependency array. This is the same pattern as ThemeSelectionStep.tsx. If this is intentional, consider adding a comment to document the design decision.

frontend/src/components/OnboardingSteps/FolderSetupStep.tsx (2)

38-42: Consider adding isEditing to dependency array.

The useEffect reads isEditing but omits it from the dependency array. If isEditing changes after mount (e.g., user navigates back then forward), the effect won't re-evaluate, potentially causing stale behavior.

🔎 Proposed fix
   useEffect(() => {
     if (localStorage.getItem('folderChosen') === 'true' && !isEditing) {
       dispatch(markCompleted(stepIndex));
     }
-  }, []);
+  }, [isEditing, dispatch, stepIndex]);

14-14: Merge duplicate imports from the same module.

The imports from @/features/onboardingSlice are split across two statements.

🔎 Proposed fix
-import { markCompleted, previousStep } from '@/features/onboardingSlice';
+import { markCompleted, previousStep, setIsEditing } from '@/features/onboardingSlice';
 import { AppFeatures } from '@/components/OnboardingSteps/AppFeatures';
 import { useFolder } from '@/hooks/useFolder';
 import { useEffect, useState } from 'react';
-import { setIsEditing } from '@/features/onboardingSlice';

Also applies to: 18-18

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f1b73ce and e3fb5cc.

📒 Files selected for processing (32)
  • backend/app/config/settings.py
  • backend/app/routes/shutdown.py
  • backend/app/utils/microservice.py
  • backend/main.py
  • docs/Manual_Setup_Guide.md
  • docs/Script_Setup_Guide.md
  • docs/backend/backend_python/openapi.json
  • docs/backend/backend_rust/api.md
  • frontend/src-tauri/capabilities/migrated.json
  • frontend/src-tauri/src/main.rs
  • frontend/src-tauri/src/services/mod.rs
  • frontend/src-tauri/tauri.conf.json
  • frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx
  • frontend/src/components/OnboardingSteps/FolderSetupStep.tsx
  • frontend/src/components/OnboardingSteps/ServerCheck.tsx
  • frontend/src/components/OnboardingSteps/ThemeSelectionStep.tsx
  • frontend/src/config/Backend.ts
  • frontend/src/features/onboardingSlice.ts
  • frontend/src/hooks/useFolderOperations.tsx
  • frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx
  • frontend/src/utils/serverUtils.ts
  • scripts/setup.js
  • scripts/setup.ps1
  • scripts/setup.sh
  • sync-microservice/README.md
  • sync-microservice/app/config/settings.py
  • sync-microservice/app/core/lifespan.py
  • sync-microservice/app/routes/folders.py
  • sync-microservice/app/routes/health.py
  • sync-microservice/app/routes/shutdown.py
  • sync-microservice/app/routes/watcher.py
  • sync-microservice/main.py
💤 Files with no reviewable changes (1)
  • backend/app/utils/microservice.py
✅ Files skipped from review due to trivial changes (2)
  • sync-microservice/README.md
  • docs/backend/backend_python/openapi.json
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-31T17:00:50.132Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 570
File: backend/app/database/connection.py:16-24
Timestamp: 2025-10-31T17:00:50.132Z
Learning: In PictoPy backend, the user prefers not to use database connection retry logic or extended busy timeouts in the centralized get_db_connection() context manager, even though the app has concurrent access patterns via ProcessPoolExecutor and FastAPI.

Applied to files:

  • sync-microservice/app/core/lifespan.py
  • frontend/src/components/OnboardingSteps/ServerCheck.tsx
📚 Learning: 2025-10-15T07:13:34.502Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/database/images.py:115-115
Timestamp: 2025-10-15T07:13:34.502Z
Learning: In the PictoPy backend and sync-microservice, exception logging using `logger.error(f"...")` without stack traces (i.e., without `exc_info=True` or `logger.exception()`) is acceptable for console logging, as this is a deliberate design decision made with the maintainer for console-only logs.

Applied to files:

  • sync-microservice/app/core/lifespan.py
🧬 Code graph analysis (11)
frontend/src-tauri/src/main.rs (1)
frontend/src-tauri/src/services/mod.rs (1)
  • get_resources_folder_path (5-11)
backend/app/config/settings.py (1)
frontend/src/config/Backend.ts (1)
  • SYNC_MICROSERVICE_URL (2-2)
frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx (1)
frontend/src/app/store.ts (1)
  • RootState (22-22)
frontend/src/components/OnboardingSteps/ThemeSelectionStep.tsx (2)
frontend/src/app/store.ts (1)
  • RootState (22-22)
frontend/src/features/onboardingSlice.ts (2)
  • setIsEditing (33-35)
  • previousStep (48-55)
sync-microservice/app/core/lifespan.py (1)
sync-microservice/app/database/folders.py (1)
  • db_check_database_connection (47-72)
backend/app/routes/shutdown.py (1)
sync-microservice/app/routes/shutdown.py (4)
  • ShutdownResponse (13-17)
  • _delayed_shutdown (20-29)
  • shutdown (33-67)
  • _handle_shutdown_exception (54-58)
sync-microservice/app/config/settings.py (1)
frontend/src/config/Backend.ts (1)
  • SYNC_MICROSERVICE_URL (2-2)
sync-microservice/app/routes/shutdown.py (3)
sync-microservice/app/utils/watcher.py (1)
  • watcher_util_stop_folder_watcher (300-328)
sync-microservice/app/logging/setup_logging.py (1)
  • get_sync_logger (205-217)
backend/app/routes/shutdown.py (4)
  • ShutdownResponse (12-16)
  • _delayed_shutdown (19-28)
  • shutdown (32-58)
  • _handle_shutdown_exception (45-49)
sync-microservice/main.py (2)
sync-microservice/app/core/lifespan.py (1)
  • lifespan (20-77)
sync-microservice/app/routes/shutdown.py (1)
  • shutdown (33-67)
backend/main.py (2)
backend/app/routes/shutdown.py (1)
  • shutdown (32-58)
sync-microservice/app/routes/shutdown.py (1)
  • shutdown (33-67)
frontend/src/components/OnboardingSteps/FolderSetupStep.tsx (2)
frontend/src/app/store.ts (1)
  • RootState (22-22)
frontend/src/features/onboardingSlice.ts (2)
  • setIsEditing (33-35)
  • previousStep (48-55)
🪛 LanguageTool
docs/Script_Setup_Guide.md

[style] ~8-~8: The adverb ‘also’ is commonly used to connect clauses and isn’t usually used at the end of a phrase or before a conjunction. Consider replacing it with a more formal alternative.
Context: ...d in the video, please follow that step also. ### Prerequisites: - [NodeJS](https:...

(ALSO_AS_WELL)

docs/Manual_Setup_Guide.md

[uncategorized] ~122-~122: The operating system from Apple is written “macOS”.
Context: ...cies. Create one using: Bash(Linux/MacOS) ``` python3 -m venv .sync-env...

(MAC_OS)


[uncategorized] ~136-~136: The operating system from Apple is written “macOS”.
Context: ... Virtual Environment:** Bash(Linux/MacOS) ``` source .sync-env/bin/acti...

(MAC_OS)

🪛 markdownlint-cli2 (0.18.1)
docs/Script_Setup_Guide.md

60-60: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


66-66: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


74-74: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


80-80: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


100-100: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Shellcheck (0.11.0)
scripts/setup.sh

[warning] 104-104: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

🔇 Additional comments (34)
docs/Manual_Setup_Guide.md (3)

81-81: PowerShell activation script name corrected.

The capitalization change from activate.ps1 to Activate.ps1 is correct for PowerShell on Windows. This is the actual script name in virtual environments.


101-101: Backend port migration reflected consistently.

The port updates (8000 → 52123) are correctly reflected in both the command and the server URL. This aligns with the PR objectives for the backend port migration.

Also applies to: 104-104


108-167: Well-structured new Sync-Microservice setup section.

The new section mirrors the backend setup structure effectively, providing parallel Windows/Linux/macOS guidance. The port 52124 and virtual environment naming (.sync-env) are consistent with the PR changes.

Verify that all steps in this new section correspond to the actual sync-microservice implementation (virtual environment location, dependencies file name, port, startup command syntax). You may want to confirm by running through the setup steps on a test environment or reviewing the sync-microservice directory structure.

docs/Script_Setup_Guide.md (4)

53-54: Port specification correctly updated to reflect new backend server port.

Backend startup commands now specify --port 52123 consistently across Windows and Linux, aligning with the PR's port reassignment for the backend microservice.

Also applies to: 62-62


65-83: New Step 8 (Sync-Microservice startup) properly documents dual-server architecture.

The new section clearly separates backend and sync-microservice startup with platform-specific instructions (Windows PowerShell and Linux bash) and the correct port (52124). This effectively communicates the architectural change from a single microservice to independent servers, as outlined in the PR objectives.


53-53: Windows activation script filename capitalization is corrected.

The change from .env\Scripts\activate.ps1 to .env\Scripts\Activate.ps1 matches the actual PowerShell virtual environment activation script name on Windows systems.


85-85: Step numbering updated consistently after introduction of Step 8.

The subsequent steps are correctly renumbered (Frontend app is now step 9, pre-commit setup is now step 10), maintaining logical flow and clarity.

Also applies to: 94-94

frontend/src-tauri/tauri.conf.json (1)

65-65: LGTM! Port migration aligns with PR objectives.

The CSP update correctly reflects the new port assignments (backend → 52123, sync microservice → 52124) and maintains localhost-only restrictions, which aligns well with the security hardening goals mentioned in the PR. Both HTTP and WebSocket protocols are properly included for each service.

scripts/setup.ps1 (2)

146-147: LGTM! Improved user feedback.

The added success message provides clear feedback that the backend setup completed successfully, consistent with the frontend setup messaging pattern.


153-173: Code structure is valid and follows the established setup pattern correctly.

The sync-microservice setup block properly mirrors the backend approach with isolated virtual environment (.sync-env), dependency installation, and error handling. Both the sync-microservice directory and requirements.txt exist with appropriate FastAPI-based dependencies (FastAPI, uvicorn, Pydantic, etc.).

frontend/src/components/OnboardingSteps/ServerCheck.tsx (2)

25-25: Extended retry window accommodates new process startup.

Increasing retries from 10 to 60 (1 minute total) aligns with the PR's new process group management, which may take longer to initialize. This provides better resilience for slower systems during onboarding.


35-35: Consistent extended retry for sync microservice.

Matching the 60-retry pattern ensures both services have equal time to start, which is appropriate given they're both launched independently in this PR's new architecture.

sync-microservice/app/routes/health.py (1)

6-6: No action required — tags are properly specified at router mounting.

The tags parameter was moved from router initialization to the include_router() call in main.py (line 41: app.include_router(health.router, tags=["Health"])). This is standard FastAPI practice and ensures the health endpoint is correctly grouped under the "Health" tag in OpenAPI documentation. All other routers follow the same pattern.

Likely an incorrect or invalid review comment.

backend/app/config/settings.py (1)

5-5: All sync microservice references are correctly updated to port 52124.

Verification confirms no stale references to old ports or API paths remain. The SYNC_MICROSERVICE_URL is properly centralized in settings and correctly referenced in API.py for the watcher restart endpoint.

sync-microservice/app/routes/watcher.py (1)

14-14: Router prefix is correctly applied at mount time.

The watcher router is properly mounted at line 42 of sync-microservice/main.py with prefix="/watcher" and tags=["Watcher"]. All endpoints (/status, /restart, /stop, /start) are accessible at the correct paths (/watcher/status, /watcher/restart, etc.), and OpenAPI documentation correctly groups them under the "Watcher" tag.

frontend/src/config/Backend.ts (1)

1-2: LGTM! Port updates align with PR objectives.

The port changes from 8000→52123 and 8001→52124 are consistent with the backend configuration updates and align with the PR's infrastructure changes.

sync-microservice/app/config/settings.py (1)

3-4: LGTM! Configuration changes are consistent across services.

The port updates match the frontend configuration and maintain consistency across the application.

frontend/src/utils/serverUtils.ts (3)

18-38: LGTM! Tauri readiness check prevents initialization race conditions.

The waitForTauriReady function properly addresses the Linux initialization race mentioned in the PR objectives by polling for Tauri internals with a timeout.


174-214: LGTM! stopServer implements proper graceful→force-kill fallback flow.

This function correctly attempts graceful shutdown first via attemptGracefulShutdown(), waits and rechecks server status, and only falls back to forceKillServer() if graceful shutdown fails. The shutdownInProgress flag properly prevents concurrent shutdown attempts.


216-253: LGTM! Well-implemented graceful shutdown with timeout handling.

The use of AbortController with timeout and Promise.allSettled to coordinate shutdown of both services is solid. The error handling properly distinguishes abort timeouts from other errors.

sync-microservice/app/routes/folders.py (1)

10-10: No breaking change — the endpoint remains accessible at /folders/status.

The router prefix was removed from the APIRouter() definition in folders.py, but the /folders prefix is applied at the app registration level in sync-microservice/main.py (line 43: app.include_router(folders.router, prefix="/folders", ...)). This pattern is consistent across all route modules in the service. Frontend and backend clients already reference /folders/status and will continue to work without changes.

frontend/src-tauri/src/services/mod.rs (1)

4-11: LGTM!

The function rename from get_server_path to get_resources_folder_path with the updated path resolution to "resources" is clean and aligns with the PR's refactoring goals. Error handling is properly preserved.

frontend/src/components/OnboardingSteps/AvatarSelectionStep.tsx (1)

36-42: LGTM on localStorage-based state initialization.

Initializing name and selectedAvatar from localStorage with fallback to empty strings is a clean approach for preserving user input across component re-renders and page refreshes.

sync-microservice/main.py (1)

41-44: LGTM on router mounting.

The router inclusions are well-organized with appropriate prefixes and tags. The new shutdown router integration aligns with the PR's graceful shutdown objectives.

frontend/src-tauri/src/main.rs (1)

18-27: Clarify the purpose of the differing paths in setup block vs exposed command.

The setup block validates "resources/backend" (line 21) while get_resources_folder_path resolves "resources". The setup block logs this path but doesn't use it further. Add a comment to clarify whether the setup intentionally validates a specific subdirectory or if the paths should be aligned.

backend/app/routes/shutdown.py (1)

31-58: Endpoint structure is well-designed.

The pattern of scheduling shutdown after the response is sent (via asyncio.create_task) with a done callback for exception handling is correct and consistent with the sync-microservice implementation. The docstring clearly documents the responsibility split between frontend and backend.

frontend/src/features/onboardingSlice.ts (1)

12-12: LGTM!

The isEditing state addition follows Redux Toolkit conventions and integrates cleanly with the existing slice structure. The reducer is straightforward and the type is correctly defined.

Also applies to: 21-21, 33-35

backend/main.py (1)

28-28: Router integration and port update look correct.

The shutdown router is properly imported and included. The server URL and port are consistently updated to 52123 as per the PR objectives.

Also applies to: 74-74, 134-134

docs/backend/backend_rust/api.md (1)

5-8: Documentation updates align with API rename.

The function name change from get_server_path to get_resources_folder_path and the description update are consistent with the codebase changes.

Also applies to: 26-26

frontend/src/components/OnboardingSteps/FolderSetupStep.tsx (1)

65-71: Editing flow logic looks correct.

Setting isEditing to true before previousStep() ensures that when returning to a previously-completed step, it renders instead of being auto-skipped. The conditional rendering at line 70 correctly respects this flag.

frontend/src-tauri/capabilities/migrated.json (4)

95-111: LGTM: Clear naming for backend service.

The rename from StartServerUnix/Windows to StartBackendUnix/Windows improves clarity now that backend and sync are independent services.


113-134: LGTM: Sync service spawn permissions added.

The new StartSyncServiceUnix and StartSyncServiceWindows permissions correctly enable independent spawning of the sync service, consistent with the PR's architectural change.


141-141: The [P] pattern and fallback behavior are correctly implemented.

The pattern matching works as expected: '[P]ictoPy_Server' matches the process name PictoPy_Server but not the pkill command itself. The kill commands are properly invoked as a fallback: stopServer() first attempts graceful HTTP shutdown, and only calls forceKillServer() (which executes the pkill commands) after the timeout expires and services remain running.
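The bracket trick can be sanity-checked with a small regex experiment (Python's `re` is used here as a stand-in for the extended regex that `pgrep`/`pkill -f` apply to full command lines):

```python
import re

# "[P]" is a character class matching a literal "P", so the pattern
# still finds the real process name in a command line...
pattern = re.compile(r"[P]ictoPy_Server")
assert pattern.search("PictoPy_Server --port 52123") is not None

# ...but the pkill command's own command line contains the literal text
# "[P]ictoPy_Server" (brackets included), which the pattern does not
# match, so the pkill process never matches and kills itself.
assert pattern.search("pkill -f [P]ictoPy_Server") is None
```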


152-152: PowerShell semicolon chaining works as stated, but kill commands are not fallback-only.

The semicolon-separated commands in PowerShell do execute correctly and sequentially regardless of the first command's exit status. However, the code reveals these kill commands are invoked unconditionally (with a "belt-and-suspenders" defensive pattern), not only after graceful shutdown fails. The forceKillServer() function is called:

  1. Always when stopServer() is invoked
  2. Again as a fallback if graceful shutdown times out
  3. Again in the error handler if graceful shutdown encounters an error

The /t /f flags appropriately terminate process trees with forced termination.

Likely an incorrect or invalid review comment.

Comment on lines 18 to 19
const resourcesFolderPath = await invoke("get_resources_folder_path");
console.log("Resources folder path:", serverPath);

⚠️ Potential issue | 🟡 Minor

Variable name mismatch in example.

The variable is renamed to resourcesFolderPath on line 18, but line 19 still references the old serverPath variable, which would cause a ReferenceError if copied.

🔎 Proposed fix
 const resourcesFolderPath = await invoke("get_resources_folder_path");
-console.log("Resources folder path:", serverPath);
+console.log("Resources folder path:", resourcesFolderPath);
🤖 Prompt for AI Agents
In @docs/backend/backend_rust/api.md around lines 18-19, The example uses
resourcesFolderPath but the console.log incorrectly references serverPath;
update the console.log to log resourcesFolderPath (e.g., console.log("Resources
folder path:", resourcesFolderPath)) so the variable names match and avoid a
ReferenceError when running the snippet.

Comment on lines 19 to 44
} else if (os.platform() === 'linux') {
// Check if it's Debian-based Linux
const debianVersionPath = '/etc/debian_version';
if (fs.existsSync(debianVersionPath)) {
// On Debian-based Linux, use the bash script
command = bashScript;
args = [];

// Ensure the bash script is executable; if not, set the execute permission.
try {
fs.accessSync(bashScript, fs.constants.X_OK);
} catch (err) {
console.log(`File ${bashScript} is not executable. Setting execute permission...`);
fs.chmodSync(bashScript, 0o755);
}
} else {
console.error('Unsupported Linux distribution. This setup script only supports Debian-based Linux distributions.');
console.error('Please install system dependencies manually and run the individual setup commands.');
process.exit(1);
}
} else {
console.error(`Unsupported operating system: ${os.platform()}`);
console.error('This setup script only supports Windows and Debian-based Linux distributions.');
console.error('Please install system dependencies manually and run the individual setup commands.');
process.exit(1);
}

⚠️ Potential issue | 🟠 Major

Breaking change: macOS support removed.

The refactor replaces the previous Linux/macOS universal branch with Debian-specific logic, causing the script to exit with an error on macOS (lines 39-44). This breaks existing macOS workflows unless platform support was intentionally deprecated.

If macOS is no longer supported, document this breaking change in the PR description, README, and consider a migration guide. Otherwise, restore macOS support by adding a darwin platform branch.

🔎 Proposed fix to restore macOS support
 } else if (os.platform() === 'linux') {
   // Check if it's Debian-based Linux
   const debianVersionPath = '/etc/debian_version';
   if (fs.existsSync(debianVersionPath)) {
     // On Debian-based Linux, use the bash script
     command = bashScript;
     args = [];

     // Ensure the bash script is executable; if not, set the execute permission.
     try {
       fs.accessSync(bashScript, fs.constants.X_OK);
     } catch (err) {
       console.log(`File ${bashScript} is not executable. Setting execute permission...`);
       fs.chmodSync(bashScript, 0o755);
     }
   } else {
     console.error('Unsupported Linux distribution. This setup script only supports Debian-based Linux distributions.');
     console.error('Please install system dependencies manually and run the individual setup commands.');
     process.exit(1);
   }
+} else if (os.platform() === 'darwin') {
+  // On macOS, use the bash script
+  command = bashScript;
+  args = [];
+
+  // Ensure the bash script is executable; if not, set the execute permission.
+  try {
+    fs.accessSync(bashScript, fs.constants.X_OK);
+  } catch (err) {
+    console.log(`File ${bashScript} is not executable. Setting execute permission...`);
+    fs.chmodSync(bashScript, 0o755);
+  }
 } else {
   console.error(`Unsupported operating system: ${os.platform()}`);
-  console.error('This setup script only supports Windows and Debian-based Linux distributions.');
+  console.error('This setup script only supports Windows, Debian-based Linux, and macOS.');
   console.error('Please install system dependencies manually and run the individual setup commands.');
   process.exit(1);
 }
🤖 Prompt for AI Agents
In @scripts/setup.js around lines 19-44, The change removed macOS handling and
now exits for darwin platforms; restore macOS support by adding a branch for
os.platform() === 'darwin' (parallel to the Debian branch) that sets command =
bashScript and args = [], ensures bashScript is executable using
fs.accessSync/fs.chmodSync (same logic as the Debian block), and avoids the
process.exit(1) path for macOS, or explicitly document the removal of macOS
support in the PR/README and provide a migration note if dropping support is
intentional.

Comment on lines 28 to 33
try {
fs.accessSync(bashScript, fs.constants.X_OK);
} catch (err) {
console.log(`File ${bashScript} is not executable. Setting execute permission...`);
fs.chmodSync(bashScript, 0o755);
}

⚠️ Potential issue | 🟡 Minor

Handle potential chmodSync failure.

The try-catch wraps accessSync but not chmodSync (line 32), which can throw if the process lacks permission to modify file attributes. An unhandled exception here would crash the setup script.

🔎 Proposed fix to wrap chmodSync in error handling
     // Ensure the bash script is executable; if not, set the execute permission.
     try {
       fs.accessSync(bashScript, fs.constants.X_OK);
     } catch (err) {
       console.log(`File ${bashScript} is not executable. Setting execute permission...`);
-      fs.chmodSync(bashScript, 0o755);
+      try {
+        fs.chmodSync(bashScript, 0o755);
+      } catch (chmodErr) {
+        console.error(`Failed to set execute permission on ${bashScript}: ${chmodErr.message}`);
+        console.error('Please set execute permission manually: chmod +x scripts/setup.sh');
+        process.exit(1);
+      }
     }
🤖 Prompt for AI Agents
In @scripts/setup.js around lines 28-33, The chmodSync call on bashScript can
throw if permissions are insufficient; wrap the fs.chmodSync(bashScript, 0o755)
in its own try-catch (after the existing accessSync catch) and handle errors
gracefully—catch the error, log a clear message including bashScript and the
caught error, and then either exit process with a non-zero code
(process.exit(1)) or otherwise surface the failure so the setup script doesn't
crash unhandled.

Comment on lines 102 to 111
# ---- Set up the sync-microservice ----
echo -e "${YELLOW}Setting up sync-microservice...${NC}"
cd sync-microservice
python -m venv .sync-env
source .sync-env/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
deactivate
cd ..


⚠️ Potential issue | 🟡 Minor

Add error handling for directory change.

The cd sync-microservice command lacks error handling. If the directory doesn't exist, subsequent commands would execute in the wrong directory, leading to silent failures.

🔎 Proposed fix to match frontend pattern
 # ---- Set up the sync-microservice ----
 echo -e "${YELLOW}Setting up sync-microservice...${NC}"
-cd sync-microservice
+cd sync-microservice || { echo -e "${RED}sync-microservice directory not found${NC}"; exit 1; }
 python -m venv .sync-env
 source .sync-env/bin/activate
 pip install --upgrade pip
 pip install -r requirements.txt
 deactivate
 cd ..

Based on static analysis hints (Shellcheck SC2164).

🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 104-104: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

🤖 Prompt for AI Agents
In @scripts/setup.sh around lines 102-111, The script currently runs cd
sync-microservice without checking whether the directory change succeeded, which
can cause the venv creation and pip installs to run in the wrong location;
update the block around the cd sync-microservice (the lines that create
.sync-env with python -m venv and install requirements.txt) to verify the cd
succeeded and exit with an error if it didn’t (e.g., replace the bare cd with a
guarded cd or an if ! cd ...; then echo an error and exit 1; fi), mirroring the
error-handling pattern used for the frontend setup so subsequent commands only
run when the directory exists.

Comment on lines 37 to 57
while True:
attempt += 1
elapsed_time = time.time() - start_time

if db_check_database_connection():
logger.info(f"Database connection successful on attempt {attempt}")
break

logger.info("Database connection successful")
if elapsed_time >= connection_timeout:
logger.error(
f"Failed to connect to PictoPy database after {attempt} attempts over {elapsed_time:.1f} seconds"
)
logger.error(
"Make sure the main PictoPy backend is set up and the database exists"
)
raise RuntimeError("Database connection failed after multiple attempts")

logger.warning(
f"Database connection attempt {attempt} failed. Retrying in {retry_interval} seconds... ({elapsed_time:.1f}s elapsed)"
)
time.sleep(retry_interval)

⚠️ Potential issue | 🟠 Major

Blocking time.sleep() in async context will freeze the event loop.

The lifespan function is an async context manager, but time.sleep(retry_interval) at Line 57 is a blocking call that will freeze the entire event loop during retries. Use asyncio.sleep() instead.

🔎 Proposed fix

Add the import at the top:

import asyncio

Then replace the blocking sleep:

-            time.sleep(retry_interval)
+            await asyncio.sleep(retry_interval)
🤖 Prompt for AI Agents
In @sync-microservice/app/core/lifespan.py around lines 37-57, The async
lifespan context is using blocking time.sleep in the retry loop which will
freeze the event loop; replace the blocking call by importing asyncio and using
await asyncio.sleep(retry_interval) inside the while loop in the lifespan
function (where db_check_database_connection() is polled and the retry/warning
logic resides) so the coroutine yields during waits.
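For illustration, a self-contained sketch of the non-blocking retry loop (with `check_connection` standing in for `db_check_database_connection`, and the timeout/interval made into parameters so the example stays short):

```python
import asyncio
import logging
import time

logger = logging.getLogger(__name__)

async def wait_for_database(check_connection, connection_timeout=30.0, retry_interval=1.0):
    """Poll check_connection() until it succeeds, yielding to the event loop between tries."""
    start_time = time.time()
    attempt = 0
    while True:
        attempt += 1
        elapsed_time = time.time() - start_time
        if check_connection():
            logger.info(f"Database connection successful on attempt {attempt}")
            return attempt
        if elapsed_time >= connection_timeout:
            raise RuntimeError("Database connection failed after multiple attempts")
        logger.warning(f"Attempt {attempt} failed; retrying in {retry_interval}s")
        # Non-blocking sleep: other coroutines keep running while we wait.
        await asyncio.sleep(retry_interval)
```

Because the wait is awaited rather than blocking, health checks and other requests keep being served during startup retries.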


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
backend/main.py (1)

73-75: OpenAPI servers configuration hardcodes localhost while runtime host is configurable.

The servers list hardcodes http://localhost:52123, but the runtime host binding now uses PICTOPY_HOST (defaulting to 127.0.0.1). While localhost and 127.0.0.1 typically resolve the same, this could cause confusion if someone sets PICTOPY_HOST=0.0.0.0 and expects the OpenAPI docs to reflect the actual binding.

🔎 Proposed fix to use dynamic host in OpenAPI config
+# Read host at module level for OpenAPI config
+_openapi_host = os.getenv("PICTOPY_HOST", "127.0.0.1")
+
 # Create FastAPI app
 app = FastAPI(
     lifespan=lifespan,
     title="PictoPy",
     description="The API calls to PictoPy are done via HTTP requests. This backend is built using FastAPI.",
     contact={
         "name": "PictoPy Postman Collection",
         "url": "https://www.postman.com/aossie-pictopy/pictopy/overview",
     },
     servers=[
-        {"url": "http://localhost:52123", "description": "Local Development server"}
+        {"url": f"http://{_openapi_host}:52123", "description": "Local Development server"}
     ],
 )
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e3fb5cc and cff9704.

📒 Files selected for processing (5)
  • backend/app/routes/shutdown.py
  • backend/main.py
  • frontend/src/utils/serverUtils.ts
  • sync-microservice/app/routes/shutdown.py
  • sync-microservice/main.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • sync-microservice/app/routes/shutdown.py
🧰 Additional context used
🧬 Code graph analysis (3)
sync-microservice/main.py (4)
backend/main.py (1)
  • lifespan (44-61)
sync-microservice/app/core/lifespan.py (1)
  • lifespan (20-77)
backend/app/routes/shutdown.py (1)
  • shutdown (40-66)
sync-microservice/app/routes/shutdown.py (1)
  • shutdown (41-75)
backend/main.py (2)
backend/app/routes/shutdown.py (1)
  • shutdown (40-66)
sync-microservice/app/routes/shutdown.py (1)
  • shutdown (41-75)
frontend/src/utils/serverUtils.ts (1)
frontend/src/config/Backend.ts (2)
  • BACKEND_URL (1-1)
  • SYNC_MICROSERVICE_URL (2-2)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Backend Tests
  • GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
  • GitHub Check: Tauri Build Check (windows-latest)
  • GitHub Check: Tauri Build Check (ubuntu-22.04)
🔇 Additional comments (11)
sync-microservice/main.py (2)

50-64: LGTM! Host configuration now correctly uses the environment variable.

The PICTOPY_HOST environment variable is properly read (defaulting to 127.0.0.1) and the host variable is correctly passed to the Config at Line 60. The security warning for non-local bindings is appropriate.
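As a rough sketch of that pattern (the helper name `resolve_bind_host` is invented here for illustration; only the `PICTOPY_HOST` variable and the `127.0.0.1` default come from the PR):

```python
import logging
import os

logger = logging.getLogger(__name__)

def resolve_bind_host(default: str = "127.0.0.1") -> str:
    """Read PICTOPY_HOST, defaulting to localhost-only; warn on wide bindings."""
    host = os.getenv("PICTOPY_HOST", default)
    if host not in ("127.0.0.1", "localhost"):
        # e.g. PICTOPY_HOST=0.0.0.0 exposes the API to every network interface.
        logger.warning("Binding to %s exposes the service beyond localhost", host)
    return host
```

The returned value would then be passed to uvicorn's `Config(host=...)`, keeping the secure default while still allowing container/VM overrides.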


6-6: Shutdown router integration looks correct.

The shutdown router is properly imported and registered without a prefix, exposing the /shutdown endpoint at the root level consistently with the backend.

Also applies to: 44-44

backend/app/routes/shutdown.py (3)

52-61: Exception handling for background task is well-implemented.

The callback pattern using add_done_callback properly captures and logs any exceptions from the background shutdown task without blocking the response.
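The pattern in miniature (the `captured` list is added here purely to make the behavior observable; real code would only log):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)
captured = []

def _log_task_exception(task: asyncio.Task) -> None:
    # Done-callback: runs after the task completes, so exceptions are never lost
    # and no "Task exception was never retrieved" warning is emitted.
    exc = task.exception()
    if exc is not None:
        captured.append(exc)
        logger.error("Background shutdown task failed: %s", exc)

async def handle_shutdown_request() -> str:
    async def _delayed_shutdown() -> None:
        await asyncio.sleep(0)  # yield so the HTTP response can be sent first
        raise RuntimeError("simulated shutdown failure")

    task = asyncio.create_task(_delayed_shutdown())
    task.add_done_callback(_log_task_exception)
    return "shutting down"  # response is returned immediately; task runs on

async def main() -> None:
    print(await handle_shutdown_request())
    await asyncio.sleep(0.05)  # give the background task time to finish

asyncio.run(main())
```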


14-18: Response model is clean and appropriate.

The ShutdownResponse model provides a clear contract for the shutdown endpoint response.


31-36: Windows shutdown still bypasses cleanup due to os._exit() limitation.

The Unix path correctly uses SIGTERM to allow cleanup handlers to run. On Windows, os._exit(0) is used with a comment explaining that SIGTERM doesn't work reliably with uvicorn subprocesses. This means the lifespan's finally block (e.g., app.state.executor.shutdown(wait=True)) will not execute on Windows.

However, sys.exit(0) won't resolve this since _delayed_shutdown is a background async task—calling sys.exit() from a background task only raises SystemExit in that task and won't exit the process. The current approach is the practical choice given that graceful Windows shutdown via signals is not reliable. The existing comment adequately documents this Windows limitation.
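The platform split can be sketched as follows (`dry_run` is an artificial parameter added so the branch choice can be demonstrated without actually terminating the process; the real endpoints branch on the platform directly):

```python
import os
import signal
import sys

def shutdown_self(platform: str = sys.platform, dry_run: bool = False) -> str:
    """Pick the termination mechanism per platform; return its name."""
    if platform == "win32":
        # Hard exit: skips atexit handlers and the lifespan's finally block,
        # but avoids unreliable SIGTERM delivery to uvicorn on Windows.
        if not dry_run:
            os._exit(0)
        return "os._exit"
    # On Unix, SIGTERM lets uvicorn intercept the signal, run its shutdown
    # handlers, and close open connections before the process exits.
    if not dry_run:
        os.kill(os.getpid(), signal.SIGTERM)
    return "SIGTERM"
```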

backend/main.py (2)

144-158: LGTM! Host configuration correctly uses the environment variable.

The PICTOPY_HOST environment variable is properly read and applied to the Config. The security warning for non-local bindings is consistent with the sync-microservice implementation.


28-28: Shutdown router integration is correct.

The shutdown router is properly imported and registered, exposing the /shutdown endpoint.

Also applies to: 134-134

frontend/src/utils/serverUtils.ts (4)

18-38: Tauri readiness check is well-implemented.

The polling approach with timeout ensures the server startup waits for Tauri internals to be initialized, preventing race conditions during app initialization.


175-215: stopServer implements a proper graceful-then-force shutdown flow.

The implementation correctly:

  1. Checks if services are already stopped
  2. Attempts graceful shutdown first
  3. Rechecks after a delay
  4. Falls back to force kill only if needed
  5. Handles errors with force kill as last resort

This is the correct pattern for reliable shutdown.


217-254: Graceful shutdown implementation is solid.

The AbortController with timeout prevents hanging on unresponsive services. Using Promise.allSettled ensures both shutdown requests are attempted even if one fails. The success check (line 240) returns true if either service responds successfully.


69-76: Good addition of Tauri readiness check before starting services.

Waiting for Tauri to be fully initialized prevents race conditions that could occur if invoke is called before Tauri internals are ready.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI Agents
In @docs/stylesheets/extra.css:
- Line 318: The CSS rule has an invalid leading backslash causing the selector
"\ .md-footer-meta__inner.md-grid" to be ignored; open the stylesheet and remove
the backslash so the selector reads ".md-footer-meta__inner.md-grid", then save
and re-run the linter/formatter (Biome) to ensure the rule block is parsed and
the footer layout is restored.
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cff9704 and 709e27d.

📒 Files selected for processing (9)
  • .github/ISSUE_TEMPLATE/feature.yml
  • .github/workflows/pr-check-tests.yml
  • CODE_OF_CONDUCT.md
  • docs/Manual_Setup_Guide.md
  • docs/backend/backend_python/openapi.json
  • docs/frontend/screenshots.md
  • docs/frontend/state-management.md
  • docs/stylesheets/extra.css
  • frontend/src-tauri/capabilities/migrated.json
💤 Files with no reviewable changes (3)
  • docs/Manual_Setup_Guide.md
  • .github/workflows/pr-check-tests.yml
  • .github/ISSUE_TEMPLATE/feature.yml
✅ Files skipped from review due to trivial changes (2)
  • docs/frontend/state-management.md
  • docs/frontend/screenshots.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/src-tauri/capabilities/migrated.json
🧰 Additional context used
🪛 Biome (2.1.2)
docs/stylesheets/extra.css

[error] 317-318: Unknown type selector is not allowed.

See MDN web docs for more details.
Consider replacing the unknown type selector with valid one.

(lint/correctness/noUnknownTypeSelector)

🪛 Checkov (3.2.334)
docs/backend/backend_python/openapi.json

[high] 1-2760: Ensure that the global security field has rules defined

(CKV_OPENAPI_4)


[high] 1-2760: Ensure that security operations is not empty.

(CKV_OPENAPI_5)

🪛 LanguageTool
CODE_OF_CONDUCT.md

[style] ~32-~32: Try using a synonym here to strengthen your wording.
Context: ...ind - Trolling, insulting or derogatory comments, and personal or political attacks - Pu...

(COMMENT_REMARK)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Tauri Build Check (ubuntu-22.04)
  • GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
  • GitHub Check: Tauri Build Check (windows-latest)
  • GitHub Check: Backend Tests
🔇 Additional comments (3)
docs/backend/backend_python/openapi.json (1)

1-2760: Verify OpenAPI spec is up-to-date and includes security definitions.

This file contains only formatting changes (converting multi-line arrays to single-line format). However, Checkov has flagged that the OpenAPI spec lacks a global security field and has empty securityOperations.

Given that this PR introduces security-hardening changes (default host binding to 127.0.0.1, new shutdown endpoints), the OpenAPI spec should document these security controls. Additionally, the PR objectives mention new /shutdown endpoints in both backend and sync services, but these are not visible in the provided excerpt.

Please verify:

  1. Are the newly added /shutdown endpoints documented in the full OpenAPI spec?
  2. Should a global security field be added to enforce authorization/authentication requirements?
  3. Are the Checkov warnings (CKV_OPENAPI_4, CKV_OPENAPI_5) newly introduced by this PR, or pre-existing?
docs/stylesheets/extra.css (2)

14-19: LGTM: cosmetic formatting changes.

The formatting adjustments have no semantic impact on the CSS.


46-47: LGTM: multi-selector line splits.

The multi-line selector formatting is valid CSS and improves readability.

Also applies to: 194-195, 212-217


\ .md-footer-meta__inner.md-grid {

⚠️ Potential issue | 🔴 Critical

Critical: Invalid CSS syntax with leading backslash.

The backslash before the selector is invalid CSS syntax and will cause the entire rule block (lines 318-325) to be ignored by browsers, breaking the footer layout.

🔎 Proposed fix
-\ .md-footer-meta__inner.md-grid {
+.md-footer-meta__inner.md-grid {

Based on static analysis hints, this is flagged by Biome as an unknown type selector error.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
\ .md-footer-meta__inner.md-grid {
.md-footer-meta__inner.md-grid {
🧰 Tools
🪛 Biome (2.1.2)

[error] 317-318: Unknown type selector is not allowed.

See MDN web docs for more details.
Consider replacing the unknown type selector with valid one.

(lint/correctness/noUnknownTypeSelector)

🤖 Prompt for AI Agents
In @docs/stylesheets/extra.css at line 318, The CSS rule has an invalid leading
backslash causing the selector "\ .md-footer-meta__inner.md-grid" to be ignored;
open the stylesheet and remove the backslash so the selector reads
".md-footer-meta__inner.md-grid", then save and re-run the linter/formatter
(Biome) to ensure the rule block is parsed and the footer layout is restored.


Labels

backend · bug · build

Development

Successfully merging this pull request may close these issues.

BUG: Backend Serve and Sync Processes Keep Running After Closing PictoPy