Conversation

@rahulharpal1603
Contributor

@rahulharpal1603 rahulharpal1603 commented Sep 23, 2025

Summary by CodeRabbit

  • New Features

    • Face search: upload a photo to find matching images by similarity.
    • Streamlined search bar with query image preview and one-click clear.
    • Home page displays search results; media viewer opens over searched images.
  • API

    • Added face search endpoint (POST /face-clusters/face-search) with documented request/response.
    • OpenAPI documentation updated accordingly.
  • Refactor

    • Consolidated face-related routes; removed legacy endpoints.
  • Chores

    • Added .env to .gitignore.
    • Standardized YAML quoting/whitespace and ensured trailing newlines.

@coderabbitai
Contributor

coderabbitai bot commented Sep 23, 2025

Warning

Rate limit exceeded

@rahulharpal1603 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 3 minutes and 22 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, you can trigger a review by commenting @coderabbitai review on this PR, or by pushing new commits.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 5a02ca9 and c68e405.

📒 Files selected for processing (1)
  • frontend/src/components/Dialog/FaceSearchDialog.tsx (1 hunks)

Walkthrough

Adds a new face-search feature: backend endpoint (/face-clusters/face-search), embedding aggregation, detector updates, confidence constant, and OpenAPI docs. Introduces frontend API calls, Redux search slice, dialog/UI integrations, and MediaView prop changes. Removes legacy facetagging routes. Also standardizes release-drafter YAML, ignores .env, and minor formatting tweaks.

Changes

Cohort / File(s) Summary
GitHub/Release config
.github/ISSUE_TEMPLATE/*, .github/release-drafter-config.yml
Trimmed trailing whitespace in templates; standardized YAML quoting and release notes template structure in release-drafter.
Ignore files
.gitignore, sync-microservice/.gitignore
Added .env to ignore list; ensured trailing newline in sync-microservice ignore file.
Backend config & models
backend/app/config/settings.py, backend/app/models/FaceDetector.py
Added CONFIDENCE_PERCENT = 0.6; updated FaceDetector.detect_faces(image_id: str, image_path: str, forSearch: bool = False) with early-return on load failure and conditional DB insertion.
Backend database
backend/app/database/faces.py
Added get_all_face_embeddings() to aggregate image-level embeddings with tags, robust JSON parsing, and sorted output.
Backend routes (face search) & removal
backend/app/routes/face_clusters.py, backend/app/routes/facetagging.py
Implemented face-search workflow endpoint and response models; removed legacy facetagging router and its endpoints.
Backend schemas & docs
backend/app/schemas/images.py, docs/backend/backend_python/openapi.json
Added AddSingleImageRequest and FaceTaggingResponse; documented new POST /face-clusters/face-search.
Frontend API
frontend/src/api/apiEndpoints.ts, frontend/src/api/api-functions/face_clusters.ts
Added searchForFaces endpoint and fetchSearchedFaces(request) with FetchSearchedFacesRequest.
Frontend state/store
frontend/src/app/store.ts, frontend/src/features/searchSlice.ts
Registered search reducer; new slice with startSearch, setResults, clearSearch.
Frontend hooks
frontend/src/hooks/selectFile.ts
New useFile hook providing pickSingleFile() via Tauri dialog.
Frontend components (UI)
frontend/src/components/Dialog/FaceSearchDialog.tsx, frontend/src/components/Navigation/Navbar/Navbar.tsx
New FaceSearchDialog; Navbar refactored to inline search bar with query image preview and integrates face search dialog/actions.
Media viewer and types
frontend/src/components/Media/MediaView.tsx, frontend/src/types/Media.ts
MediaView now requires images: Image[] prop; computes currentImage from props.
Pages
frontend/src/pages/Home/Home.tsx, frontend/src/pages/AITagging/AITagging.tsx, frontend/src/pages/PersonImages/PersonImages.tsx
Home wired to search state/results and passes displayImages to MediaView; AITagging/PersonImages updated to pass images into MediaView.
Selectors
frontend/src/features/imageSelectors.ts
Minor refactor of selectImages to block body; behavior unchanged.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Navbar as Frontend: Navbar/FaceSearchDialog
  participant API as Backend: /face-clusters/face-search
  participant FD as FaceDetector
  participant FN as FaceNet
  participant DB as DB: faces/images

  User->>Navbar: Click "Upload Photo"
  Navbar->>Navbar: pickSingleFile() -> path
  Navbar->>API: POST path (AddSingleImageRequest)
  API->>FD: detect_faces(path, forSearch=true)
  alt No face / load fail
    API-->>Navbar: 400/500 ErrorResponse
  else Face detected
    API->>FN: compute embedding
    API->>DB: get_all_face_embeddings()
    DB-->>API: embeddings + metadata
    API->>API: cosine similarity ≥ CONFIDENCE_PERCENT
    API-->>Navbar: 200 ImageData[]
  end
  Navbar->>Navbar: setResults or show "No Matches" info
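The matching step in the diagram above (cosine similarity against CONFIDENCE_PERCENT) can be sketched as follows. This is an illustrative sketch only: the type and function names (`Embedded`, `matchImages`) are not from the PR, and the actual backend implements this in Python.

```typescript
// Illustrative sketch of the thresholded matching step from the diagram.
// Names here are hypothetical; only CONFIDENCE_PERCENT mirrors the PR
// (backend/app/config/settings.py).
type Embedded = { id: number; path: string; embeddings: number[][] };

const CONFIDENCE_PERCENT = 0.6;

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom === 0 ? 0 : dot / denom;
}

// Keep every image whose best face similarity clears the threshold.
function matchImages(query: number[], images: Embedded[]): Embedded[] {
  return images.filter((img) =>
    img.embeddings.some((e) => cosineSimilarity(query, e) >= CONFIDENCE_PERCENT),
  );
}
```

In the PR itself, the endpoint compares the query embedding against the aggregated embeddings returned by get_all_face_embeddings() and returns the matching ImageData[].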

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

Possibly related PRs

Suggested labels

bug, enhancement

Poem

I twitched my nose and searched a face,
Through nets and boxes, bounding space—
A whisker-threshold, cosine trace,
Hop! results arrive in place.
Old burrows closed, new paths embrace;
I stash the finds with carrot grace.
Thump-thump—another perfect case. 🥕🐇

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 30.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: Check skipped — CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title "Fix PR #524 Lint and Build Errors" correctly indicates that the branch addresses lint/build issues and matches several formatting/build-related edits in the diff (YAML/newlines/.gitignore), but it does not describe the substantial functional work present (new face-search endpoint, frontend search UI, Redux slice, and related backend changes), so it only partially reflects the PR's primary changes.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@rahulharpal1603 rahulharpal1603 changed the title Fix/face search errors Fix PR #524 Lint and Build Errors Sep 23, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
backend/app/models/FaceDetector.py (2)

24-27: Inconsistent return type on image load failure.

Returning None forces callers to add special-case checks. Prefer a consistent, empty-structured result or raise a controlled exception.

Apply this diff to return a safe empty payload:

-        if img is None:
-            print(f"Failed to load image: {image_path}")
-            return None
+        if img is None:
+            print(f"Failed to load image: {image_path}")
+            return {"ids": [], "processed_faces": [], "num_faces": 0}

59-63: Return class_ids as a list, not a string.

f"{class_ids}" stringifies the container; return a list for downstream consumers.

Apply this diff:

-        return {
-            "ids": f"{class_ids}",
+        return {
+            "ids": class_ids.tolist() if hasattr(class_ids, "tolist") else list(class_ids),
             "processed_faces": processed_faces,
             "num_faces": len(embeddings),
         }
frontend/src/components/Media/MediaView.tsx (1)

49-57: Fix navigation to respect the component's images prop (dispatch setCurrentViewIndex with clamped index)

nextImage/previousImage read the global slice images length (frontend/src/features/imageSlice.ts — nextImage/previousImage), so they operate on the global image list; when MediaView is given a filtered images prop this will produce incorrect navigation. Replace dispatch(nextImage()) / dispatch(previousImage()) with computing next/prev = clamp(currentIndex ± 1, 0, images.length - 1) against the component's images.length and dispatch(setCurrentViewIndex(next/prev)); keep handlers.resetZoom().

Location: frontend/src/components/Media/MediaView.tsx lines ~49-57 — update handleNextImage / handlePreviousImage.
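The clamped handlers suggested above can be sketched as pure index helpers; the surrounding component would feed their result into dispatch(setCurrentViewIndex(...)) and then call handlers.resetZoom(). This is a sketch of the review's suggestion, not the merged code.

```typescript
// Sketch of the review's suggested fix: compute next/prev against the
// component's images prop length, clamped to valid bounds, instead of
// dispatching the global-slice nextImage/previousImage actions.
const clamp = (n: number, lo: number, hi: number): number =>
  Math.min(Math.max(n, lo), hi);

function nextIndex(currentIndex: number, imagesLength: number): number {
  return clamp(currentIndex + 1, 0, imagesLength - 1);
}

function previousIndex(currentIndex: number, imagesLength: number): number {
  return clamp(currentIndex - 1, 0, imagesLength - 1);
}
```

Because the helpers take imagesLength explicitly, they stay correct when MediaView receives a filtered images prop rather than the global image list.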

🧹 Nitpick comments (21)
.github/release-drafter-config.yml (1)

23-30: Add a “Full Changelog” compare link for convenience (optional).

Including a compare link helps readers quickly diff releases.

Apply this minimal addition:

 template: |
   # What's Changed

   $CHANGES

   ## Special thanks to all our contributors:

   $CONTRIBUTORS
+
+  ---
+  Full Changelog: https://github.com/AOSSIE-Org/PictoPy/compare/$PREVIOUS_TAG...v$RESOLVED_VERSION
.gitignore (1)

29-31: Deduplicate videos_cache.txt entries.

Listed twice; keep a single entry.

Apply:

- videos_cache.txt
backend/app/config/settings.py (1)

7-7: Rename/clarify confidence constant; allow env override.

"Percent" suggests 60, but value is 0.6. Prefer threshold naming and make it configurable.

Apply this diff (keeps a backward‑compat alias):

-CONFIDENCE_PERCENT = 0.6
+# Confidence threshold in [0,1]. Override via env CONFIDENCE_THRESHOLD.
+try:
+    from os import getenv
+    CONFIDENCE_THRESHOLD = float(getenv("CONFIDENCE_THRESHOLD", "0.6"))
+except Exception:
+    CONFIDENCE_THRESHOLD = 0.6
+# Backward-compat alias for any existing references
+CONFIDENCE_PERCENT = CONFIDENCE_THRESHOLD

If you adopt the new name, please verify downstream usages of CONFIDENCE_PERCENT and update them to CONFIDENCE_THRESHOLD.

backend/app/models/FaceDetector.py (1)

28-31: Replace prints with logger for observability.

Use structured logging instead of print for production code.

Would you like a follow-up patch introducing a module-level logger with levels?

frontend/src/pages/PersonImages/PersonImages.tsx (2)

41-44: Minor: avoid shadowing images from Redux.

The local const images overshadowing the selector can confuse readers.

Apply this diff:

-      const images = (res?.images || []) as Image[];
-      dispatch(setImages(images));
+      const fetchedImages = (res?.images || []) as Image[];
+      dispatch(setImages(fetchedImages));

49-58: No-op setState calls.

setClusterName(clusterName) doesn’t change state in either handler. Safe to remove.

Apply this diff:

-  const handleEditName = () => {
-    setClusterName(clusterName);
-    setIsEditing(true);
-  };
+  const handleEditName = () => {
+    setIsEditing(true);
+  };
@@
-  const handleSaveName = () => {
-    setClusterName(clusterName);
-    renameClusterMutate(clusterName);
-    setIsEditing(false);
-  };
+  const handleSaveName = () => {
+    renameClusterMutate(clusterName);
+    setIsEditing(false);
+  };
backend/app/database/faces.py (1)

197-201: Return empty array for tags instead of None.
Set image_data["tags"] = [] (not None) when there are no tags so the API returns an empty array (frontend expects tags?: string[]). File: backend/app/database/faces.py:197-201

backend/app/schemas/images.py (1)

6-7: Add minimal validation for the incoming path.

Consider preventing empty strings at the schema level.

-class AddSingleImageRequest(BaseModel):
-    path: str
+class AddSingleImageRequest(BaseModel):
+    path: str  # non-empty; validate upstream or add pydantic constraints if acceptable

If acceptable, I can switch this to a constrained type (e.g., FilePath or min_length=1).

frontend/src/hooks/selectFile.ts (2)

3-5: Rename types to reflect file selection, not folder.

Minor naming nit for clarity.

-interface UseFolderPickerOptions {
+interface UseFilePickerOptions {
   title?: string;
 }
...
-export const useFile = (
-  options: UseFolderPickerOptions = {},
+export const useFile = (
+  options: UseFilePickerOptions = {},
 ): UseFilePickerReturn => {

Also applies to: 11-13


21-23: Consider broader image format support.

Optionally include webp/tiff/heic if supported.

-            extensions: ['png', 'jpg', 'jpeg', 'gif', 'bmp'],
+            extensions: ['png', 'jpg', 'jpeg', 'gif', 'bmp', 'webp', 'tiff', 'heic'],
frontend/src/components/Dialog/FaceSearchDialog.tsx (3)

28-45: Harden success handler with runtime shape checks.

Avoid blind cast; guard against unexpected payloads.

-    onSuccess: (data) => {
-      const result = data?.data as Image[];
+    onSuccess: (data) => {
       dispatch(hideLoader());
       setIsDialogOpen(false);
-      if (result && result.length > 0) {
-        dispatch(setResults(result));
+      const result = Array.isArray(data?.data) ? (data!.data as Image[]) : [];
+      if (result.length > 0) {
+        dispatch(setResults(result));
       } else {
         dispatch(clearSearch());
         dispatch(
           showInfoDialog({
             title: 'No Matches Found',
             message:
               'We couldn’t find any matching faces in your gallery for this photo.',
             variant: 'info',
           }),
         );
       }
     },

90-106: Remove no-op onClick and redundant disabled props.

Avoid empty handlers and hardcoded disabled={false}.

-          <Button
-            onClick={handlePickFile}
-            disabled={false}
+          <Button
+            onClick={handlePickFile}
             className="flex h-32 flex-col items-center justify-center gap-2 p-0"
             variant="outline"
           >
...
-          <Button
-            onClick={() => {}}
-            disabled={false}
+          <Button
+            disabled
             className="flex h-32 flex-col items-center justify-center gap-2 p-0"
             variant="outline"
           >

59-65: Optional: avoid double-closing the dialog.

You close before picking and again on success. Keeping only the pre-pick close is enough.

frontend/src/features/searchSlice.ts (1)

1-36: Optional: expose selectors for reuse.

Export memo-friendly selectors to standardize access across components.

// selectors (add in this file)
export const selectIsSearchActive = (state: { search: SearchState }) => state.search.active;
export const selectSearchImages = (state: { search: SearchState }) => state.search.images;
export const selectQueryImage = (state: { search: SearchState }) => state.search.queryImage;
frontend/src/components/Media/MediaView.tsx (2)

5-10: Drop unused nextImage/previousImage imports after refactor.

Clean up imports to avoid unused symbols.

Apply this diff:

 import {
   setCurrentViewIndex,
-  nextImage,
-  previousImage,
   closeImageView,
 } from '@/features/imageSlice';

100-103: Optional: auto-clamp index when images change.

Instead of returning null when index goes OOB due to a new images prop, consider clamping to 0 for a smoother UX.

-  if (!images.length || currentViewIndex === -1 || !currentImage) {
-    return null;
-  }
+  if (!images.length) return null;
+  if (currentViewIndex < 0 || currentViewIndex >= images.length) {
+    dispatch(setCurrentViewIndex(0));
+    return null;
+  }
backend/app/routes/face_clusters.py (3)

264-266: Consider handling multiple detected faces from the query image.

Currently only the first face is used. For group photos, iterate processed_faces and use the best-match embedding.

-        process_face = result["processed_faces"][0]
-        new_embedding = fn.get_embedding(process_face)
+        embeddings_query = [
+            fn.get_embedding(face) for face in (result["processed_faces"] or [])
+        ]
+        if not embeddings_query:
+            return GetAllImagesResponse(success=True, message="Successfully retrieved 0 images", data=[])

Then compare each image against the max over embeddings_query.


12-13: Remove TODO-esque inline comment in import.

Tidy the stray "# Add this import" note.

-    db_get_images_by_cluster_id,  # Add this import
+    db_get_images_by_cluster_id,

251-253: Avoid duplicating FaceNet initialization.

FaceDetector already owns a FaceNet instance. Either expose embeddings from detect_faces(forSearch=True) or reuse fd.facenet to save memory and load time per request.

frontend/src/components/Navigation/Navbar/Navbar.tsx (2)

15-19: Type the selector with RootState instead of any.

Prevents silent type drift and improves IDE help.

Apply this diff:

+import { RootState } from '@/app/store';
@@
-  const searchState = useSelector((state: any) => state.search);
+  const searchState = useSelector((state: RootState) => state.search);

62-64: Add aria-label to the search icon button.

Improves accessibility.

-          <button className="text-muted-foreground mx-1 hover:text-white">
+          <button className="text-muted-foreground mx-1 hover:text-white" aria-label="Search">
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3779796 and 5a02ca9.

⛔ Files ignored due to path filters (1)
  • frontend/public/photo.png is excluded by !**/*.png
📒 Files selected for processing (25)
  • .github/ISSUE_TEMPLATE/bug.yml (0 hunks)
  • .github/ISSUE_TEMPLATE/feature.yml (0 hunks)
  • .github/release-drafter-config.yml (2 hunks)
  • .gitignore (2 hunks)
  • backend/app/config/settings.py (1 hunks)
  • backend/app/database/faces.py (1 hunks)
  • backend/app/models/FaceDetector.py (2 hunks)
  • backend/app/routes/face_clusters.py (3 hunks)
  • backend/app/routes/facetagging.py (0 hunks)
  • backend/app/schemas/images.py (2 hunks)
  • docs/backend/backend_python/openapi.json (2 hunks)
  • frontend/src/api/api-functions/face_clusters.ts (2 hunks)
  • frontend/src/api/apiEndpoints.ts (1 hunks)
  • frontend/src/app/store.ts (2 hunks)
  • frontend/src/components/Dialog/FaceSearchDialog.tsx (1 hunks)
  • frontend/src/components/Media/MediaView.tsx (2 hunks)
  • frontend/src/components/Navigation/Navbar/Navbar.tsx (1 hunks)
  • frontend/src/features/imageSelectors.ts (1 hunks)
  • frontend/src/features/searchSlice.ts (1 hunks)
  • frontend/src/hooks/selectFile.ts (1 hunks)
  • frontend/src/pages/AITagging/AITagging.tsx (1 hunks)
  • frontend/src/pages/Home/Home.tsx (2 hunks)
  • frontend/src/pages/PersonImages/PersonImages.tsx (1 hunks)
  • frontend/src/types/Media.ts (1 hunks)
  • sync-microservice/.gitignore (1 hunks)
💤 Files with no reviewable changes (3)
  • .github/ISSUE_TEMPLATE/feature.yml
  • backend/app/routes/facetagging.py
  • .github/ISSUE_TEMPLATE/bug.yml
🧰 Additional context used
🧬 Code graph analysis (12)
frontend/src/api/api-functions/face_clusters.ts (3)
frontend/src/types/API.ts (1)
  • APIResponse (1-8)
frontend/src/api/axiosConfig.ts (1)
  • apiClient (5-12)
frontend/src/api/apiEndpoints.ts (1)
  • faceClustersEndpoints (5-10)
frontend/src/pages/PersonImages/PersonImages.tsx (1)
frontend/src/components/Media/MediaView.tsx (1)
  • MediaView (26-192)
backend/app/database/faces.py (1)
backend/test.py (1)
  • get_all_face_embeddings (8-32)
frontend/src/pages/AITagging/AITagging.tsx (1)
frontend/src/components/Media/MediaView.tsx (1)
  • MediaView (26-192)
backend/app/schemas/images.py (1)
backend/app/schemas/test.py (1)
  • AddSingleImageRequest (11-12)
frontend/src/features/imageSelectors.ts (1)
frontend/src/app/store.ts (1)
  • RootState (22-22)
frontend/src/features/searchSlice.ts (1)
frontend/src/types/Media.ts (1)
  • Image (1-10)
frontend/src/components/Media/MediaView.tsx (2)
frontend/src/types/Media.ts (1)
  • MediaViewProps (25-29)
frontend/src/features/imageSelectors.ts (1)
  • selectCurrentViewIndex (8-9)
frontend/src/components/Dialog/FaceSearchDialog.tsx (7)
frontend/src/hooks/selectFile.ts (1)
  • useFile (11-39)
frontend/src/hooks/useQueryExtension.ts (1)
  • usePictoMutation (26-78)
frontend/src/api/api-functions/face_clusters.ts (1)
  • fetchSearchedFaces (44-52)
frontend/src/types/Media.ts (1)
  • Image (1-10)
frontend/src/features/loaderSlice.ts (2)
  • hideLoader (21-24)
  • showLoader (17-20)
frontend/src/features/searchSlice.ts (3)
  • setResults (24-26)
  • clearSearch (27-31)
  • startSearch (20-23)
frontend/src/features/infoDialogSlice.ts (1)
  • showInfoDialog (16-30)
frontend/src/components/Navigation/Navbar/Navbar.tsx (3)
frontend/src/features/onboardingSelectors.ts (2)
  • selectName (15-15)
  • selectAvatar (13-13)
frontend/src/features/searchSlice.ts (1)
  • clearSearch (27-31)
frontend/src/components/Dialog/FaceSearchDialog.tsx (1)
  • FaceSearchDialog (20-124)
backend/app/routes/face_clusters.py (5)
backend/app/database/faces.py (1)
  • get_all_face_embeddings (137-207)
backend/app/models/FaceDetector.py (3)
  • FaceDetector (11-75)
  • detect_faces (22-63)
  • close (65-75)
backend/app/models/FaceNet.py (2)
  • FaceNet (8-25)
  • get_embedding (16-21)
backend/app/schemas/images.py (2)
  • AddSingleImageRequest (6-7)
  • ErrorResponse (53-56)
backend/app/utils/FaceNet.py (1)
  • FaceNet_util_cosine_similarity (20-23)
frontend/src/pages/Home/Home.tsx (9)
frontend/src/features/imageSelectors.ts (2)
  • selectIsImageViewOpen (24-27)
  • selectImages (5-7)
frontend/src/app/store.ts (1)
  • RootState (22-22)
frontend/src/hooks/useQueryExtension.ts (1)
  • usePictoQuery (80-108)
frontend/src/api/api-functions/images.ts (1)
  • fetchAllImages (5-10)
frontend/src/features/loaderSlice.ts (2)
  • showLoader (17-20)
  • hideLoader (21-24)
frontend/src/features/infoDialogSlice.ts (1)
  • showInfoDialog (16-30)
frontend/src/types/Media.ts (1)
  • Image (1-10)
frontend/src/features/imageSlice.ts (1)
  • setImages (22-26)
frontend/src/components/Media/MediaView.tsx (1)
  • MediaView (26-192)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
  • GitHub Check: Tauri Build Check (windows-latest)
  • GitHub Check: Tauri Build Check (ubuntu-22.04)
🔇 Additional comments (19)
.github/release-drafter-config.yml (3)

2-2: Quoting/style standardization looks good.

Double‑quoting and category/title consistency are fine and YAML‑safe.

Also applies to: 5-5, 7-8, 10-10, 12-12, 14-14, 16-16, 18-18


21-21: Change template looks correct.

Ordering and variables are valid. No issues spotted.


26-26: Confirmed — Release Drafter supports $CHANGES and $CONTRIBUTORS

$CHANGES expands to the markdown list of merged PRs (formatted by your change-template and categories) and $CONTRIBUTORS to a comma-separated list of contributors (PR authors, commit authors/committers). Caveats: exclude-contributors, no-contributors-template, change-template and change-title-escapes can alter the output.

sync-microservice/.gitignore (1)

143-144: No-op formatting (trailing newline) — LGTM.

Keeps file POSIX‑friendly; no functional change.

.gitignore (2)

3-3: Good: ignore .env — add '.env' to any .dockerignore and ensure .env.example stays tracked

No .dockerignore files were found by the check; .env.example is listed in .gitignore but git reports it is not ignored (so it’s tracked). If you use Docker, add '.env' to your .dockerignore(s) and keep .env.example committed as the template.


33-33: frontend/dist ignored — safe to keep ignored

tauri.conf.json sets frontendDist: ../dist and runs "npm run build" before packaging, and CI uses the tauri action (/.github/workflows/build-and-release.yml) to build — no workflow or deploy config requires committed frontend/dist.

frontend/src/features/imageSelectors.ts (1)

5-7: LGTM — no behavioral change.

Block body with explicit return is fine; selector still returns state.images.images.

frontend/src/pages/PersonImages/PersonImages.tsx (1)

116-116: LGTM — MediaView now receives images.

Matches the new MediaViewProps contract.

frontend/src/api/apiEndpoints.ts (1)

7-7: LGTM — endpoint added.

searchForFaces path aligns with backend route naming.

backend/app/database/faces.py (1)

147-151: Confirmed — thumbnailPath exists; no change required. The images table declares "thumbnailPath" (backend/app/database/images.py — CREATE TABLE includes thumbnailPath TEXT) and INSERT/SELECT usage is consistent, so i.thumbnailPath in backend/app/database/faces.py is valid.

backend/app/models/FaceDetector.py (1)

22-22: Verify DB schema and update detect_faces signature

faces.image_id is expected to be INTEGER in the DB (schema not found in repo). Confirm the DB schema; if image_id is INTEGER, apply:

-    def detect_faces(self, image_id: str, image_path: str, forSearch: bool = False):
+    def detect_faces(self, image_id: int, image_path: str, for_search: bool = False):

Also update internal references to for_search. Location: backend/app/models/FaceDetector.py:22

frontend/src/types/Media.ts (1)

27-27: All MediaView usages include the required images prop — no changes needed.

Verified call sites: frontend/src/pages/Home/Home.tsx:81, frontend/src/pages/AITagging/AITagging.tsx:68, frontend/src/pages/PersonImages/PersonImages.tsx:116.

frontend/src/pages/AITagging/AITagging.tsx (1)

68-68: LGTM: correct MediaView usage with images prop.

frontend/src/api/api-functions/face_clusters.ts (2)

14-16: LGTM: request type matches backend schema.


44-52: LGTM: API wrapper is consistent with axios config and endpoints.

frontend/src/app/store.ts (1)

4-4: Store integration looks correct.

Reducer registration and import are consistent with the new slice.

Also applies to: 18-19

frontend/src/features/searchSlice.ts (1)

1-36: LGTM: minimal, clear search state management.

frontend/src/components/Dialog/FaceSearchDialog.tsx (1)

18-18: Import confirmed — barrel re-exports fetchSearchedFaces
frontend/src/api/api-functions/index.ts re-exports './face_clusters' and frontend/src/api/api-functions/face_clusters.ts defines export const fetchSearchedFaces — no change required.

frontend/src/pages/Home/Home.tsx (1)

27-28: Verify re-fetch on re-enabling the query

Home uses usePictoQuery({ queryKey: ['images'], queryFn: fetchAllImages, enabled: !isSearchActive }); toggling enabled false→true normally triggers a fetch — confirm the hook implementation passes the enabled flag through to react-query's useQuery and that QueryClient defaultOptions (staleTime/cacheTime/refetchOnMount, etc.) are not configured in a way that suppresses the network refetch.

Location: frontend/src/pages/Home/Home.tsx (lines 24–28)

Comment on lines 18 to 19
- title: "Others:"
labels: []

⚠️ Potential issue

🧩 Analysis chain

Confirm behavior of “Others” category with empty labels.

An empty labels array may result in the category never receiving items. If your intent is to collect uncategorized PRs under “Others,” verify Release Drafter’s behavior and adjust if needed.

Would you like me to check the docs for the recommended way to surface uncategorized changes?


🌐 Web query:

In Release Drafter, how are uncategorized pull requests handled, and is there a supported way to force them under a custom "Others" category? Is an empty labels list valid for this?

💡 Result:

Short answer: Pull requests that don’t match any configured category are listed under the generated "Uncategorized" section. Release Drafter does not provide a built‑in way to say “put all uncategorized PRs into my custom ‘Others’ category” — categories are matched by label(s) you list, so to capture PRs you must give a category one or more labels that those PRs actually have. An empty labels list (labels: []) won’t act as a catch‑all (it doesn't match "no label") — categories work by matching specified labels. [1][2]

Sources:

  • Release Drafter docs: categories are label‑based (how PRs are grouped). [1]
  • Release Drafter docs: include/exclude by labels (you must specify labels to include/exclude). [2]

Action: Fix 'Others' category — empty labels won't capture uncategorized PRs

Release Drafter places PRs that don't match any category into a generated "Uncategorized" section; a category with labels: [] does not act as a catch‑all. Either add one or more labels to the "Others" category (and apply them to PRs) or remove the category and rely on "Uncategorized".

File: .github/release-drafter-config.yml (lines 18-19)

🤖 Prompt for AI Agents
In .github/release-drafter-config.yml around lines 18 to 19, the "Others"
category is defined with an empty labels array which does not act as a catch‑all
for uncategorized PRs; either populate the labels array with one or more labels
you will apply to PRs (e.g., "other", "misc") or remove the "Others" category
entirely and rely on Release Drafter's generated "Uncategorized" section; update
the file accordingly and ensure any chosen labels are added to your repository
so they can be applied to PRs.
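Following the web-query result, one way to make the category actually capture items is to give it a real label. The label name below ("other") is an example; it must exist in the repository and be applied to PRs.

```yaml
categories:
  - title: "Others:"
    labels:
      - "other" # example label; create it in the repo and apply it to PRs
```

Alternatively, drop the category entirely and let Release Drafter's generated "Uncategorized" section collect unlabeled PRs.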

Comment on lines 137 to 208
def get_all_face_embeddings():
conn = sqlite3.connect(DATABASE_PATH)
cursor = conn.cursor()

try:
cursor.execute(
"""
SELECT
f.embeddings,
f.bbox,
i.id,
i.path,
i.folder_id,
i.thumbnailPath,
i.metadata,
i.isTagged,
m.name as tag_name
FROM faces f
JOIN images i ON f.image_id=i.id
LEFT JOIN image_classes ic ON i.id = ic.image_id
LEFT JOIN mappings m ON ic.class_id = m.class_id
"""
)
results = cursor.fetchall()

images_dict = {}
for (
embeddings,
bbox,
image_id,
path,
folder_id,
thumbnail_path,
metadata,
is_tagged,
tag_name,
) in results:
if image_id not in images_dict:
try:
embeddings_json = json.loads(embeddings)
bbox_json = json.loads(bbox)
except json.JSONDecodeError:
continue
images_dict[image_id] = {
"embeddings": embeddings_json,
"bbox": bbox_json,
"id": image_id,
"path": path,
"folder_id": folder_id,
"thumbnailPath": thumbnail_path,
"metadata": metadata,
"isTagged": bool(is_tagged),
"tags": [],
}

# Add tag if it exists
if tag_name:
images_dict[image_id]["tags"].append(tag_name)

# Convert to list and set tags to None if empty
images = []
for image_data in images_dict.values():
if not image_data["tags"]:
image_data["tags"] = None
images.append(image_data)

# Sort by path
images.sort(key=lambda x: x["path"])
return images
finally:
conn.close()


⚠️ Potential issue

Fix JSON None handling and accumulate per-image faces.

Current code:

  • json.loads on None can raise TypeError (not caught).
  • Only the first face per image is kept; others are dropped.
  • Returns singular "bbox" and sets tags to None, which misaligns with frontend Image type (bboxes?: array).

Apply this diff to parse safely, aggregate all faces per image, prevent duplicate tags, and align with bboxes array:

 def get_all_face_embeddings():
@@
-        results = cursor.fetchall()
-
-        images_dict = {}
-        for (
-            embeddings,
-            bbox,
-            image_id,
-            path,
-            folder_id,
-            thumbnail_path,
-            metadata,
-            is_tagged,
-            tag_name,
-        ) in results:
-            if image_id not in images_dict:
-                try:
-                    embeddings_json = json.loads(embeddings)
-                    bbox_json = json.loads(bbox)
-                except json.JSONDecodeError:
-                    continue
-                images_dict[image_id] = {
-                    "embeddings": embeddings_json,
-                    "bbox": bbox_json,
-                    "id": image_id,
-                    "path": path,
-                    "folder_id": folder_id,
-                    "thumbnailPath": thumbnail_path,
-                    "metadata": metadata,
-                    "isTagged": bool(is_tagged),
-                    "tags": [],
-                }
-
-            # Add tag if it exists
-            if tag_name:
-                images_dict[image_id]["tags"].append(tag_name)
+        results = cursor.fetchall()
+
+        images_dict = {}
+        for row in results:
+            (
+                embeddings,
+                bbox,
+                image_id,
+                path,
+                folder_id,
+                thumbnail_path,
+                metadata,
+                is_tagged,
+                tag_name,
+            ) = row
+
+            # Parse JSON fields safely
+            try:
+                embeddings_json = json.loads(embeddings) if embeddings else None
+            except (json.JSONDecodeError, TypeError):
+                continue
+            try:
+                bbox_json = json.loads(bbox) if bbox else None
+            except (json.JSONDecodeError, TypeError):
+                bbox_json = None
+
+            # Ensure image entry
+            entry = images_dict.get(image_id)
+            if not entry:
+                entry = {
+                    "id": image_id,
+                    "path": path,
+                    "folder_id": folder_id,
+                    "thumbnailPath": thumbnail_path,
+                    "metadata": metadata,
+                    "isTagged": bool(is_tagged),
+                    "tags": [],
+                    "embeddings": [],
+                    "bboxes": [],
+                }
+                images_dict[image_id] = entry
+
+            # Accumulate faces for this image
+            if embeddings_json is not None:
+                if (
+                    isinstance(embeddings_json, list)
+                    and embeddings_json
+                    and isinstance(embeddings_json[0], (int, float))
+                ):
+                    entry["embeddings"].append(embeddings_json)
+                else:
+                    entry["embeddings"].extend(embeddings_json)
+            if bbox_json is not None:
+                entry["bboxes"].append(bbox_json)
+            if tag_name and tag_name not in entry["tags"]:
+                entry["tags"].append(tag_name)
@@
-        images = []
-        for image_data in images_dict.values():
-            if not image_data["tags"]:
-                image_data["tags"] = None
-            images.append(image_data)
+        images = list(images_dict.values())
🤖 Prompt for AI Agents
In backend/app/database/faces.py around lines 137 to 208, the function currently
calls json.loads on fields that may be None (raising TypeError), keeps only the
first face per image (dropping others), and returns a singular "bbox" which
conflicts with the frontend expecting an optional bboxes array; to fix: treat
embeddings and bbox as optional (check for None before json.loads, parse only
when not None, or set to None/empty), change the per-image structure to
accumulate a list of face entries (e.g., store each face as an item with its
embeddings and bbox inside a "bboxes" or "faces" array), append every face
encountered for the same image_id rather than overwriting, ensure tag names are
added de-duplicated (use set or check before append), and when returning convert
empty tag lists or empty bboxes arrays to None to match the frontend optional
typing; keep other image fields unchanged and close the DB connection in the
finally block as before.
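
The null-safe parsing pattern proposed above can be exercised in isolation with a small standalone helper (the function name is illustrative, not part of the PR):

```python
import json


def parse_json_field(raw):
    """Parse a JSON column value; return None for NULL, empty, or invalid input."""
    if not raw:
        # Covers None and empty strings from nullable SQLite columns,
        # which would otherwise raise TypeError inside json.loads.
        return None
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return None
```

Routing both the NULL case and the malformed-JSON case through one helper keeps the row loop free of repeated try/except blocks.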

Comment on lines 40 to 49
class ImageData(BaseModel):
    id: str
    path: str
    folder_id: str
    thumbnailPath: str
    metadata: str
    isTagged: bool
    tags: Optional[List[str]] = None
    bboxes: BoundingBox

⚠️ Potential issue

Fix ImageData.bboxes type to match payload (list) and frontend.

DB returns multiple bboxes per image; current model has a single BoundingBox, causing validation errors.

Apply this diff:

 class ImageData(BaseModel):
     id: str
     path: str
     folder_id: str
     thumbnailPath: str
     metadata: str
     isTagged: bool
     tags: Optional[List[str]] = None
-    bboxes: BoundingBox
+    bboxes: Optional[List[BoundingBox]] = None
🤖 Prompt for AI Agents
In backend/app/routes/face_clusters.py around lines 40 to 49, the ImageData
model defines bboxes as a single BoundingBox but the payload and frontend send
multiple bounding boxes; change the type of ImageData.bboxes to
List[BoundingBox] (or Optional[List[BoundingBox]] if the field can be absent)
and ensure List is imported from typing; update any default value or validation
accordingly so Pydantic accepts an array of bounding boxes.

Comment on lines 235 to 239
@router.post(
    "/face-search",
    responses={code: {"model": ErrorResponse} for code in [400, 500]},
)
def face_tagging(payload: AddSingleImageRequest):

⚠️ Potential issue

Expose a typed success schema for /face-search.

Add response_model=GetAllImagesResponse so OpenAPI and clients get the proper schema.

Apply this diff:

 @router.post(
     "/face-search",
+    response_model=GetAllImagesResponse,
     responses={code: {"model": ErrorResponse} for code in [400, 500]},
 )
🤖 Prompt for AI Agents
backend/app/routes/face_clusters.py around lines 235 to 239, the POST route
decorator for "/face-search" lacks a typed success schema; update the
router.post decorator to include response_model=GetAllImagesResponse so OpenAPI
and clients receive the proper success schema, and ensure GetAllImagesResponse
is imported from the appropriate module at the top of the file.

Comment on lines 275 to 293
            for image in images:
                max_similarity = 0
                similarity = FaceNet_util_cosine_similarity(
                    new_embedding, image["embeddings"]
                )
                max_similarity = max(max_similarity, similarity)
                if max_similarity >= CONFIDENCE_PERCENT:
                    matches.append(
                        ImageData(
                            id=image["id"],
                            path=image["path"],
                            folder_id=image["folder_id"],
                            thumbnailPath=image["thumbnailPath"],
                            metadata=image["metadata"],
                            isTagged=image["isTagged"],
                            tags=image["tags"],
                            bboxes=image["bbox"],
                        )
                    )

⚠️ Potential issue

Compute similarity across all face embeddings per image; fix logic bug and potential crash.

image["embeddings"] is a list; the current code feeds the whole list to cosine similarity. Take the max similarity across embeddings and serialize bboxes properly.

Apply this diff:

-            for image in images:
-                max_similarity = 0
-                similarity = FaceNet_util_cosine_similarity(
-                    new_embedding, image["embeddings"]
-                )
-                max_similarity = max(max_similarity, similarity)
-                if max_similarity >= CONFIDENCE_PERCENT:
-                    matches.append(
-                        ImageData(
-                            id=image["id"],
-                            path=image["path"],
-                            folder_id=image["folder_id"],
-                            thumbnailPath=image["thumbnailPath"],
-                            metadata=image["metadata"],
-                            isTagged=image["isTagged"],
-                            tags=image["tags"],
-                            bboxes=image["bbox"],
-                        )
-                    )
+            for image in images:
+                embeddings = image.get("embeddings") or []
+                if not embeddings:
+                    continue
+                max_similarity = max(
+                    FaceNet_util_cosine_similarity(new_embedding, emb)
+                    for emb in embeddings
+                )
+                if max_similarity >= CONFIDENCE_PERCENT:
+                    matches.append(
+                        ImageData(
+                            id=image["id"],
+                            path=image["path"],
+                            folder_id=image["folder_id"],
+                            thumbnailPath=image["thumbnailPath"],
+                            metadata=image["metadata"],
+                            isTagged=image["isTagged"],
+                            tags=image["tags"],
+                            bboxes=[BoundingBox(**bb) for bb in (image.get("bbox") or [])],
+                        )
+                    )
🤖 Prompt for AI Agents
In backend/app/routes/face_clusters.py around lines 275 to 293, the code
incorrectly passes image["embeddings"] (a list) directly into
FaceNet_util_cosine_similarity and sets max_similarity only against a single
computed value, and also may return non-serializable bbox data; iterate over
image["embeddings"], compute similarity for each embedding, track the maximum
similarity across them, then use that max to decide matches; additionally ensure
the bboxes are converted/serialized to the expected JSON-safe format (e.g.,
list/dict of primitives or explicit serialization) before constructing ImageData
so the response doesn't crash.
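
The max-over-embeddings logic from the diff can be checked in isolation with a plain cosine similarity (a pure-Python stand-in for FaceNet_util_cosine_similarity; the helper names are illustrative):

```python
import math


def cosine_similarity(a, b):
    # Pure-Python stand-in for FaceNet_util_cosine_similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def max_face_similarity(query, embeddings):
    # Best match across all faces in one image; None when the image has no faces,
    # mirroring the `continue` guard proposed in the diff.
    if not embeddings:
        return None
    return max(cosine_similarity(query, emb) for emb in embeddings)
```

Skipping images with an empty embeddings list avoids calling `max()` on an empty generator, which would raise ValueError.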

Comment on lines 36 to 39
class FaceTaggingResponse(BaseModel):
    success: bool
    message: str
    data: dict

⚠️ Potential issue

Response model is too generic and requires fields that may be absent.

  • message should be optional (like other responses in this module).
  • data: dict is underspecified and makes client typing brittle.
-class FaceTaggingResponse(BaseModel):
-    success: bool
-    message: str
-    data: dict
+class FaceTaggingResponse(BaseModel):
+    success: bool
+    message: Optional[str] = None
+    error: Optional[str] = None
+    data: Optional[List[dict]] = None

If you have a concrete image shape, we can replace List[dict] with a typed model for better OpenAPI generation.

🤖 Prompt for AI Agents
In backend/app/schemas/images.py around lines 36 to 39, the FaceTaggingResponse
model is too generic and requires fields that may be absent; change message to
Optional[str] to match other responses and replace data: dict with a typed
structure (e.g., data: Optional[List[ImageTag]] or Optional[List[FaceTag]]),
adding a new Pydantic model (ImageTag/FaceTag) describing the expected fields
for each tagged face (bounding box, label, confidence, etc. as applicable) so
OpenAPI and clients get concrete types and avoid brittle dicts.

Comment on lines 1038 to 1096
    "/face-clusters/face-search": {
      "post": {
        "tags": [
          "Face Clusters"
        ],
        "summary": "Face Tagging",
        "operationId": "face_tagging_face_clusters_face_search_post",
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/AddSingleImageRequest"
              }
            }
          },
          "required": true
        },
        "responses": {
          "200": {
            "description": "Successful Response",
            "content": {
              "application/json": {
                "schema": {}
              }
            }
          },
          "400": {
            "description": "Bad Request",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/app__schemas__face_clusters__ErrorResponse"
                }
              }
            }
          },
          "500": {
            "description": "Internal Server Error",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/app__schemas__face_clusters__ErrorResponse"
                }
              }
            }
          },
          "422": {
            "description": "Validation Error",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/HTTPValidationError"
                }
              }
            }
          }
        }
      }
    },

⚠️ Potential issue

Define a concrete 200 response schema for face-search.

Current "schema": {} degrades clients and typing. Reference an existing response or add a specific one (e.g., data: ImageData[]).

-        "summary": "Face Tagging",
+        "summary": "Face Search",
...
-          "200": {
-            "description": "Successful Response",
-            "content": {
-              "application/json": {
-                "schema": {}
-              }
-            }
-          },
+          "200": {
+            "description": "Successful Response",
+            "content": {
+              "application/json": {
+                "schema": { "$ref": "#/components/schemas/GetAllImagesResponse" }
+              }
+            }
+          },

If the backend returns a different shape, point it to a dedicated FaceSearchResponse component instead.

🤖 Prompt for AI Agents
In docs/backend/backend_python/openapi.json around lines 1038 to 1096, the 200
response for the POST /face-clusters/face-search currently has an empty schema
("schema": {}), which prevents clients from getting proper typings; replace the
empty schema with a concrete response schema — either reference an existing
component (e.g., "$ref": "#/components/schemas/ImageData[]") or create and
reference a new component like "FaceSearchResponse" that matches the actual
backend payload (for example an object with a data array of ImageData and any
metadata), then update components/schemas accordingly so the 200 response points
to that component.

Comment on lines 1291 to 1303
    "AddSingleImageRequest": {
      "properties": {
        "path": {
          "type": "string",
          "title": "Path"
        }
      },
      "type": "object",
      "required": [
        "path"
      ],
      "title": "AddSingleImageRequest"
    },

🛠️ Refactor suggestion

🧩 Analysis chain

LGTM: AddSingleImageRequest schema aligns with frontend request.

If the backend also defines a similarly named model elsewhere, it can cause confusion. Run to check duplicates:


🏁 Script executed:

#!/bin/bash
rg -nP "class\s+AddSingleImageRequest\b" -C2 backend

Length of output: 445


LGTM — AddSingleImageRequest schema aligns with frontend; duplicate backend model found
AddSingleImageRequest is defined in backend/app/schemas/images.py and backend/app/schemas/test.py — consolidate to a single source (remove the test duplicate or import the model from images.py).

🤖 Prompt for AI Agents
In docs/backend/backend_python/openapi.json around lines 1291 to 1303, the
AddSingleImageRequest schema is duplicated in backend/app/schemas/images.py and
backend/app/schemas/test.py; consolidate by removing the duplicate definition in
backend/app/schemas/test.py and updating any imports/usages to import
AddSingleImageRequest from backend/app/schemas/images.py (or alternatively move
the canonical model to a shared module and update both places to import that
single source), then regenerate/openapi.json to reflect the single canonical
schema.

    }
    return null;
  }, [images, currentViewIndex]);
  console.log(currentViewIndex);

🧹 Nitpick

Remove debug logging.

Leftover console.log will spam logs.

Apply this diff:

-  console.log(currentViewIndex);
🤖 Prompt for AI Agents
In frontend/src/components/Media/MediaView.tsx around line 38, remove the
leftover debug logging statement "console.log(currentViewIndex)" to prevent
spamming logs in production; simply delete that line or replace it with a
conditional debug logger (e.g., use a feature-flagged logger or debug level) if
runtime inspection is still needed.

Comment on lines 80 to 82
      {isImageViewOpen && (
        <MediaView images={displayImages} onClose={handleCloseMediaView} />
      )}

⚠️ Potential issue

🧩 Analysis chain

Align MediaView navigation with displayImages length.

Once you apply the MediaView navigation fix, this usage will be consistent even when displayImages is a subset (e.g., search results).


🏁 Script executed:

#!/bin/bash
# Sanity: confirm MediaView is only passed displayImages across pages after this PR.
rg -n -C2 --type=tsx "MediaView\s*\(\s*.*images="

Length of output: 27


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) JSX usages of MediaView with images prop:"
rg -n -C2 "<MediaView\b[^>]*\bimages\s*=" || true
echo
echo "2) All references to 'MediaView':"
rg -n -C2 "MediaView" || true
echo
echo "3) Possible component definitions for MediaView:"
rg -n -C2 "function\s+MediaView\b" || true
rg -n -C2 "const\s+MediaView\b" || true
rg -n -C2 "class\s+MediaView\b" || true
rg -n -C2 "export\s+(default\s+)?function\s+MediaView\b" || true
rg -n -C2 "export\s+const\s+MediaView\b" || true
echo
echo "4) Occurrences of 'displayImages':"
rg -n -C2 "\bdisplayImages\b" || true

Length of output: 8932


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "---- frontend/src/components/Media/MediaView.tsx ----"
sed -n '1,240p' frontend/src/components/Media/MediaView.tsx || true

Length of output: 5783


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Files with image-related actions/selectors:"
rg -n --hidden "setCurrentViewIndex|nextImage|previousImage|closeImageView|selectCurrentViewIndex|setImages" || true
echo
echo "---- frontend/src/features/imageSlice.ts ----"
sed -n '1,240p' frontend/src/features/imageSlice.ts || true
echo
echo "---- frontend/src/features/imageSelectors.ts ----"
sed -n '1,240p' frontend/src/features/imageSelectors.ts || true
echo
echo "---- frontend/src/pages/Home/Home.tsx ----"
sed -n '1,240p' frontend/src/pages/Home/Home.tsx || true
echo
echo "---- frontend/src/pages/PersonImages/PersonImages.tsx ----"
sed -n '1,240p' frontend/src/pages/PersonImages/PersonImages.tsx || true
echo
echo "---- frontend/src/pages/AITagging/AITagging.tsx ----"
sed -n '1,240p' frontend/src/pages/AITagging/AITagging.tsx || true

Length of output: 16893


Fix MediaView navigation to use the passed images prop (don’t rely on global state.images)

MediaView reads selectCurrentViewIndex and dispatches nextImage/previousImage against the global image slice while Home sometimes passes displayImages = searchResults; when displayImages ≠ state.images navigation will be out-of-sync. Update either by making MediaView manage its own index using the images prop, or by changing the navigation actions to operate on the passed image context (IDs or a view-specific list) before opening the viewer.

Files to update: frontend/src/components/Media/MediaView.tsx; frontend/src/pages/Home/Home.tsx; frontend/src/features/imageSlice.ts.

🤖 Prompt for AI Agents
In frontend/src/pages/Home/Home.tsx around lines 80-82 and in
frontend/src/components/Media/MediaView.tsx and
frontend/src/features/imageSlice.ts: MediaView currently reads
selectCurrentViewIndex and dispatches nextImage/previousImage against the global
image slice which desyncs when Home passes displayImages (e.g., searchResults).
Change MediaView to be self-contained: accept images (already passed) and an
optional initialIndex prop, maintain its own local index state, and implement
next/previous by updating that local index (and calling any provided
onChange/onClose callbacks) instead of dispatching global navigation actions;
remove/selectively replace any use of selectCurrentViewIndex and
nextImage/previousImage in MediaView. In Home.tsx ensure you pass initialIndex
when opening the viewer if needed (e.g., index of clicked image). No functional
change required to imageSlice itself other than not relying on it from
MediaView; if there are UI tests or other consumers that expect global
navigation, keep those actions but do not use them from the MediaView.

Labels: bug (Something isn't working)