[Feat] Implemented the search feature by uploading an image - 2nd PR #524
Conversation
This reverts commit a81cbac. revert
Walkthrough
Adds a face-search feature: backend POST /face-clusters/face-search with embedding comparison and a DB retrieval function; the FaceDetector signature gains a forSearch flag; the frontend adds an API function, a Redux search slice, and a FaceSearchDialog UI; MediaView now accepts an images prop; assorted repo template/ignore/workflow edits.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
actor User
participant Navbar
participant Dialog as FaceSearchDialog
participant FE_API as Frontend API
participant BE as Backend /face-search
participant FD as FaceDetector
participant DB as DB (faces/images)
User->>Navbar: open face search
Navbar->>Dialog: show dialog
Dialog->>User: pick image file
User-->>Dialog: file path
Dialog->>FE_API: fetchSearchedFaces({path})
FE_API->>BE: POST /face-clusters/face-search
BE->>FD: detect_faces(image_id, image_path, forSearch=true)
FD-->>BE: embedding(s)
BE->>DB: get_all_face_embeddings()
DB-->>BE: stored embeddings + metadata
BE->>BE: compare embeddings (cosine >= CONFIDENCE_PERCENT)
BE-->>FE_API: matches list (ImageData[])
FE_API-->>Dialog: response
Dialog->>Store: dispatch startSearch / setResults
Dialog->>User: show results
sequenceDiagram
participant Home
participant Store
participant MediaView
note over Home,Store: when search.active == true
Home->>Store: select search.images
Home->>MediaView: open with images={displayImages}
MediaView->>MediaView: memoize currentImage from images prop
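To make the matching step in the diagrams concrete, here is a minimal, self-contained sketch of the comparison the backend performs: cosine similarity between the query embedding and each stored embedding, keeping images at or above a threshold. The names and values (CONFIDENCE_PERCENT = 0.6, a 128-dimensional embedding) mirror this PR but are assumptions for illustration, not the project's actual code.

import numpy as np

CONFIDENCE_PERCENT = 0.6  # assumed 0..1 cosine-similarity threshold, as used in this PR


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two 1-D vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_matches(query: np.ndarray, stored: list[dict]) -> list[dict]:
    # Keep every stored image whose face embedding clears the threshold
    return [
        item
        for item in stored
        if cosine_similarity(query, np.asarray(item["embedding"])) >= CONFIDENCE_PERCENT
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.normal(size=128)  # stand-in for the uploaded face's embedding
    stored = [
        {"id": 1, "embedding": (query + rng.normal(scale=0.1, size=128)).tolist()},  # near-duplicate face
        {"id": 2, "embedding": rng.normal(size=128).tolist()},  # unrelated face
    ]
    print([m["id"] for m in find_matches(query, stored)])  # typically prints [1]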
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/routes/face_clusters.py (1)
235-282: Blocker: Path-based file read enables arbitrary server file access. Accept an uploaded image (UploadFile) and persist it to a temp file.
The current API reads any server path sent by the client. This is a security risk and contradicts "uploading an image." Switch to UploadFile, validate the content type, write to a temp file, and clean it up.
Apply this diff to the endpoint (function signature, validation, response_model, temp-file handling, and robust cleanup):
 @router.post(
-    "/face-search",
-    responses={code: {"model": ErrorResponse} for code in [400, 500]},
+    "/face-search",
+    response_model=GetAllImagesResponse,
+    responses={code: {"model": ErrorResponse} for code in [400, 500]},
 )
-def face_tagging(payload: AddSingleImageRequest):
-    image_path = payload.path
-    if not os.path.isfile(image_path):
-        raise HTTPException(
-            status_code=status.HTTP_400_BAD_REQUEST,
-            detail=ErrorResponse(
-                success=False,
-                error="Invalid file path",
-                message="The provided path is not a valid file",
-            ).model_dump(),
-        )
-
-    fd = FaceDetector()
-    fn = FaceNet(DEFAULT_FACENET_MODEL)
-    try:
+async def face_tagging(file: UploadFile = File(...)):
+    if not file.content_type or not file.content_type.startswith("image/"):
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail=ErrorResponse(
+                success=False,
+                error="Invalid file",
+                message="Only image uploads are supported",
+            ).model_dump(),
+        )
+    temp_path = None
+    fd = None
+    fn = None
+    try:
+        # Persist upload to a temporary file for OpenCV
+        suffix = os.path.splitext(file.filename or "")[1] or ".jpg"
+        with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
+            temp_path = tmp.name
+            tmp.write(await file.read())
+
+        fd = FaceDetector()
+        fn = FaceNet(DEFAULT_FACENET_MODEL)
         matches = []
-        image_id = str(uuid.uuid4())
-        result = fd.detect_faces(image_id, image_path, forSearch=True)
+        image_id = str(uuid.uuid4())  # used only for logging in detector
+        result = fd.detect_faces(image_id, temp_path, forSearch=True)
         if not result or result["num_faces"] == 0:
             return GetAllImagesResponse(success=True, message=f"Successfully retrieved {len(matches)} images", data=[])
         process_face = result["processed_faces"][0]
         new_embedding = fn.get_embedding(process_face)
         images = get_all_face_embeddings()
         if len(images) == 0:
             return GetAllImagesResponse(success=True, message=f"Successfully retrieved {len(matches)} images", data=[])
         else:
             for image in images:
-                max_similarity = 0
                 similarity = FaceNet_util_cosine_similarity(new_embedding, image["embeddings"])
-                max_similarity = max(max_similarity, similarity)
-                if max_similarity >= CONFIDENCE_PERCENT:
-                    matches.append(ImageData(id=image["id"], path=image["path"], folder_id=image["folder_id"], thumbnailPath=image["thumbnailPath"], metadata=image["metadata"], isTagged=image["isTagged"], tags=image["tags"], bboxes=image["bbox"]))
+                if similarity >= CONFIDENCE_PERCENT:
+                    matches.append(ImageData(id=image["id"], path=image["path"], folder_id=image["folder_id"], thumbnailPath=image["thumbnailPath"], metadata=image["metadata"], isTagged=image["isTagged"], tags=image["tags"], bbox=image["bbox"]))
         return GetAllImagesResponse(
             success=True,
             message=f"Successfully retrieved {len(matches)} images",
             data=matches,
         )
-    finally:
-        fd.close()
-        fn.close()
+    finally:
+        if fd:
+            fd.close()
+        if fn:
+            fn.close()
+        if temp_path and os.path.exists(temp_path):
+            try:
+                os.unlink(temp_path)
+            except OSError:
+                pass
Add these imports outside this hunk:
from fastapi import UploadFile, File
import tempfile
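For orientation, the snippet below is a stand-alone sketch (not the project's code) of how an UploadFile-based route of this shape is exercised from the client side, using FastAPI's TestClient and a multipart field named file; the route path and response body are assumptions based on this review.

import os
import tempfile

from fastapi import FastAPI, File, HTTPException, UploadFile
from fastapi.testclient import TestClient

app = FastAPI()


@app.post("/face-clusters/face-search")
async def face_search(file: UploadFile = File(...)):
    # Reject anything not declared as an image, as the suggested diff does
    if not file.content_type or not file.content_type.startswith("image/"):
        raise HTTPException(status_code=400, detail="Only image uploads are supported")
    # Persist the upload to a temp file (e.g., for OpenCV), then clean it up
    with tempfile.NamedTemporaryFile(delete=False, suffix=".jpg") as tmp:
        tmp.write(await file.read())
        temp_path = tmp.name
    os.unlink(temp_path)
    return {"success": True, "message": "received", "data": []}


client = TestClient(app)
resp = client.post(
    "/face-clusters/face-search",
    files={"file": ("query.jpg", b"\xff\xd8\xff\xe0 fake jpeg bytes", "image/jpeg")},
)
print(resp.status_code, resp.json())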
🧹 Nitpick comments (25)
.github/workflows/update-project-structure.yml (3)
7-9: Add concurrency to avoid race conditions on rapid successive merges.
Prevents overlapping runs from clobbering each other when multiple merges happen close together.
Apply this diff (top-level, alongside name/on/jobs):
 name: Auto-Update Project Structure
+concurrency:
+  group: update-project-structure
+  cancel-in-progress: true
15-19: Explicitly check out the target branch for deterministic output.
If you keep pull_request (not recommended), you must check out main; with push (recommended), this is optional but harmless.
       - name: Checkout code
         uses: actions/checkout@v4
         with:
+          ref: main
           fetch-depth: 0
11-13: Tighten permissions to least privilege (adjust per chosen approach).
If you keep direct commits (not recommended), contents: write is sufficient. If you adopt the PR flow, pull-requests: write is needed; otherwise it can be dropped.
.github/release-drafter-config.yml (1)
18-19: Empty "Others" category is ineffective.
A category with labels: [] will never match any PR. Remove it or assign labels; uncategorized items will still fall under Release Drafter's default "Other Changes".
Apply:
-  - title: "Others:"
-    labels: []
.gitignore (1)
3-3: Ignore nested .env files across packages/apps
Only .env.example at the repo root was found; no nested .env or .env.* files detected. Recommend updating .gitignore to cover workspace/package envs:
-.env
+.env
+.env.*
+**/.env
+**/.env.*
frontend/src/hooks/selectFile.ts (3)
3-5: Rename option type for clarity and consistency.
Use "File" terminology instead of "Folder" to match the hook's purpose.
-interface UseFolderPickerOptions {
+interface UseFilePickerOptions {
   title?: string;
 }
-export const useFile = (
-  options: UseFolderPickerOptions = {},
+export const useFile = (
+  options: UseFilePickerOptions = {},
 ): UseFilePickerReturn => {
Also applies to: 11-13
21-25: Support common image formats (webp, tiff, heic, avif).
Broaden accepted formats to cover frequent cases from phones and modern exports.
         {
           name: 'Images',
-          extensions: ['png', 'jpg', 'jpeg', 'gif', 'bmp'],
+          extensions: ['png', 'jpg', 'jpeg', 'gif', 'bmp', 'webp', 'tiff', 'heic', 'avif'],
         },
15-33: Simplify return logic by narrowing the type.
With multiple: false, the dialog returns string | null; you can cast and return directly.
-  const pickSingleFile = useCallback(async (): Promise<(
-    string) | null> => {
+  const pickSingleFile = useCallback(async (): Promise<string | null> => {
     try {
-      const selected = await open({
+      const selected = (await open({
         multiple: false,
         filters: [
           {
             name: 'Images',
             extensions: ['png', 'jpg', 'jpeg', 'gif', 'bmp'],
           },
         ],
         title,
-      });
-
-      if (selected && typeof selected === 'string') {
-        return selected
-      }
-      return null;
+      })) as string | null;
+      return selected;
     } catch (error) {
       console.error('Error picking File:', error);
       return null;
     }
   }, [title]);
backend/app/schemas/images.py (2)
6-8: Validate the input path minimally.
Enforce a non-empty string to catch trivial bad requests. If desired, switch to FilePath later.
-class AddSingleImageRequest(BaseModel):
-    path: str
+class AddSingleImageRequest(BaseModel):
+    path: str  # consider: Field(..., min_length=1)
Add import if you adopt Field:
from pydantic import BaseModel, Field
36-39: Strongly type the response data instead of dict.
A concrete model improves validation and OpenAPI for clients.
-class FaceTaggingResponse(BaseModel):
+class FaceTaggingResponse(BaseModel):
     success: bool
     message: str
-    data: dict
+    data: FaceSearchData
Define the model (add near the top of this file):
class FaceSearchData(BaseModel):
    # Adjust fields to what the endpoint actually returns
    images: List[str]  # or a typed Image model if available
    # matches: Optional[List[MatchResult]] = None  # if applicable
backend/app/config/settings.py (1)
7-7: Clarify units or rename CONFIDENCE_PERCENT
The name implies 0–100, but the value is 0.6; it is used as a cosine-similarity threshold in backend/app/routes/face_clusters.py (if max_similarity >= CONFIDENCE_PERCENT). Either rename to CONFIDENCE_THRESHOLD (preferred) or document the unit at the definition.
Location: backend/app/config/settings.py (definition); backend/app/routes/face_clusters.py (usage ~line 271).
-CONFIDENCE_PERCENT = 0.6
+CONFIDENCE_PERCENT = 0.6  # Confidence threshold in [0.0, 1.0]
docs/backend/backend_python/openapi.json (2)
1038-1096: Face-search endpoint: summary/operationId misnamed and 200 schema is undefined.
- Summary reads “Face Tagging”; use “Face Search” for clarity.
- operationId should align with the route purpose.
- The 200 response uses {}; please reference a concrete schema (e.g., GetAllImagesResponse) to avoid breaking codegen and docs.
Apply this diff:
- "summary": "Face Tagging", - "operationId": "face_tagging_face_clusters_face_search_post", + "summary": "Face Search", + "operationId": "face_search_face_clusters_face_search_post", ... - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": {} - } - } - }, + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/GetAllImagesResponse" + } + } + } + },
1291-1303: Request schema LGTM; consider a name aligned with search.
AddSingleImageRequest works, but FaceSearchRequest would better convey intent if this model is specific to search.
frontend/src/features/searchSlice.ts (1)
1-37: Solid search slice; clear results on start to avoid stale UI.
When a new search begins, consider clearing previous results immediately so the grid doesn't momentarily show stale images.
   reducers: {
     startSearch(state, action: PayloadAction<string>) {
       state.active = true;
       state.queryImage = action.payload;
+      state.images = [];
     },
frontend/src/components/Navigation/Navbar/Navbar.tsx (2)
15-18: Type the selector with RootState.
Avoid any to keep selectors type-safe.
+import { RootState } from '@/app/store';
 ...
-  const searchState = useSelector((state: any) => state.search);
+  const searchState = useSelector((state: RootState) => state.search);
41-49: Add aria-label to the clear button for accessibility.
Improves screen-reader usability.
-              <button
+              <button
+                aria-label="Clear face search"
                 onClick={() => dispatch(clearSearch())}
                 className="absolute -top-1 -right-1 flex h-3 w-3 items-center justify-center rounded-full bg-red-600 text-[10px] leading-none text-white"
               >
                 ✕
               </button>
frontend/src/pages/Home/Home.tsx (1)
30-51: Avoid shadowing 'images' inside the effect.
Rename the local variable to prevent confusion with the Redux selector.
-    } else if (isSuccess) {
-      const images = data?.data as Image[];
-      dispatch(setImages(images));
+    } else if (isSuccess) {
+      const fetchedImages = data?.data as Image[];
+      dispatch(setImages(fetchedImages));
       dispatch(hideLoader());
     }
backend/app/routes/face_clusters.py (6)
40-49: Field/name mismatch: use singular bbox (BoundingBox) instead of plural bboxes.
The DB returns one bbox per row; the model uses bboxes: BoundingBox and you pass image["bbox"]. Rename to bbox for accuracy.
 class ImageData(BaseModel):
 @@
-    bboxes: BoundingBox
+    bbox: BoundingBox
And when constructing:
-                    matches.append(ImageData(id=image["id"], path=image["path"], folder_id=image["folder_id"], thumbnailPath=image["thumbnailPath"], metadata=image["metadata"], isTagged=image["isTagged"], tags=image["tags"], bboxes=image["bbox"]))
+                    matches.append(ImageData(id=image["id"], path=image["path"], folder_id=image["folder_id"], thumbnailPath=image["thumbnailPath"], metadata=image["metadata"], isTagged=image["isTagged"], tags=image["tags"], bbox=image["bbox"]))
Also applies to: 272-272
252-252: Avoid loading FaceNet twice; reuse FaceDetector.facenet.
Saves memory and startup latency by using the detector's model for the single embedding call.
-    fn = FaceNet(DEFAULT_FACENET_MODEL)
 @@
-        new_embedding = fn.get_embedding(process_face)
+        new_embedding = fd.facenet.get_embedding(process_face)
 @@
-        fn.close()
+        # no separate FaceNet instance to close
Also applies to: 261-261, 281-281
268-273: Simplify similarity logic.
max_similarity is redundant; compare similarity directly.
-                max_similarity = 0
-                similarity = FaceNet_util_cosine_similarity(new_embedding, image["embeddings"])
-                max_similarity = max(max_similarity, similarity)
-                if max_similarity >= CONFIDENCE_PERCENT:
+                similarity = FaceNet_util_cosine_similarity(new_embedding, image["embeddings"])
+                if similarity >= CONFIDENCE_PERCENT:
                     matches.append(ImageData(id=image["id"], path=image["path"], folder_id=image["folder_id"], thumbnailPath=image["thumbnailPath"], metadata=image["metadata"], isTagged=image["isTagged"], tags=image["tags"], bbox=image["bbox"]))
12-12: Remove leftover "# Add this import" inline comment.
Noise in production code.
-    db_get_images_by_cluster_id,  # Add this import
+    db_get_images_by_cluster_id,
57-57: logger is unused.
Either use it for diagnostics or drop it.
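If the logger is kept, a couple of lightweight diagnostics on the search path would justify it. A minimal sketch (the logger name, helper, and messages are assumptions, not existing code):

import logging

logger = logging.getLogger(__name__)


def log_search_outcome(num_faces: int, num_matches: int, threshold: float) -> None:
    # Summarize one face-search request without logging image contents or paths
    if num_faces == 0:
        logger.info("face-search: no face detected in the query image")
        return
    logger.info(
        "face-search: %d face(s) detected, %d match(es) at threshold %.2f",
        num_faces,
        num_matches,
        threshold,
    )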
33-55: Prefer reusing shared schemas to avoid duplication.
ImageData and GetAllImagesResponse exist in app/routes/images.py. Either import and extend with an optional bbox, or move these models to app/schemas so both routes share them.
frontend/src/components/Media/MediaView.tsx (2)
38-38: Remove stray console.log.
Avoid noisy logs in production UI.
-  console.log(currentViewIndex);
+  // debug: currentViewIndex
31-37: Guard against a stale/out-of-bounds index when images changes.
Clamp or reset currentViewIndex to keep the viewer usable when the list updates.
// Add after computing totalImages
useEffect(() => {
  if (currentViewIndex >= totalImages && totalImages > 0) {
    dispatch(setCurrentViewIndex(totalImages - 1));
  }
  if (currentViewIndex < 0 && totalImages > 0) {
    dispatch(setCurrentViewIndex(0));
  }
}, [currentViewIndex, totalImages, dispatch]);
Also applies to: 100-104
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
frontend/public/photo.png is excluded by !**/*.png
📒 Files selected for processing (25)
.github/ISSUE_TEMPLATE/bug.yml (0 hunks)
.github/ISSUE_TEMPLATE/feature.yml (0 hunks)
.github/release-drafter-config.yml (2 hunks)
.github/workflows/update-project-structure.yml (1 hunks)
.gitignore (2 hunks)
backend/app/config/settings.py (1 hunks)
backend/app/database/faces.py (1 hunks)
backend/app/models/FaceDetector.py (2 hunks)
backend/app/routes/face_clusters.py (3 hunks)
backend/app/routes/facetagging.py (0 hunks)
backend/app/schemas/images.py (2 hunks)
docs/backend/backend_python/openapi.json (2 hunks)
frontend/src/api/api-functions/face_clusters.ts (2 hunks)
frontend/src/api/apiEndpoints.ts (1 hunks)
frontend/src/app/store.ts (2 hunks)
frontend/src/components/Dialog/FaceSearchDialog.tsx (1 hunks)
frontend/src/components/Media/MediaView.tsx (2 hunks)
frontend/src/components/Navigation/Navbar/Navbar.tsx (1 hunks)
frontend/src/features/imageSelectors.ts (1 hunks)
frontend/src/features/searchSlice.ts (1 hunks)
frontend/src/hooks/selectFile.ts (1 hunks)
frontend/src/pages/AITagging/AITagging.tsx (1 hunks)
frontend/src/pages/Home/Home.tsx (2 hunks)
frontend/src/types/Media.ts (1 hunks)
sync-microservice/.gitignore (1 hunks)
💤 Files with no reviewable changes (3)
- .github/ISSUE_TEMPLATE/feature.yml
- backend/app/routes/facetagging.py
- .github/ISSUE_TEMPLATE/bug.yml
🧰 Additional context used
🧬 Code graph analysis (10)
frontend/src/api/api-functions/face_clusters.ts (3)
frontend/src/types/API.ts (1)
APIResponse(1-8)frontend/src/api/axiosConfig.ts (1)
apiClient(5-12)frontend/src/api/apiEndpoints.ts (1)
faceClustersEndpoints(5-10)
backend/app/database/faces.py (1)
backend/test.py (1)
get_all_face_embeddings(8-32)
frontend/src/features/imageSelectors.ts (1)
frontend/src/app/store.ts (1)
RootState(22-22)
frontend/src/components/Dialog/FaceSearchDialog.tsx (7)
frontend/src/hooks/selectFile.ts (1)
useFile(11-40)frontend/src/hooks/useQueryExtension.ts (1)
usePictoMutation(26-78)frontend/src/api/api-functions/face_clusters.ts (1)
fetchSearchedFaces(44-52)frontend/src/types/Media.ts (1)
Image(1-10)frontend/src/features/loaderSlice.ts (2)
hideLoader(21-24)showLoader(17-20)frontend/src/features/searchSlice.ts (3)
setResults(24-26)clearSearch(27-31)startSearch(20-23)frontend/src/features/infoDialogSlice.ts (1)
showInfoDialog(16-30)
frontend/src/features/searchSlice.ts (1)
frontend/src/types/Media.ts (1)
Image(1-10)
frontend/src/components/Navigation/Navbar/Navbar.tsx (3)
frontend/src/features/onboardingSelectors.ts (2)
selectName(15-15)selectAvatar(13-13)frontend/src/features/searchSlice.ts (1)
clearSearch(27-31)frontend/src/components/Dialog/FaceSearchDialog.tsx (1)
FaceSearchDialog(20-124)
frontend/src/components/Media/MediaView.tsx (2)
frontend/src/types/Media.ts (1)
MediaViewProps(25-29)frontend/src/features/imageSelectors.ts (1)
selectCurrentViewIndex(8-9)
backend/app/routes/face_clusters.py (6)
backend/app/database/faces.py (1)
get_all_face_embeddings(136-204)backend/app/models/FaceDetector.py (3)
FaceDetector(11-75)detect_faces(22-63)close(65-75)backend/app/models/FaceNet.py (2)
FaceNet(8-25)get_embedding(16-21)backend/app/schemas/images.py (2)
AddSingleImageRequest(6-7)ErrorResponse(52-55)backend/app/utils/FaceNet.py (1)
FaceNet_util_cosine_similarity(20-23)backend/app/routes/images.py (2)
ImageData(11-18)GetAllImagesResponse(21-24)
frontend/src/pages/AITagging/AITagging.tsx (1)
frontend/src/components/Media/MediaView.tsx (1)
MediaView(26-192)
frontend/src/pages/Home/Home.tsx (6)
frontend/src/features/imageSelectors.ts (2)
selectIsImageViewOpen(24-27)selectImages(5-7)frontend/src/app/store.ts (1)
RootState(22-22)frontend/src/features/loaderSlice.ts (2)
showLoader(17-20)hideLoader(21-24)frontend/src/features/infoDialogSlice.ts (1)
showInfoDialog(16-30)frontend/src/features/imageSlice.ts (1)
setImages(22-26)frontend/src/components/Media/MediaView.tsx (1)
MediaView(26-192)
🔇 Additional comments (22)
.github/release-drafter-config.yml (6)
2-2: Quoting change LGTM.
Double-quoting the tag template is fine and keeps YAML consistent.
10-13: Bug Fixes category looks good.
Standard "bug" label should work as expected.
14-17: Docs category looks good.
"documentation" label grouping is fine.
21-21: Change template LGTM.
Concise and standard format.
30-30: $CONTRIBUTORS is supported by Release Drafter — it's a comma-separated list of PR authors, commit authors, and committers and is valid in templates/header/footer; leaving $CONTRIBUTORS in .github/release-drafter-config.yml is correct.
5-9: No action required — labels present
Both "UI" and "enhancement" labels were found by the verification script, so Release Drafter's "Features" category will be populated.
sync-microservice/.gitignore (1)
143-143: LGTM — Ignore local sync env directory
Confirmed no files under sync-microservice/.sync-env are tracked or present.
.gitignore (1)
33-33: LGTM: Ignore built frontend artifacts
Confirmed: .gitignore contains "frontend/dist" (line 33) and git shows no tracked files under frontend/dist.
frontend/src/pages/AITagging/AITagging.tsx (1)
68-69: LGTM: MediaView now correctly receives images via props.
This aligns with the updated MediaView API and keeps indices consistent with the rendered grid.
frontend/src/types/Media.ts (1)
25-29: LGTM: MediaViewProps now requires images.
Matches changes in MediaView and consuming pages.
frontend/src/features/imageSelectors.ts (1)
5-7: LGTM: No functional change.
Refactor to block body preserves behavior.
frontend/src/app/store.ts (2)
4-4: LGTM: search slice wired into the store.
18-19: LGTM: reducer map includes search.
frontend/src/api/apiEndpoints.ts (1)
7-7: Backend route confirmed — verify trailing-slash handling. Backend exposes POST /face-clusters/face-search (backend/main.py includes face_clusters_router with prefix "/face-clusters"; backend/app/routes/face_clusters.py defines router.post("/face-search"); docs/backend/backend_python/openapi.json contains "/face-clusters/face-search"). Frontend uses '/face-clusters/face-search' (frontend/src/api/apiEndpoints.ts). OpenAPI has no trailing-slash variant — verify whether the server redirects/accepts '/face-clusters/face-search/' or normalize requests.
frontend/src/api/api-functions/face_clusters.ts (2)
14-16: Request type addition looks good.
Matches the backend payload { path: string }.
44-52: Verify backend response shape for face-search endpointfrontend/src/api/api-functions/face_clusters.ts (lines 44–52) expects { success, message?, data: Image[] } with images under data; I couldn't find the backend route in the repo — provide the backend route/file or an example response payload so I can verify.
backend/app/models/FaceDetector.py (1)
22-64: Confirm how embeddings reach the face-search matcher.
detect_faces computes embeddings but doesn't return them. If the route needs the query embeddings, either:
- return them (e.g., as lists via [e.tolist() for e in embeddings]), or
- recompute them in the route from processed_faces.
If you opt to return them, the minimal change is:
     return {
         "ids": f"{class_ids}",
         "processed_faces": processed_faces,
         "num_faces": len(embeddings),
+        # Optional: return embeddings as lists for JSON safety if ever serialized
+        # "embeddings": [e.tolist() for e in embeddings],
     }
frontend/src/components/Dialog/FaceSearchDialog.tsx (2)
26-56: Good UX: loader and result handling.
Success and error flows are handled cleanly, including no-result UX.
18-18: No change required — fetchSearchedFaces is re-exported from the api-functions barrel. index.ts exports './face_clusters' and fetchSearchedFaces is defined/exported in frontend/src/api/api-functions/face_clusters.ts.
frontend/src/pages/Home/Home.tsx (1)
56-62: Title logic/readiness looks good.
Dynamic title and displayImages selection are correct and align with search state.
frontend/src/components/Media/MediaView.tsx (1)
49-56: Verify Redux index semantics still align with the passed images prop.
Actions like nextImage/previousImage likely assume the same dataset length as before. Confirm that the slice logic uses images.length from this prop path or is otherwise kept in sync.
Also applies to: 72-76, 168-179
backend/app/routes/face_clusters.py (1)
6-6: Verify CONFIDENCE_PERCENT scaling and DEFAULT_FACENET_MODEL path
- Ensure CONFIDENCE_PERCENT is a float scaled for cosine similarity (0..1), not 0–100.
- Ensure DEFAULT_FACENET_MODEL resolves to an existing, readable ONNX file path in the deployed environment (absolute or correct relative path).
Imported in backend/app/routes/face_clusters.py — app/config was not found in the repo, so manual verification of app/config/settings or environment is required.
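One way to catch both issues early is a small startup check. A sketch, assuming the settings module exposes CONFIDENCE_PERCENT and DEFAULT_FACENET_MODEL as this PR suggests (the values and path below are hypothetical):

import os


def validate_face_search_settings(confidence: float, model_path: str) -> None:
    # The threshold is compared against cosine similarity, so it must live in [0, 1]
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(
            f"CONFIDENCE_PERCENT={confidence!r} looks like a percentage; expected a 0..1 cosine threshold"
        )
    # Fail fast if the ONNX model is missing instead of erroring on the first request
    if not os.path.isfile(model_path):
        raise FileNotFoundError(f"DEFAULT_FACENET_MODEL not found: {model_path}")


if __name__ == "__main__":
    try:
        validate_face_search_settings(0.6, "models/facenet.onnx")  # hypothetical values
    except (ValueError, FileNotFoundError) as exc:
        print(f"configuration problem: {exc}")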
on:
  pull_request:
    types: [closed]
    branches: [main]
Trigger on push to main instead of pull_request closed (forks will lack write perms).
Using pull_request on forks yields a read‑only GITHUB_TOKEN; pushing changes will fail. Triggering on push to main ensures write perms and reflects the merged state reliably.
Apply this diff:
on:
- pull_request:
- types: [closed]
- branches: [main]
+ push:
+  branches: [main]
🤖 Prompt for AI Agents
.github/workflows/update-project-structure.yml lines 2-5: the workflow currently
triggers on pull_request types: [closed] which prevents workflows from having
write perms on forked PRs; change the trigger to run on push to main by
replacing the pull_request block with an on: push block targeting branches:
[main] (i.e., remove the pull_request stanza and add on: push with branches:
[main]).
          fetch-depth: 0

      - name: Generate project structure file
        run: tree -a -I 'node_modules|.git' -f > project_structure.txt
Ensure tree is installed (not guaranteed on ubuntu-latest).
Without this step the job can fail at runtime.
Insert before generating the file:
+ - name: Install tree
+ run: sudo apt-get update && sudo apt-get install -y tree
+
- name: Generate project structure file
  run: tree -a -I 'node_modules|.git' -f > project_structure.txt
🤖 Prompt for AI Agents
.github/workflows/update-project-structure.yml around line 21: the workflow
directly runs "tree" which may not be installed on ubuntu-latest; add a
preceding job step that installs the tree utility (for the ubuntu runner use
apt-get update && apt-get install -y tree or the distro-appropriate package
manager) so the subsequent "tree -a -I 'node_modules|.git' -f >
project_structure.txt" command cannot fail due to a missing binary.
      - name: Commit and push changes
        uses: stefanzweifel/git-auto-commit-action@v5
        with:
          commit_message: "Update project structure (auto)"
          branch: main
          commit_user_name: "CI Bot"
          commit_user_email: "ci-bot@example.com"
          commit_author: "CI Bot <ci-bot@example.com>"
Avoid direct pushes to protected main; open a PR instead.
Direct commits can be blocked by branch protections and bypass required checks. Use a PR for compliance and reliability.
Replace the auto‑commit step with a PR creation step:
- - name: Commit and push changes
- uses: stefanzweifel/git-auto-commit-action@v5
- with:
- commit_message: "Update project structure (auto)"
- branch: main
- commit_user_name: "CI Bot"
- commit_user_email: "ci-bot@example.com"
- commit_author: "CI Bot <ci-bot@example.com>"
+ - name: Create PR with project structure update
+ uses: peter-evans/create-pull-request@v6
+ with:
+ commit-message: "chore(ci): Update project structure (auto)"
+ branch: ci/update-project-structure
+ title: "chore(ci): Update project structure (auto)"
+ body: "Automated update of project structure after push to main."
+ author: "CI Bot <ci-bot@example.com>"
+   committer: "CI Bot <ci-bot@example.com>"
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
.github/workflows/update-project-structure.yml around lines 23 to 30: the
workflow currently commits and pushes directly to main which can be blocked by
branch protections; replace this step so the workflow pushes changes to a new
branch and opens a pull request instead (for example, create a unique branch
name, use git to push to that branch, then invoke an action such as
peter-evans/create-pull-request to create the PR), ensure the push uses
GITHUB_TOKEN or a dedicated PAT with repo permissions and adjust the workflow
inputs (commit_message/branch) accordingly so CI creates a PR rather than
directly committing to main.
def get_all_face_embeddings():
    conn = sqlite3.connect(DATABASE_PATH)
    cursor = conn.cursor()

    try:
        cursor.execute("""
            SELECT
                f.embeddings,
                f.bbox,
                i.id,
                i.path,
                i.folder_id,
                i.thumbnailPath,
                i.metadata,
                i.isTagged,
                m.name as tag_name
            FROM faces f
            JOIN images i ON f.image_id=i.id
            LEFT JOIN image_classes ic ON i.id = ic.image_id
            LEFT JOIN mappings m ON ic.class_id = m.class_id
        """)
        results = cursor.fetchall()

        images_dict = {}
        for (
            embeddings,
            bbox,
            image_id,
            path,
            folder_id,
            thumbnail_path,
            metadata,
            is_tagged,
            tag_name,
        ) in results:
            if image_id not in images_dict:
                try:
                    embeddings_json = json.loads(embeddings)
                    bbox_json = json.loads(bbox)
                except json.JSONDecodeError:
                    continue;
                images_dict[image_id] = {
                    "embeddings": embeddings_json,
                    "bbox": bbox_json,
                    "id": image_id,
                    "path": path,
                    "folder_id": folder_id,
                    "thumbnailPath": thumbnail_path,
                    "metadata": metadata,
                    "isTagged": bool(is_tagged),
                    "tags": [],
                }

            # Add tag if it exists
            if tag_name:
                images_dict[image_id]["tags"].append(tag_name)

        # Convert to list and set tags to None if empty
        images = []
        for image_data in images_dict.values():
            if not image_data["tags"]:
                image_data["tags"] = None
            images.append(image_data)

        # Sort by path
        images.sort(key=lambda x: x["path"])
        return images
    finally:
        conn.close()
Embeddings aggregation bug: only first face per image kept; duplicates possible due to tag joins; bbox/JSON None handling.
- Current logic initializes per-image data once and ignores subsequent faces, dropping embeddings/bboxes for additional faces.
- LEFT JOIN with tags duplicates face rows; without face_id you'll append duplicates if you start aggregating.
- json.loads(bbox) will raise on NULL (TypeError), which is not caught.
- Output key is bbox (singular), whereas frontend types expect bboxes?: [...].
Apply this refactor to correctly aggregate and dedupe faces per image and handle NULLs:
def get_all_face_embeddings():
conn = sqlite3.connect(DATABASE_PATH)
cursor = conn.cursor()
try:
- cursor.execute("""
- SELECT
- f.embeddings,
- f.bbox,
- i.id,
- i.path,
- i.folder_id,
- i.thumbnailPath,
- i.metadata,
- i.isTagged,
- m.name as tag_name
- FROM faces f
- JOIN images i ON f.image_id=i.id
- LEFT JOIN image_classes ic ON i.id = ic.image_id
- LEFT JOIN mappings m ON ic.class_id = m.class_id
- """)
+ cursor.execute("""
+ SELECT
+ f.face_id,
+ f.embeddings,
+ f.bbox,
+ i.id,
+ i.path,
+ i.folder_id,
+ i.thumbnailPath,
+ i.metadata,
+ i.isTagged,
+ m.name as tag_name
+ FROM faces f
+ JOIN images i ON f.image_id = i.id
+ LEFT JOIN image_classes ic ON i.id = ic.image_id
+ LEFT JOIN mappings m ON ic.class_id = m.class_id
+ """)
results = cursor.fetchall()
- images_dict = {}
- for (
- embeddings,
- bbox,
- image_id,
- path,
- folder_id,
- thumbnail_path,
- metadata,
- is_tagged,
- tag_name,
- ) in results:
- if image_id not in images_dict:
- try:
- embeddings_json = json.loads(embeddings)
- bbox_json = json.loads(bbox)
- except json.JSONDecodeError:
- continue;
- images_dict[image_id] = {
- "embeddings": embeddings_json,
- "bbox": bbox_json,
- "id": image_id,
- "path": path,
- "folder_id": folder_id,
- "thumbnailPath": thumbnail_path,
- "metadata": metadata,
- "isTagged": bool(is_tagged),
- "tags": [],
- }
-
- # Add tag if it exists
- if tag_name:
- images_dict[image_id]["tags"].append(tag_name)
+ images_dict = {}
+ seen_faces = {} # image_id -> set(face_id)
+ for (
+ face_id,
+ embeddings_json_text,
+ bbox_json_text,
+ image_id,
+ path,
+ folder_id,
+ thumbnail_path,
+ metadata,
+ is_tagged,
+ tag_name,
+ ) in results:
+ if image_id not in images_dict:
+ images_dict[image_id] = {
+ "embeddings": [], # list[list[float]]
+ "bboxes": [], # list[dict]
+ "id": image_id,
+ "path": path,
+ "folder_id": folder_id,
+ "thumbnailPath": thumbnail_path,
+ "metadata": metadata,
+ "isTagged": bool(is_tagged),
+ "tags": set(),
+ }
+ seen_faces[image_id] = set()
+
+ # Deduplicate the same face across tag join rows
+ if face_id in seen_faces[image_id]:
+ if tag_name:
+ images_dict[image_id]["tags"].add(tag_name)
+ continue
+
+ # Parse embedding vector
+ try:
+ emb_vec = json.loads(embeddings_json_text)
+ except (json.JSONDecodeError, TypeError):
+ continue
+ images_dict[image_id]["embeddings"].append(emb_vec)
+
+ # Parse bbox if present
+ if bbox_json_text:
+ try:
+ bbox_obj = json.loads(bbox_json_text)
+ except (json.JSONDecodeError, TypeError):
+ bbox_obj = None
+ else:
+ bbox_obj = None
+ images_dict[image_id]["bboxes"].append(bbox_obj)
+
+ if tag_name:
+ images_dict[image_id]["tags"].add(tag_name)
+ seen_faces[image_id].add(face_id)
- # Convert to list and set tags to None if empty
+ # Convert to list and set tags to None if empty
images = []
for image_data in images_dict.values():
- if not image_data["tags"]:
- image_data["tags"] = None
+ # finalize tags set -> list or None
+ tags_set = image_data.pop("tags", set())
+ image_data["tags"] = list(tags_set) if tags_set else None
images.append(image_data)Notes:
- Exposes bboxes (plural) and a list of embeddings per image.
- Prevents duplicates from tag join rows.
- Handles NULL/invalid JSON gracefully.
🤖 Prompt for AI Agents
In backend/app/database/faces.py around lines 136–205, the current loop only
initializes one face per image (dropping subsequent faces), can produce
duplicate rows because tags are joined without face_id, treats NULL bbox
incorrectly (json.loads will raise TypeError) and returns key "bbox" when
frontend expects "bboxes". Fix by updating the query to SELECT f.id AS face_id,
f.embeddings, f.bbox, ...; build images_dict keyed by image_id and track seen
face_ids per image to avoid duplicates; for each row parse embeddings and bbox
with a safe helper that returns None on None or on JSONDecodeError/TypeError and
skip only that face element rather than the whole image; append each parsed
embedding to an "embeddings" list and each parsed bbox to a "bboxes" list on the
image entry; dedupe tags when appending and set tags to None (and
bboxes/embeddings to None) if empty; keep isTagged as bool, sort images by path
and return the list with the bboxes plural key.
rahulharpal1603
left a comment
LGTM! @tushar1977
Thank you for your patience and regular follow-ups. As a next step, please open another PR for the webcam feature.
Actionable comments posted: 2
🧹 Nitpick comments (6)
frontend/src/components/Dialog/FaceSearchDialog.tsx (6)
28-31: Add proper typing to mutation to avoid casts
Avoid using as Image[]. Type the mutation or the onSuccess callback so data.data is inferred as Image[].
Example (adjust to your actual APIResponse/BackendRes types):
const { mutate: getSearchImages } = usePictoMutation<APIResponse<Image[]>, unknown, string>({
  mutationFn: (path) => fetchSearchedFaces({ path }),
  onSuccess: (data /*: APIResponse<Image[]> */) => {
    const result = data.data;
    ...
  },
});
73-75: Remove redundant onClick on DialogTrigger button
DialogTrigger toggles open via onOpenChange. The extra onClick is unnecessary.
-          <Button
-            onClick={() => setIsDialogOpen(true)}
-            variant="ghost"
+          <Button
+            variant="ghost"
             size="icon"
             className="h-8 w-8 p-1"
           >
106-117: 'Use Webcam' is a no-op; either disable it or show "Coming soon"
The current handler does nothing, which is confusing. Show an info dialog or disable the button.
Option A: Show info dialog
-          <Button
-            onClick={() => {}}
-            disabled={false}
+          <Button
+            onClick={() =>
+              dispatch(
+                showInfoDialog({
+                  title: 'Coming soon',
+                  message: 'Webcam face search will be available in a future update.',
+                  variant: 'info',
+                }),
+              )
+            }
             className="flex h-32 flex-col items-center justify-center gap-2 p-0"
             variant="outline"
           >
Option B: Disable the button
-          <Button
-            onClick={() => {}}
-            disabled={false}
+          <Button
+            disabled
             className="flex h-32 flex-col items-center justify-center gap-2 p-0"
             variant="outline"
           >
95-95: Remove redundant disabled={false} props
They add noise without effect.
-            disabled={false}
+            /* enabled by default */
Also applies to: 108-108
60-69: Navigate to HOME only after a file is selected (confirm intended UX)
Navigating before selection will route to Home even if the user cancels. If that's unintended, move navigation inside the success branch.
-  const handlePickFile = async () => {
-    navigate(`/${ROUTES.HOME}`);
-    setIsDialogOpen(false);
-    const filePath = await pickSingleFile();
+  const handlePickFile = async () => {
+    const filePath = await pickSingleFile();
     if (filePath) {
+      navigate(`/${ROUTES.HOME}`);
+      setIsDialogOpen(false);
       dispatch(startSearch(filePath));
       dispatch(showLoader('Searching faces...'));
       getSearchImages(filePath);
     }
   };
83-90: Consider externalizing user-facing strings
If you have i18n, move these strings to translation files to keep UI text consistent and localizable.
Also applies to: 120-124
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
frontend/src/components/Dialog/FaceSearchDialog.tsx (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/src/components/Dialog/FaceSearchDialog.tsx (8)
frontend/src/hooks/selectFile.ts (1)
useFile(11-39)frontend/src/hooks/useQueryExtension.ts (1)
usePictoMutation(26-78)frontend/src/api/api-functions/face_clusters.ts (1)
fetchSearchedFaces(44-52)frontend/src/types/Media.ts (1)
Image(1-10)frontend/src/features/loaderSlice.ts (2)
hideLoader(21-24)showLoader(17-20)frontend/src/features/searchSlice.ts (3)
setResults(24-26)clearSearch(27-31)startSearch(20-23)frontend/src/features/infoDialogSlice.ts (1)
showInfoDialog(16-30)frontend/src/constants/routes.ts (1)
ROUTES(1-11)
🔇 Additional comments (1)
frontend/src/components/Dialog/FaceSearchDialog.tsx (1)
18-18: Import path OK — fetchSearchedFaces is re-exported from the barrel
fetchSearchedFaces is defined in frontend/src/api/api-functions/face_clusters.ts (line 44) and re-exported in frontend/src/api/api-functions/index.ts, so importing from '@/api/api-functions' is correct.
import { usePictoMutation } from '@/hooks/useQueryExtension';
import { fetchSearchedFaces } from '@/api/api-functions';
import { showInfoDialog } from '@/features/infoDialogSlice';
import { useNavigate } from 'react-router';
Fix import source for useNavigate
useNavigate is exported from react-router-dom, not react-router. This will fail at compile/runtime.
-import { useNavigate } from 'react-router';
+import { useNavigate } from 'react-router-dom';
🤖 Prompt for AI Agents
In frontend/src/components/Dialog/FaceSearchDialog.tsx around line 20, the
import currently pulls useNavigate from 'react-router' which is incorrect;
change the import source to 'react-router-dom' so the statement imports
useNavigate from 'react-router-dom' instead; update any other imports in the
file if they reference react-router to use react-router-dom where appropriate
and run a quick build to confirm the compile error is resolved.
    onError: () => {
      dispatch(hideLoader());
      dispatch(
        showInfoDialog({
          title: 'Search Failed',
          message: 'There was an error while searching for faces.',
          variant: 'error',
        }),
      );
    },
Clear search state on error to avoid stale “active” search
If the request fails, searchSlice.active remains true and previous images may linger. Clear it on error.
onError: () => {
dispatch(hideLoader());
+ dispatch(clearSearch());
dispatch(
showInfoDialog({
title: 'Search Failed',
message: 'There was an error while searching for faces.',
variant: 'error',
}),
);
    },
🤖 Prompt for AI Agents
In frontend/src/components/Dialog/FaceSearchDialog.tsx around lines 48 to 57,
the onError handler hides the loader and shows an error dialog but does not
reset the search slice’s active flag, leaving previous images/stale state;
update the onError handler to also dispatch the search action that clears or
sets active to false (e.g., dispatch(searchSlice.actions.clearActive()) or
dispatch(setSearchActive(false))) — import the appropriate action from the
search slice at the top if missing and call it before or after hiding the loader
so the search state is reliably cleared on error.
This PR implements a new API route to search for a tagged face and deletes the old routes/facetagging.py file.
Issue ID: #502
Summary by CodeRabbit
New Features
Documentation
Chores