Fix/dependency error #594
Conversation
This reverts commit 52a8a57. good
…og after selecting a file
- Updated Tauri and related plugins to their latest versions for improved functionality and security.
- Changed loading message in WebcamComponent from 'Searching for images...' to 'Searching faces...' to better reflect the operation being performed.
Walkthrough

This PR adds dual-input face search (path or base64), refactors face-tagging to route by input_type, centralizes face search logic in a new util perform_face_search, updates schemas/OpenAPI, and adds frontend webcam capture with related UI and dependency changes.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant Dialog
participant Webcam
participant FrontendAPI
participant Backend
rect rgb(200,220,255)
Note over User,Backend: Path-based flow
User->>Dialog: Select file & submit
Dialog->>FrontendAPI: POST /face-clusters/face-search?input_type=path (FaceSearchRequest with path)
FrontendAPI->>Backend: Validate path, call perform_face_search(image_path)
Backend->>Backend: detect faces -> extract embedding -> compare embeddings
Backend-->>FrontendAPI: matched images
FrontendAPI-->>Dialog: show results
end
rect rgb(220,255,200)
Note over User,Backend: Webcam/base64 flow
User->>Dialog: Open Webcam
Dialog->>Webcam: isOpen=true
User->>Webcam: Capture photo
Webcam->>FrontendAPI: POST /face-clusters/face-search?input_type=base64 (FaceSearchRequest with base64_data)
FrontendAPI->>Backend: decode base64 -> write temp file -> call perform_face_search(temp_path)
Backend->>Backend: detect faces -> extract embedding -> compare embeddings
Backend-->>FrontendAPI: matched images
FrontendAPI-->>Webcam: show results or info dialog
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
🔇 Additional comments (3)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/backend/backend_python/openapi.json (1)
1038-1072: Update /face-clusters/face-search and align Image schema to match frontend contract.

The 200 response at lines 1074–1113 (and lines 1038–1072) returns an empty schema instead of referencing GetAllImagesResponse. The backend's ImageData schema (lines 1949–1998) omits the optional `bboxes` field that the frontend's Image interface expects, creating a contract-drift risk.

- Set the 200 response to `$ref: "#/components/schemas/GetAllImagesResponse"` at both endpoints
- Add a `bboxes` field to the ImageData schema: `"bboxes": {"type": "array", "items": {"type": "object", "properties": {"x": {"type": "number"}, "y": {"type": "number"}, "width": {"type": "number"}, "height": {"type": "number"}}}}`
- After updating the FastAPI route's response_model, regenerate this OpenAPI file to prevent manual divergence
🧹 Nitpick comments (7)
frontend/src/components/Navigation/Navbar/Navbar.tsx (1)
36-40: LGTM! Consider improving type safety.

The updated logic correctly handles both data URLs (from webcam capture) and file paths, directly aligning with the PR's dual-input face search objective. The defensive optional chaining on line 37 is good practice.
However, consider improving type safety: `searchState` on line 14 is typed as `any`, which bypasses TypeScript's type checking for `queryImage`.

Apply this diff to properly type the search state:

- const searchState = useSelector((state: any) => state.search);
+ const searchState = useSelector((state: RootState) => state.search);

Ensure `RootState` is imported from your Redux store configuration, and that the search slice has a proper type definition with `queryImage: string | undefined`.

frontend/src-tauri/Info.plist (1)
1-8: LGTM with optional improvement suggestion.

The Info.plist correctly includes the required `NSCameraUsageDescription` for macOS camera permissions.

Consider making the usage description more user-friendly and specific:

- <string>Request camera access for WebRTC</string>
+ <string>PictoPy needs camera access to search for faces using your webcam</string>

frontend/src/api/api-functions/face_clusters.ts (1)
58-66: Consider consolidating duplicate API call logic.

The `fetchSearchedFacesBase64` function is nearly identical to `fetchSearchedFaces` (lines 48-56), differing only in the request type and endpoint.

To reduce duplication, consider a generic helper:

async function postFaceSearch<T>(
  endpoint: string,
  request: T,
): Promise<APIResponse> {
  const response = await apiClient.post<APIResponse>(endpoint, request);
  return response.data;
}

export const fetchSearchedFaces = async (
  request: FetchSearchedFacesRequest,
): Promise<APIResponse> =>
  postFaceSearch(faceClustersEndpoints.searchForFaces, request);

export const fetchSearchedFacesBase64 = async (
  request: FetchSearchedFacesBase64Request,
): Promise<APIResponse> =>
  postFaceSearch(faceClustersEndpoints.searchForFacesBase64, request);

frontend/src/components/Dialog/FaceSearchDialog.tsx (1)
62-79: Small UI/robustness cleanups.
- Optional: rely on DialogTrigger to open the dialog; drop the Button onClick to avoid duplication.
- Optional: pre-check for mediaDevices support to fail fast before calling getUserMedia.
- Optional: drop redundant disabled={false} on buttons.
- <DialogTrigger asChild>
-   <Button
-     onClick={() => setIsDialogOpen(true)}
-     variant="ghost"
-     size="icon"
-     className="h-8 w-8 cursor-pointer p-1"
-   >
+ <DialogTrigger asChild>
+   <Button variant="ghost" size="icon" className="h-8 w-8 cursor-pointer p-1">
      ...
    </Button>
  </DialogTrigger>

+ const handleWebCam = async () => {
+   if (!navigator.mediaDevices?.getUserMedia) {
+     dispatch(showInfoDialog({
+       title: 'Webcam Not Supported',
+       message: 'This device or context does not support camera access.',
+       variant: 'error',
+     }));
+     return;
+   }
    try {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      stream.getTracks().forEach((track) => track.stop());
      ...
    } catch (error) {
      ...
    }
  };

- <Button onClick={handlePickFile} disabled={false} ...
+ <Button onClick={handlePickFile} ...
  ...
- <Button onClick={handleWebCam} disabled={false} ...
+ <Button onClick={handleWebCam} ...

Please confirm your UI guidelines prefer controlled Dialog with DialogTrigger; if so, the above change is safe.
Also applies to: 92-103, 116-141
backend/app/routes/face_clusters.py (1)
212-217: Tighten API contract and base64 handling.
- Add a concrete response_model for consistent docs/validation.
- Enforce payload size on decoded bytes, return 413, and use validate=True for strict base64.
- Optional: detect image format from bytes; keep temp files safer via NamedTemporaryFile.
-@router.post(
-    "/face-search",
-    responses={code: {"model": ErrorResponse} for code in [400, 500]},
-)
-def face_tagging(
+from app.schemas.images import FaceSearchRequest, InputType, GetAllImagesResponse  # adjust import to actual location
+
+@router.post(
+    "/face-search",
+    response_model=GetAllImagesResponse,
+    responses={code: {"model": ErrorResponse} for code in [400, 413, 500]},
+)
+def face_tagging(
     payload: FaceSearchRequest,
     input_type: Annotated[
         InputType, Query(description="Choose input type: 'path' or 'base64'")
     ] = InputType.path,
 ):

-    MAX_B64_LEN = 14_000_000  # 10MB
-    if len(base64_data) > MAX_B64_LEN:
-        raise HTTPException(
-            status_code=status.HTTP_400_BAD_REQUEST,
+    MAX_BYTES = 10 * 1024 * 1024  # 10 MiB
+    try:
+        image_bytes = base64.b64decode(base64_data.split(",")[-1], validate=True)
+    except (Base64Error, ValueError):
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
             detail=ErrorResponse(
                 success=False,
-                error="Payload too large",
-                message="Base64 image exceeds maximum allowed size.",
+                error="Invalid base64 data",
+                message="The provided base64 image data is malformed or invalid.",
             ).model_dump(),
         )
-    try:
-        image_bytes = base64.b64decode(base64_data.split(",")[-1])
-    except (Base64Error, ValueError):
+    if len(image_bytes) > MAX_BYTES:
         raise HTTPException(
-            status_code=status.HTTP_400_BAD_REQUEST,
+            status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
             detail=ErrorResponse(
                 success=False,
-                error="Invalid base64 data",
-                message="The provided base64 image data is malformed or invalid.",
+                error="Payload too large",
+                message="Image exceeds maximum allowed size (10 MiB).",
             ).model_dump(),
         )

-    format_match = (
-        base64_data.split(";")[0].split("/")[-1] if ";" in base64_data else "jpeg"
-    )
-    extension = (
-        format_match
-        if format_match in ["jpeg", "jpg", "png", "gif", "webp"]
-        else "jpeg"
-    )
-    image_id = str(uuid.uuid4())[:8]
-    temp_dir = "temp_uploads"
-    os.makedirs(temp_dir, exist_ok=True)
-    local_image_path = os.path.join(temp_dir, f"{image_id}.{extension}")
-
-    with open(local_image_path, "wb") as f:
-        f.write(image_bytes)
-
-    image_path = local_image_path
+    # Optional: safer temp file & extension detection
+    # from tempfile import NamedTemporaryFile
+    # import imghdr
+    # kind = imghdr.what(None, h=image_bytes) or "jpeg"
+    # with NamedTemporaryFile(prefix="face_", suffix=f".{kind}", delete=False, dir="temp_uploads") as tf:
+    #     tf.write(image_bytes)
+    #     image_path = tf.name

-    if input_type == InputType.base64 and image_path and os.path.exists(image_path):
-        os.remove(image_path)
+    if input_type == InputType.base64 and image_path and os.path.exists(image_path):
+        try:
+            os.remove(image_path)
+        except Exception:
+            logger.warning("Failed to remove temp file: %s", image_path)

Confirm the actual location/type name for GetAllImagesResponse and whether imghdr/Pillow is preferred in this project.
Also applies to: 255-267, 277-294, 298-299
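To illustrate the decode-then-cap ordering suggested above, here is a minimal, dependency-free sketch. The `decode_data_url` helper name and the 10 MiB limit are illustrative, not part of the PR; it strictly decodes first (so malformed input fails fast) and then enforces the size limit on the decoded bytes:

```python
import base64
import binascii

MAX_BYTES = 10 * 1024 * 1024  # illustrative 10 MiB cap on the decoded payload


def decode_data_url(base64_data: str) -> bytes:
    """Strip an optional data-URL prefix, strictly decode, then enforce a size cap."""
    raw = base64_data.split(",")[-1]  # drop e.g. "data:image/png;base64,"
    try:
        # validate=True rejects non-alphabet characters instead of silently ignoring them
        image_bytes = base64.b64decode(raw, validate=True)
    except (binascii.Error, ValueError) as exc:
        raise ValueError("malformed base64 image data") from exc
    if len(image_bytes) > MAX_BYTES:
        raise ValueError("decoded image exceeds 10 MiB")
    return image_bytes
```

In the route above, the two branches would map to HTTP 400 and 413 respectively.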
backend/app/utils/faceSearch.py (2)
67-69: Only the first detected face from the query image is used.

If multiple faces are present in the query, consider evaluating all and taking the best match, or exposing a parameter to select the face index.
18-23: Optional: Align ID types with DB schema.

The DB returns integer ids; ImageData.id and ImageData.folder_id are str. Pydantic may coerce, but keeping them int avoids surprises downstream and matches existing routes.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- frontend/package-lock.json is excluded by !**/package-lock.json
- frontend/src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
- backend/app/routes/face_clusters.py (3 hunks)
- backend/app/schemas/images.py (1 hunks)
- backend/app/utils/faceSearch.py (1 hunks)
- docs/backend/backend_python/openapi.json (4 hunks)
- frontend/package.json (1 hunks)
- frontend/src-tauri/Cargo.toml (2 hunks)
- frontend/src-tauri/Entitlements.plist (1 hunks)
- frontend/src-tauri/Info.plist (1 hunks)
- frontend/src-tauri/tauri.conf.json (2 hunks)
- frontend/src/api/api-functions/face_clusters.ts (2 hunks)
- frontend/src/api/apiEndpoints.ts (1 hunks)
- frontend/src/components/Dialog/FaceSearchDialog.tsx (3 hunks)
- frontend/src/components/Navigation/Navbar/Navbar.tsx (1 hunks)
- frontend/src/components/WebCam/WebCamComponent.tsx (1 hunks)
- package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
frontend/src/api/api-functions/face_clusters.ts (3)
- frontend/src/types/API.ts (1): APIResponse (1-8)
- frontend/src/api/axiosConfig.ts (1): apiClient (5-12)
- frontend/src/api/apiEndpoints.ts (1): faceClustersEndpoints (5-11)

frontend/src/components/WebCam/WebCamComponent.tsx (6)
- frontend/src/hooks/useQueryExtension.ts (1): usePictoMutation (26-78)
- frontend/src/api/api-functions/face_clusters.ts (1): fetchSearchedFacesBase64 (58-66)
- frontend/src/hooks/useMutationFeedback.tsx (1): useMutationFeedback (60-135)
- frontend/src/types/Media.ts (1): Image (13-22)
- frontend/src/features/searchSlice.ts (3): setResults (24-26), clearSearch (27-31), startSearch (20-23)
- frontend/src/features/infoDialogSlice.ts (1): showInfoDialog (16-30)

frontend/src/components/Dialog/FaceSearchDialog.tsx (4)
- frontend/src/constants/routes.ts (1): ROUTES (1-11)
- frontend/src/features/infoDialogSlice.ts (1): showInfoDialog (16-30)
- frontend/src/features/searchSlice.ts (1): startSearch (20-23)
- frontend/src/features/loaderSlice.ts (1): showLoader (17-20)

backend/app/routes/face_clusters.py (3)
- backend/app/schemas/images.py (3): FaceSearchRequest (11-13), InputType (6-8), ErrorResponse (59-62)
- backend/app/utils/faceSearch.py (1): perform_face_search (35-106)
- backend/app/schemas/face_clusters.py (1): ErrorResponse (23-26)

backend/app/utils/faceSearch.py (4)
- backend/app/database/faces.py (1): get_all_face_embeddings (144-216)
- backend/app/models/FaceDetector.py (2): FaceDetector (15-79), detect_faces (26-67)
- backend/app/models/FaceNet.py (2): FaceNet (12-29), get_embedding (20-25)
- backend/app/utils/FaceNet.py (1): FaceNet_util_cosine_similarity (20-23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Backend Tests
🔇 Additional comments (10)
package.json (1)
20-20: LGTM!

The trailing newline addition is a minor formatting improvement with no functional impact.
backend/app/schemas/images.py (2)
1-1: LGTM!

The `Enum` import is correctly added to support the new `InputType` enum.

6-9: LGTM!

The `InputType` enum correctly defines the two supported input types for face search requests.

frontend/src-tauri/Entitlements.plist (1)
1-8: Verify app functionality with sandbox enabled.

Enabling macOS app sandboxing is a security best practice, but it restricts certain capabilities (file system access, network access, etc.). Ensure all app features, especially file operations and backend communication, work correctly with sandboxing enabled.
Test the following scenarios with the sandboxed build:
- Folder selection and file access
- Backend communication (localhost connections)
- Webcam access
- File thumbnails and image processing
frontend/src/api/apiEndpoints.ts (1)
7-8: LGTM!

The query parameter-based routing (`input_type=path` vs `input_type=base64`) provides a clean separation between file path and base64 input methods while using the same endpoint.
29-30: LGTM!

The macOS entitlements configuration correctly references the new `Entitlements.plist` file.

66-66: CSP updates support webcam feature appropriately.

The Content Security Policy changes enable necessary functionality:

- `data:` and `asset:` in `img-src` for base64-encoded images and local assets
- `media-src 'self' blob: data:` for webcam video streaming
- `ipc:` moved from `default-src` to `connect-src` for more granular control

These changes are appropriate for the webcam capture feature while maintaining reasonable security boundaries.
frontend/src/api/api-functions/face_clusters.ts (1)
18-20: LGTM!

The `FetchSearchedFacesBase64Request` interface correctly defines the base64 input structure for face search requests.

frontend/package.json (1)
64-64: react-webcam@7.2.0 is compatible with React 19.react-webcam@7.2.0 declares peer dependencies for "react": ">=16.2.0" and "react-dom": ">=16.2.0", so React 19.1.0 satisfies this constraint. No issues expected.
backend/app/utils/faceSearch.py (1)
82-83: Confirm threshold units (percent vs cosine).

CONFIDENCE_PERCENT sounds like 0–100, but cosine similarity is in [-1, 1]. If CONFIDENCE_PERCENT is 70 or 80, every match will be rejected. Ensure it's normalized (e.g., 0.7) or convert on the fly.
If it’s an integer percent, adjust comparison:
- if best_sim >= CONFIDENCE_PERCENT:
+ threshold = CONFIDENCE_PERCENT / 100.0
+ if best_sim >= threshold:
      ...
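A tiny sketch of the unit mismatch described above: cosine similarity lives in [-1, 1], so a percent-style threshold must be normalized before comparison. The helper names here are illustrative; the heuristic of treating values above 1 as percentages is an assumption, not project code:

```python
import math


def cosine_similarity(a, b):
    """Plain cosine similarity in [-1, 1] for two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def passes_threshold(sim: float, confidence: float) -> bool:
    """Accept either a 0-100 percent or an already-normalized 0-1 threshold."""
    threshold = confidence / 100.0 if confidence > 1 else confidence
    return sim >= threshold
```

With a raw `CONFIDENCE_PERCENT` of 70, an unnormalized comparison would reject a perfect match (similarity 1.0), which is the failure mode flagged in the comment.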
class FaceSearchRequest(BaseModel):
    path: Optional[str] = None
    base64_data: Optional[str] = None
Add validation to ensure exactly one input field is provided.
Both path and base64_data are optional, which allows invalid states where neither or both fields are provided. This will cause runtime errors or ambiguous behavior.
Apply this diff to add a Pydantic validator:
+from pydantic import model_validator
 from enum import Enum
 from pydantic import BaseModel
 from typing import Optional, List, Union

 class InputType(str, Enum):
     path = "path"
     base64 = "base64"

 class FaceSearchRequest(BaseModel):
     path: Optional[str] = None
     base64_data: Optional[str] = None
+
+    @model_validator(mode='after')
+    def validate_exactly_one_input(self):
+        if self.path is None and self.base64_data is None:
+            raise ValueError("Either 'path' or 'base64_data' must be provided")
+        if self.path is not None and self.base64_data is not None:
+            raise ValueError("Only one of 'path' or 'base64_data' can be provided")
+        return self

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/app/schemas/images.py around lines 11 to 13, the FaceSearchRequest
model currently allows invalid states where both or neither of path and
base64_data are set; add a Pydantic root validator (or @validator with
always=True) that checks exactly one of path or base64_data is provided and
raises a ValueError with a clear message if not; ensure the validator returns
the values unchanged so normal validation proceeds.
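The exactly-one-input rule can also be sketched without Pydantic as a plain function. The `validate_exactly_one` helper is hypothetical and simply mirrors the proposed model_validator's logic:

```python
def validate_exactly_one(path, base64_data):
    """Return which input kind was supplied, enforcing exactly one of the two.

    Mirrors the proposed @model_validator: neither or both inputs is an error.
    """
    if path is None and base64_data is None:
        raise ValueError("Either 'path' or 'base64_data' must be provided")
    if path is not None and base64_data is not None:
        raise ValueError("Only one of 'path' or 'base64_data' can be provided")
    return "path" if path is not None else "base64"
```

Encoding this at the schema layer means the route handler never has to re-check ambiguous payloads.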
class ImageData(BaseModel):
    id: str
    path: str
    folder_id: str
    thumbnailPath: str
    metadata: Dict[str, Any]
    isTagged: bool
    tags: Optional[List[str]] = None
    bboxes: BoundingBox
bboxes type mismatch: DB provides a list, model expects a single object.
get_all_face_embeddings() returns “bbox” as a JSON array per image; constructing ImageData with that list into a BoundingBox field will fail validation. Use a list type.
 class ImageData(BaseModel):
 @@
-    tags: Optional[List[str]] = None
-    bboxes: BoundingBox
+    tags: Optional[List[str]] = None
+    bboxes: List[BoundingBox]

📝 Committable suggestion
class ImageData(BaseModel):
    id: str
    path: str
    folder_id: str
    thumbnailPath: str
    metadata: Dict[str, Any]
    isTagged: bool
    tags: Optional[List[str]] = None
    bboxes: List[BoundingBox]
🤖 Prompt for AI Agents
In backend/app/utils/faceSearch.py around lines 18 to 27, the ImageData Pydantic
model defines bboxes: BoundingBox but the DB returns a JSON array of bounding
boxes per image; change the field type to a list (e.g., List[BoundingBox]) — or
Optional[List[BoundingBox]] if empty lists/nulls are possible — and import
typing.List (and Optional if used) so Pydantic accepts the array during
construction.
fd = FaceDetector()
fn = FaceNet(DEFAULT_FACENET_MODEL)
Initialization outside try causes uncaught failures and missed cleanup. Move model init inside the try.
If FaceDetector()/FaceNet() raise (missing weights/providers), the exception bypasses your handler and finally won’t run. Initialize inside the try to ensure proper failure reporting and cleanup.
def perform_face_search(image_path: str) -> GetAllImagesResponse:
@@
- fd = FaceDetector()
- fn = FaceNet(DEFAULT_FACENET_MODEL)
-
- try:
- matches = []
+ fd = fn = None
+ try:
+ # Initialize heavy models inside the guarded block
+ fd = FaceDetector()
+ fn = FaceNet(DEFAULT_FACENET_MODEL)
+ matches = []
@@
finally:
- if "fd" in locals() and fd is not None:
+ if "fd" in locals() and fd is not None:
fd.close()
- if "fn" in locals() and fn is not None:
+ if "fn" in locals() and fn is not None:
fn.close()Also consider pooling/caching these heavy objects (singleton or app-level DI) to avoid per-request model spins and duplicate FaceNet sessions (FaceDetector already owns one).
Also applies to: 102-106
🤖 Prompt for AI Agents
In backend/app/utils/faceSearch.py around lines 45-47 (and similarly 102-106),
FaceDetector() and FaceNet(...) are instantiated outside the try block which
lets constructor failures bypass your exception handler and prevents the finally
cleanup from running; move both initializations into the try so any exceptions
are caught and the finally block always executes, wrap each construction in the
same try that performs processing, and ensure any partially-created resources
are cleaned in finally (or use context managers). Additionally, refactor to
reuse heavy objects (create a singleton or app-level pooled instances for
FaceDetector/FaceNet so you don't recreate sessions per request) to avoid
duplicate FaceNet sessions and expensive repeated model loads.
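The init-inside-try pattern recommended above can be demonstrated with toy resources. The `Resource` class is a hypothetical stand-in for the model handles; the point is that a constructor failure is caught by the same handler, and `finally` only closes what was actually created:

```python
class Resource:
    """Toy stand-in for a heavy model handle (hypothetical)."""

    def __init__(self, fail=False):
        if fail:
            raise RuntimeError("model failed to load")
        self.closed = False

    def close(self):
        self.closed = True


def run_search(fail_second=False):
    """Initialize inside try so constructor failures hit the same handler/cleanup."""
    fd = fn = None
    try:
        fd = Resource()
        fn = Resource(fail=fail_second)  # may raise before fn is bound
        return "ok"
    except RuntimeError:
        return "error reported"
    finally:
        # finally always runs; only close what was actually created
        if fd is not None:
            fd.close()
        if fn is not None:
            fn.close()
```

Had the constructors run before the try, the RuntimeError would propagate uncaught and the cleanup would be skipped entirely.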
for image in images:
    similarity = FaceNet_util_cosine_similarity(
        new_embedding, image["embeddings"]
    )
    if similarity >= CONFIDENCE_PERCENT:
        matches.append(
            ImageData(
                id=image["id"],
                path=image["path"],
                folder_id=image["folder_id"],
                thumbnailPath=image["thumbnailPath"],
                metadata=image["metadata"],
                isTagged=image["isTagged"],
                tags=image["tags"],
                bboxes=image["bbox"],
            )
        )
Wrong similarity computation against stored embeddings; also loses bbox-face mapping.
image["embeddings"] is a list-of-embeddings per image (and image["bbox"] parallels it). Current code compares against the entire list, which is incorrect and may error. Compute best similarity over faces and return the corresponding bbox.
- for image in images:
- similarity = FaceNet_util_cosine_similarity(
- new_embedding, image["embeddings"]
- )
- if similarity >= CONFIDENCE_PERCENT:
- matches.append(
- ImageData(
- id=image["id"],
- path=image["path"],
- folder_id=image["folder_id"],
- thumbnailPath=image["thumbnailPath"],
- metadata=image["metadata"],
- isTagged=image["isTagged"],
- tags=image["tags"],
- bboxes=image["bbox"],
- )
- )
+ for image in images:
+ embeddings = image.get("embeddings") or []
+ bboxes = image.get("bbox") or []
+
+ best_sim = -1.0
+ best_idx = None
+
+ if embeddings and isinstance(embeddings[0], (list, tuple)):
+ for idx, emb in enumerate(embeddings):
+ sim = FaceNet_util_cosine_similarity(new_embedding, emb)
+ if sim > best_sim:
+ best_sim, best_idx = sim, idx
+ else:
+ # Single embedding stored
+ best_sim = FaceNet_util_cosine_similarity(new_embedding, embeddings)
+ best_idx = 0
+
+ if best_sim >= CONFIDENCE_PERCENT:
+ selected_bbox = []
+ if isinstance(bboxes, list) and best_idx is not None and best_idx < len(bboxes):
+ selected_bbox = [bboxes[best_idx]]
+ matches.append(
+ ImageData(
+ id=image["id"],
+ path=image["path"],
+ folder_id=image["folder_id"],
+ thumbnailPath=image["thumbnailPath"],
+ metadata=image["metadata"],
+ isTagged=image["isTagged"],
+ tags=image["tags"],
+ bboxes=selected_bbox,
+ )
+ )

📝 Committable suggestion
for image in images:
    embeddings = image.get("embeddings") or []
    bboxes = image.get("bbox") or []

    best_sim = -1.0
    best_idx = None

    if embeddings and isinstance(embeddings[0], (list, tuple)):
        for idx, emb in enumerate(embeddings):
            sim = FaceNet_util_cosine_similarity(new_embedding, emb)
            if sim > best_sim:
                best_sim, best_idx = sim, idx
    else:
        # Single embedding stored
        best_sim = FaceNet_util_cosine_similarity(new_embedding, embeddings)
        best_idx = 0

    if best_sim >= CONFIDENCE_PERCENT:
        selected_bbox = []
        if isinstance(bboxes, list) and best_idx is not None and best_idx < len(bboxes):
            selected_bbox = [bboxes[best_idx]]
        matches.append(
            ImageData(
                id=image["id"],
                path=image["path"],
                folder_id=image["folder_id"],
                thumbnailPath=image["thumbnailPath"],
                metadata=image["metadata"],
                isTagged=image["isTagged"],
                tags=image["tags"],
                bboxes=selected_bbox,
            )
        )
🤖 Prompt for AI Agents
In backend/app/utils/faceSearch.py around lines 78 to 94, the code incorrectly
passes image["embeddings"] (a list of per-face embeddings) into the similarity
function and loses the mapping to image["bbox"]; instead iterate the stored
embeddings for each image, compute similarity between new_embedding and each
stored embedding, track the highest similarity and its index, and if the best
similarity >= CONFIDENCE_PERCENT append ImageData using the corresponding bbox
(image["bbox"][best_index]) (handle empty embeddings/bboxes safely and ensure
bbox indexing matches).
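The best-match selection over per-face embeddings can be isolated into a standalone helper. This is an illustrative sketch, not project code; the similarity function is injected so it works with any metric:

```python
def best_face_match(query_embedding, stored_embeddings, similarity_fn):
    """Return (best_similarity, best_index) over a list of per-face embeddings.

    Returns (-1.0, None) when no embeddings are stored, so callers can
    distinguish "no faces" from a genuine low-similarity result.
    """
    best_sim, best_idx = -1.0, None
    for idx, emb in enumerate(stored_embeddings):
        sim = similarity_fn(query_embedding, emb)
        if sim > best_sim:
            best_sim, best_idx = sim, idx
    return best_sim, best_idx
```

Keeping the index alongside the similarity is what preserves the embedding-to-bbox mapping that the flagged code loses.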
@@ -11,17 +11,12 @@ edition = "2021"
 tauri-build = { version = "2.0.6", features = [] }
🧩 Analysis chain
Verify version matrix and gate updater for mobile.
- tauri = "2.9.1" with tauri-build = "2.0.6" may be incompatible. Align tauri-build to the same minor track.
- tauri-plugin-updater is now unconditional; it can break android/ios builds. Gate it behind cfg(not(android|ios)) like before.
[dependencies]
-tauri = { version = "2.9.1", features = ["protocol-asset", "devtools"] }
+tauri = { version = "2.9.1", features = ["protocol-asset", "devtools"] }
...
-tauri-plugin-updater = "2.9.0"
+## move updater to target-specific deps below

-[build-dependencies]
-tauri-build = { version = "2.0.6", features = [] }
+[build-dependencies]
+# Align with tauri core
+tauri-build = { version = "2.9.1", features = [] }

+[target.'cfg(not(any(target_os = "android", target_os = "ios")))'.dependencies]
+tauri-plugin-updater = "2.9.0"
🌐 Web query:
What tauri-build version is compatible with Tauri 2.9.1?
💡 Result:
tauri-build 2.5.1. [1][2]
Sources:
[1] Tauri v2.9.1 release notes (dependencies include tauri-build@2.5.1).
[2] tauri 2.9.1 Cargo.toml on docs.rs (build-dependencies.tauri-build = "2.5.1").
🌐 Web query:
Is tauri-plugin-updater supported on Android/iOS in Tauri v2?
💡 Result:
Short answer: No — tauri-plugin-updater in the Tauri v2 plugins does not support Android or iOS. The plugin's docs and package manifest explicitly exclude mobile targets (android/ios). [1][2]
References:
- tauri-plugin-updater docs (platform table shows Android/iOS unsupported). [1]
- Plugin repository / Cargo config excludes android and ios targets. [2]
Update tauri-build to 2.5.1 and gate tauri-plugin-updater behind mobile check.
tauri-build 2.5.1 is compatible with Tauri 2.9.1, not 2.0.6. tauri-plugin-updater does not support Android or iOS in Tauri v2, so it must be gated behind cfg(not(any(target_os = "android", target_os = "ios"))).
Lines to fix:
- Line 11: `tauri-build = { version = "2.5.1", features = [] }`
- Move the `tauri-plugin-updater` dependency to the target-specific section for non-mobile platforms
🤖 Prompt for AI Agents
In frontend/src-tauri/Cargo.toml around line 11, update the tauri-build
dependency and gate the updater plugin for non-mobile platforms: change the
tauri-build entry on line 11 to use version 2.5.1, and move the
tauri-plugin-updater dependency out of the universal dependencies into a
target-specific section that only compiles when not on Android or iOS (use
cfg(not(any(target_os = "android", target_os = "ios")))), ensuring the plugin is
only included for desktop builds compatible with Tauri 2.9.1.
navigate(`/${ROUTES.HOME}`);
setIsDialogOpen(false);
const filePath = await pickSingleFile();
if (filePath) {
  setIsDialogOpen(false);
  dispatch(startSearch(filePath));
  dispatch(showLoader('Searching faces...'));
  getSearchImages(filePath);
}
};
Navigate after a successful file pick (prevents stray route changes on cancel).
Currently you navigate to HOME before the file picker resolves. If the user cancels, they’re redirected with no action taken. Move navigate and closing logic inside the truthy branch.
- const handlePickFile = async () => {
- navigate(`/${ROUTES.HOME}`);
- const filePath = await pickSingleFile();
- if (filePath) {
- setIsDialogOpen(false);
- dispatch(startSearch(filePath));
- dispatch(showLoader('Searching faces...'));
- getSearchImages(filePath);
- }
- };
+ const handlePickFile = async () => {
+ const filePath = await pickSingleFile();
+ if (filePath) {
+ navigate(`/${ROUTES.HOME}`);
+ setIsDialogOpen(false);
+ dispatch(startSearch(filePath));
+ dispatch(showLoader('Searching faces...'));
+ getSearchImages(filePath);
+ }
+ };📝 Committable suggestion
const handlePickFile = async () => {
  const filePath = await pickSingleFile();
  if (filePath) {
    navigate(`/${ROUTES.HOME}`);
    setIsDialogOpen(false);
    dispatch(startSearch(filePath));
    dispatch(showLoader('Searching faces...'));
    getSearchImages(filePath);
  }
};
🤖 Prompt for AI Agents
In frontend/src/components/Dialog/FaceSearchDialog.tsx around lines 81 to 89,
currently navigate(`/${ROUTES.HOME}`) runs before the file picker resolves which
redirects users even if they cancel; move the navigate call (and any
dialog-closing logic) inside the if (filePath) block so navigation and closing
happen only when a filePath is returned, then perform
dispatch(startSearch(filePath)), dispatch(showLoader('Searching faces...')) and
getSearchImages(filePath) as before.
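To make the ordering concern concrete, here is a minimal, self-contained sketch of the pattern the suggestion enforces: resolve the async picker first, then run navigation and other side effects only when a file was actually selected. The `Effects` shape and function names are hypothetical, not the project's real API:

```typescript
// Hypothetical, simplified model of the dialog flow: side effects fire
// only after the async picker resolves with a real selection.
type Effects = { navigated: boolean; dialogOpen: boolean; searching: string | null };

async function handlePickFile(
  pickSingleFile: () => Promise<string | null>,
  effects: Effects,
): Promise<Effects> {
  const filePath = await pickSingleFile(); // resolve first...
  if (filePath) {
    // ...then navigate and close only on a successful pick
    effects.navigated = true;
    effects.dialogOpen = false;
    effects.searching = filePath;
  }
  return effects;
}

async function demo() {
  // Cancelled pick: no navigation, dialog stays open
  const cancelled = await handlePickFile(async () => null, {
    navigated: false,
    dialogOpen: true,
    searching: null,
  });
  // Successful pick: all side effects fire
  const picked = await handlePickFile(async () => '/photos/face.jpg', {
    navigated: false,
    dialogOpen: true,
    searching: null,
  });
  console.log(cancelled.navigated, picked.navigated, picked.searching);
  // → false true /photos/face.jpg
}

demo();
```

With the original ordering, the `navigated` flag would flip before `pickSingleFile()` settled, so the cancelled case would still leave the user on HOME.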
```diff
@@ -0,0 +1,174 @@
+import { useState, useRef, useCallback } from 'react';
+import { useMutationFeedback } from '../../hooks/useMutationFeedback.tsx';
```
🧩 Analysis chain
Prevent double dialogs and handle null capture; minor import cleanup.
- Disable the generic “Success” dialog for search; you already handle results/no‑match.
- Guard getScreenshot() returning null to avoid a blank dialog state.
- Optional: drop the .tsx extension in the import for consistency.
```diff
-import { useMutationFeedback } from '../../hooks/useMutationFeedback.tsx';
+import { useMutationFeedback } from '../../hooks/useMutationFeedback';
```

```diff
-  useMutationFeedback(getSearchImagesBase64, {
-    showLoading: true,
-    loadingMessage: 'Searching faces...',
-    errorTitle: 'Search Error',
-    errorMessage: 'Failed to search images. Please try again.',
-    onSuccess: () => {
+  useMutationFeedback(getSearchImagesBase64, {
+    showLoading: true,
+    loadingMessage: 'Searching faces...',
+    showSuccess: false,
+    errorTitle: 'Search Error',
+    errorMessage: 'Failed to search images. Please try again.',
+    onSuccess: () => {
       const result = getSearchImagesBase64.data?.data as Image[];
       ...
     },
   });
```

```diff
-  const capture = useCallback(() => {
-    if (webcamRef.current) {
-      const imageSrc = webcamRef.current.getScreenshot();
-      setCapturedImageUrl(imageSrc);
-      setShowCamera(false);
-    }
-  }, [webcamRef]);
+  const capture = useCallback(() => {
+    if (!webcamRef.current) return;
+    const imageSrc = webcamRef.current.getScreenshot();
+    if (!imageSrc) {
+      dispatch(
+        showInfoDialog({
+          title: 'Capture Failed',
+          message: 'Could not capture a photo. Please try again.',
+          variant: 'error',
+        }),
+      );
+      return;
+    }
+    setCapturedImageUrl(imageSrc);
+    setShowCamera(false);
+  }, [dispatch]);
```

```diff
-  const handleSearchCapturedImage = () => {
-    onClose();
+  const handleSearchCapturedImage = () => {
     if (capturedImageUrl) {
       dispatch(startSearch(capturedImageUrl));
       getSearchImagesBase64.mutate(capturedImageUrl);
+      onClose();
     } else {
       ...
     }
   };
```

If your lint rules enforce explicit extensions for ESM imports, keep the extension and ignore that part of the diff.
Also applies to: 40-63, 65-72, 78-93
I'll verify the issues mentioned in the review comment by examining the actual codebase.
---
🏁 Script executed:
```shell
#!/bin/bash
# Find and examine WebCamComponent.tsx
fd -type f -name "WebCamComponent.tsx"
```

Length of output: 170
🏁 Script executed:

```shell
#!/bin/bash
# Get file content if it exists
find . -name "WebCamComponent.tsx" -type f | head -1 | xargs cat
```

Length of output: 5276
🏁 Script executed:

```shell
#!/bin/bash
# Find and examine useMutationFeedback hook
find . -name "*useMutationFeedback*" -type f | xargs cat 2>/dev/null
```

Length of output: 2918
🏁 Script executed:

```shell
#!/bin/bash
# Check for ESLint or similar lint configuration
find . -maxdepth 2 \( -name ".eslintrc*" -o -name "eslint.config.*" -o -name "tsconfig.json" \) -type f | xargs cat 2>/dev/null | head -50
```

Length of output: 1362
Prevent double dialogs, handle null capture, and fix dialog close timing.
The useMutationFeedback hook shows a generic success dialog unless explicitly disabled, which would stack on top of the search-result UI. The capture function doesn't guard against getScreenshot() returning null, risking a blank dialog state. And the dialog closes before the capture is validated, so a failed capture exits without any feedback.
```diff
-import { useMutationFeedback } from '../../hooks/useMutationFeedback.tsx';
+import { useMutationFeedback } from '../../hooks/useMutationFeedback';
```

```diff
   useMutationFeedback(getSearchImagesBase64, {
     showLoading: true,
     loadingMessage: 'Searching faces...',
+    showSuccess: false,
     errorTitle: 'Search Error',
     errorMessage: 'Failed to search images. Please try again.',
     onSuccess: () => {
```

```diff
   const capture = useCallback(() => {
-    if (webcamRef.current) {
-      const imageSrc = webcamRef.current.getScreenshot();
-      setCapturedImageUrl(imageSrc);
-      setShowCamera(false);
-    }
-  }, [webcamRef]);
+    if (!webcamRef.current) return;
+    const imageSrc = webcamRef.current.getScreenshot();
+    if (!imageSrc) {
+      dispatch(
+        showInfoDialog({
+          title: 'Capture Failed',
+          message: 'Could not capture a photo. Please try again.',
+          variant: 'error',
+        }),
+      );
+      return;
+    }
+    setCapturedImageUrl(imageSrc);
+    setShowCamera(false);
+  }, [dispatch]);
```

```diff
   const handleSearchCapturedImage = () => {
-    onClose();
     if (capturedImageUrl) {
       dispatch(startSearch(capturedImageUrl));
       getSearchImagesBase64.mutate(capturedImageUrl);
+      onClose();
     } else {
       dispatch(
         showInfoDialog({
           title: 'Capture Failed',
           message: 'An unexpected error occurred during capture.',
           variant: 'error',
         }),
       );
-      handleClose();
+      return;
     }
   };
```

Committable suggestion skipped: line range outside the PR's diff.
Summary by CodeRabbit
New Features
Bug Fixes / Reliability
Chores