Revert "Chat refactor new db final (with use chat) (#1456)"#1475
Revert "Chat refactor new db final (with use chat) (#1456)"#1475
Conversation
This reverts commit 9dc96a4.
📝 Walkthrough

Refactors the desktop chat subsystem: replaces the legacy chat transport/useChat2 with new hooks (useChatLogic, useChatQueries), introduces new chat rendering components (ChatMessage, MessageContent), updates message types/markdown parsing, changes mention extraction, and shifts persistence/queries to chat groups. Removes the v2 chat DB schema/ops and related plugin commands. Local LLM progress now streams via tool-call deltas.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor User
participant ChatView
participant useChatLogic
participant DB as dbCommands
participant Model as modelProvider/streamText
participant Tools as MCP/Hypr tools
User->>ChatView: Submit message
ChatView->>useChatLogic: handleSubmit(message, mentions, selection)
useChatLogic->>DB: upsertChatMessage(User)
useChatLogic->>Model: streamText(conversationHistory, tools, abortSignal)
Model-->>useChatLogic: text-delta chunks
useChatLogic->>DB: upsertChatMessage(Assistant partial)
Model-->>useChatLogic: tool-call / tool-result / tool-error
useChatLogic->>Tools: invoke tool
Tools-->>useChatLogic: result/error
useChatLogic->>DB: upsert tool messages
useChatLogic-->>ChatView: messages, isGenerating/isStreamingText
ChatView-->>User: Render ChatMessage/MessageContent
sequenceDiagram
autonumber
participant UI as EditorArea/Chat
participant LocalLLM as plugins/local-llm server
note over LocalLLM: New progress delivery
UI->>LocalLLM: chat_completions(stream=true)
LocalLLM-->>UI: delta(text)
LocalLLM-->>UI: delta(tool_calls: progress_update(progress=x.y))
UI->>UI: Update progress from stream
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/desktop/src/components/right-panel/utils/tools/edit_enhanced_note.ts (1)
86-88: Prevent leaked event listeners when replacing controls.

Removing the node without invoking its cleanup leaves mousedown and scroll listeners registered from prior runs. Call cleanup if present before removing. Use this pattern here:

const prev = document.querySelector(".ai-edit-controls") as any;
if (prev) {
  typeof prev.cleanup === "function" ? prev.cleanup() : prev.remove();
}

plugins/local-llm/src/ext.rs (1)
49-51: Suspicious hardcoded models dir ("ttt") — likely a stray tmp path

This will shift installs/migrations into an unexpected folder and orphan existing models. Use a stable name.

Apply:

 fn models_dir(&self) -> PathBuf {
-    self.path().app_data_dir().unwrap().join("ttt")
+    self.path().app_data_dir().unwrap().join("models")
 }

apps/desktop/src/components/right-panel/utils/markdown-parser.ts (1)
3-61: Preserve code-fence backticks and language header in parseMarkdownBlocks (apps/desktop/src/components/right-panel/utils/markdown-parser.ts:3-61)

The current parser strips the opening/closing ``` and the language tag, causing fenced blocks to render as plain text — apply the patch below.

 export const parseMarkdownBlocks = (text: string): MessagePart[] => {
   const parts: MessagePart[] = [];
   let currentIndex = 0;
   let inMarkdownBlock = false;
-  let markdownStart = -1;
+  let markdownStart = -1;
+  let currentLang = "";
@@
-  for (let i = 0; i < text.length - 2; i++) {
+  for (let i = 0; i < text.length - 2; i++) {
     if (text.slice(i, i + 3) === "```") {
       if (!inMarkdownBlock) {
-        // Starting a markdown block
-        if (i > currentIndex) {
-          // Add text before markdown
-          const textContent = text.slice(currentIndex, i).trim();
-          if (textContent) {
-            parts.push({
-              type: "text",
-              content: textContent,
-            });
-          }
-        }
-        markdownStart = i + 3;
-        inMarkdownBlock = true;
-        i += 2; // Skip the ```
+        // Add text before the fence
+        if (i > currentIndex) {
+          const textContent = text.slice(currentIndex, i).trim();
+          if (textContent) parts.push({ type: "text", content: textContent });
+        }
+        // Capture optional language header up to EOL
+        const headerEnd = text.indexOf("\n", i + 3);
+        const fenceEnd = headerEnd === -1 ? text.length : headerEnd;
+        currentLang = text.slice(i + 3, fenceEnd).trim();
+        markdownStart = fenceEnd + 1;
+        inMarkdownBlock = true;
+        i = fenceEnd - 1; // for-loop will i++
       } else {
-        // Ending a markdown block
-        const markdownContent = text.slice(markdownStart, i);
-        parts.push({
-          type: "markdown",
-          content: markdownContent,
-          isComplete: true,
-        });
-        inMarkdownBlock = false;
-        currentIndex = i + 3;
-        i += 2; // Skip the ```
+        // Ending a markdown block
+        const markdownContent = text.slice(markdownStart, i);
+        const fenced =
+          "```" + (currentLang ? currentLang + "\n" : "\n") + markdownContent + "\n```";
+        parts.push({ type: "markdown", content: fenced, isComplete: true });
+        inMarkdownBlock = false;
+        currentLang = "";
+        currentIndex = i + 3;
+        i += 2; // Skip the ```
       }
     }
   }
@@
   if (inMarkdownBlock) {
-    // Incomplete markdown block
-    const markdownContent = text.slice(markdownStart);
-    parts.push({
-      type: "markdown",
-      content: markdownContent,
-      isComplete: false,
-    });
+    // Incomplete markdown block: keep opening fence so it's rendered as code
+    const markdownContent = text.slice(markdownStart);
+    const fenced = "```" + (currentLang ? currentLang + "\n" : "\n") + markdownContent;
+    parts.push({ type: "markdown", content: fenced, isComplete: false });
   } else if (currentIndex < text.length) {
     // Remaining text
     const remainingText = text.slice(currentIndex).trim();
     if (remainingText) {
       parts.push({
         type: "text",
         content: remainingText,
       });
     }
   }
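For reference, a small usage sketch of the intended post-patch behavior; the MessagePart fields come from the diff above, and the exact output shape is an assumption about the fixed parser, not the current one:

import { parseMarkdownBlocks } from "./markdown-parser";

const fence = "```";
const streamed = `Intro text\n${fence}ts\nconst x = 1;\n${fence}\nDone.`;
const parts = parseMarkdownBlocks(streamed);

// Roughly expected after the patch: the middle part keeps its fence and language, e.g.
// [
//   { type: "text", content: "Intro text" },
//   { type: "markdown", content: "```ts\n...\n```", isComplete: true },
//   { type: "text", content: "Done." },
// ]
console.log(parts);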
🧹 Nitpick comments (23)
.gitignore (2)
19-19: Clarify intended scope/case for CLAUDE.md ignore.

As written, this ignores any path segment named "CLAUDE.md" (case‑sensitive). If you meant only the repo root, prefix with "/". If you want case‑insensitive coverage, add variants or a bracketed pattern.

Option A (root‑only):

-CLAUDE.md
+/CLAUDE.md

Option B (case‑insensitive, anywhere):

-CLAUDE.md
+[Cc][Ll][Aa][Uu][Dd][Ee].md
19-19: Drop trailing whitespace.

There's a trailing space after CLAUDE.md; not harmful in .gitignore but it's noisy and may trip linters.

-CLAUDE.md 
+CLAUDE.md

apps/desktop/src/components/right-panel/utils/tools/edit_enhanced_note.ts (1)
32-34: Remove unused variable; inline the existence check to satisfy “no unused variables.”
sessionStore is never used after assignment. Inline the check to avoid violating the repo guideline.

Apply this diff:

-  const sessionStore = sessions[sessionId];
-  if (!sessionStore) {
+  if (!sessions[sessionId]) {
     return { success: false, error: "Session not found" };
-  }
+  }

Also, confirm this stricter early-return behavior (failing when a session store is absent) is intentional and won't block valid edits that previously succeeded.
plugins/local-llm/src/ext.rs (1)
124-126: Guard divide-by-zero and clamp progress to [0,100]

If total_size is ever 0, this panics; casting f64→i8 also truncates. Round and clamp.
Please confirm hypr_download_interface never reports Progress with total_size == 0. If unsure, I can propose a minimal clamp that avoids new error handling/logging.
apps/desktop/src/components/right-panel/components/chat/markdown-card.tsx (1)
113-113: Spacing tweak looks good.

Using mt-4/mb-4 reads well; alternatively, my-4 would be equivalent and a tad shorter.

-<div className="mt-4 mb-4 border border-neutral-200 rounded-lg bg-white overflow-hidden">
+<div className="my-4 border border-neutral-200 rounded-lg bg-white overflow-hidden">

apps/desktop/src/components/right-panel/utils/chat-utils.ts (4)
48-56: Parallelize I/O to cut latency.

These awaits are independent; fetch them together to reduce round‑trip time.

-  const refetchResult = await sessionData?.refetch();
-  let freshSessionData = refetchResult?.data;
-
-  const { type } = await connectorCommands.getLlmConnection();
-
-  const participants = sessionId ? await dbCommands.sessionListParticipants(sessionId) : [];
-
-  const calendarEvent = sessionId ? await dbCommands.sessionGetEvent(sessionId) : null;
+  const [refetchResult, llmConnection, participants, calendarEvent] = await Promise.all([
+    sessionData?.refetch?.() ?? Promise.resolve(undefined),
+    connectorCommands.getLlmConnection(),
+    sessionId ? dbCommands.sessionListParticipants(sessionId) : Promise.resolve([]),
+    sessionId ? dbCommands.sessionGetEvent(sessionId) : Promise.resolve(null),
+  ]);
+  const freshSessionData = refetchResult?.data;
+  const { type } = llmConnection;
72-80: Make model checks cleaner.

Use a Set for O(1) membership and easier maintenance.

-  const toolEnabled = !!(
-    modelId === "gpt-4.1"
-    || modelId === "openai/gpt-4.1"
-    || modelId === "anthropic/claude-sonnet-4"
-    || modelId === "openai/gpt-4o"
-    || modelId === "gpt-4o"
-    || modelId === "openai/gpt-5"
-    || (apiBase && apiBase.includes("pro.hyprnote.com"))
-  );
+  const TOOL_MODELS = new Set([
+    "gpt-4.1",
+    "openai/gpt-4.1",
+    "anthropic/claude-sonnet-4",
+    "openai/gpt-4o",
+    "gpt-4o",
+    "openai/gpt-5",
+  ]);
+  const toolEnabled = TOOL_MODELS.has(modelId ?? "") || (!!apiBase && apiBase.includes("pro.hyprnote.com"));
167-175: Avoid innerHTML for text extraction.

Safer to parse HTML with DOMParser and read textContent.

-  if (session.enhanced_memo_html && session.enhanced_memo_html.trim() !== "") {
-    const div = document.createElement("div");
-    div.innerHTML = session.enhanced_memo_html;
-    briefContent = (div.textContent || div.innerText || "").slice(0, 200) + "...";
-  } else if (session.raw_memo_html && session.raw_memo_html.trim() !== "") {
-    const div = document.createElement("div");
-    div.innerHTML = session.raw_memo_html;
-    briefContent = (div.textContent || div.innerText || "").slice(0, 200) + "...";
-  }
+  if (session.enhanced_memo_html && session.enhanced_memo_html.trim() !== "") {
+    const doc = new DOMParser().parseFromString(session.enhanced_memo_html, "text/html");
+    briefContent = (doc.body.textContent || "").slice(0, 200) + "...";
+  } else if (session.raw_memo_html && session.raw_memo_html.trim() !== "") {
+    const doc = new DOMParser().parseFromString(session.raw_memo_html, "text/html");
+    briefContent = (doc.body.textContent || "").slice(0, 200) + "...";
+  }

Also applies to: 171-175
141-146: Guard against "undefined" littering the human summary.

Only include fields that exist to avoid "Name: undefined".

-  let humanContent = "";
-  humanContent += "Name: " + humanData?.full_name + "\n";
-  humanContent += "Email: " + humanData?.email + "\n";
-  humanContent += "Job Title: " + humanData?.job_title + "\n";
-  humanContent += "LinkedIn: " + humanData?.linkedin_username + "\n";
+  const parts: string[] = [];
+  if (humanData?.full_name) parts.push("Name: " + humanData.full_name);
+  if (humanData?.email) parts.push("Email: " + humanData.email);
+  if (humanData?.job_title) parts.push("Job Title: " + humanData.job_title);
+  if (humanData?.linkedin_username) parts.push("LinkedIn: " + humanData.linkedin_username);
+  let humanContent = parts.join("\n") + (parts.length ? "\n" : "");

apps/desktop/src/components/right-panel/components/chat/chat-input.tsx (2)
241-245: Also use DOMParser here to strip HTML.

-  const stripHtml = (html: string): string => {
-    const temp = document.createElement("div");
-    temp.innerHTML = html;
-    return temp.textContent || temp.innerText || "";
-  };
+  const stripHtml = (html: string): string => {
+    const doc = new DOMParser().parseFromString(html, "text/html");
+    return doc.body.textContent || "";
+  };
464-468: Simplify the disabled prop.

-  <Button
-    size="icon"
-    onClick={isGenerating ? onStop : handleSubmit}
-    disabled={isGenerating ? false : (!inputValue.trim())}
-  >
+  <Button
+    size="icon"
+    onClick={isGenerating ? onStop : handleSubmit}
+    disabled={!isGenerating && !inputValue.trim()}
+  >

apps/desktop/src/components/right-panel/components/chat/chat-message.tsx (1)
27-33: Wrap the timestamp in a semantic element.

Improves a11y and allows consumers to read machine time.

-  <div className="text-xs text-neutral-500 mt-1 text-right">
-    {message.timestamp.toLocaleTimeString([], {
-      hour: "2-digit",
-      minute: "2-digit",
-    })}
-  </div>
+  <time
+    className="text-xs text-neutral-500 mt-1 text-right block"
+    dateTime={message.timestamp.toISOString()}
+  >
+    {message.timestamp.toLocaleTimeString([], { hour: "2-digit", minute: "2-digit" })}
+  </time>

apps/desktop/src/components/right-panel/views/chat-view.tsx (2)
92-101: Remove unnecessary async.

handleNewChat doesn't await anything; drop async for clarity.

-  const handleNewChat = async () => {
+  const handleNewChat = () => {
     if (!sessionId || !userId) {
       return;
     }
     setCurrentChatGroupId(null);
     setMessages([]);
     setHasChatStarted(false);
     setInputValue("");
   };
115-117: Unused parameter in handleSelectChat.

chatId is unused and will trip linters. Remove or prefix with underscore.

-  const handleSelectChat = (chatId: string) => {
+  const handleSelectChat = () => {
     setShowHistory(false);
   };

apps/desktop/src/components/right-panel/components/chat/chat-messages-view.tsx (1)
47-49: Browser timer typing: prefer ReturnType<typeof setTimeout>.

NodeJS.Timeout is wrong in the browser (can cause TS friction under DOM libs).

-  const thinkingTimeoutRef = useRef<NodeJS.Timeout | null>(null);
+  const thinkingTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);

apps/desktop/src/components/right-panel/components/chat/message-content.tsx (1)
198-206: Optional: tolerate object toolDetails without try/catch repetition.

Minor: you already parse toolDetails upstream; you can drop the JSON.parse path and branch only on typeof string. Not blocking.
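A minimal sketch of that branch, assuming toolDetails can arrive either as a pre-parsed object or a JSON string (the helper name is hypothetical):

function normalizeToolDetails(toolDetails: unknown): Record<string, unknown> | null {
  if (toolDetails == null) return null;
  if (typeof toolDetails === "string") {
    // Single centralized parse instead of repeating try/catch per call site.
    try {
      return JSON.parse(toolDetails);
    } catch {
      return null;
    }
  }
  return toolDetails as Record<string, unknown>;
}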
apps/desktop/src/components/right-panel/hooks/useChatLogic.ts (7)
196-201: Avoid invalidate+sleep; use llmConnectionQuery.refetch to get a fresh connection.

Removes a timing race and puts the llmConnectionQuery prop to use.

-  await queryClient.invalidateQueries({ queryKey: ["llm-connection"] });
-  await new Promise(resolve => setTimeout(resolve, 100));
-
-  const llmConnection = await connectorCommands.getLlmConnection();
+  const refetched = await llmConnectionQuery?.refetch?.();
+  const llmConnection = refetched?.data ?? await connectorCommands.getLlmConnection();
415-432: Persist AI text before tool messages deterministically.

This save is not awaited, so DB ordering can interleave with subsequent tool-start writes.

-  saveAiText();
+  await saveAiText();
   currentAiTextMessageId = null; // Reset
356-360: Use consistent created_at timestamps tied to the UI message.

created_at uses Date.now() instead of the message's timestamp, which can skew ordering. Track the timestamp of the current AI text message and reuse it for persistence.

@@
-  let currentAiTextMessageId: string | null = null;
+  let currentAiTextMessageId: string | null = null;
+  let currentAiTextMessageTimestamp: Date | null = null;
@@
-  if (didInitializeAiResponse && lastMessage && lastMessage.type === "text-delta") {
+  if (didInitializeAiResponse && lastMessage && lastMessage.type === "text-delta") {
+    currentAiTextMessageTimestamp = lastMessage.timestamp;
@@
-  const newTextMessage: Message = {
+  const newTextMessage: Message = {
     id: crypto.randomUUID(),
     content: aiResponse,
     isUser: false,
-    timestamp: new Date(),
+    timestamp: (currentAiTextMessageTimestamp = new Date()),
     type: "text-delta",
     parts,
   };
@@
-  created_at: new Date().toISOString(),
+  created_at: (currentAiTextMessageTimestamp ?? new Date()).toISOString(),
@@
-  currentAiTextMessageId = null; // Reset
+  currentAiTextMessageId = null; // Reset
+  currentAiTextMessageTimestamp = null;
@@
-  await dbCommands.upsertChatMessage({
+  await dbCommands.upsertChatMessage({
     id: currentAiTextMessageId,
     group_id: groupId,
-    created_at: new Date().toISOString(),
+    created_at: (currentAiTextMessageTimestamp ?? new Date()).toISOString(),
     role: "Assistant",
     type: "text-delta",
     content: aiResponse.trim(),
     tool_details: null,
   });

Also applies to: 374-385, 396-403, 420-432, 517-523
218-219: Nit: typo in variable name.

Use enabledServers for readability.

-  const enabledSevers = mcpServers.filter((server) => server.enabled);
+  const enabledServers = mcpServers.filter((server) => server.enabled);
@@
-  for (const server of enabledSevers) {
+  for (const server of enabledServers) {

Also applies to: 248-248
379-395: O(n^2) parsing work while streaming.

Re-parsing the full aiResponse on every text chunk scales poorly for long outputs. Consider incremental parsing or deferring the full parse until finish-step, and only minimally parse the trailing segment per chunk to keep the UI responsive.
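A rough sketch of the incremental idea, reusing parseMarkdownBlocks for the unstable tail only; the caching strategy is an assumption, not the existing implementation:

import { parseMarkdownBlocks } from "../utils/markdown-parser";
import type { MessagePart } from "../components/chat/types";

interface ParseCache {
  stablePrefixLength: number;
  stableParts: MessagePart[];
}

function parseIncrementally(fullText: string, cache: ParseCache): MessagePart[] {
  const fences = fullText.match(/```/g) ?? [];
  // Advance the cached prefix only when every fence so far is closed.
  const lastClosedFenceEnd = fences.length >= 2 && fences.length % 2 === 0
    ? fullText.lastIndexOf("```") + 3
    : cache.stablePrefixLength;

  if (lastClosedFenceEnd > cache.stablePrefixLength) {
    cache.stableParts = parseMarkdownBlocks(fullText.slice(0, lastClosedFenceEnd));
    cache.stablePrefixLength = lastClosedFenceEnd;
  }

  // Only the short trailing segment is re-parsed on each streamed chunk.
  const tailParts = parseMarkdownBlocks(fullText.slice(cache.stablePrefixLength));
  return [...cache.stableParts, ...tailParts];
}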
209-215: Tool enablement condition: prefer a data-driven allowlist.

Hard-coding model IDs is brittle. Suggest a Set of allowed modelIds and checking membership; easier to update and test.

-  const shouldUseTools = model.modelId === "gpt-4.1" || model.modelId === "openai/gpt-4.1"
-    || model.modelId === "anthropic/claude-sonnet-4"
-    || model.modelId === "openai/gpt-4o"
-    || model.modelId === "gpt-4o"
-    || apiBase?.includes("pro.hyprnote.com")
-    || model.modelId === "openai/gpt-5";
+  const toolModels = new Set([
+    "gpt-4.1",
+    "openai/gpt-4.1",
+    "anthropic/claude-sonnet-4",
+    "openai/gpt-4o",
+    "gpt-4o",
+    "openai/gpt-5",
+  ]);
+  const shouldUseTools = toolModels.has(model.modelId) || apiBase?.includes("pro.hyprnote.com");
282-296: Schema text for mcpToolsArray: ensure it's serializable.

tool.inputSchema here is likely a Zod object; passing it into prompts may stringify to [object Object]. Consider converting via .toJSONSchema() or summarizing fields before embedding in system prompts.
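One possible way to get prompt-friendly text, assuming the schemas are Zod objects and that adding the zod-to-json-schema package is acceptable (both are assumptions; the function name is hypothetical):

import { z, type ZodTypeAny } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

function describeToolSchema(name: string, inputSchema: unknown): string {
  if (inputSchema instanceof z.ZodType) {
    // Convert Zod to plain JSON Schema so it stringifies meaningfully.
    return `${name}: ${JSON.stringify(zodToJsonSchema(inputSchema as ZodTypeAny))}`;
  }
  // Already plain JSON Schema (or undefined); stringify as-is.
  return `${name}: ${JSON.stringify(inputSchema ?? {})}`;
}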
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (11)
- plugins/db/js/bindings.gen.ts is excluded by !**/*.gen.ts
- plugins/db/permissions/autogenerated/commands/create_conversation.toml is excluded by !plugins/**/permissions/**
- plugins/db/permissions/autogenerated/commands/create_message_v2.toml is excluded by !plugins/**/permissions/**
- plugins/db/permissions/autogenerated/commands/list_conversations.toml is excluded by !plugins/**/permissions/**
- plugins/db/permissions/autogenerated/commands/list_messages_v2.toml is excluded by !plugins/**/permissions/**
- plugins/db/permissions/autogenerated/commands/update_message_v2_parts.toml is excluded by !plugins/**/permissions/**
- plugins/db/permissions/autogenerated/reference.md is excluded by !plugins/**/permissions/**
- plugins/db/permissions/default.toml is excluded by !plugins/**/permissions/**
- plugins/db/permissions/schemas/schema.json is excluded by !plugins/**/permissions/**
- plugins/local-llm/js/bindings.gen.ts is excluded by !**/*.gen.ts
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (36)
- .gitignore (1 hunks)
- apps/desktop/src/components/editor-area/index.tsx (0 hunks)
- apps/desktop/src/components/right-panel/components/chat/chat-input.tsx (1 hunks)
- apps/desktop/src/components/right-panel/components/chat/chat-message.tsx (1 hunks)
- apps/desktop/src/components/right-panel/components/chat/chat-messages-view.tsx (4 hunks)
- apps/desktop/src/components/right-panel/components/chat/index.ts (1 hunks)
- apps/desktop/src/components/right-panel/components/chat/markdown-card.tsx (1 hunks)
- apps/desktop/src/components/right-panel/components/chat/message-content.tsx (1 hunks)
- apps/desktop/src/components/right-panel/components/chat/ui-message.tsx (0 hunks)
- apps/desktop/src/components/right-panel/hooks/useChat2.ts (0 hunks)
- apps/desktop/src/components/right-panel/hooks/useChatLogic.ts (1 hunks)
- apps/desktop/src/components/right-panel/hooks/useChatQueries.ts (1 hunks)
- apps/desktop/src/components/right-panel/hooks/useChatQueries2.ts (0 hunks)
- apps/desktop/src/components/right-panel/utils/chat-transport.ts (0 hunks)
- apps/desktop/src/components/right-panel/utils/chat-utils.ts (4 hunks)
- apps/desktop/src/components/right-panel/utils/markdown-parser.ts (1 hunks)
- apps/desktop/src/components/right-panel/utils/tools/edit_enhanced_note.ts (1 hunks)
- apps/desktop/src/components/right-panel/views/chat-view.tsx (6 hunks)
- crates/db-user/src/chat_conversations_migration.sql (0 hunks)
- crates/db-user/src/chat_conversations_ops.rs (0 hunks)
- crates/db-user/src/chat_conversations_types.rs (0 hunks)
- crates/db-user/src/chat_messages_v2_migration.sql (0 hunks)
- crates/db-user/src/chat_messages_v2_ops.rs (0 hunks)
- crates/db-user/src/chat_messages_v2_types.rs (0 hunks)
- crates/db-user/src/lib.rs (2 hunks)
- crates/db-user/src/sessions_ops.rs (1 hunks)
- package.json (0 hunks)
- packages/utils/src/ai.ts (0 hunks)
- plugins/db/build.rs (0 hunks)
- plugins/db/src/commands/chats_v2.rs (0 hunks)
- plugins/db/src/commands/mod.rs (0 hunks)
- plugins/db/src/lib.rs (0 hunks)
- plugins/local-llm/src/events.rs (0 hunks)
- plugins/local-llm/src/ext.rs (1 hunks)
- plugins/local-llm/src/lib.rs (1 hunks)
- plugins/local-llm/src/server.rs (9 hunks)
💤 Files with no reviewable changes (18)
- crates/db-user/src/chat_conversations_migration.sql
- package.json
- crates/db-user/src/chat_messages_v2_ops.rs
- plugins/local-llm/src/events.rs
- crates/db-user/src/chat_conversations_ops.rs
- crates/db-user/src/chat_messages_v2_types.rs
- crates/db-user/src/chat_messages_v2_migration.sql
- plugins/db/build.rs
- plugins/db/src/commands/mod.rs
- crates/db-user/src/chat_conversations_types.rs
- plugins/db/src/lib.rs
- apps/desktop/src/components/right-panel/utils/chat-transport.ts
- plugins/db/src/commands/chats_v2.rs
- apps/desktop/src/components/editor-area/index.tsx
- apps/desktop/src/components/right-panel/hooks/useChatQueries2.ts
- apps/desktop/src/components/right-panel/hooks/useChat2.ts
- packages/utils/src/ai.ts
- apps/desktop/src/components/right-panel/components/chat/ui-message.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{js,ts,tsx,rs}
⚙️ CodeRabbit configuration file
**/*.{js,ts,tsx,rs}: 1. Do not add any error handling. Keep the existing one.
2. No unused imports, variables, or functions.
3. For comments, keep it minimal. It should be about "Why", not "What".
Files:
- crates/db-user/src/sessions_ops.rs
- apps/desktop/src/components/right-panel/components/chat/markdown-card.tsx
- plugins/local-llm/src/ext.rs
- apps/desktop/src/components/right-panel/hooks/useChatQueries.ts
- apps/desktop/src/components/right-panel/utils/tools/edit_enhanced_note.ts
- apps/desktop/src/components/right-panel/components/chat/chat-message.tsx
- apps/desktop/src/components/right-panel/components/chat/chat-input.tsx
- apps/desktop/src/components/right-panel/hooks/useChatLogic.ts
- plugins/local-llm/src/lib.rs
- apps/desktop/src/components/right-panel/views/chat-view.tsx
- apps/desktop/src/components/right-panel/components/chat/index.ts
- apps/desktop/src/components/right-panel/utils/markdown-parser.ts
- crates/db-user/src/lib.rs
- plugins/local-llm/src/server.rs
- apps/desktop/src/components/right-panel/components/chat/message-content.tsx
- apps/desktop/src/components/right-panel/utils/chat-utils.ts
- apps/desktop/src/components/right-panel/components/chat/chat-messages-view.tsx
🧬 Code graph analysis (10)

crates/db-user/src/sessions_ops.rs (1)
- crates/db-core/src/lib.rs (1): conn (17-22)

plugins/local-llm/src/ext.rs (1)
- plugins/local-llm/src/server.rs (2): new (46-51), new (184-186)

apps/desktop/src/components/right-panel/hooks/useChatQueries.ts (3)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): Message (7-15)
- apps/desktop/src/components/right-panel/utils/markdown-parser.ts (1): parseMarkdownBlocks (3-62)
- apps/desktop/src/components/right-panel/hooks/useChatQueries2.ts (1): useChatQueries2 (15-175)

apps/desktop/src/components/right-panel/components/chat/chat-message.tsx (2)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): Message (7-15)
- apps/desktop/src/components/right-panel/components/chat/message-content.tsx (1): MessageContent (198-357)

apps/desktop/src/components/right-panel/hooks/useChatLogic.ts (11)
- apps/desktop/src/components/right-panel/types/chat-types.ts (2): ActiveEntityInfo (3-6), Message (20-20)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): Message (7-15)
- packages/utils/src/contexts/sessions.tsx (1): useSessions (33-47)
- apps/desktop/src/hooks/use-license.ts (1): useLicense (9-105)
- apps/desktop/src/contexts/right-panel.tsx (1): SelectionData (10-16)
- apps/desktop/src/components/pro-gate-modal/service.ts (1): showProGateModal (6-27)
- apps/desktop/src/components/right-panel/utils/mcp-http-wrapper.ts (1): buildVercelToolsFromMcp (5-31)
- apps/desktop/src/components/right-panel/utils/tools/search_session_multi_keywords.ts (1): createSearchSessionTool (5-59)
- apps/desktop/src/components/right-panel/utils/tools/search_session_date_range.ts (1): createSearchSessionDateRangeTool (5-82)
- apps/desktop/src/components/right-panel/utils/chat-utils.ts (1): prepareMessageHistory (36-223)
- apps/desktop/src/components/right-panel/utils/markdown-parser.ts (1): parseMarkdownBlocks (3-62)

apps/desktop/src/components/right-panel/views/chat-view.tsx (4)
- apps/desktop/src/components/right-panel/components/chat/types.ts (2): Message (7-15), ChatSession (17-22)
- apps/desktop/src/components/right-panel/hooks/useActiveEntity.ts (1): useActiveEntity (14-80)
- apps/desktop/src/components/right-panel/hooks/useChatQueries.ts (1): useChatQueries (19-175)
- apps/desktop/src/components/right-panel/hooks/useChatLogic.ts (1): useChatLogic (52-622)

apps/desktop/src/components/right-panel/utils/markdown-parser.ts (1)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): MessagePart (1-5)

apps/desktop/src/components/right-panel/components/chat/message-content.tsx (2)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): Message (7-15)
- apps/desktop/src/components/right-panel/components/chat/markdown-card.tsx (1): MarkdownCard (15-140)

apps/desktop/src/components/right-panel/utils/chat-utils.ts (2)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): Message (7-15)
- apps/desktop/src/contexts/right-panel.tsx (1): SelectionData (10-16)

apps/desktop/src/components/right-panel/components/chat/chat-messages-view.tsx (2)
- apps/desktop/src/components/right-panel/components/chat/types.ts (1): Message (7-15)
- apps/desktop/src/components/right-panel/components/chat/chat-message.tsx (1): ChatMessage (11-48)
🪛 ast-grep (0.38.6)
apps/desktop/src/components/right-panel/utils/chat-utils.ts
[warning] 168-168: Direct HTML content assignment detected. Modifying innerHTML, outerHTML, or using document.write with unsanitized content can lead to XSS vulnerabilities. Use secure alternatives like textContent or sanitize HTML with libraries like DOMPurify.
Context: div.innerHTML = session.enhanced_memo_html
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://www.dhairyashah.dev/posts/why-innerhtml-is-a-bad-idea-and-how-to-avoid-it/
- https://cwe.mitre.org/data/definitions/79.html
(unsafe-html-content-assignment)
[warning] 172-172: Direct HTML content assignment detected. Modifying innerHTML, outerHTML, or using document.write with unsanitized content can lead to XSS vulnerabilities. Use secure alternatives like textContent or sanitize HTML with libraries like DOMPurify.
Context: div.innerHTML = session.raw_memo_html
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://www.dhairyashah.dev/posts/why-innerhtml-is-a-bad-idea-and-how-to-avoid-it/
- https://cwe.mitre.org/data/definitions/79.html
(unsafe-html-content-assignment)
[warning] 168-168: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: div.innerHTML = session.enhanced_memo_html
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html
(dom-content-modification)
[warning] 172-172: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: div.innerHTML = session.raw_memo_html
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html
(dom-content-modification)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: ci (macos, macos-14)
- GitHub Check: ci (windows, windows-latest)
🔇 Additional comments (18)
.gitignore (1)
1-19: No action needed — .cursor/ removal appears safe.

Checks show no tracked .cursor paths and no .cursor directories in the working tree.

apps/desktop/src/components/right-panel/utils/tools/edit_enhanced_note.ts (1)
58-63: Sanitize HTML before calling editor.insertContent

editor.insertContent enforces the editor schema but is not a full XSS sanitizer — it may not remove <script> or unsafe on* attributes. Ensure cleanedHtml is produced by a proven sanitizer (e.g., DOMPurify) before calling insertContent; also validate/sanitize server-side, enforce a strict ProseMirror/Tiptap schema (disallow <script> and on* attrs), review custom toDOM/DOMSerializer code, and add a CSP as defense-in-depth.
File: apps/desktop/src/components/right-panel/utils/tools/edit_enhanced_note.ts (lines 58–63 — insertContent(cleanedHtml...)).
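A minimal sanitize-then-insert sketch, assuming dompurify is available as a dependency; the helper name is hypothetical and the call-site shape follows the comment above:

import DOMPurify from "dompurify";

// insertContent alone is not an XSS boundary; strip scripts and inline handlers first.
export function sanitizeForEditor(cleanedHtml: string): string {
  return DOMPurify.sanitize(cleanedHtml, {
    FORBID_TAGS: ["script", "style", "iframe"],
    FORBID_ATTR: ["onerror", "onclick", "onload"],
  });
}

// At the call site: editor.insertContent(sanitizeForEditor(cleanedHtml));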
plugins/local-llm/src/lib.rs (1)
68-93: Specta setup without event mounting — OK

Dropping event mounting aligns with emitter removal; switching to a non-move closure is appropriate.
plugins/local-llm/src/server.rs (7)
6-10: Import for ChatCompletionRequestMessage — OK

Needed for request message filtering; no unused imports.

46-51: ServerState::new signature simplified — OK

Matches usage in ext.rs and internal state fields.

125-163: Sanitize progress_update without cloning the whole request; remove TODO

Avoid the extra allocation and replace the TODO with a brief why-comment.
Apply:

-    // TODO
-    let request = {
-        let mut r = request.clone();
+    // Strip internal progress_update calls from user-provided history.
+    let request = {
+        let mut r = request;
         r.messages = r
             .messages
             .iter()
             .filter_map(|m| match m {
                 ChatCompletionRequestMessage::Assistant(am) => {
                     let mut cloned_am = am.clone();
                     let filtered_tool_calls = cloned_am.tool_calls.as_ref().map(|tc| {
                         tc.iter()
                             .filter(|c| c.function.name != "progress_update")
                             .cloned()
                             .collect::<Vec<_>>()
                     });
169-172: Provider wiring — OK

LocalProvider constructed with the new state; mock path untouched.

184-186: LocalProvider::new — OK

Constructor now mirrors ServerState removal of the emitter.

196-201: Streaming builder integration — OK

Token registration preserved; stream factory clean.

240-251: Tool list filtering — OK

Filters out progress_update; returns None for an empty tool set.
plugins/local-llm/src/ext.rs (1)
193-195: ServerState::new call updated — verified

Search found a single call at plugins/local-llm/src/ext.rs:193; no other call sites passing an emitter were found.
crates/db-user/src/lib.rs (1)
132-159: Migrations must be append‑only — verification failed; confirm removed filenames and act.

Sandbox diff failed (origin/main produced 0 migrations and the environment lacked required utilities), so I could not confirm whether this PR removed prior migrations.
- If commit #1456 ever ran against a persistent DB: reintroduce any removed migration filenames as no‑op "tombstone" SQL files at their original indices (empty bodies), then add corrective migrations after them. Do not reorder.
- If it never shipped: add a one‑line comment above the MIGRATIONS array documenting that assumption.
- Run this local check and paste the output in the PR:
python - <<'PY'
import re,subprocess,sys
file='crates/db-user/src/lib.rs'
subprocess.run(['git','fetch','origin','main'], check=False)
try:
    main = subprocess.run(['git','show','origin/main:'+file], capture_output=True, text=True, check=True).stdout
except subprocess.CalledProcessError:
    print("ERROR: origin/main:"+file+" not found"); sys.exit(2)
pr = open(file).read()
pat = re.compile(r'include_str!\("([^"]+)"\)')
main_migs = pat.findall(main)
pr_migs = pat.findall(pr)
print('main count:', len(main_migs))
print('pr count:', len(pr_migs))
for i,(a,b) in enumerate(zip(main_migs, pr_migs), 1):
    if a!=b:
        print(f'First divergence at index {i}: main={a} | pr={b}')
        break
missing = [m for m in main_migs if m not in pr_migs]
if missing:
    print('Missing in PR (present on main):', missing)
extra = [p for p in pr_migs if p not in main_migs]
if extra:
    print('New in PR:', extra)
PY

File: crates/db-user/src/lib.rs (approx. lines 132–159).
apps/desktop/src/components/right-panel/components/chat/chat-input.tsx (2)
156-161: Selection mentions now included — confirm downstream handling.

prepareMessageHistory ignores type "selection". Verify templates don't expect it in mentionedContent.
126-130: Prefer DOMParser over innerHTML for plain-text extraction.
-  const extractPlainText = useCallback((html: string) => {
-    const div = document.createElement("div");
-    div.innerHTML = html;
-    return div.textContent || div.innerText || "";
-  }, []);
+  const extractPlainText = useCallback((html: string) => {
+    const doc = new DOMParser().parseFromString(html, "text/html");
+    return doc.body.textContent || "";
+  }, []);

apps/desktop/src/components/right-panel/components/chat/chat-message.tsx (1)
11-46: Component structure LGTM.

Clean split between user/assistant branches and reuse of MessageContent.
apps/desktop/src/components/right-panel/components/chat/index.ts (1)
4-9: Re-exports look good; verify callers switched from UIMessageComponent to the new surfaces.

The new surface (ChatMessage, MessageContent, MarkdownCard) is clean. Please confirm no remaining imports of UIMessageComponent or ui-message in the repo.
Run to verify:
apps/desktop/src/components/right-panel/hooks/useChatLogic.ts (2)
330-331: Confirm the stopWhen step count is intentional.

stepCountIs(5) may cut off longer generations. Verify this aligns with UX expectations for typical responses.

318-329: Fix the prepareMessageHistory call — verify the signature and pass a params object

prepareMessageHistory is documented to accept a single params object; passing many positional args is brittle. I couldn't locate its definition in the repo — confirm the function signature and, if it expects an object, apply the diff below.
File: apps/desktop/src/components/right-panel/hooks/useChatLogic.ts (lines ~318-329)
-  messages: await prepareMessageHistory(
-    messages,
-    content,
-    mentionedContent,
-    model.modelId,
-    mcpToolsArray,
-    sessionData,
-    sessionId,
-    userId,
-    apiBase,
-    selectionData, // Pass selectionData to prepareMessageHistory
-  ),
+  messages: await prepareMessageHistory({
+    messages,
+    currentUserMessage: content,
+    mentionedContent,
+    modelId: model.modelId,
+    mcpToolsArray,
+    sessionData,
+    sessionId,
+    userId,
+    apiBase,
+    selectionData,
+  }),
      const tools = await mcpClient.tools();
      for (const [toolName, tool] of Object.entries(tools as Record<string, any>)) {
        newMcpTools[toolName] = dynamicTool({
          description: tool.description,
          inputSchema: tool.inputSchema || z.any(),
          execute: tool.execute,
        });
      }
SSE MCP tool schemas: normalize JSON Schema to Zod before passing to dynamicTool.
Without conversion, tools that expose JSON Schema may break at runtime.
- newMcpTools[toolName] = dynamicTool({
+ newMcpTools[toolName] = dynamicTool({
description: tool.description,
- inputSchema: tool.inputSchema || z.any(),
+ inputSchema: tool.inputSchema ? jsonSchema(tool.inputSchema as any) : z.any(),
execute: tool.execute,
  });

Also add the missing import alongside the other ai imports:
import {
dynamicTool,
experimental_createMCPClient,
+ jsonSchema,
modelProvider,
smoothStream,
stepCountIs,
streamText,
tool,
} from "@hypr/utils/ai";📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const tools = await mcpClient.tools(); | |
| for (const [toolName, tool] of Object.entries(tools as Record<string, any>)) { | |
| newMcpTools[toolName] = dynamicTool({ | |
| description: tool.description, | |
| inputSchema: tool.inputSchema || z.any(), | |
| execute: tool.execute, | |
| }); | |
| } | |
| import { | |
| dynamicTool, | |
| experimental_createMCPClient, | |
| jsonSchema, | |
| modelProvider, | |
| smoothStream, | |
| stepCountIs, | |
| streamText, | |
| tool, | |
| } from "@hypr/utils/ai"; |
| const tools = await mcpClient.tools(); | |
| for (const [toolName, tool] of Object.entries(tools as Record<string, any>)) { | |
| newMcpTools[toolName] = dynamicTool({ | |
| description: tool.description, | |
| inputSchema: tool.inputSchema || z.any(), | |
| execute: tool.execute, | |
| }); | |
| } | |
| const tools = await mcpClient.tools(); | |
| for (const [toolName, tool] of Object.entries(tools as Record<string, any>)) { | |
| newMcpTools[toolName] = dynamicTool({ | |
| description: tool.description, | |
| inputSchema: tool.inputSchema ? jsonSchema(tool.inputSchema as any) : z.any(), | |
| execute: tool.execute, | |
| }); | |
| } |
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/hooks/useChatLogic.ts around lines
269 to 276, the code passes raw JSON Schema to dynamicTool which can break at
runtime; import a JSON Schema→Zod converter (e.g. add an import alongside the
other ai imports such as: import jsonSchemaToZod from 'json-schema-to-zod') and
normalize tool.inputSchema before calling dynamicTool: if tool.inputSchema is a
JSON Schema convert it via the converter to a Zod schema, otherwise fall back to
z.any(); then pass the resulting Zod schema as inputSchema to dynamicTool.
@@ -0,0 +1,175 @@
import { useQuery } from "@tanstack/react-query";
import { useEffect } from "react";
Import useRef and React type for refs.
You use React.MutableRefObject but don’t import React types; also need useRef for local prev tracking.
-import { useEffect } from "react";
+import { useEffect, useRef } from "react";
+import type { MutableRefObject } from "react";
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/hooks/useChatQueries.ts around line
2, the file currently only imports useEffect but the code uses useRef and
React.MutableRefObject; update the imports to include useRef and the React types
by importing either React and useRef (e.g., import React, { useEffect, useRef }
from "react") or import useEffect, useRef and the MutableRefObject type (e.g.,
import { useEffect, useRef, type MutableRefObject } from "react"); ensure the
ref types used in the file match the imported React type names.
| import type { Message } from "../types/chat-types"; | ||
| import { parseMarkdownBlocks } from "../utils/markdown-parser"; |
Wrong Message type import path.
Fix path to the shared chat types under components/chat (or import from the chat barrel).
-import type { Message } from "../types/chat-types";
+import type { Message } from "../components/chat/types";Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/hooks/useChatQueries.ts around lines
5-6, the Message type import path is incorrect; update the import to reference
the shared chat types under components/chat (e.g., replace import type { Message
} from "../types/chat-types"; with a correct path such as import type { Message
} from "../../chat/types"; or import type { Message } from "../../chat" if the
chat folder exports a barrel).
interface UseChatQueriesProps {
  sessionId: string | null;
  userId: string | null;
  currentChatGroupId: string | null;
  setCurrentChatGroupId: (id: string | null) => void;
  setMessages: (messages: Message[]) => void;
  setHasChatStarted: (started: boolean) => void;
  isGenerating?: boolean;
  prevIsGenerating?: React.MutableRefObject<boolean>;
}
Props typing: remove isGenerating; type the ref properly.
Caller doesn’t pass isGenerating; relying on it breaks gating. Use a ref (kept in sync by ChatView) and compute transitions locally.
interface UseChatQueriesProps {
sessionId: string | null;
userId: string | null;
currentChatGroupId: string | null;
setCurrentChatGroupId: (id: string | null) => void;
setMessages: (messages: Message[]) => void;
setHasChatStarted: (started: boolean) => void;
- isGenerating?: boolean;
- prevIsGenerating?: React.MutableRefObject<boolean>;
+ prevIsGenerating?: MutableRefObject<boolean>;
}
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/hooks/useChatQueries.ts around lines
8 to 17, remove the optional isGenerating prop from UseChatQueriesProps and make
prevIsGenerating a properly typed required ref (prevIsGenerating:
React.MutableRefObject<boolean>) so the hook receives a guaranteed ref from
ChatView; inside the hook, stop relying on an external isGenerating boolean and
instead compute generation-state transitions locally using the provided ref
(which ChatView must keep in sync) to gate behavior.
export function useChatQueries({
  sessionId,
  userId,
  currentChatGroupId,
  setCurrentChatGroupId,
  setMessages,
  setHasChatStarted,
  isGenerating,
  prevIsGenerating,
}: UseChatQueriesProps) {
🛠️ Refactor suggestion
Remove isGenerating from params.
Align implementation with ref-based gating.
export function useChatQueries({
sessionId,
userId,
currentChatGroupId,
setCurrentChatGroupId,
setMessages,
setHasChatStarted,
- isGenerating,
prevIsGenerating,
}: UseChatQueriesProps) {
+  const lastIsGeneratingRef = useRef(false);
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/hooks/useChatQueries.ts around lines
19 to 28, remove isGenerating from the function parameter list and from the
UseChatQueriesProps type; instead rely on the existing prevIsGenerating
ref-based gating already used in the hook. Update the function signature and all
internal destructuring/usages to drop isGenerating, adjust any logic that
previously checked isGenerating to use the ref (or prevIsGenerating) semantics,
and update all call sites and type imports to stop passing isGenerating. Ensure
TypeScript types and any tests or consumers are updated to reflect the removed
param.
import { useChatQueries2 } from "../hooks/useChatQueries2";
import { useChatLogic } from "../hooks/useChatLogic";
import { useChatQueries } from "../hooks/useChatQueries";
import type { Message } from "../types/chat-types";
Build-breaking import path for Message.
types live under components/chat. Fix the path to avoid TS compile errors.
Apply:
-import type { Message } from "../types/chat-types";
+import type { Message } from "../components/chat";Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/views/chat-view.tsx around line 19,
the import path for Message is incorrect and breaks the TypeScript build; update
the import to point to the types under components/chat (use the correct relative
path from this file, e.g. ../../chat/types/chat-types) so Message is imported
from the actual location.
  const { chatGroupsQuery, sessionData, getChatGroupId } = useChatQueries({
    sessionId,
    userId,
    currentConversationId,
    setCurrentConversationId,
    setMessages: () => {},
    isGenerating: false,
    currentChatGroupId,
    setCurrentChatGroupId,
    setMessages,
    setHasChatStarted,
    prevIsGenerating,
  });
DB sync can clobber in-flight streaming (isGenerating not plumbed into useChatQueries).
useChatQueries’ gating relies on isGenerating, but this component doesn’t pass it (only prevIsGenerating). Result: DB sync may overwrite local streaming output. Minimal fix: keep prevIsGenerating in sync here; paired change updates useChatQueries to consult the ref.
Apply:
} = useChatLogic({
sessionId,
userId,
activeEntity,
messages,
inputValue,
hasChatStarted,
setMessages,
setInputValue,
setHasChatStarted,
getChatGroupId,
sessionData,
chatInputRef,
llmConnectionQuery,
});
+ // Keep generation state in a ref for useChatQueries gating (prevents DB overwrites during streams).
+ useEffect(() => {
+ prevIsGenerating.current = isGenerating;
+  }, [isGenerating]);

Companion change required in useChatQueries.ts (see my comment there).
🤖 Prompt for AI Agents
In apps/desktop/src/components/right-panel/views/chat-view.tsx around lines 50
to 58, the component calls useChatQueries but only passes prevIsGenerating so
the hook's gating can't see the current isGenerating state and DB sync can
clobber in-flight streaming; update the call to pass the current isGenerating
(and keep prevIsGenerating in sync) so the hook can consult the live generating
state, and ensure you also make the companion change in useChatQueries.ts to
read that prop/ref instead of relying on the previous-only value.
    let mut rows = conn
        .query(
            "SELECT id FROM chat_groups WHERE session_id = ?",
            vec![session_id.clone()],
        )
        .await?;

    while let Some(row) = rows.next().await? {
        let group_id: String = row.get(0)?;
        conn.execute(
            "DELETE FROM chat_messages WHERE group_id = ?",
            vec![group_id],
        )
        .await?;
    }

    conn.execute(
        "DELETE FROM chat_groups WHERE session_id = ?",
        vec![session_id.clone()],
    )
    .await?;

    conn.execute("DELETE FROM sessions WHERE id = ?", vec![session_id])
        .await?;
Make deletion atomic and eliminate TOCTOU races; replace per‑group loop with set‑based deletes inside a transaction.
As written, a concurrent insert into chat_messages between the per‑group deletes and the chat_groups delete can leave orphaned messages if there’s no FK cascade. Also, the row‑by‑row loop increases round trips. Use one set‑based delete and wrap the entire cleanup in a transaction to ensure all-or-nothing semantics.
Apply this diff:
@@
- let mut rows = conn
- .query(
- "SELECT id FROM chat_groups WHERE session_id = ?",
- vec![session_id.clone()],
- )
- .await?;
-
- while let Some(row) = rows.next().await? {
- let group_id: String = row.get(0)?;
- conn.execute(
- "DELETE FROM chat_messages WHERE group_id = ?",
- vec![group_id],
- )
- .await?;
- }
-
- conn.execute(
- "DELETE FROM chat_groups WHERE session_id = ?",
- vec![session_id.clone()],
- )
- .await?;
-
- conn.execute("DELETE FROM sessions WHERE id = ?", vec![session_id])
- .await?;
+ // Atomic, set-based cleanup to prevent orphans in concurrent scenarios.
+ if let Err(e) = async {
+ // Acquire a write lock and operate atomically.
+ conn.execute("BEGIN IMMEDIATE", ()).await?;
+ // Delete all messages for all groups in one statement.
+ conn.execute(
+ "DELETE FROM chat_messages \
+ WHERE group_id IN (SELECT id FROM chat_groups WHERE session_id = ?)",
+ vec![session_id.clone()],
+ ).await?;
+ // Delete all groups for the session.
+ conn.execute(
+ "DELETE FROM chat_groups WHERE session_id = ?",
+ vec![session_id.clone()],
+ ).await?;
+ // Delete the session itself.
+ conn.execute("DELETE FROM sessions WHERE id = ?", vec![session_id.clone()]).await?;
+ conn.execute("COMMIT", ()).await?;
+ Ok::<(), crate::Error>(())
+ }.await {
+ let _ = conn.execute("ROLLBACK", ()).await;
+ return Err(e);
+  }
    let mut rows = conn
        .query(
            "SELECT id FROM chat_groups WHERE session_id = ?",
            vec![session_id.clone()],
        )
        .await?;

    while let Some(row) = rows.next().await? {
        let group_id: String = row.get(0)?;
        conn.execute(
            "DELETE FROM chat_messages WHERE group_id = ?",
            vec![group_id],
        )
        .await?;
    }

    conn.execute(
        "DELETE FROM chat_groups WHERE session_id = ?",
        vec![session_id.clone()],
    )
    .await?;
🧩 Analysis chain
Verify FK cascades and supporting indexes exist for durability and performance.
To make this robust long‑term, ensure:
- chat_messages(group_id) → chat_groups(id) ON DELETE CASCADE
- chat_groups(session_id) → sessions(id) ON DELETE CASCADE
- Indexes on chat_groups(session_id) and chat_messages(group_id)
Run this script to confirm constraints and indexes:
If missing, consider a migration like:
ALTER TABLE chat_groups
ADD CONSTRAINT fk_chat_groups_session
FOREIGN KEY(session_id) REFERENCES sessions(id) ON DELETE CASCADE;
ALTER TABLE chat_messages
ADD CONSTRAINT fk_chat_messages_group
FOREIGN KEY(group_id) REFERENCES chat_groups(id) ON DELETE CASCADE;
CREATE INDEX IF NOT EXISTS idx_chat_groups_session_id ON chat_groups(session_id);
CREATE INDEX IF NOT EXISTS idx_chat_messages_group_id ON chat_messages(group_id);

🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Table DDLs (searching for FK cascades)…"
rg -nP -C2 '(?i)create\s+table\s+(chat_groups|chat_messages)|foreign\s+key|on\s+delete\s+cascade'
echo
echo "Indexes on chat_groups/session_id and chat_messages/group_id…"
rg -nP -C1 '(?i)create\s+index.*(chat_groups|chat_messages)|\bON\s+(chat_groups|chat_messages)\s*\('
echo
echo "Migrations touching these tables…"
fd -t f -i -a 'migrat|schema|sql' | xargs -I{} rg -n 'chat_groups|chat_messages' {}

Length of output: 9036
🏁 Script executed:
#!/bin/bash
set -euo pipefail
files=(
crates/db-user/src/chat_groups_migration.sql
crates/db-user/src/chat_messages_migration.sql
crates/db-user/src/sessions_migration.sql
)
for f in "${files[@]}"; do
echo "=== $f ==="
if [ -f "$f" ]; then
nl -ba "$f"
else
echo "MISSING: $f"
fi
echo
done
echo "=== rg matches for session_id / group_id / ON DELETE CASCADE / CREATE INDEX ==="
rg -n --hidden --no-ignore -S 'session_id|group_id|ON DELETE CASCADE|CREATE INDEX|CREATE UNIQUE INDEX|INDEX IF NOT EXISTS' || true
echo "=== finished ==="Length of output: 94
Add ON DELETE CASCADE FKs and supporting indexes for chat_groups/chat_messages
- crates/db-user/src/chat_messages_migration.sql declares FOREIGN KEY(group_id) REFERENCES chat_groups(id) but lacks ON DELETE CASCADE — add cascade.
- crates/db-user/src/chat_groups_migration.sql does not define FOREIGN KEY(session_id) → sessions(id) — add FK with ON DELETE CASCADE (or confirm intentional denormalization).
- No index found for chat_groups(session_id) or chat_messages(group_id) — add indexes for durability/performance.
Suggested migration (apply as new migration):
ALTER TABLE chat_groups
ADD CONSTRAINT fk_chat_groups_session
FOREIGN KEY(session_id) REFERENCES sessions(id) ON DELETE CASCADE;
ALTER TABLE chat_messages
ADD CONSTRAINT fk_chat_messages_group
FOREIGN KEY(group_id) REFERENCES chat_groups(id) ON DELETE CASCADE;
CREATE INDEX IF NOT EXISTS idx_chat_groups_session_id ON chat_groups(session_id);
CREATE INDEX IF NOT EXISTS idx_chat_messages_group_id ON chat_messages(group_id);
                StreamEvent::Progress(progress) => {
                    Some(Ok(CreateChatCompletionStreamResponse {
                        choices: vec![ChatChoiceStream {
                            index: 0,
                            delta: ChatCompletionStreamResponseDelta {
                                tool_calls: Some(vec![
                                    ChatCompletionMessageToolCallChunk {
                                        index: index.try_into().unwrap_or(0),
                                        id: Some("progress_update".to_string()),
                                        r#type: Some(ChatCompletionToolType::Function),
                                        function: Some(FunctionCallStream {
                                            name: Some("progress_update".to_string()),
                                            arguments: Some(
                                                serde_json::to_string(&serde_json::json!({
                                                    "progress": progress
                                                }))
                                                .unwrap(),
                                            ),
                                        }),
                                    },
                                ]),
                                ..delta_template
                            },
                            finish_reason: None,
                            logprobs: None,
                        }],
                        ..response_template
                    }))
                }
❓ Verification inconclusive
Streamed tool_calls indexes/id for progress are non‑spec and unstable
Using enumerate() for tool_call index produces event-order indices (including text deltas), which violates OpenAI’s tool_calls indexing (should be position among tool calls). Also, emitting a constant id and varying index will confuse clients that aggregate by (id, index).
Apply a small stateful scan to emit stable indices for tool calls and keep progress as index 0:
- let stream = Box::pin(
- source_stream
- .enumerate()
- .map(move |(index, event)| {
+ let stream = Box::pin(
+ source_stream
+ .scan((0u16, 0u16), move |(tool_idx, progress_idx), event| async move {
let delta_template = empty_stream_response_delta.clone();
let response_template = base_stream_response_template.clone();
- match event {
+ let out = match event {
StreamEvent::Response(llama_response) => match llama_response {
hypr_llama::Response::TextDelta(chunk) => {
- Some(Ok(CreateChatCompletionStreamResponse {
+ Some(Ok(CreateChatCompletionStreamResponse {
choices: vec![ChatChoiceStream {
- index: 0,
+ index: 0,
delta: ChatCompletionStreamResponseDelta {
content: Some(chunk),
..delta_template
},
finish_reason: None,
logprobs: None,
}],
..response_template
}))
}
hypr_llama::Response::Reasoning(_) => None,
hypr_llama::Response::ToolCall { name, arguments } => {
- Some(Ok(CreateChatCompletionStreamResponse {
+ let idx = *tool_idx as u32;
+ *tool_idx = tool_idx.saturating_add(1);
+ Some(Ok(CreateChatCompletionStreamResponse {
choices: vec![ChatChoiceStream {
- index: 0,
+ index: idx,
delta: ChatCompletionStreamResponseDelta {
tool_calls: Some(vec![
ChatCompletionMessageToolCallChunk {
- index: index.try_into().unwrap_or(0),
+ index: idx,
id: Some(uuid::Uuid::new_v4().to_string()),
r#type: Some(ChatCompletionToolType::Function),
function: Some(FunctionCallStream {
name: Some(name),
arguments: Some(
serde_json::to_string(&arguments)
.unwrap_or_default(),
),
}),
},
]),
..delta_template
},
finish_reason: None,
logprobs: None,
}],
..response_template
}))
}
},
StreamEvent::Progress(progress) => {
- Some(Ok(CreateChatCompletionStreamResponse {
+ // keep a fixed index for progress pseudo-tool to avoid colliding with real tool indices
+ Some(Ok(CreateChatCompletionStreamResponse {
choices: vec![ChatChoiceStream {
- index: 0,
+ index: 0,
delta: ChatCompletionStreamResponseDelta {
tool_calls: Some(vec![
ChatCompletionMessageToolCallChunk {
- index: index.try_into().unwrap_or(0),
+ index: 0,
id: Some("progress_update".to_string()),
r#type: Some(ChatCompletionToolType::Function),
function: Some(FunctionCallStream {
name: Some("progress_update".to_string()),
arguments: Some(
serde_json::to_string(&serde_json::json!({
"progress": progress
}))
.unwrap(),
),
}),
},
]),
..delta_template
},
finish_reason: None,
logprobs: None,
}],
..response_template
}))
}
- }
- })
- .filter_map(|x| async move { x }),
+ };
+ out
+ })
+ .filter_map(|x| async move { x }),
  );

Please validate that downstream UI parsing expects tool_call indices per spec and ignores progress_update tool calls as intended.
Reserve index 0 for progress; emit stable tool_call indices with a stateful counter
File: plugins/local-llm/src/server.rs (around lines 511–539)
- enumerate() produces event-order indices (mixing text deltas and tool calls); tool_calls must be indexed by their position among tool calls.
- Replace enumerate/map with a scan that maintains a tool_idx counter incremented only on Response::ToolCall; initialize tool_idx = 1 so progress (index 0) cannot collide with real tool_call indices. Assign a unique id (uuid::Uuid::new_v4()) per real tool_call and emit progress with a fixed id (e.g., "progress_update") at index 0.
- After applying the change, validate downstream UI parsing expects spec-compliant tool_call indices and ignores/handles the "progress_update" pseudo-tool as intended.
🤖 Prompt for AI Agents
In plugins/local-llm/src/server.rs around lines 511–539, the current
enumerate-based indexing mixes event-order indices with tool_call positions
causing collisions with the progress pseudo-tool; change the iteration to a
stateful scan that maintains a tool_idx counter initialized to 1 and only
increments when encountering a Response::ToolCall so real tool_calls get
consecutive indices (tool_idx, increment after use) while progress messages are
emitted with index 0 and id "progress_update"; for each real tool_call generate
a stable unique id using uuid::Uuid::new_v4() and use that id, and ensure
downstream UI parsing is validated to accept spec-compliant tool_call indices
and ignore/handle the "progress_update" pseudo-tool.