Sc 12927/implement streaming support for ai answers result #94
Conversation
Walkthrough

Adds a new AI Answers client with streaming and non‑streaming request handling and sentiment submission; removes the legacy ai-answers-interactions module; apifetch delegates AI Answers requests to the new client; exposes a settings flag and public method to enable streaming.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant apifetch
    participant AiAnswersClient as AiAnswers Client
    participant Endpoint as AI Answers Endpoint
    participant Callback
    Client->>apifetch: POST AI Answers request (settings)
    apifetch->>AiAnswersClient: delegate request (useAiAnswersStream?)
    alt Streaming path
        AiAnswersClient->>Endpoint: POST /v2/indices/{sitekey}/conversations?streaming=true
        loop SSE-like events
            Endpoint-->>AiAnswersClient: data: {"type":"metadata"|"token"|"sources"|"done", ...}
            AiAnswersClient->>AiAnswersClient: parse & accumulate conversation_id, tokens, sources
            Note right of AiAnswersClient: per-event throttling (100ms)\nimmediate callbacks for sources/done
            AiAnswersClient->>Callback: emit partial AiAnswersResponse
        end
        Endpoint-->>AiAnswersClient: stream closed
        AiAnswersClient->>Callback: final AiAnswersResponse (is_streaming_complete=true)
    else Non-streaming path
        AiAnswersClient->>Endpoint: POST non-streaming endpoint
        Endpoint-->>AiAnswersClient: full response JSON
        AiAnswersClient->>Callback: complete AiAnswersResponse
    end
    Note over AiAnswersClient: Errors (HTTP, parse, abrupt disconnect) -> AiAnswersResponse.error
    Callback-->>Client: response or error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches
✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/ai-answers-api.ts (1 hunks)
- src/ai-answers-interactions-api.ts (0 hunks)
- src/apifetch.ts (3 hunks)
- src/index.ts (2 hunks)
- src/settings.ts (2 hunks)
💤 Files with no reviewable changes (1)
- src/ai-answers-interactions-api.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/apifetch.ts (1)
src/ai-answers-api.ts (1)
  executeAiAnswersFetch (59-71)
src/ai-answers-api.ts (4)
src/settings.ts (1)
  Settings (35-72)
src/apifetch.ts (1)
  ApiFetchCallback (50-52)
src/api.ts (2)
  RESPONSE_SERVER_ERROR (44-44)
  aiAnswersInteractionsInstance (41-41)
src/index.ts (1)
  putSentimentClick (142-147)
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/ai-answers-api.ts (1)
99-109: Critical: Streaming fetch still bypasses request interceptors

This issue was flagged in a previous review but remains unresolved. The direct `fetch` call bypasses the `apiInstance` interceptor pipeline, which means configured interceptors (for authentication, private keys, custom headers, etc.) are never applied. This will cause AI Answers requests to fail when interceptors are required.

Please route this request through `apiInstance` to ensure the interceptor stack executes. For streaming responses, you may need to use `apiInstance.request` configured to return the raw Response object, or execute the interceptor chain to produce the finalized headers/request and then call fetch.

Also applies to the non-streaming path at lines 319-328.
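The suggested routing can be sketched as below; the interceptor chain, the `RequestConfig` shape, and every function name here are illustrative assumptions rather than the client's actual `apiInstance` API.

```typescript
// Illustrative sketch: apply a registered interceptor chain to the request
// config before calling fetch, so streaming requests see the same header
// mutations as axios-based calls. All names here are hypothetical.
type RequestConfig = { url: string; headers: Record<string, string>; body?: string };
type RequestInterceptor = (config: RequestConfig) => RequestConfig;

const interceptors: RequestInterceptor[] = [];

const addRequestInterceptor = (interceptor: RequestInterceptor): void => {
  interceptors.push(interceptor);
};

// Run every interceptor in registration order, mirroring an axios-style chain.
const applyInterceptors = (config: RequestConfig): RequestConfig =>
  interceptors.reduce((acc, interceptor) => interceptor(acc), config);

// The streaming path would then call fetch with the finalized config.
const interceptedFetch = (config: RequestConfig): Promise<Response> => {
  const finalized = applyInterceptors(config);
  return fetch(finalized.url, {
    method: 'POST',
    headers: finalized.headers,
    body: finalized.body
  });
};
```

This keeps a single source of truth for auth/header mutations whether the transport is axios or a raw streaming fetch.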
🧹 Nitpick comments (2)
src/ai-answers-api.ts (2)
383-383: Consider extracting sentiment value mapping

The nested ternary operator is somewhat hard to read. Consider extracting it to a helper function or using a mapping object for better clarity.

```typescript
const sentimentToNumeric = (sentiment: SentimentValue): number => {
  const mapping: Record<SentimentValue, number> = { positive: 1, negative: -1, neutral: 0 };
  return mapping[sentiment];
};

// Then use:
// value: sentimentToNumeric(sentimentValue)
```
374-411: Deduplicate error handling in putSentimentClick

The error handling code at lines 389-396 and 399-408 is nearly identical. Consider extracting it to reduce duplication.

```diff
+const createSentimentError = () =>
+  new Error(
+    JSON.stringify({
+      type: RESPONSE_SERVER_ERROR,
+      message: 'Unable to put sentiment click value.'
+    })
+  );
+
 export const putSentimentClick = (
   apiHostname: string,
   sitekey: string,
   conversationId: string,
   sentimentValue: SentimentValue
 ): Promise<boolean> => {
   return new Promise((resolve, reject) => {
     aiAnswersInteractionsInstance
       .put(`https://${apiHostname}/v2/indices/${sitekey}/conversations/${conversationId}/rating`, {
         value: sentimentValue === 'positive' ? 1 : sentimentValue === 'negative' ? -1 : 0
       })
       .then((response) => {
         if (response.status === 200) {
           resolve(true);
         } else {
-          reject(
-            new Error(
-              JSON.stringify({
-                type: RESPONSE_SERVER_ERROR,
-                message: 'Unable to put sentiment click value.'
-              })
-            )
-          );
+          reject(createSentimentError());
         }
       })
       .catch((error) => {
         console.error(error);
-        reject(
-          new Error(
-            JSON.stringify({
-              type: RESPONSE_SERVER_ERROR,
-              message: 'Unable to put sentiment click value.'
-            })
-          )
-        );
+        reject(createSentimentError());
       });
   });
 };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/ai-answers-api.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/ai-answers-api.ts (3)
src/settings.ts (1)
  Settings (35-72)
src/apifetch.ts (1)
  ApiFetchCallback (50-52)
src/index.ts (1)
  putSentimentClick (142-147)
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/ai-answers-api.ts (2)
257-325: AI Answers streaming still bypasses apiInstance interceptors (auth/custom headers).
`executeStreamingAiAnswers` calls `fetch` directly with a raw URL and only a `Content-Type` header. This still bypasses the interceptor pipeline wired through `apiInstance` / `AddSearchClient.setApiRequestInterceptor`, so any per-request auth or header mutations configured there will not be applied to streaming AI Answers calls. For setups that depend on interceptors (e.g. API keys, JWT, custom headers), these requests will fail or behave inconsistently with the rest of the client.

Consider routing streaming requests through the same interceptor stack, e.g.:

- Either build the request via `apiInstance` (or a helper that runs the same request interceptors) and then perform the streaming fetch with the finalized headers/config, or
- Introduce a small abstraction that uses `apiInstance` to produce the correct headers/query (including auth) and reuse that for both axios-based and fetch-based calls.

This should be done for both streaming and non‑streaming AI Answers to keep behavior consistent with existing API calls.
366-416: Non‑streaming AI Answers still uses bare `fetch` without status check or interceptors.

Two separate concerns here:

1. Status handling – The non‑streaming path calls `response.json()` without first checking `response.ok` / `response.status`. For 4xx/5xx or non‑JSON error payloads this will throw a parse error, which is then surfaced as a generic “invalid server response”, losing the actual HTTP status and message. Adding an explicit status check (and mapping status into the error object) would yield clearer and more debuggable failures.
2. Interceptor bypass – As with the streaming path, this function uses a raw `fetch` call instead of `apiInstance`, so any configured request interceptors (auth headers, per-request customization) are not applied. This diverges from how other endpoints are called and can break AI Answers in environments that rely on the interceptor hook.

I recommend:

- Checking `response.ok` and throwing a descriptive error if it is false before calling `response.json()`.
- Refactoring this path to use `apiInstance.post` (or the same interceptor-aware helper you introduce for streaming) so the behavior matches other API calls.
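The status-handling recommendation can be sketched as follows; `handleAiAnswersResponse` and the error shape are hypothetical stand-ins, not the module's real code.

```typescript
// Illustrative sketch: check response.ok before parsing JSON, so HTTP errors
// surface their status instead of a generic parse failure. The function name
// and error shape are hypothetical, not the library's actual API.
interface AiAnswersError {
  type: string;
  status: number;
  message: string;
}

const handleAiAnswersResponse = async (response: Response): Promise<unknown> => {
  if (!response.ok) {
    // Preserve the HTTP status and any body text in the error payload.
    const bodyText = await response.text();
    const error: AiAnswersError = {
      type: 'SERVER_ERROR',
      status: response.status,
      message: bodyText || response.statusText
    };
    throw new Error(JSON.stringify(error));
  }
  return response.json();
};
```

A 502 with a plain-text body would then produce an error carrying `status: 502` and the body text, rather than a JSON parse exception.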
🧹 Nitpick comments (2)
src/ai-answers-api.ts (1)
330-361: Streaming parse/error handling is generally solid, but consider explicitly cancelling the reader on errors.

The streaming loop correctly terminates on parse errors (`parseSSEEvent` throws, `readStream` catches and rethrows, which flows to `handleError` and a final error callback). However, the `ReadableStream` reader is never explicitly cancelled on error, so the underlying connection may remain open until the server closes it.

As an incremental hardening step, you could call `reader.cancel()` in the error path inside `readStream` (or in a `finally` block after the loop) to aggressively tear down the stream when an unrecoverable error occurs. Not strictly required for correctness, but it avoids lingering network resources.

src/apifetch.ts (1)
418-452: Fuzzy retry recursion drops `customFilterObject` and `recommendOptions`.

In `handleApiResponse`, the fuzzy retry path calls:

```typescript
executeApiFetch(apiHostname, sitekey, type, settings, cb, true);
```

This omits `customFilterObject` and `recommendOptions`, so the retry request may lose custom filters or recommendation options compared to the initial call. If a consumer combines `fuzzy: 'retry'` with a custom filter, the second request will not respect that filter.

Consider forwarding the original arguments:

```diff
- executeApiFetch(apiHostname, sitekey, type, settings, cb, true);
+ executeApiFetch(
+   apiHostname,
+   sitekey,
+   type,
+   settings,
+   cb,
+   true,
+   customFilterObject,
+   recommendOptions
+ );
```

This keeps retry semantics consistent with the initial request.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (2)
- src/ai-answers-api.ts (1 hunks)
- src/apifetch.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/apifetch.ts (1)
src/ai-answers-api.ts (2)
  executeAiAnswersStreamingFetch (58-65)
  executeAiAnswersNonStreamingFetch (75-82)
src/ai-answers-api.ts (4)
src/settings.ts (1)
  Settings (35-72)
src/apifetch.ts (1)
  ApiFetchCallback (50-52)
src/api.ts (2)
  RESPONSE_SERVER_ERROR (44-44)
  aiAnswersInteractionsInstance (41-41)
src/index.ts (1)
  putSentimentClick (142-147)
🔇 Additional comments (3)
src/ai-answers-api.ts (1)
442-479: Sentiment rating helper looks good and preserves interaction-instance semantics.

The `putSentimentClick` helper correctly reuses `aiAnswersInteractionsInstance`, maps `SentimentValue` to the numeric rating, and surfaces success/failure via a boolean-resolving promise with standardized error payloads. This keeps rating calls on the existing interactions client and avoids the interceptor issues present in the raw `fetch` calls.

src/apifetch.ts (2)

327-333: Ai‑answers delegation via streaming flag is wired correctly.

The early‑return branch for `type === 'ai-answers'` cleanly delegates to the new AI Answers API module based on `settings.useAiAnswersStream`, and avoids falling through into the generic axios handler. This keeps the existing executeApiFetch contract intact while centralizing AI Answers logic in `ai-answers-api.ts`.

454-470: Unified axios POST/GET handling for non‑ai‑answers types looks correct.

The new `handleApiResponse` / `handleApiError` functions and the conditional POST/GET logic correctly use `apiInstance` for `search`, `suggest`, and `autocomplete` (honoring `settings.apiMethod`) and keep `recommend` on GET. This preserves interceptor behavior across these types and centralizes the error mapping to the existing `{ error: { response, message } }` shape.
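The early‑return delegation described above can be sketched roughly as below; the function, types, and settings shape are illustrative, not the actual `executeApiFetch` signature.

```typescript
// Hypothetical sketch of the early-return delegation for AI Answers. Names
// and types mirror the review's description, not the real source file.
type FetchType = 'search' | 'suggest' | 'autocomplete' | 'recommend' | 'ai-answers';

interface DelegationSettings {
  useAiAnswersStream?: boolean;
}

const routeAiAnswers = (
  type: FetchType,
  settings: DelegationSettings,
  streaming: () => string,
  nonStreaming: () => string
): string | null => {
  if (type === 'ai-answers') {
    // Early return keeps AI Answers logic out of the generic axios handler.
    return settings.useAiAnswersStream ? streaming() : nonStreaming();
  }
  return null; // fall through to the generic handler for other types
};
```

The flag check at the branch point is what lets consumers opt into streaming per client without changing any call sites.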
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/ai-answers-api.ts (2)
366-421: Consider more specific response validation.

The non-streaming implementation is solid with proper HTTP status checking (line 383-385). One minor improvement: at line 389, the `if (data.response)` check is truthy-based. If the backend returns an incomplete response object (e.g., `{response: {conversation_id: '', answer: '', sources: undefined}}`), it would pass through without validation.

Consider validating the required fields explicitly:

```diff
- if (data.response) {
+ if (data.response && data.response.conversation_id && data.response.answer && Array.isArray(data.response.sources)) {
    cb({
      conversation_id: data.response.conversation_id,
      answer: data.response.answer,
      sources: data.response.sources
    });
  } else {
```

This is optional since the TypeScript types and backend contract should ensure the structure, but it adds runtime safety.
428-484: LGTM! Sentiment submission works correctly.

The sentiment conversion and submission logic is sound. The use of `aiAnswersInteractionsInstance` (axios) for the PUT request is appropriate since this doesn't require streaming capabilities.

Optional refactor: The `sentimentToNumericRating` function (lines 428-436) could use an object lookup for slightly cleaner code:

```typescript
const sentimentToNumericRating = (sentimentValue: SentimentValue): number => {
  const mapping = { positive: 1, negative: -1, neutral: 0 };
  return mapping[sentimentValue];
};
```

However, the current implementation is clear and only handles 3 cases, so this is purely a style preference.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- package.json (1 hunks)
- src/ai-answers-api.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- package.json
🧰 Additional context used
🧬 Code graph analysis (1)
src/ai-answers-api.ts (4)
src/settings.ts (1)
  Settings (35-72)
src/apifetch.ts (1)
  ApiFetchCallback (50-52)
src/api.ts (2)
  RESPONSE_SERVER_ERROR (44-44)
  aiAnswersInteractionsInstance (41-41)
src/index.ts (1)
  putSentimentClick (142-147)
🔇 Additional comments (6)
src/ai-answers-api.ts (6)
1-48: LGTM! Well-structured type definitions.

The imports and type definitions are comprehensive and well-documented. The public interfaces (`AiAnswersSource`, `AiAnswersResponse`, `SentimentValue`) provide a clear contract for consumers.
58-82: LGTM! Clean public API design.

The wrapper functions provide a stable public API while keeping implementation details internal. Good separation of concerns.
87-150: LGTM! Solid throttling implementation.

The `CallbackThrottler` class effectively manages callback frequency with proper cleanup. The 100ms threshold is reasonable for streaming updates, and the timeout management prevents resource leaks.
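A trailing‑edge throttler of the kind described here might look like the following sketch; the class shape and method names are guesses based on the review, with the 100ms threshold made configurable.

```typescript
// Rough sketch of a trailing-edge callback throttler like the one the review
// describes: at most one callback per threshold window, with the latest
// payload delivered when the window reopens. Shape is illustrative.
class CallbackThrottler<T> {
  private lastEmit = 0;
  private pending: T | null = null;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly cb: (value: T) => void,
    private readonly thresholdMs = 100
  ) {}

  emit(value: T): void {
    const now = Date.now();
    if (now - this.lastEmit >= this.thresholdMs) {
      this.lastEmit = now;
      this.cb(value);
      return;
    }
    // Defer the latest value to the end of the current window.
    this.pending = value;
    if (this.timer === null) {
      this.timer = setTimeout(() => {
        this.timer = null;
        this.lastEmit = Date.now();
        if (this.pending !== null) this.cb(this.pending);
        this.pending = null;
      }, this.thresholdMs - (now - this.lastEmit));
    }
  }

  // Cancel any deferred callback (e.g. on error or stream completion).
  cleanup(): void {
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = null;
    this.pending = null;
  }
}
```

The `cleanup` path is the part that prevents the resource leaks mentioned above: without it, a pending timer could fire a stale partial response after an error callback.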
155-252: LGTM! Clean SSE parsing with defensive property access.

The streaming state management and SSE event parsing are well-implemented. The use of `|| ''` and `|| []` fallbacks (lines 209, 214, 219) provides graceful handling of missing properties without breaking the stream.
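The defensive parsing can be illustrated with a small sketch; the event shape follows the walkthrough's `data: {"type":"metadata"|"token"|"sources"|"done"}` lines, and the parser itself is hypothetical.

```typescript
// Sketch of parsing one SSE "data:" line into a typed event, with the
// defensive || fallbacks the review praises. The event shape follows the
// walkthrough; the parser function is illustrative, not the real code.
interface AiAnswersStreamEvent {
  type: 'metadata' | 'token' | 'sources' | 'done';
  conversation_id?: string;
  token?: string;
  sources?: Array<{ title: string; url: string }>;
}

const parseSSELine = (line: string): AiAnswersStreamEvent | null => {
  const trimmed = line.trim();
  if (!trimmed.startsWith('data:')) return null; // ignore comments/blank lines
  // JSON.parse throws on malformed payloads, matching the "parse error
  // terminates the stream" behavior described in the review.
  const payload = JSON.parse(trimmed.slice('data:'.length).trim());
  return {
    type: payload.type,
    conversation_id: payload.conversation_id || '',
    token: payload.token || '',
    sources: payload.sources || []
  };
};
```

The fallbacks mean a `token` event without `sources` still yields a well-formed event object rather than `undefined` fields leaking into accumulated state.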
257-325: LGTM! Comprehensive streaming implementation.

The streaming execution handles the full lifecycle well:

- Proper HTTP status validation (line 309-311)
- Comprehensive error handling with `handleError` and `handleUnexpectedDisconnection`
- Partial data return on disconnection is a good UX choice
- Throttler cleanup is called appropriately
330-361: LGTM! Correct buffer management for streaming.

The stream reading logic correctly handles incomplete lines with buffer management (line 348) and properly cleans up the throttler before re-throwing errors (line 356-357).
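The line‑buffering plus cleanup pattern can be sketched as below, folding in the earlier suggestion to cancel the reader in a `finally` block; the function name and callback shape are illustrative.

```typescript
// Sketch of a line-buffered stream reader: partial lines are carried over
// between chunks, and the reader is cancelled in a finally block so the
// underlying connection is torn down even on errors. Names are illustrative.
const readLines = async (
  stream: ReadableStream<Uint8Array>,
  onLine: (line: string) => void
): Promise<void> => {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      // Keep the trailing partial line in the buffer for the next chunk.
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? '';
      for (const line of lines) onLine(line);
    }
    if (buffer) onLine(buffer); // flush a final unterminated line
  } finally {
    // Tear down the underlying connection even if onLine (parsing) threw.
    await reader.cancel().catch(() => undefined);
  }
};
```

Because `cancel()` sits in `finally`, a parse error thrown by the line handler still releases the network resource instead of leaving the stream open until the server closes it.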



Summary by CodeRabbit
New Features
Chores