
Conversation

@haoAddsearch haoAddsearch commented Nov 7, 2025

Summary by CodeRabbit

  • New Features

    • AI-powered answers now support both streaming and non-streaming modes for faster, more flexible responses.
    • Sentiment feedback added so users can rate answers as positive, negative, or neutral.
    • New setting and client method (useAiAnswersStream) to enable or disable AI streaming mode.
  • Chores

    • Package version bumped to 1.2.1.

coderabbitai bot commented Nov 7, 2025

Walkthrough

Adds a new AI Answers client with streaming and non‑streaming request handling and sentiment submission; removes the legacy ai-answers-interactions module; apifetch delegates AI Answers requests to the new client; exposes a settings flag and public method to enable streaming.

Changes

  • New AI Answers Client (src/ai-answers-api.ts): Adds streaming and non-streaming AI Answers client functions (executeAiAnswersStreamingFetch, executeAiAnswersNonStreamingFetch), exported types AiAnswersSource, AiAnswersResponse and SentimentValue, an SSE-like streaming parser (metadata/token/sources/done), per-event throttling, and putSentimentClick() sentiment PUT logic.
  • Removed Legacy Module (src/ai-answers-interactions-api.ts): Removes the previous sentiment submission module and its exported putSentimentClick; the logic is migrated into src/ai-answers-api.ts.
  • API layer refactor / delegation (src/apifetch.ts): Delegates AI Answers requests to the new ai-answers-api streaming or non-streaming functions (early return), removes the local AI Answers interfaces, and consolidates response/error handlers for non-AI paths.
  • Public API updates (src/index.ts): Switches the SentimentValue import to come from ai-answers-api.ts, removes the local SentimentValue type, and adds a useAiAnswersStream(enable: boolean) method on AddSearchClient to toggle the setting (see the usage sketch below).
  • Settings enhancement (src/settings.ts): Adds useAiAnswersStream?: boolean to Settings and implements a useAiAnswersStream(enable: boolean) setter in SettingsManager.
  • Version bump (package.json): Increments the package version from 1.2.0 to 1.2.1.
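
For orientation, a minimal usage sketch of the new toggle from the consumer side. The import path, constructor call, and sitekey are placeholders; only useAiAnswersStream(enable: boolean) and the SentimentValue type are confirmed by the changes above.

// Usage sketch only: 'your-sitekey' is a placeholder; only useAiAnswersStream and
// SentimentValue come from this PR, everything else here is an assumption.
import AddSearchClient from 'addsearch-js-client';

const client = new AddSearchClient('your-sitekey');

// Opt in to SSE-like streaming for AI Answers; pass false to fall back to the
// single non-streaming response.
client.useAiAnswersStream(true);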

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant apifetch
    participant AiAnswersClient as AiAnswers Client
    participant Endpoint as AI Answers Endpoint
    participant Callback

    Client->>apifetch: POST AI Answers request (settings)
    apifetch->>AiAnswersClient: delegate request (useAiAnswersStream?)

    alt Streaming path
        AiAnswersClient->>Endpoint: POST /v2/indices/{sitekey}/conversations?streaming=true
        loop SSE-like events
            Endpoint-->>AiAnswersClient: data: {"type":"metadata"|"token"|"sources"|"done", ...}
            AiAnswersClient->>AiAnswersClient: parse & accumulate conversation_id, tokens, sources
            Note right of AiAnswersClient: per-event throttling (100ms)\nimmediate callbacks for sources/done
            AiAnswersClient->>Callback: emit partial AiAnswersResponse
        end
        Endpoint-->>AiAnswersClient: stream closed
        AiAnswersClient->>Callback: final AiAnswersResponse (is_streaming_complete=true)
    else Non-streaming path
        AiAnswersClient->>Endpoint: POST non-streaming endpoint
        Endpoint-->>AiAnswersClient: full response JSON
        AiAnswersClient->>Callback: complete AiAnswersResponse
    end

    Note over AiAnswersClient: Errors (HTTP, parse, abrupt disconnect) -> AiAnswersResponse.error
    Callback-->>Client: response or error
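
The streaming loop in the diagram boils down to reading the response body line by line and dispatching each data: event. A simplified, hedged sketch of that pattern follows; it is not the code in src/ai-answers-api.ts, and the request body, error handling, and throttling are omitted or assumed.

// Sketch only: event shapes ({"type": "metadata" | "token" | "sources" | "done", ...})
// follow the diagram above; everything else is illustrative.
async function consumeAiAnswersStream(
  url: string,
  body: unknown,
  onEvent: (event: { type: string; [key: string]: unknown }) => void
): Promise<void> {
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  });
  if (!response.ok || !response.body) {
    throw new Error(`AI Answers stream failed with HTTP ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Keep the last, possibly partial, line in the buffer for the next chunk.
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? '';

    for (const line of lines) {
      if (line.startsWith('data: ')) {
        onEvent(JSON.parse(line.slice('data: '.length)));
      }
    }
  }
}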

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Focus review on:
    • Streaming parser and throttling in src/ai-answers-api.ts (event parsing, partial vs final callbacks, abrupt stream termination handling).
    • putSentimentClick() HTTP behavior and numeric mapping.
    • Delegation and early-return behavior in src/apifetch.ts.
    • Public API surface changes in src/index.ts and duplicate method insertions.
    • Settings setter semantics and default handling in src/settings.ts.

Suggested reviewers

  • italo-addsearch
  • kanarupan-addsearch

Pre-merge checks

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly summarizes the main change, implementing streaming support for AI Answers results, which aligns with the primary modifications across multiple files.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files to evaluate, so the docstring coverage check was skipped.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between be0768d and cc22d6f.

📒 Files selected for processing (5)
  • src/ai-answers-api.ts (1 hunks)
  • src/ai-answers-interactions-api.ts (0 hunks)
  • src/apifetch.ts (3 hunks)
  • src/index.ts (2 hunks)
  • src/settings.ts (2 hunks)
💤 Files with no reviewable changes (1)
  • src/ai-answers-interactions-api.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/apifetch.ts (1)
src/ai-answers-api.ts (1)
  • executeAiAnswersFetch (59-71)
src/ai-answers-api.ts (4)
src/settings.ts (1)
  • Settings (35-72)
src/apifetch.ts (1)
  • ApiFetchCallback (50-52)
src/api.ts (2)
  • RESPONSE_SERVER_ERROR (44-44)
  • aiAnswersInteractionsInstance (41-41)
src/index.ts (1)
  • putSentimentClick (142-147)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
src/ai-answers-api.ts (1)

99-109: Critical: Streaming fetch still bypasses request interceptors

This issue was flagged in a previous review but remains unresolved. The direct fetch call bypasses the apiInstance interceptor pipeline, which means configured interceptors (for authentication, private keys, custom headers, etc.) are never applied. This will cause AI Answers requests to fail when interceptors are required.

Please route this request through apiInstance to ensure the interceptor stack executes. For streaming responses, you may need to use apiInstance.request configured to return the raw Response object, or execute the interceptor chain to produce the finalized headers/request and then call fetch.

Also applies to the non-streaming path at lines 319-328.
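
One hedged way to address this: run the same request interceptor over a plain config object to obtain the finalized URL and headers, then hand those to fetch for the streaming call. The applyRequestInterceptor hook below is a hypothetical stand-in for whatever callback setApiRequestInterceptor registers; it is not an existing export.

// Sketch only: RequestConfig and applyRequestInterceptor are illustrative stand-ins,
// not types or exports from this repository.
interface RequestConfig {
  url: string;
  headers: Record<string, string>;
}

type RequestInterceptor = (config: RequestConfig) => RequestConfig;

const buildStreamingRequest = (
  url: string,
  applyRequestInterceptor?: RequestInterceptor
): RequestConfig => {
  const baseConfig: RequestConfig = {
    url,
    headers: { 'Content-Type': 'application/json' }
  };
  // Apply the same mutations (auth headers, custom headers, ...) that the axios
  // instance would apply before the request leaves the client.
  return applyRequestInterceptor ? applyRequestInterceptor(baseConfig) : baseConfig;
};

// Then, in the streaming path:
// const { url: finalUrl, headers } = buildStreamingRequest(streamUrl, interceptor);
// const response = await fetch(finalUrl, { method: 'POST', headers, body });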

🧹 Nitpick comments (2)
src/ai-answers-api.ts (2)

383-383: Consider extracting sentiment value mapping

The nested ternary operator is somewhat hard to read. Consider extracting it to a helper function or using a mapping object for better clarity.

const sentimentToNumeric = (sentiment: SentimentValue): number => {
  const mapping: Record<SentimentValue, number> = {
    positive: 1,
    negative: -1,
    neutral: 0
  };
  return mapping[sentiment];
};

// Then use:
value: sentimentToNumeric(sentimentValue)

374-411: Deduplicate error handling in putSentimentClick

The error handling code at lines 389-396 and 399-408 is nearly identical. Consider extracting it to reduce duplication.

+const createSentimentError = () => 
+  new Error(
+    JSON.stringify({
+      type: RESPONSE_SERVER_ERROR,
+      message: 'Unable to put sentiment click value.'
+    })
+  );
+
 export const putSentimentClick = (
   apiHostname: string,
   sitekey: string,
   conversationId: string,
   sentimentValue: SentimentValue
 ): Promise<boolean> => {
   return new Promise((resolve, reject) => {
     aiAnswersInteractionsInstance
       .put(`https://${apiHostname}/v2/indices/${sitekey}/conversations/${conversationId}/rating`, {
         value: sentimentValue === 'positive' ? 1 : sentimentValue === 'negative' ? -1 : 0
       })
       .then((response) => {
         if (response.status === 200) {
           resolve(true);
         } else {
-          reject(
-            new Error(
-              JSON.stringify({
-                type: RESPONSE_SERVER_ERROR,
-                message: 'Unable to put sentiment click value.'
-              })
-            )
-          );
+          reject(createSentimentError());
         }
       })
       .catch((error) => {
         console.error(error);
-        reject(
-          new Error(
-            JSON.stringify({
-              type: RESPONSE_SERVER_ERROR,
-              message: 'Unable to put sentiment click value.'
-            })
-          )
-        );
+        reject(createSentimentError());
       });
   });
 };
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cc22d6f and 4b0ba3a.

📒 Files selected for processing (1)
  • src/ai-answers-api.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/ai-answers-api.ts (3)
src/settings.ts (1)
  • Settings (35-72)
src/apifetch.ts (1)
  • ApiFetchCallback (50-52)
src/index.ts (1)
  • putSentimentClick (142-147)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
src/ai-answers-api.ts (2)

257-325: AI Answers streaming still bypasses apiInstance interceptors (auth/custom headers).

executeStreamingAiAnswers calls fetch directly with a raw URL and only a Content-Type header. This still bypasses the interceptor pipeline wired through apiInstance / AddSearchClient.setApiRequestInterceptor, so any per-request auth or header mutations configured there will not be applied to streaming AI Answers calls. For setups that depend on interceptors (e.g. API keys, JWT, custom headers), these requests will fail or behave inconsistently with the rest of the client.

Consider routing streaming requests through the same interceptor stack, e.g.:

  • Either build the request via apiInstance (or a helper that runs the same request interceptors) and then perform the streaming fetch with the finalized headers/config, or
  • Introduce a small abstraction that uses apiInstance to produce the correct headers/query (including auth) and reuse that for both axios-based and fetch-based calls.

This should be done for both streaming and non‑streaming AI Answers to keep behavior consistent with existing API calls.


366-416: Non‑streaming AI Answers still uses bare fetch without status check or interceptors.

Two separate concerns here:

  1. Status handling – The non‑streaming path calls response.json() without first checking response.ok/response.status. For 4xx/5xx or non‑JSON error payloads this will throw a parse error, which is then surfaced as a generic “invalid server response”, losing the actual HTTP status and message. Adding an explicit status check (and mapping status into the error object) would yield clearer and more debuggable failures.

  2. Interceptor bypass – As with the streaming path, this function uses a raw fetch call instead of apiInstance, so any configured request interceptors (auth headers, per-request customization) are not applied. This diverges from how other endpoints are called and can break AI Answers in environments that rely on the interceptor hook.

I recommend:

  • Checking response.ok and throwing a descriptive error if it is false before calling response.json(), as sketched below.
  • Refactoring this path to use apiInstance.post (or the same interceptor-aware helper you introduce for streaming) so the behavior matches other API calls.
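
A minimal sketch of that status check, under the assumption that the surrounding code already has the fetch Response in hand; the error payload fields are illustrative, not the module's real shape.

// Sketch only: maps the HTTP status into the thrown error instead of letting
// response.json() fail with a generic parse error on 4xx/5xx payloads.
async function parseNonStreamingResponse(response: Response): Promise<unknown> {
  if (!response.ok) {
    throw new Error(
      JSON.stringify({
        status: response.status,
        message: `AI Answers request failed with HTTP ${response.status}`
      })
    );
  }
  return response.json();
}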
🧹 Nitpick comments (2)
src/ai-answers-api.ts (1)

330-361: Streaming parse/error handling is generally solid, but consider explicitly cancelling the reader on errors.

The streaming loop correctly terminates on parse errors (parseSSEEvent throws, readStream catches and rethrows, which flows to handleError and a final error callback). However, the ReadableStream reader is never explicitly cancelled on error, so the underlying connection may remain open until the server closes it.

As an incremental hardening step, you could call reader.cancel() in the error path inside readStream (or in a finally block after the loop) to aggressively tear down the stream when an unrecoverable error occurs. Not strictly required for correctness, but it avoids lingering network resources.
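
As a sketch of that hardening, the existing read loop could be wrapped so the reader is cancelled whenever an error escapes it; readWithCleanup and runReadLoop below are hypothetical names, not functions from the PR.

// Sketch only: runReadLoop stands in for the current loop body in src/ai-answers-api.ts.
async function readWithCleanup(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  runReadLoop: (r: ReadableStreamDefaultReader<Uint8Array>) => Promise<void>
): Promise<void> {
  try {
    await runReadLoop(reader);
  } catch (error) {
    // Release the underlying connection before propagating the error.
    await reader.cancel().catch(() => undefined);
    throw error;
  }
}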

src/apifetch.ts (1)

418-452: Fuzzy retry recursion drops customFilterObject and recommendOptions.

In handleApiResponse, the fuzzy retry path calls:

executeApiFetch(apiHostname, sitekey, type, settings, cb, true);

This omits customFilterObject and recommendOptions, so the retry request may lose custom filters or recommendation options compared to the initial call. If a consumer combines fuzzy: 'retry' with a custom filter, the second request will not respect that filter.

Consider forwarding the original arguments:

- executeApiFetch(apiHostname, sitekey, type, settings, cb, true);
+ executeApiFetch(
+   apiHostname,
+   sitekey,
+   type,
+   settings,
+   cb,
+   true,
+   customFilterObject,
+   recommendOptions
+ );

This keeps retry semantics consistent with the initial request.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4b0ba3a and 1de69c4.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (2)
  • src/ai-answers-api.ts (1 hunks)
  • src/apifetch.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/apifetch.ts (1)
src/ai-answers-api.ts (2)
  • executeAiAnswersStreamingFetch (58-65)
  • executeAiAnswersNonStreamingFetch (75-82)
src/ai-answers-api.ts (4)
src/settings.ts (1)
  • Settings (35-72)
src/apifetch.ts (1)
  • ApiFetchCallback (50-52)
src/api.ts (2)
  • RESPONSE_SERVER_ERROR (44-44)
  • aiAnswersInteractionsInstance (41-41)
src/index.ts (1)
  • putSentimentClick (142-147)
🔇 Additional comments (3)
src/ai-answers-api.ts (1)

442-479: Sentiment rating helper looks good and preserves interaction-instance semantics.

The putSentimentClick helper correctly reuses aiAnswersInteractionsInstance, maps SentimentValue to the numeric rating, and surfaces success/failure via a boolean-resolving promise with standardized error payloads. This keeps rating calls on the existing interactions client and avoids the interceptor issues present in the raw fetch calls.

src/apifetch.ts (2)

327-333: Ai‑answers delegation via streaming flag is wired correctly.

The early‑return branch for type === 'ai-answers' cleanly delegates to the new AI Answers API module based on settings.useAiAnswersStream, and avoids falling through into the generic axios handler. This keeps the existing executeApiFetch contract intact while centralizing AI Answers logic in ai-answers-api.ts.
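
For context, the delegation amounts to roughly the following early-return shape; the argument lists are assumptions based on this walkthrough, not the exact signatures in the source.

// Sketch only: argument lists are assumed; only the export names and the
// settings.useAiAnswersStream flag are confirmed by this review.
import { executeAiAnswersStreamingFetch, executeAiAnswersNonStreamingFetch } from './ai-answers-api';
import { Settings } from './settings';
import { ApiFetchCallback } from './apifetch';

function delegateAiAnswers(
  apiHostname: string,
  sitekey: string,
  settings: Settings,
  cb: ApiFetchCallback
): void {
  if (settings.useAiAnswersStream) {
    executeAiAnswersStreamingFetch(apiHostname, sitekey, settings, cb);
  } else {
    executeAiAnswersNonStreamingFetch(apiHostname, sitekey, settings, cb);
  }
}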


454-470: Unified axios POST/GET handling for non‑ai‑answers types looks correct.

The new handleApiResponse / handleApiError functions and the conditional POST/GET logic correctly use apiInstance for search, suggest, and autocomplete (honoring settings.apiMethod) and keep recommend on GET. This preserves interceptor behavior across these types and centralizes the error mapping to the existing { error: { response, message } } shape.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
src/ai-answers-api.ts (2)

366-421: Consider more specific response validation.

The non-streaming implementation is solid with proper HTTP status checking (lines 383-385). One minor improvement: at line 389, the if (data.response) check only tests truthiness. If the backend returns an incomplete response object (e.g., {response: {conversation_id: '', answer: '', sources: undefined}}), it would pass through without validation.

Consider validating the required fields explicitly:

-    if (data.response) {
+    if (data.response && data.response.conversation_id && data.response.answer && Array.isArray(data.response.sources)) {
       cb({
         conversation_id: data.response.conversation_id,
         answer: data.response.answer,
         sources: data.response.sources
       });
     } else {

This is optional since the TypeScript types and backend contract should ensure the structure, but it adds runtime safety.


428-484: LGTM! Sentiment submission works correctly.

The sentiment conversion and submission logic is sound. The use of aiAnswersInteractionsInstance (axios) for the PUT request is appropriate since this doesn't require streaming capabilities.

Optional refactor: The sentimentToNumericRating function (lines 428-436) could use an object lookup for slightly cleaner code:

const sentimentToNumericRating = (sentimentValue: SentimentValue): number => {
  const mapping = { positive: 1, negative: -1, neutral: 0 };
  return mapping[sentimentValue];
};

However, the current implementation is clear and only handles 3 cases, so this is purely a style preference.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1de69c4 and ccb9e06.

📒 Files selected for processing (2)
  • package.json (1 hunks)
  • src/ai-answers-api.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • package.json
🧰 Additional context used
🧬 Code graph analysis (1)
src/ai-answers-api.ts (4)
src/settings.ts (1)
  • Settings (35-72)
src/apifetch.ts (1)
  • ApiFetchCallback (50-52)
src/api.ts (2)
  • RESPONSE_SERVER_ERROR (44-44)
  • aiAnswersInteractionsInstance (41-41)
src/index.ts (1)
  • putSentimentClick (142-147)
🔇 Additional comments (6)
src/ai-answers-api.ts (6)

1-48: LGTM! Well-structured type definitions.

The imports and type definitions are comprehensive and well-documented. The public interfaces (AiAnswersSource, AiAnswersResponse, SentimentValue) provide a clear contract for consumers.


58-82: LGTM! Clean public API design.

The wrapper functions provide a stable public API while keeping implementation details internal. Good separation of concerns.


87-150: LGTM! Solid throttling implementation.

The CallbackThrottler class effectively manages callback frequency with proper cleanup. The 100ms threshold is reasonable for streaming updates, and the timeout management prevents resource leaks.
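
To make the throttling idea concrete, here is a minimal throttler of the kind described (roughly 100 ms between emissions, with a trailing flush and explicit cleanup). It is a hedged sketch, not the CallbackThrottler from the PR.

// Sketch only: SimpleThrottler is illustrative; names and behavior details of the
// real CallbackThrottler may differ.
class SimpleThrottler<T> {
  private lastEmit = 0;
  private pending: T | null = null;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly emit: (value: T) => void,
    private readonly intervalMs = 100
  ) {}

  push(value: T): void {
    const now = Date.now();
    if (now - this.lastEmit >= this.intervalMs) {
      this.lastEmit = now;
      this.emit(value);
      return;
    }
    // Coalesce intermediate updates; only the latest pending value is emitted later.
    this.pending = value;
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.intervalMs - (now - this.lastEmit));
    }
  }

  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.pending !== null) {
      this.lastEmit = Date.now();
      this.emit(this.pending);
      this.pending = null;
    }
  }

  dispose(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    this.pending = null;
  }
}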


155-252: LGTM! Clean SSE parsing with defensive property access.

The streaming state management and SSE event parsing are well-implemented. The use of || '' and || [] fallbacks (lines 209, 214, 219) provides graceful handling of missing properties without breaking the stream.
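
The fallbacks described here can be pictured as folding each event into accumulated state; a condensed, hedged sketch follows, where the event field names follow the diagram earlier in this page and the state shape is illustrative.

// Sketch only: applies the same `|| ''` / `|| []` style fallbacks when merging an
// event into accumulated state; field and type names are assumptions.
interface StreamState {
  conversationId: string;
  answer: string;
  sources: unknown[];
}

function applyEvent(state: StreamState, event: Record<string, any>): StreamState {
  switch (event.type) {
    case 'metadata':
      return { ...state, conversationId: event.conversation_id || '' };
    case 'token':
      return { ...state, answer: state.answer + (event.token || '') };
    case 'sources':
      return { ...state, sources: event.sources || [] };
    default:
      return state;
  }
}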


257-325: LGTM! Comprehensive streaming implementation.

The streaming execution handles the full lifecycle well:

  • Proper HTTP status validation (lines 309-311)
  • Comprehensive error handling with handleError and handleUnexpectedDisconnection
  • Partial data return on disconnection is a good UX choice
  • Throttler cleanup is called appropriately

330-361: LGTM! Correct buffer management for streaming.

The stream reading logic correctly handles incomplete lines with buffer management (line 348) and properly cleans up the throttler before re-throwing errors (lines 356-357).

@haoAddsearch haoAddsearch merged commit a22e7cb into master Nov 19, 2025
2 checks passed
@haoAddsearch haoAddsearch deleted the sc-12927/implement-streaming-support-for-ai-answers-result branch November 19, 2025 08:12