feat: use LlamaIndexAdapter #302
Conversation
Walkthrough

The pull request introduces significant changes to the chat response handling and the generation of suggested questions.
Actionable comments posted: 3
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (3)
- templates/components/llamaindex/typescript/streaming/stream.ts (0 hunks)
- templates/types/streaming/express/src/controllers/chat.controller.ts (3 hunks)
- templates/types/streaming/nextjs/app/api/chat/route.ts (3 hunks)
Files not reviewed due to no reviewable changes (1)
- templates/components/llamaindex/typescript/streaming/stream.ts
Additional comments not posted (9)
templates/types/streaming/express/src/controllers/chat.controller.ts (6)
1-1: LGTM! The import statement has been updated correctly to include `LlamaIndexAdapter`.
13-13: LGTM! The import statement for `generateNextQuestions` has been added correctly.
59-59: LGTM! Explicitly typing the chat history as an array of `ChatMessage` enhances type safety.
65-65: LGTM! Passing the `chatHistory` as a parameter to the `chat` function is necessary for the chat engine to have access to the chat history.
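For context, a minimal sketch of what that call might look like with LlamaIndexTS; the `ContextChatEngine` choice and variable names are assumptions, not necessarily what the template uses:

```ts
import { ContextChatEngine, type ChatMessage } from "llamaindex";

// Sketch only: engine construction and request parsing are elided.
declare const chatEngine: ContextChatEngine;
declare const chatHistory: ChatMessage[];
declare const lastMessageContent: string;

// Streaming call: passing chatHistory lets the engine condition its
// answer on the prior turns, not just the latest user message.
const response = await chatEngine.chat({
  message: lastMessageContent,
  chatHistory,
  stream: true,
});
```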
70-84: LGTM! The `onFinal` callback function enhances the functionality by integrating suggested questions into the chat flow and improving the response handling. The logic and syntax are correct.
86-89: LGTM! Using the `LlamaIndexAdapter.toDataStreamResponse` function to convert the chat response into a data stream is necessary for the client to receive the response as a stream. Passing the `onFinal` callback as a parameter ensures that the final content is processed correctly. The logic and syntax are correct.

templates/types/streaming/nextjs/app/api/chat/route.ts (3)
1-1: Importing `LlamaIndexAdapter` is appropriate. The import statement correctly adds `LlamaIndexAdapter`, which is necessary for the new implementation of the response handling.
15-15: Importing the `generateNextQuestions` module. The import of `generateNextQuestions` from `"./llamaindex/streaming/suggestion"` is correct and integrates the suggestion generation functionality.
99-102: Usage of `LlamaIndexAdapter.toDataStreamResponse` is appropriate. The call to `LlamaIndexAdapter.toDataStreamResponse` correctly integrates the response with the Vercel AI data stream along with the `onFinal` callback.
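To make the wiring concrete, a minimal sketch of the adapter call, assuming the `ai` SDK's `{ data, callbacks }` options shape (check the installed version's types; the helper name is hypothetical):

```ts
import { LlamaIndexAdapter, StreamData } from "ai";

// Borrow the adapter's own parameter type rather than guessing at the
// engine response shape.
type EngineStream = Parameters<
  typeof LlamaIndexAdapter.toDataStreamResponse
>[0];

// Hypothetical helper illustrating the wiring; the real route builds
// these values from the chat engine and the incoming request.
export function streamToResponse(
  stream: EngineStream,
  vercelStreamData: StreamData,
  onFinal: (content: string) => void,
) {
  // onFinal fires once the full completion is known, so suggested
  // questions can be appended before the data stream is closed.
  return LlamaIndexAdapter.toDataStreamResponse(stream, {
    data: vercelStreamData,
    callbacks: { onFinal },
  });
}
```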
Snippet under review:

```ts
const onFinal = (content: string) => {
  chatHistory.push({ role: "assistant", content: content });
  generateNextQuestions(chatHistory)
    .then((questions: string[]) => {
      if (questions.length > 0) {
        vercelStreamData.appendMessageAnnotation({
          type: "suggested_questions",
          data: questions,
        });
      }
    })
    .finally(() => {
      vercelStreamData.close();
    });
};
```
Ensure thread safety when modifying `chatHistory`.

Since `generateNextQuestions` is asynchronous, concurrent invocations might lead to race conditions when modifying `chatHistory`. Ensure that `chatHistory` modifications are thread-safe or consider using immutable data structures, as sketched below.
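A minimal sketch of the immutable variant; the declared names mirror the snippet above and are assumptions about the surrounding code:

```ts
import type { ChatMessage } from "llamaindex";

declare const chatHistory: ChatMessage[];
declare function generateNextQuestions(
  history: ChatMessage[],
): Promise<string[]>;

const onFinal = (content: string) => {
  // Derive a new array instead of push()-ing into the shared one, so a
  // concurrent reader never observes a half-updated history.
  const historyWithAnswer: ChatMessage[] = [
    ...chatHistory,
    { role: "assistant", content },
  ];
  void generateNextQuestions(historyWithAnswer);
};
```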
Add error handling for the `generateNextQuestions` promise.

Currently, if `generateNextQuestions(chatHistory)` rejects, the error will be silently ignored, making debugging difficult. Consider adding a `.catch` block to handle potential errors.

Apply the following change to add error handling:
```diff
 generateNextQuestions(chatHistory)
   .then((questions: string[]) => {
     if (questions.length > 0) {
       vercelStreamData.appendMessageAnnotation({
         type: "suggested_questions",
         data: questions,
       });
     }
   })
+  .catch((error) => {
+    console.error("Error generating next questions:", error);
+  })
   .finally(() => {
     vercelStreamData.close();
   });
```

Committable suggestion:
```ts
const onFinal = (content: string) => {
  chatHistory.push({ role: "assistant", content: content });
  generateNextQuestions(chatHistory)
    .then((questions: string[]) => {
      if (questions.length > 0) {
        vercelStreamData.appendMessageAnnotation({
          type: "suggested_questions",
          data: questions,
        });
      }
    })
    .catch((error) => {
      console.error("Error generating next questions:", error);
    })
    .finally(() => {
      vercelStreamData.close();
    });
};
```
Snippet under review:

```ts
// Setup callbacks
const callbackManager = createCallbackManager(vercelStreamData);
const chatHistory: ChatMessage[] = messages as ChatMessage[];
```
Type casting between `Message[]` and `ChatMessage[]` may cause issues.

Casting `messages` of type `Message[]` to `ChatMessage[]` using `as` may lead to type-safety issues if the structures of `Message` and `ChatMessage` differ. Consider explicitly mapping the messages to ensure compatibility.
Apply the following change to map the messages properly:

```diff
-const chatHistory: ChatMessage[] = messages as ChatMessage[];
+const chatHistory: ChatMessage[] = messages.map((message) => ({
+  role: message.role,
+  content: message.content,
+}));
```

Committable suggestion:
```ts
const chatHistory: ChatMessage[] = messages.map((message) => ({
  role: message.role,
  content: message.content,
}));
```
depends on vercel/ai#3064 being released
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Chores
Updated the `ai` dependency to enhance application functionality.