Patch for using LCEL to stream from LLM #5873
Comments
Hey there, @welljsjs! 👋 I'm here to help you with any bugs, questions, or contributions you may have. Let's squash those bugs together!

To address the issue, you can apply the following patch to the generated llms.js:

diff --git a/node_modules/@langchain/core/dist/language_models/llms.js b/node_modules/@langchain/core/dist/language_models/llms.js
index 70466ae..06e0349 100644
--- a/node_modules/@langchain/core/dist/language_models/llms.js
+++ b/node_modules/@langchain/core/dist/language_models/llms.js
@@ -62,7 +62,7 @@ export class BaseLLM extends BaseLanguageModel {
             text: "",
         });
         try {
-            for await (const chunk of this._streamResponseChunks(input.toString(), callOptions, runManagers?.[0])) {
+            for await (const chunk of this._streamResponseChunks(prompt.toString(), callOptions, runManagers?.[0])) {
                 if (!generation) {
                     generation = chunk;
                 }

This change ensures that the formatted prompt text, rather than the stringified input object ("[object Object]"), is passed to _streamResponseChunks.
@jacoblee93, this one's a bit out of my wheelhouse, could you step in and provide your guidance?
Thanks for reporting and for the fix! Seems specifically for LLMs.
Checked other resources
Example Code
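(The original report did not include a runnable snippet here. The following is a minimal sketch of the kind of chain described in the Description below, assuming the OpenAI text-completion model from @langchain/openai and made-up prompt/input values; adjust the model and prompt to your setup.)

```ts
import { OpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Minimal LCEL chain: prompt -> LLM (text completion, i.e. a BaseLLM) -> string parser.
const prompt = PromptTemplate.fromTemplate(
  "Answer the question using the context.\nContext: {context}\nQuestion: {question}"
);
const llm = new OpenAI({ modelName: "gpt-3.5-turbo-instruct", temperature: 0 });
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const input = {
  context: "LCEL is the LangChain Expression Language.",
  question: "What is LCEL?",
};

// invoke() returns the complete answer.
const invoked = await chain.invoke(input);

// stream() should yield the same answer incrementally, but before the fix the
// streamed output is unrelated to the input because the LLM receives
// "[object Object]" instead of the formatted prompt.
let streamed = "";
for await (const chunk of await chain.stream(input)) {
  streamed += chunk;
}

console.log({ invoked, streamed });
```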
Error Message and Stack Trace (if applicable)
No response
Description
I'm trying to use LangChainJS to set up a simple RAG pipeline with an LLM, calling "stream" on the chain rather than "invoke" so that the LLM output is streamed.
Expected behaviour is that, given the same input, the final output of calling stream on the chain is the same as that of calling invoke.
Instead, the outputs differ: the output when calling stream on the chain does not seem to make any sense at all and is not related to the input. This is due to a bug in the llms.js file, which I've managed to fix; see the patch below. Within the implementation of the "async *_streamIterator" method of the BaseLLM class (line 65), the prompt value should be passed as the first argument to "this._streamResponseChunks". However, instead of passing the prompt value as a string (which would be "prompt.toString()"), "input.toString()" is passed. This is a bug because "input.toString()" always evaluates to the string "[object Object]", not to the content of the prompt.
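(For illustration only, not from the original report: a small sketch of why the wrong argument produces nonsense output, assuming StringPromptValue from @langchain/core/prompt_values and a made-up input object.)

```ts
import { StringPromptValue } from "@langchain/core/prompt_values";

// The chain input is a plain object, so its default toString() is
// Object.prototype.toString, which yields "[object Object]".
const input = { context: "...", question: "What is LCEL?" };
console.log(input.toString()); // "[object Object]"

// The formatted prompt value overrides toString() to return the prompt text,
// which is what should reach _streamResponseChunks.
const prompt = new StringPromptValue("Answer the question: What is LCEL?");
console.log(prompt.toString()); // "Answer the question: What is LCEL?"
```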
Note that my patch works for the generated JS, not the TS.
For TS, this line: https://github.com/langchain-ai/langchainjs/blob/b311ec5c19cd4ab7aad116e81fb1ea33c5d71a8d/langchain-core/src/language_models/llms.ts#L159C11-L159C25 should be changed to prompt.toString() instead of input.toString().
System Info
ProductName: macOS
ProductVersion: 12.7.5
BuildVersion: 21H1222
NodeVersion: v22.3.0
NPMVersion: 10.8.1
LangChainVersion: 0.2.6