
Fix output final text for HuggingFaceTextGenInference when streaming #6211

Merged
merged 1 commit into from
Jun 19, 2023

Conversation

janpawellek
Contributor

The LLM integration HuggingFaceTextGenInference already has streaming support.

However, when streaming is enabled, it always returns an empty string as the final output text once the LLM finishes. This is because `text` is initialized to an empty string and never updated as tokens arrive.

This PR fixes the collection of the final output text by concatenating new tokens.
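A minimal sketch of the fix pattern described above (this is a simplified, hypothetical illustration, not the actual `HuggingFaceTextGenInference` source): before the fix, `text` was set to `""` once and never updated, so the final return value was always empty even though tokens streamed correctly.

```python
def stream_and_collect(token_stream, on_token=None):
    """Stream tokens to a callback while collecting the final output text.

    `token_stream` is any iterable of string tokens; `on_token` stands in
    for a streaming callback handler (both names are illustrative).
    """
    text = ""
    for token in token_stream:
        if on_token is not None:
            on_token(token)  # emit the token to the streaming consumer
        text += token  # the fix: concatenate each new token into the result
    # Before the fix, this returned the never-updated initial "" here.
    return text

final = stream_and_collect(iter(["Hello", ",", " world"]))
```

With the concatenation in place, the streamed tokens and the returned final text agree.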

Contributor

@hwchase17 hwchase17 left a comment


thanks

@hwchase17 hwchase17 merged commit ea6a5b0 into langchain-ai:master Jun 19, 2023