
improved handling of LLM errors, do not retry if already began #1298

Merged: 4 commits, Dec 25, 2024

Conversation

davidzhao (Member) commented Dec 25, 2024

The LLM could time out after producing initial tokens; in those cases the inference request should not be retried. Instead, the error is propagated to the caller (PipelineAgent), which can handle it similarly to an interruption (i.e., flagging it and performing new inference).
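The behavior described above can be sketched as a retry wrapper that gives up retrying as soon as any token has reached the caller. This is an illustrative sketch only, not the actual livekit-agents implementation; `APIError` and `run_llm_with_retry` are hypothetical names introduced here for the example.

```python
import asyncio


class APIError(Exception):
    """Hypothetical stand-in for an LLM provider/transport error."""


async def run_llm_with_retry(stream_fn, max_retries=3):
    """Stream tokens from an LLM, retrying only if nothing was produced yet.

    Once a token has been yielded to the caller, a mid-stream failure is
    propagated instead of retried, so the caller can treat it like an
    interruption (flag it and run a fresh inference).
    """
    for attempt in range(max_retries + 1):
        produced_tokens = False
        try:
            async for token in stream_fn():
                produced_tokens = True
                yield token
            return  # stream completed normally
        except APIError:
            if produced_tokens or attempt == max_retries:
                # Tokens already reached the caller (or retries exhausted):
                # do not retry; let the caller handle it.
                raise
            # Nothing was emitted yet: safe to retry with a small backoff.
            await asyncio.sleep(0.1 * (attempt + 1))
```

A caller would consume this like any async token stream, catching the error and re-running inference if the stream fails after partial output.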

@davidzhao davidzhao requested a review from theomonnom December 25, 2024 20:07

changeset-bot bot commented Dec 25, 2024

🦋 Changeset detected

Latest commit: fdeee56

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 3 packages:

Name                       Type
livekit-plugins-anthropic  Patch
livekit-plugins-openai     Patch
livekit-agents             Patch

