
Missing Tokens from Beginning of any Streaming text content after 0.4.* #2428

@newsbubbles

Initial Checks

Description

Meeting Notes from Troubleshooting Session

Meeting Purpose

Investigate and resolve streaming response issues with Kimi K2 and Claude models on the staging environment.

Key Takeaways

  • The issue of missing initial tokens in streaming responses was caused by upgrading Pydantic AI from v0.3.x to v0.4.x, not by the Kimi K2 or Claude models as initially suspected
  • Downgrading Pydantic AI to v0.3.7 resolved the issue
  • The team will pin Pydantic AI to v0.3.7 temporarily until the problem is fixed in a newer version
  • Detailed notes will be shared with the Pydantic AI and OpenRouter communities to report the bug

Notes

Initial Problem Description

  • Streaming responses on staging were missing their initial tokens/words
  • The issue wasn't present in the production environment
  • Initially suspected to be related to the Kimi K2 model or OpenRouter
  • The final message includes the missing tokens, while the streamed text does not (see the sketch after this list)
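
For concreteness, here is a minimal sketch of how the mismatch shows up. The model string and prompt are placeholders, and it assumes pydantic-ai's `Agent.run_stream` / `stream_text` streaming API as it exists across the 0.3.x–0.5.x range:

```python
import asyncio

from pydantic_ai import Agent

# Model string is a placeholder; we see the same behaviour with every model.
agent = Agent('openai:gpt-4o')

async def main() -> None:
    parts: list[str] = []
    async with agent.run_stream('Say hello in one short sentence.') as result:
        # Collect the raw streamed deltas as they arrive...
        async for delta in result.stream_text(delta=True):
            parts.append(delta)
        # ...then fetch the final message text for comparison.
        final = await result.get_output()
    streamed = ''.join(parts)
    print('streamed:', repr(streamed))  # on 0.4.x/0.5.0: first tokens missing
    print('final:   ', repr(final))     # complete text

asyncio.run(main())
```

On 0.3.7 the two strings match; on 0.4.x and 0.5.0 the streamed string starts a few tokens in.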

Debugging Process

  • Compared pip packages between the staging and production environments (a quick version check is sketched below)
  • Found a Pydantic AI version difference: staging on 0.4.9, production on 0.2.4
  • Downgraded Pydantic AI on staging to 0.2.4, which resolved the issue
  • Tested with a Claude model, confirming the problem wasn't model-specific
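
As a sanity check, the version diff can be confirmed from inside each environment with the standard library alone (the package names here are the suspects we compared; adjust to your install):

```python
from importlib.metadata import version

# Print the installed versions of the suspect packages in this environment.
for pkg in ('pydantic-ai', 'pydantic', 'openai'):
    print(pkg, version(pkg))
```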

Root Cause Identification

  • The issue started occurring when Pydantic AI was upgraded from 0.3.x to 0.4.x
  • The upgrade coincidentally happened on the same day as the switch to Kimi K2, which led to the initial misattribution
  • Tested Pydantic AI v0.3.7, which worked correctly with no missing tokens

Open Router Provider Investigation

  • Briefly explored using the OpenRouter provider instead of the OpenAI provider
  • Ultimately determined not to be the source of the problem (switching providers neither triggered nor fixed the bug)

GitHub Issue Search

  • Searched the Pydantic AI GitHub repository for related issues
  • Found some mentions of streaming issues, but nothing exactly matching the current problem

Version Testing

  • Tested Pydantic AI v0.5.0; the issue was still present
  • Confirmed v0.3.7 works correctly

Our Individual Solution

We have no choice but to pin our environments to pydantic-ai 0.3.7 until this issue is resolved.
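
For anyone else affected, the pin itself is a single line (shown for a plain requirements.txt; adapt to your own dependency manager):

```
pydantic-ai==0.3.7
```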

Example Code

Python, Pydantic AI & LLM client version

  • pydantic-ai 0.4.7 (initially on staging)
  • pydantic-ai 0.5.0 (tested, not working)
  • pydantic-ai 0.3.7 (tested, working)
  • pydantic-ai 0.2.4 (on production)

Any LLM shows the same problem: kimi-k2, Claude models, OpenAI models, even the new z.ai GLM 4.5. Since OpenRouter only routes the request and the issue spans multiple providers, this is what led us to believe it was a pydantic-ai issue rather than a model or provider one.
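
For completeness, here is a sketch of how our agents are wired through OpenRouter (the model name and environment variable are our placeholders; OpenRouter exposes an OpenAI-compatible endpoint, which is why the bug can't be specific to one provider):

```python
import os

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# OpenRouter exposes an OpenAI-compatible API, so we point the OpenAI
# provider at its base URL; every model routed this way shows the bug.
model = OpenAIModel(
    'moonshotai/kimi-k2',
    provider=OpenAIProvider(
        base_url='https://openrouter.ai/api/v1',
        api_key=os.environ['OPENROUTER_API_KEY'],
    ),
)
agent = Agent(model)
```

Swapping in the dedicated OpenRouter provider made no difference, as noted above.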
