Sometimes an OpenAI-compatible provider doesn't return a finish reason, causing the stream to stop prematurely. Google's Gemini OpenAI-compatible API is one example.
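For context, a consumer that treats a non-None finish_reason as its only end-of-stream signal will misbehave against such a provider. Below is a minimal sketch of a more defensive loop using the official openai Python SDK; the base URL, API key, and model name are placeholders, and this is not llmcord's actual code.

```python
from openai import OpenAI

# Placeholder endpoint and credentials for any OpenAI-compatible provider.
client = OpenAI(base_url="https://example-provider.invalid/v1", api_key="sk-placeholder")

stream = client.chat.completions.create(
    model="example-model",
    messages=[{"role": "user", "content": "Write a short story."}],
    stream=True,
)

parts = []
finish_reason = None
for chunk in stream:
    if not chunk.choices:
        continue
    choice = chunk.choices[0]
    if choice.delta and choice.delta.content:
        parts.append(choice.delta.content)
    # Don't rely on finish_reason alone: some providers leave it None on every chunk.
    if choice.finish_reason is not None:
        finish_reason = choice.finish_reason

# Falling out of the loop means the stream is exhausted; treat that as completion
# even if the provider never sent a finish_reason.
print("".join(parts))
print("finish_reason:", finish_reason or "(none sent by provider)")
```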
I never actually tested Google's OpenAI-compatible API. I just did, though, and I see the issue.
It indeed looks like finish_reason is always None, which is definitely incorrect on their part.
I also noticed that the final chunk contains both finish_reason and finishReason, and only finishReason holds the actual finish reason (stop in this case), while finish_reason (the correctly named field) remains None.
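To see both keys for yourself, it's enough to dump the raw chunk data. A quick sketch, assuming the openai Python SDK (whose pydantic models keep unknown fields in their dumps) and Google's documented compatibility endpoint; the model name is only an example:

```python
from openai import OpenAI

# Base URL is Google's documented OpenAI-compatibility endpoint; the API key
# and model name are illustrative.
client = OpenAI(
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key="YOUR_GEMINI_API_KEY",
)

stream = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)

for chunk in stream:
    # model_dump() retains unrecognized fields, so a non-standard camelCase
    # finishReason shows up alongside the spec's finish_reason.
    for choice in chunk.model_dump().get("choices", []):
        print(choice.get("finish_reason"), choice.get("finishReason"))
```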
I see other issues too, like the content of each chunk being way too long. Here are some examples I saw:
content=' carp:\n\nBartholomew "Barty" Butterfield, a taxider'
content="mist with a penchant for sherry and questionable hygiene, inherited his great-aunt Mildred's dilapidated mansion – a gothic monstrosity clinging precariously to a cliff"
content=' overlooking a churning, grey sea. Mildred, a recluse with a rumored penchant for necromancy (and possibly tax evasion), left Barty only'
Google's OpenAI-compatible API is still in beta, so hopefully we'll see these issues resolved. Otherwise I'll have to bandaid-fix it in llmcord :)
In the meantime I recommend using Google models through OpenRouter. Those have issues too...but you can avoid them by setting use_plain_responses to true.
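If a bandaid fix does become necessary, one possible shape for it is a small normalization helper along these lines (a hypothetical sketch, not llmcord's actual code), with stream exhaustion as the last-resort completion signal when neither key is ever set:

```python
from typing import Any, Optional

def extract_finish_reason(choice_data: dict[str, Any]) -> Optional[str]:
    """Return the finish reason from a streamed choice dict, tolerating
    providers that send camelCase finishReason instead of finish_reason."""
    return choice_data.get("finish_reason") or choice_data.get("finishReason")

# Example: the shape Gemini's compatibility layer appears to send on its final chunk.
print(extract_finish_reason({"finish_reason": None, "finishReason": "stop"}))  # -> "stop"
```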