[Question] Empty string returned when using google/gemma-2-9b-it model of Together AI API #90
Comments
Hello! In which context is the call to the LLM made? It could be that the call had `terminators = ['\n']` and the LLM responded with a newline, so the output was cut off. Could you share more of the code you are running?
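To illustrate the hypothesis above: if the sampling call truncates the completion at the first terminator string, a reply that *begins* with a newline collapses to an empty string. This is a minimal sketch of that truncation logic (the function name is illustrative, not Concordia's actual API):

```python
def truncate_at_terminators(text: str, terminators: list[str]) -> str:
    """Cut the text at the first occurrence of any terminator string."""
    cut = len(text)
    for term in terminators:
        idx = text.find(term)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# A model reply that starts with a newline is cut to an empty string:
print(repr(truncate_at_terminators("\nHello there!", ["\n"])))  # ''
# A reply with the newline later keeps the leading text:
print(repr(truncate_at_terminators("Hello\nthere!", ["\n"])))   # 'Hello'
```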
Thanks for your quick response! I am currently running `my_agent.py` from `agent_development.ipynb` on Colab.
After that, during the running simulation with
I think there should be an output generated next to the agent name, based on the language model, but no output is generated, as shown above.
Hi! What is most likely happening is that `call_limit_wrapper.CallLimitLanguageModel` has reached its call limit. You can disable it by removing this line:
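(For context, a call-limit wrapper of this kind typically delegates to a wrapped model until a call budget is exhausted, after which it stops making real API calls. The sketch below is a simplified illustration of that pattern, not the actual `call_limit_wrapper.CallLimitLanguageModel` implementation, which may raise or behave differently once over budget.)

```python
class CallLimitedModel:
    """Simplified sketch: delegate to a wrapped model until a call budget runs out."""

    def __init__(self, model, max_calls: int):
        self._model = model
        self._max_calls = max_calls
        self._calls = 0

    def sample_text(self, prompt: str) -> str:
        self._calls += 1
        if self._calls > self._max_calls:
            # Over budget: return an empty string instead of calling the API,
            # which would look exactly like the symptom in this issue.
            return ''
        return self._model.sample_text(prompt)


class EchoModel:
    """Stand-in for a real LLM client."""

    def sample_text(self, prompt: str) -> str:
        return f'response to: {prompt}'


limited = CallLimitedModel(EchoModel(), max_calls=2)
print(limited.sample_text('a'))  # normal response
print(limited.sample_text('b'))  # normal response
print(limited.sample_text('c'))  # '' once over the limit
```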
Thank you for the response! Unfortunately, I tried removing the line
Is the issue that it always generates empty strings, or that it sometimes generates empty strings? Do you ever see normal responses from this LLM?
@jzleibo Thank you for the response! I am not sure, but I think it always generates empty strings, because I do not see normal responses from the LLM in the output logs or the HTML logs (the image below is an example of the HTML log's conversation scene). I also print the
Hey! We reproduced the issue and are looking into it.
Oh, I see! Thanks!
Any updates on this? @vezhnick
The For now, the plan is still to use
We are updating the contest Slack with the latest as this goes on: |
Hello,
I recently went through the `agent_development.ipynb` tutorial. I used `API_TYPE='together_ai'` and `MODEL_NAME='google/gemma-2-9b-it'`, as recommended in the tutorial. I noticed that during the `runnable_simulation()` process, the LLM generates an empty string (`''`). In the API response from together_ai, I found that `result = response.choices[0].message.content` returns an empty string (`''`). Is there any way to fix this?
Thank you!
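One way to narrow down whether the empty string comes from the API itself is to inspect the choice's finish reason before extracting the text. The helper below assumes an OpenAI-style response shape (`choices[0].message.content`, `choices[0].finish_reason`), which is what the snippet in this issue suggests; field names may differ for other clients. The stub object stands in for a real API response:

```python
from types import SimpleNamespace


def extract_text(response) -> str:
    """Pull the completion text out of an OpenAI-style chat response,
    logging a diagnostic when the model returned an empty string."""
    choice = response.choices[0]
    text = choice.message.content or ''
    if not text.strip():
        # An empty completion: the finish reason often explains why
        # (e.g. a stop sequence fired, or the token limit was hit).
        print(f'Empty completion; finish_reason={choice.finish_reason!r}')
    return text


# Stub response mimicking the empty-string case from this issue:
fake = SimpleNamespace(
    choices=[SimpleNamespace(
        message=SimpleNamespace(content=''),
        finish_reason='stop')])
print(repr(extract_text(fake)))  # ''
```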