# Error while using async conversation #674
Using the model, I get this error:

```
Traceback (most recent call last):
  File "/private/tmp/temptest.py", line 17, in <module>
    asyncio.run(run())
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/private/tmp/temptest.py", line 12, in run
    response2 = await conversation.prompt("again")
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm/models.py", line 471, in _force
    async for _ in self:
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm/models.py", line 458, in __anext__
    chunk = await self._generator.__anext__()
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm_gemini.py", line 271, in execute
    async with client.stream(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/contextlib.py", line 181, in __aenter__
    return await self.gen.__anext__()
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 1604, in stream
    request = self.build_request(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 357, in build_request
    return Request(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_models.py", line 340, in __init__
    headers, stream = encode_request(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_content.py", line 212, in encode_request
    return encode_json(json)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_content.py", line 175, in encode_json
    body = json_dumps(json).encode("utf-8")
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type coroutine is not JSON serializable
sys:1: RuntimeWarning: coroutine 'AsyncResponse.text' was never awaited
```
The sync version of the code works:

```python
import llm

model = llm.get_model("llama3.2")
conversation = model.conversation()

def run():
    response = conversation.prompt("joke")
    text = response.text()
    response2 = conversation.prompt("again")
    text2 = response2.text()
    print(text, text2)

run()
```

```
(.llm) sukhbindersingh@sukhMacPro llm % python tests/test_real.py
Why don't scientists trust atoms?
Because they make up everything!
Why don't scientists trust atoms?
Because they make up everything! What do you call a fake noodle?
An impasta.
Here's one:
What do you call a fake noodle?
An impasta.
```
I have the same issue. I believe the problem is that subsequent calls in the conversation try to include the previous responses, but the handling of those previous responses does not account for an instance of `AsyncResponse`. With llm-gemini, the request body is built here, which in turn tries to get the text from previous responses in the conversation here. Because each of those responses is an instance of `AsyncResponse`, its `text()` method returns a coroutine rather than a string, and we get the `TypeError: Object of type coroutine is not JSON serializable` shown above. As a workaround, I override the …
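A minimal self-contained sketch of that failure mode (the class below is a stand-in, not llm's actual `AsyncResponse`): calling `.text()` without awaiting it puts a coroutine object into the request body, which `json.dumps` then rejects.

```python
import json

class FakeAsyncResponse:
    """Stand-in for llm's AsyncResponse: text() is a coroutine function."""
    async def text(self):
        return "previous reply"

resp = FakeAsyncResponse()

# The sync-oriented history handling effectively does this: it calls
# text() without awaiting it, so a coroutine object lands in the body.
body = {"contents": [{"parts": [{"text": resp.text()}]}]}

try:
    json.dumps(body)
except TypeError as e:
    print(e)  # Object of type coroutine is not JSON serializable
```

Running this also emits the analogous `RuntimeWarning: coroutine 'FakeAsyncResponse.text' was never awaited` on exit, matching the last line of the traceback above.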
Refs: simonw/llm#674
Co-authored-by: Sukhbinder Singh <sukhbindersingh@gmail.com>
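The cross-referenced commit itself isn't shown in this thread. If it is the fix, one plausible shape for it, sketched with assumed names (`build_messages`, `conversation.responses`, and the attribute paths are illustrative, not llm-gemini's actual identifiers), is to make history-building async so prior responses can be awaited before serialization:

```python
import inspect

async def build_messages(conversation):
    """Sketch: resolve each prior response (sync or async) before serializing it."""
    messages = []
    for prev in conversation.responses:  # attribute name assumed
        messages.append({"role": "user", "parts": [{"text": prev.prompt.prompt}]})
        text = prev.text()
        if inspect.isawaitable(text):  # AsyncResponse.text() returns a coroutine
            text = await text          # await it instead of handing it to json.dumps
        messages.append({"role": "model", "parts": [{"text": text}]})
    return messages
```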
I tried this:
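(The snippet itself didn't survive in this copy; reconstructed from the traceback above, it was presumably along these lines, with the exact Gemini model id being a guess:)

```python
import asyncio
import llm

# Model id is a guess; the traceback shows llm-gemini handling the request.
model = llm.get_async_model("gemini-1.5-flash")
conversation = model.conversation()

async def run():
    response = await conversation.prompt("joke")
    text = await response.text()
    response2 = await conversation.prompt("again")  # the call that raises above
    text2 = await response2.text()
    print(text, text2)

asyncio.run(run())
```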
This fails with the error shown at the top of this issue.
@simonw Am I doing something wrong? This works if I replace `conversation.prompt` with `model.prompt`, but then there is no conversation history, as expected.