Error while using async conversation #674

Open
sukhbinder opened this issue Dec 14, 2024 · 4 comments
@sukhbinder
Contributor

I tried this:

import asyncio
import llm

model = llm.get_async_model("llama3.2")

conversation = model.conversation()


async def run():
    response = await conversation.prompt("joke")
    text = await response.text()
    response2 = await conversation.prompt("again")
    text2 = await response2.text()
    print(text, text2)


asyncio.run(run())

This fails with this error:

File "/private/tmp/temptest.py", line 17, in <module>
    asyncio.run(run())
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/private/tmp/temptest.py", line 12, in run
    response2 = await conversation.prompt("again")
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm/models.py", line 471, in _force
    async for _ in self:
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm/models.py", line 458, in __anext__
    chunk = await self._generator.__anext__()
  File "/Users/sukhbindersingh/PROJECTS/llm-ollama/llm_ollama.py", line 260, in execute
    raise RuntimeError(f"Async execution failed: {e}") from e
RuntimeError: Async execution failed: 1 validation error for Message
content
  Input should be a valid string [type=string_type, input_value=<coroutine object AsyncRe....text at 0x7fa261a8ac40>, input_type=coroutine]
    For further information visit https://errors.pydantic.dev/2.10/v/string_type
sys:1: RuntimeWarning: coroutine 'AsyncResponse.text' was never awaited

@simonw Am I doing something wrong? This works if I replace conversation.prompt with model.prompt, but then there is no conversation history, as expected.
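
For reference, a minimal sketch of the variant that does work, calling model.prompt directly (each prompt is then independent, so the plugin never has to read text from a previous AsyncResponse):

import asyncio
import llm

model = llm.get_async_model("llama3.2")


async def run():
    # Each model.prompt() call starts a fresh exchange, so no prior
    # AsyncResponse is ever serialized into the request body
    response = await model.prompt("joke")
    print(await response.text())
    response2 = await model.prompt("again")
    print(await response2.text())


asyncio.run(run())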

@sukhbinder changed the title from "Async Conversation" to "Error while using async conversation" on Dec 14, 2024
@sukhbinder
Contributor Author

Using the model gemini-2.0-flash-exp, I get this error:

  File "/private/tmp/temptest.py", line 17, in <module>
    asyncio.run(run())
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/private/tmp/temptest.py", line 12, in run
    response2 = await conversation.prompt("again")
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm/models.py", line 471, in _force
    async for _ in self:
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm/models.py", line 458, in __anext__
    chunk = await self._generator.__anext__()
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/llm_gemini.py", line 271, in execute
    async with client.stream(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/contextlib.py", line 181, in __aenter__
    return await self.gen.__anext__()
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 1604, in stream
    request = self.build_request(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 357, in build_request
    return Request(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_models.py", line 340, in __init__
    headers, stream = encode_request(
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_content.py", line 212, in encode_request
    return encode_json(json)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/site-packages/httpx/_content.py", line 175, in encode_json
    body = json_dumps(json).encode("utf-8")
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/Users/sukhbindersingh/opt/anaconda3/lib/python3.9/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type coroutine is not JSON serializable
sys:1: RuntimeWarning: coroutine 'AsyncResponse.text' was never awaited

@sukhbinder
Contributor Author

sukhbinder commented Dec 17, 2024

The sync version of the code works:

import llm

model = llm.get_model("llama3.2")
conversation = model.conversation()

def run():
    response = conversation.prompt("joke")
    text = response.text()
    response2 = conversation.prompt("again")
    text2 = response2.text()
    print(text, text2)

run()
(.llm) sukhbindersingh@sukhMacPro llm % python tests/test_real.py
Why don't scientists trust atoms?

Because they make up everything!
Why don't scientists trust atoms?

Because they make up everything! What do you call a fake noodle?

An impasta.
Here's one:

What do you call a fake noodle?

An impasta.

@KiwiPolarBear

I have the same issue. I believe the problem is that subsequent calls in the conversation try to include previous responses; however, the handling of those previous responses does not account for an instance of AsyncResponse.

With llm-gemini, the request body is built here, which in turn tries to get the text from previous responses in the conversation here. But because the response is an instance of AsyncResponse, we get the error coroutine 'AsyncResponse.text' was never awaited.

As a workaround, I override the text method of the response to make it synchronous, but this is pretty jank...

response = await conversation.prompt(prompt)
result = await response.text()
# Replace the async text() with a sync lambda so the next prompt can
# read this response's text without awaiting it
response.text = lambda: result
response2 = await conversation.prompt(prompt)
result2 = await response2.text()
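
A cleaner fix presumably belongs in the plugins themselves: execute() is already async, so it can await the previous responses' text while assembling the request body. A rough sketch of that idea, assuming the conversation exposes a .responses list as in llm's models; build_messages here is illustrative, not the plugins' actual code:

async def build_messages(conversation):
    # Illustrative only; llm-gemini and llm-ollama structure this differently
    messages = []
    for prev in conversation.responses:
        messages.append({"role": "user", "content": prev.prompt.prompt})
        # Await the coroutine instead of letting it leak into the JSON
        # body, which is what produced the errors above
        messages.append({"role": "assistant", "content": await prev.text()})
    return messages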

@sukhbinder
Contributor Author

sukhbinder commented Dec 19, 2024

I have pushed a fix for this in the llm-gemini plugin repo (the PR is here) and in the llm-ollama plugin repo (PR 25).

sukhbinder pushed a commit to sukhbinder/llm-ollama that referenced this issue Dec 19, 2024
simonw pushed a commit to simonw/llm-gemini that referenced this issue Dec 19, 2024