Replies: 1 comment · 1 reply
Hi @jingwangfe, can you try the most recent version (0.11.0) and see if you can reproduce this issue? Also try upgrading your dependencies.
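As a quick sanity check before upgrading, a one-liner (standard library only) confirms which version is actually installed:

```python
# Print the installed nemoguardrails version; upgrade afterwards with:
#   pip install -U nemoguardrails
from importlib.metadata import version

print(version("nemoguardrails"))
```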
I am trying to integrate nemoguardrails into my RAG application (built with the LangChain framework). The RAG can use both GPT and Vertex AI (Gemini) models. With GPT everything works perfectly, no errors. However, when I try to use Gemini I get this error:

**Error in ask**: LLM Call Exception:

```
Task <Task pending name='Task-2953' coro=<BaseChatModel._agenerate_with_cache() running at /app/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:715> cb=[gather.<locals>._done_callback() at /usr/local/lib/python3.10/asyncio/tasks.py:720]> got Future <Task pending name='Task-2954' coro=<UnaryUnaryCall._invoke() running at /app/.venv/lib/python3.10/site-packages/grpc/aio/_call.py:577>> attached to a different loop
```
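For context on what the message means: asyncio raises this RuntimeError when a Task created on one event loop is awaited from a different loop (here, the `grpc.aio` call that Vertex AI issues under the hood). A minimal, dependency-free sketch (not from the thread) that reproduces the same class of error:

```python
import asyncio

async def work():
    await asyncio.sleep(0.1)

# The task is created on (and bound to) loop A...
loop_a = asyncio.new_event_loop()
task = loop_a.create_task(work())

async def await_from_other_loop():
    await task  # ...but awaited from loop B.

# Raises: "Task ... got Future ... attached to a different loop"
loop_b = asyncio.new_event_loop()
try:
    loop_b.run_until_complete(await_from_other_loop())
except RuntimeError as exc:
    print(exc)
finally:
    task.cancel()
    loop_a.close()
    loop_b.close()
```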
This is my config. I followed this documentation ("Add Guardrails to a Chain"): https://docs.nvidia.com/nemo/guardrails/user_guides/langchain/langchain-integration.html
```python
# ...
self.guardrails_config = RailsConfig.from_path("./data/configs_guardrails")
# ...
return RunnableRails(self.guardrails_config, llm, input_key="question",
                     output_key="answer", runnable=chain)
# ...
```