Hello,
I've developed a simple RAG example with Guardrails, based on content extracted from Wikipedia. It works, but I've run into a problem I can't solve: the results are truncated.
from nemoguardrails import LLMRails, RailsConfig
from langchain.chat_models import ChatOpenAI

# Read the Colang configuration file
with open("astro.co", encoding="utf-8") as f:
    colang = f.read()
config = RailsConfig.from_content(colang_content=colang)

# Initialize the rails and register the custom actions
# (`retrieve` and `rag` are defined elsewhere in my code)
rag_rails = LLMRails(config, llm=ChatOpenAI(max_tokens=3000))
rag_rails.register_action(action=retrieve, name="retrieve")
rag_rails.register_action(action=rag, name="rag")

rag_rails.generate(prompt="Quelle est la différence entre une étoile à neutrons et un magnétar ?")
RAG Called
' La principale différence entre une étoile à neutrons et un magnétar est que les magnétars possèdent un champ magnétique extrêmement intense et émettent un rayonnement électromagnétique de haute énergie (rayons X et rayons gamma). Les étoiles à neutrons possèdent également un champ magnétique, mais il est moins intense et ell'
You can see that the result is suddenly truncated in the middle of a word ("elle" is cut to "ell").
As you can see, I've already tried switching from text-davinci to GPT-3.5 and setting max_tokens to 3000. Neither seems to affect the result, which keeps getting truncated at the same length.
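In case it's relevant, here is a minimal sketch of how I would pin the model explicitly on both sides, based on my reading of the docs (the yaml_content argument and the models: section come from the NeMo Guardrails configuration format; the gpt-3.5-turbo model name and the yaml_config variable are only illustrative):

from nemoguardrails import LLMRails, RailsConfig
from langchain.chat_models import ChatOpenAI

# Pin the main model explicitly in the guardrails YAML config
# (format taken from the NeMo Guardrails configuration docs;
# "gpt-3.5-turbo" is only an example model name)
yaml_config = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml_config)

# ...and set max_tokens on the LangChain LLM passed to the rails as well
llm = ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=3000)
rails = LLMRails(config, llm=llm)

Even with both of these set, the output still stops at the same point.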
Any ideas are welcome...
Bernard