Added NeMo Guardrails Chat component #3331
Conversation
awesome! can we do
Yes! I did it and now it has just 3 warnings
```typescript
_generate(messages: BaseMessage[], options: this['ParsedCallOptions'], runManager?: CallbackManagerForLLMRun): Promise<ChatResult> {
    const generate = async (messages: BaseMessage[], client: NemoClient): Promise<ChatResult> => {
        const chatMessages = await client.chat(messages)
        const generations = chatMessages.map((message) => {
```
can we add `await runManager?.handleLLMNewToken(token ?? '')` so that we can trace it for analytics as well
Hi @HenryHengZJ
As I understand it, `await runManager...` is used when the request is streaming. In this case, my request is direct. Do you think this call is really necessary?
This is used for both; you can do this:

```typescript
await _runManager?.handleLLMNewToken(generationResult.generations?.length ? generationResult.generations[0].text : '')
```
I got it. I added this line of code (with some adjustments) and it is working fine. Thanks for this!
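For reference, a minimal sketch of where that call could land, assuming the `generate` helper and `NemoClient` from the snippet above (the author's exact adjustments are not shown in this thread):

```typescript
// Inside _generate: run the direct (non-streaming) request, then surface the
// final text through the callback manager so handlers can trace it.
const result = await generate(messages, this.client)
await runManager?.handleLLMNewToken(result.generations?.length ? result.generations[0].text : '')
return result
```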
```typescript
const chatMessages = await client.chat(messages)
const generations = chatMessages.map((message) => {
    return {
        text: '',
```
Should the `text` be empty?
Nope, I just fixed it. :)
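Given the commit note below ("generation text has been updated with content string"), the fix plausibly maps the returned message's content into `text`; the exact field name is an assumption:

```typescript
const generations = chatMessages.map((message) => ({
    // Assumed field: use the returned message's content rather than an empty string
    text: message.content,
    message
}))
```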
```typescript
    ]
}

async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
```
We can just have `async init(nodeData: INodeData)`
I got it. Done!
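A sketch of the simplified signature, assuming the two unused parameters are simply dropped and the node's inputs carry the connection details (the input names and the `ChatNemoGuardrails` class name here are hypothetical stand-ins for the component's own):

```typescript
async init(nodeData: INodeData): Promise<any> {
    // Hypothetical input names for illustration; the real node defines its own
    const baseUrl = nodeData.inputs?.baseUrl as string
    const configurationId = nodeData.inputs?.configurationId as string
    // ChatNemoGuardrails stands in for the component's chat model class
    return new ChatNemoGuardrails({ baseUrl, configurationId })
}
```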
fix: generation text has been updated with content string
Thank you @patrickreinan! For the next iteration, we can add support for streaming as well.
You are welcome. Yes, let's do it!
NeMo Guardrails Chat Component
This is a new component that allows connecting to LLM models through NVIDIA NeMo Guardrails. It is an important component for applications that need to decouple conversational policies from their chatflows.
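As a rough illustration of that decoupling, a minimal client call against a NeMo Guardrails server could look like the sketch below. The endpoint follows the server's chat API; treat the exact request and response field names as assumptions to verify against your server version.

```typescript
// Minimal sketch: send a user message to a NeMo Guardrails server and read the
// guarded reply. Field names are assumptions based on the server's chat API.
async function guardrailsChat(baseUrl: string, configId: string, userInput: string): Promise<string> {
    const response = await fetch(`${baseUrl}/v1/chat/completions`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            config_id: configId, // which guardrails configuration the server should apply
            messages: [{ role: 'user', content: userInput }]
        })
    })
    const data = await response.json()
    // The server replies with the assistant message after applying the rails
    return data.messages?.[0]?.content ?? ''
}
```

Because the rails live in the server-side configuration selected by `config_id`, the chatflow itself stays free of policy logic.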