
Added NeMo Guardrails Chat component #3331

Conversation

@patrickreinan (Contributor)

NeMo Guardrails Chat Component

This is a new component that allows connecting to LLM models through NVIDIA NeMo Guardrails. It is an important component for applications that need to decouple conversational policies from their chatflows.
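For readers unfamiliar with the setup, here is a minimal sketch of what the NemoClient referenced in the diff below could look like, assuming the NeMo Guardrails server's /v1/chat/completions endpoint; the field names baseUrl and configurationId are illustrative, not necessarily the component's actual API:

    import { AIMessage, BaseMessage } from '@langchain/core/messages'

    // Sketch of a client for the NeMo Guardrails server; assumes the
    // server's /v1/chat/completions endpoint, which accepts a config_id
    // plus a list of role/content messages.
    class NemoClient {
        constructor(
            private readonly baseUrl: string,
            private readonly configurationId: string
        ) {}

        async chat(messages: BaseMessage[]): Promise<BaseMessage[]> {
            const response = await fetch(`${this.baseUrl}/v1/chat/completions`, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({
                    config_id: this.configurationId,
                    messages: messages.map((m) => ({
                        role: m._getType() === 'human' ? 'user' : 'assistant',
                        content: m.content
                    }))
                })
            })
            const json = await response.json()
            // The server replies with the guardrailed assistant message(s)
            return json.messages.map((m: { content: string }) => new AIMessage(m.content))
        }
    }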

@HenryHengZJ (Contributor)

awesome! can we do pnpm lint-fix to get rid of the linting issues?

@patrickreinan (Contributor Author)

Yes! I did it and now there are just 3 warnings

_generate(messages: BaseMessage[], options: this['ParsedCallOptions'], runManager?: CallbackManagerForLLMRun): Promise<ChatResult> {
    const generate = async (messages: BaseMessage[], client: NemoClient): Promise<ChatResult> => {
        const chatMessages = await client.chat(messages)
        const generations = chatMessages.map((message) => {
@HenryHengZJ (Contributor)

can we add await runManager?.handleLLMNewToken(token ?? '') so that we can trace it for analytics as well?

@patrickreinan (Contributor Author)

Hi @HenryHengZJ

I noticed this await runManager... is used when the request is streaming. In this case, my request is direct. Do you think using this call is really necessary?

@HenryHengZJ (Contributor)

this is used for both, you can do this:

await _runManager?.handleLLMNewToken(generationResult.generations?.length ? generationResult.generations[0].text : '')

@patrickreinan (Contributor Author)

I got it. I added this line of code (with some adjustments) and it is working fine. Thanks for this!
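For readers following the thread, the adjusted code plausibly ends up along these lines inside the generate closure (a sketch reconstructed from the diff context; the result variable name is assumed, not the exact committed code):

    const result: ChatResult = { generations }
    // Report the final text to the callback manager so tracing/analytics
    // handlers fire for this direct (non-streaming) call as well
    await runManager?.handleLLMNewToken(result.generations?.length ? result.generations[0].text : '')
    return result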

const chatMessages = await client.chat(messages)
const generations = chatMessages.map((message) => {
    return {
        text: '',
@HenryHengZJ (Contributor)

should the text be empty?

@patrickreinan (Contributor Author)

Nope, I just fixed it. :)

]
}
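The fix presumably populates text from the message content, along these lines (a sketch, not the exact committed code):

    const chatMessages = await client.chat(messages)
    const generations = chatMessages.map((message) => {
        return {
            // Mirror the guardrailed reply into `text` instead of leaving it empty
            text: message.content as string,
            message
        }
    })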

async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
@HenryHengZJ (Contributor)

we can just have async init(nodeData: INodeData)

@patrickreinan (Contributor Author)

I got it. Done!
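The simplified method presumably reads along these lines (a sketch; the input keys and the ChatNemoGuardrails class name are assumptions, not verified against the merged code):

    async init(nodeData: INodeData): Promise<any> {
        // Illustrative input keys; the component's actual keys may differ
        const baseUrl = nodeData.inputs?.baseUrl as string
        const configurationId = nodeData.inputs?.configurationId as string
        const client = new NemoClient(baseUrl, configurationId)
        return new ChatNemoGuardrails({ client })
    }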

@HenryHengZJ (Contributor)

thank you @patrickreinan!

For the next iteration, we can add support for streaming as well
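Should that happen, streaming could plug into LangChain's _streamResponseChunks hook on the chat model class; a hypothetical sketch, assuming NemoClient gains a stream method that yields text tokens (nothing in this PR implements this yet):

    import { AIMessageChunk, BaseMessage } from '@langchain/core/messages'
    import { ChatGenerationChunk } from '@langchain/core/outputs'
    import { CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager'

    async *_streamResponseChunks(
        messages: BaseMessage[],
        _options: this['ParsedCallOptions'],
        runManager?: CallbackManagerForLLMRun
    ): AsyncGenerator<ChatGenerationChunk> {
        // `stream` is hypothetical: an async iterable of text tokens
        for await (const token of this.client.stream(messages)) {
            yield new ChatGenerationChunk({
                text: token,
                message: new AIMessageChunk({ content: token })
            })
            // Emit each token to callbacks, mirroring the non-streaming path
            await runManager?.handleLLMNewToken(token)
        }
    }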

@HenryHengZJ merged commit 5117948 into FlowiseAI:main on Oct 17, 2024
2 checks passed
@patrickreinan (Contributor Author)

You are welcome. Yes, let's do it!
