Description of changes:
Add the Plugin feature below to the Bedrock plugin:
(Optional) Use the LLM as a fallback source of answers, using Lambda hooks with `CustomNoMatches`/`no_hits`
Optionally configure QnABot to prompt the LLM directly by configuring the LLM Plugin LambdaHook function `QnAItemLambdaHookFunctionName` as a Lambda Hook for the QnABot `CustomNoMatches` (`no_hits`) item. When QnABot cannot answer a question by any other means, it reverts to the `no_hits` item, which, when configured with this Lambda Hook function, will relay the question to the LLM.
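To illustrate the flow, here is a minimal sketch of what such a `no_hits` Lambda Hook handler could look like, assuming a Python Lambda that calls Bedrock through boto3's `bedrock-runtime` client. The event field paths and the hardcoded arguments are assumptions made for this example, not the plugin's actual implementation:

```python
import json
import boto3

# Illustrative sketch of a no_hits Lambda Hook: relay the unanswered
# question to a Bedrock model and return its completion as the answer.
bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    # The user's question (assumed field path in the QnABot hook event).
    question = event["req"]["question"]

    # In the deployed hook these would come from the QnAItemLambdaHookArgs
    # configured on the no_hits item; hardcoded here for brevity.
    args = {
        "Prefix": "LLM Answer:",
        "Model_params": {"modelId": "anthropic.claude-instant-v1", "temperature": 0},
    }
    model_params = dict(args["Model_params"])
    model_id = model_params.pop("modelId")

    # Claude text-completions request body; other Bedrock models use different schemas.
    body = {
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": 256,
        **model_params,
    }
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    answer = json.loads(response["body"].read())["completion"].strip()

    # Optionally prefix the answer; "None" disables the prefix.
    prefix = args.get("Prefix")
    if prefix and prefix != "None":
        answer = f"{prefix} {answer}"

    # Hand the answer back to QnABot as the response message.
    event["res"]["message"] = answer
    return event
```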
When your Plugin CloudFormation stack status is CREATE_COMPLETE, choose the Outputs tab. Look for the outputs `QnAItemLambdaHookFunctionName` and `QnAItemLambdaHookArgs`. Use these values in the LambdaHook section of your `no_hits` item. You can change the value of "Prefix", or use "None" if you don't want to prefix the LLM answer.
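For illustration only, with a made-up function name (use the actual values from your stack's Outputs tab), the `no_hits` item's Lambda Hook settings in Content Designer would then look something like:

```
Lambda Hook:           QNABOT-BEDROCK-LLM-QnAItemLambdaHook   (from the QnAItemLambdaHookFunctionName output)
Lambda Hook Arguments: {"Prefix": "LLM Answer:", "Model_params": {"modelId": "anthropic.claude-instant-v1", "temperature": 0}}   (from the QnAItemLambdaHookArgs output)
```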
The default behavior is to relay the user's query to the LLM as the prompt. If LLM_QUERY_GENERATION is enabled, the generated (disambiguated) query will be used; otherwise, the user's utterance is used. You can override this behavior by supplying an explicit `"Prompt"` key in the `QnAItemLambdaHookArgs` value. For example, setting `QnAItemLambdaHookArgs` to `{"Prefix": "LLM Answer:", "Model_params": {"modelId": "anthropic.claude-instant-v1", "temperature": 0}, "Prompt": "Why is the sky blue?"}` will ignore the user's input and simply use the configured prompt instead.
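A rough sketch of that prompt-selection logic, assuming the hook receives its arguments as a parsed dict; the `llm_generated_query` field name is an assumption made for this example, not necessarily the actual QnABot event schema:

```python
def choose_prompt(event, args):
    """Pick the prompt to send to the LLM (illustrative logic only)."""
    # 1. An explicit "Prompt" in QnAItemLambdaHookArgs always wins.
    if args.get("Prompt"):
        return args["Prompt"]
    # 2. If LLM_QUERY_GENERATION produced a disambiguated query, prefer it
    #    (field name assumed for this sketch).
    generated = event["req"].get("llm_generated_query", {}).get("result")
    if generated:
        return generated
    # 3. Otherwise fall back to the user's original utterance.
    return event["req"]["question"]
```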
Prompts supplied in this manner do not (yet) support variable substitution (e.g., to substitute user attributes, session attributes, etc. into the prompt). If you feel that would be a useful feature, please create a feature request issue in the repo, or, better yet, implement it and submit a Pull Request!

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.