feat: add GuardrailsEngine for llama index #1005
Conversation
Interesting design! A few follow-ups
I have similar questions as @zsimjee. With regard to the principle of least surprise, I would expect to see something like query_engine.query passed as the llm_api, and the existing forms of input and output validation used on a single Guard.
In addition to this, I have doubts about the behaviour of the current implementation since the types don't line up. Could you add some unit and integration tests to assert the code is acting as expected?
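For concreteness, a minimal sketch of the pattern being suggested, assuming a small adapter around query_engine.query; the exact Guard call signature, the validator setup, and the data path are assumptions and vary by Guardrails version, so this is illustrative rather than the PR's code:

```python
from guardrails import Guard
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build an ordinary LlamaIndex query engine.
documents = SimpleDirectoryReader("data").load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()

# A single Guard carrying the usual input/output validators,
# e.g. guard = Guard().use(SomeOutputValidator(), on="output").
guard = Guard()

def rag_llm(prompt: str, **kwargs) -> str:
    # Adapt query_engine.query to the "callable that returns a string"
    # shape Guardrails expects from a custom llm_api.
    return str(query_engine.query(prompt))

# query_engine.query is effectively the llm_api; Guardrails validates its output.
result = guard(llm_api=rag_llm, prompt="What does the document say about refunds?")
print(result.validated_output)
```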
You're correct. Removed the need to define the input and output validators separately; the current design should account for it. It will be highlighted in the documentation.
Let's consider the implications of inverting the flow: Potential benefits of the inverted design:
Potential drawbacks or limitations:
Yup, of course. It would be better for us to align on the RFC first. Will be fixing the typing issues too.
This PR is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This PR was closed because it has been stalled for 14 days with no activity.
Closing in favor of branch on guardrails repo: https://github.com/guardrails-ai/guardrails/tree/feat/llama-index
See: #1160
Updated.
Implement GuardrailsEngine for LlamaIndex integration
This PR introduces the GuardrailsEngine class, which integrates Guardrails validation with LlamaIndex's query and chat engines. The GuardrailsEngine provides a unified interface for applying guardrails to both query and chat functionalities while maintaining compatibility with LlamaIndex's expected interfaces. It extends BaseQueryEngine for compatibility with LlamaIndex components and accepts either a BaseQueryEngine or a BaseChatEngine.
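As a rough illustration of that shape, a delegating subclass might look like the following; the constructor arguments, delegation logic, and Guard wiring here are assumptions for the sake of the sketch, not the PR's actual implementation:

```python
from typing import Optional, Union

from guardrails import Guard
from llama_index.core.base.base_query_engine import BaseQueryEngine
from llama_index.core.base.response.schema import RESPONSE_TYPE, Response
from llama_index.core.callbacks import CallbackManager
from llama_index.core.chat_engine.types import BaseChatEngine
from llama_index.core.schema import QueryBundle


class GuardrailsEngine(BaseQueryEngine):
    """Sketch: wrap a query or chat engine and validate its output with a Guard."""

    def __init__(
        self,
        engine: Union[BaseQueryEngine, BaseChatEngine],
        guard: Guard,
        callback_manager: Optional[CallbackManager] = None,
    ) -> None:
        self._engine = engine
        self._guard = guard
        super().__init__(callback_manager=callback_manager)

    def _query(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
        # Delegate to the wrapped engine, then validate the raw text with Guardrails.
        if isinstance(self._engine, BaseChatEngine):
            text = self._engine.chat(query_bundle.query_str).response
        else:
            text = str(self._engine.query(query_bundle))
        outcome = self._guard.validate(text)
        return Response(response=str(outcome.validated_output))

    async def _aquery(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
        # The async path simply reuses the sync logic in this sketch.
        return self._query(query_bundle)
```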
.QueryEngine and ChatEngine Integration:
The GuardrailsEngine is designed to work seamlessly with both LlamaIndex's QueryEngine and ChatEngine. This dual functionality allows users to apply guardrails to both one-off queries and multi-turn conversations, enhancing the safety and reliability of LLM responses in various use cases.
Example
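A rough usage sketch, assuming the GuardrailsEngine constructor and query interface from the sketch above are in scope; the data path and validator setup are placeholders:

```python
from guardrails import Guard
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Attach validators as usual, e.g. guard.use(SomeOutputValidator(), on="output").
guard = Guard()

# Wrap a query engine for one-off questions...
guarded_query = GuardrailsEngine(engine=index.as_query_engine(), guard=guard)
print(guarded_query.query("Summarize the document."))

# ...or a chat engine for multi-turn conversations.
guarded_chat = GuardrailsEngine(engine=index.as_chat_engine(), guard=guard)
print(guarded_chat.query("What did we just discuss?"))
```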