Describe the Feature
Add support for multi-modal testset generation in RAGAS. While RAGAS currently supports multi-modal evaluation through MultiModalRelevance and MultiModalFaithfulness metrics, it lacks the ability to generate multi-modal test cases. This feature would enable automatic generation of test cases that combine both text and image contexts, making it easier to evaluate multi-modal RAG systems comprehensively. Ideally it would also extract images from documents.
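To make the request concrete, here is a minimal sketch of what a multi-modal testset sample and generator could look like. All names (`MultiModalSample`, `MultiModalTestsetGenerator`, `add_document`) are hypothetical and do not exist in RAGAS today; a real implementation would use an LLM to synthesize questions from the combined text-and-image context rather than the stubbed pairing shown here.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only -- none of these classes exist in RAGAS.
@dataclass
class MultiModalSample:
    question: str
    text_contexts: List[str]
    image_contexts: List[str]  # e.g. paths or URLs of images extracted from documents
    reference_answer: str

@dataclass
class MultiModalTestsetGenerator:
    """Toy generator: pairs each extracted image with its surrounding text."""
    samples: List[MultiModalSample] = field(default_factory=list)

    def add_document(self, text: str, images: List[str]) -> None:
        # A real implementation would call an LLM to generate a question
        # grounded in both the text and the image; here we only stub it.
        for img in images:
            self.samples.append(
                MultiModalSample(
                    question=f"What does the figure show, given: {text[:40]}...?",
                    text_contexts=[text],
                    image_contexts=[img],
                    reference_answer="",
                )
            )

gen = MultiModalTestsetGenerator()
gen.add_document("The report describes quarterly revenue trends.", ["fig1.png", "fig2.png"])
print(len(gen.samples))  # 2 samples, one per extracted image
```

The key point is that each generated sample carries both `text_contexts` and `image_contexts`, so the existing MultiModalRelevance and MultiModalFaithfulness metrics could consume the generated testset directly.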
Why is the feature important for you?
Currently, while RAGAS can evaluate multi-modal RAG systems, users must manually create test cases that include images. This creates a disconnect between RAGAS's evaluation and generation capabilities. Visual RAG appears to be the future, so closing this gap would keep the testset-generation side of the library in step with its metrics.