Using Semantic Kernel ONNX connectors and Kernel Memory for RAG #744
Barshan-Mandal started this conversation in 2. Feature requests
Description
This feature request covers integrating Semantic Kernel connectors with ONNX models and Kernel Memory to implement Retrieval-Augmented Generation (RAG). The goal is to enhance data-driven features in applications by leveraging advanced embeddings and large language models (LLMs) running locally.
Objectives
Integrate ONNX Models (for example, Phi-3 Vision 128K):
Use ONNX models for efficient, scalable local inference.
Ensure compatibility with the Semantic Kernel ONNX connectors for seamless integration (a code sketch follows this list).
Implement Kernel Memory:
Use Kernel Memory for efficient indexing and querying of datasets.
Support continuous data hybrid pipelines for RAG, synthetic memory, and prompt engineering.
Enhance RAG Capabilities:
Enable natural-language querying to obtain answers from indexed data.
Provide citations and links to the original sources for transparency and reliability.
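The sketch below illustrates the ONNX integration objective. It assumes Semantic Kernel 1.x with the Microsoft.SemanticKernel.Connectors.Onnx package; the "phi-3" model id, the local model paths, the bge-micro-v2 embedding model, and the suppressed SKEXP diagnostic IDs are placeholders and assumptions that may differ by connector version.

```csharp
// Sketch: register a local ONNX chat model (Phi-3 family) and a BERT-style
// ONNX embedding model with Semantic Kernel. All paths below are placeholders.
#pragma warning disable SKEXP0070, SKEXP0001 // ONNX connector APIs are marked experimental

using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Chat completion backed by ONNX Runtime GenAI (e.g. Phi-3 Vision 128K Instruct).
builder.AddOnnxRuntimeGenAIChatCompletion(
    modelId: "phi-3",
    modelPath: @"C:\models\phi-3-vision-128k-instruct-onnx");

// Local embeddings from a BERT-style ONNX model, used later for indexing.
builder.AddBertOnnxTextEmbeddingGeneration(
    @"C:\models\bge-micro-v2\model.onnx",
    @"C:\models\bge-micro-v2\vocab.txt");

Kernel kernel = builder.Build();

// Smoke test: the local model answers without any retrieval yet.
var reply = await kernel.InvokePromptAsync("Summarize retrieval-augmented generation in one sentence.");
Console.WriteLine(reply);
```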
Tasks
Set Up the Environment:
Install the necessary dependencies (LibVLCSharp, ONNX Runtime, etc.).
Configure Semantic Kernel and Kernel Memory.
Develop the Integration:
Implement connectors to integrate the ONNX models with Semantic Kernel.
Develop methods to import and index documents using Kernel Memory (see the sketch after this task list).
Testing and Validation:
Test the integration with various datasets.
Validate the accuracy and performance of the RAG system.
Documentation:
Document the setup, integration process, and usage instructions.
Provide examples and best practices for using the system.
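A sketch of the integration and indexing tasks follows. It assumes the Semantic Kernel adapter extensions that ship with Kernel Memory (WithSemanticKernelTextGenerationService / WithSemanticKernelTextEmbeddingGenerationService); those names, the package split, and whether the ONNX chat completion service also exposes ITextGenerationService are assumptions that may require a thin adapter in practice. `kernel` is the Kernel built in the earlier sketch, and the document paths are examples only.

```csharp
// Sketch: plug the ONNX-backed Semantic Kernel services into Kernel Memory,
// then import and index documents. The With...Service extensions below are
// assumed from Kernel Memory's Semantic Kernel adapter; exact names and
// packages may differ between Kernel Memory versions.
#pragma warning disable SKEXP0001 // SK text-generation/embedding abstractions are experimental

using Microsoft.KernelMemory;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Embeddings;
using Microsoft.SemanticKernel.TextGeneration;

var memory = new KernelMemoryBuilder()
    // Assumption: the ONNX chat completion service also implements
    // ITextGenerationService; if not, wrap it in a small adapter.
    .WithSemanticKernelTextGenerationService(
        kernel.GetRequiredService<ITextGenerationService>(),
        new SemanticKernelConfig())
    .WithSemanticKernelTextEmbeddingGenerationService(
        kernel.GetRequiredService<ITextEmbeddingGenerationService>(),
        new SemanticKernelConfig())
    .Build<MemoryServerless>();

// Import and index content; each import runs through Kernel Memory's
// ingestion pipeline (extraction, chunking, embedding, storage).
await memory.ImportDocumentAsync("docs/user-guide.pdf", documentId: "doc-001");
await memory.ImportTextAsync("Kernel Memory indexes content for RAG.", documentId: "doc-002");
await memory.ImportWebPageAsync("https://microsoft.github.io/kernel-memory/", documentId: "doc-003");

// Ingestion is asynchronous: wait for a document before querying it.
while (!await memory.IsDocumentReadyAsync("doc-001"))
{
    await Task.Delay(TimeSpan.FromMilliseconds(500));
}
```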
Expected Outcome
A robust system that combines Semantic Kernel connectors, ONNX models, and Kernel Memory to enhance RAG capabilities, enabling efficient and accurate data retrieval and generation.
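To illustrate the querying and citation side of this outcome, here is a minimal sketch continuing from the `memory` instance configured above; the question text is only an example.

```csharp
// Sketch: ask a natural-language question over the indexed data and surface
// the citations Kernel Memory returns alongside the generated answer.
var answer = await memory.AskAsync("How do I reset the device to factory settings?");

Console.WriteLine(answer.Result);

// Each relevant source includes the document name and a link back to the
// original content, providing the transparency this request asks for.
foreach (var source in answer.RelevantSources)
{
    Console.WriteLine($"- {source.SourceName} ({source.Link})");
}
```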