[Feature Request] Integrate RAG-based Language Model for Interactive Q&A 🤖 #13
Comments
@kadirnar Can I work on this issue?
Yes 🚀 It would be great if you could use the Autollm library. You can also add other libraries.
Thank you @kadirnar
Hi @StatKumar, the autollm library can be difficult. Do you want me to help you? You can ask all your questions. We can send a pull request for testing and develop it together.
@kadirnar Sure. Please guide me through it.
@kadirnar Should we create a roadmap for this task, with smaller milestones?
@kadirnar Also, while trying out the app I ran into an error: it happens when the device is not switched (CPU to GPU or vice versa). Meanwhile, do you want to create a Slack channel so we can discuss it there?
Do you use the Discord app? Can you message me? Username: kadirnar
@kadirnar Sent you a friend request on Discord.
I'll create a new PR for RAG with LanceDB as the vector DB, so we can chat with our audio/video.
Integrate the RAG-based Language Model for interactive question and answer functionality. Users can utilize any text input to query their preferred language model. The model will respond with answers derived from the specified document, enhancing user engagement and interactivity. 🤖
GitHub: Autollm: Ship RAG-based LLM ⚙️