Question-Answering Feature in BlenderBot 2.0 #4092
-
Hi, I am a student and I believe that BlenderBot 2.0 is one of the best chatbots out there. However, I like the question-answering feature in GPT-3 (link). I would like to know whether there is a way to achieve the same while keeping all the goodies of BlenderBot 2.0. Here are some of the solutions I have thought of -
Describe alternatives you've considered
Therefore, it would be amazing if you could guide me on how to achieve the above solutions.
-
If you know the documents you want to retrieve in advance, you can create a dense index for FiD and retrieve from there (i.e., rather than retrieving from the internet). This would not require re-training the model, as you could use a trained DPR-style model for retrieving from the index - the model is trained to incorporate knowledge from "documents", regardless of where those documents come from (internet, wiki index, memory, etc.).

As for Q/A ability specifically, note that BB2 was fine-tuned exclusively on dialogue datasets, so in a pure Q/A setting it may not yield responses like what you'd expect from a QA-trained model.
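For illustration only, here is a minimal sketch of the general idea - encoding a known document set with a DPR-style encoder, building a dense index over it, and retrieving passages for a query. This is not ParlAI's internal API; the HuggingFace model names, the toy documents, and the FAISS setup are assumptions for the example, and the retrieved passages would stand in for what FiD consumes instead of internet search results.

```python
# Sketch: dense retrieval over a fixed document set (illustrative, not ParlAI internals).
import faiss
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

# The documents you know in advance (toy examples).
docs = [
    "BlenderBot 2.0 augments generation with retrieved knowledge.",
    "Fusion-in-Decoder (FiD) encodes each retrieved passage separately.",
]

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

# Encode the documents once and build an inner-product index (the "dense index").
with torch.no_grad():
    doc_emb = ctx_enc(**ctx_tok(docs, padding=True, truncation=True, return_tensors="pt")).pooler_output
index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(doc_emb.numpy())

# Encode a query and retrieve the closest passage.
with torch.no_grad():
    q_emb = q_enc(**q_tok("How does FiD use retrieved passages?", return_tensors="pt")).pooler_output
scores, ids = index.search(q_emb.numpy(), 1)
print(docs[ids[0][0]])  # this passage is what would be handed to the generator
```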
-
@klshuster Thank you for answering. I get your point and it makes sense. Here are some of the doubts/questions I have:
Apologies in advance for asking quite a few questions and making several suggestions. It would be amazing if you could help me understand these. Thank you!