This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

Commit

Fixed a Typo & Improved Readability (#4104)
* Fixed typo: rertrieving -> retrieving

* Fixed typo: removed extra "the"

* Added missing word: RetrieverType

* Fixed some typos

* Fixed a typo and improved readability

* Formatted using Black

* Fixed using Autoformat.sh
atharvjairath authored Oct 22, 2021
1 parent e04a35c commit 1bf1c81
Showing 1 changed file with 13 additions and 10 deletions.
23 changes: 13 additions & 10 deletions parlai/agents/rag/retrievers.py
@@ -1175,26 +1175,29 @@ def retrieve_and_score(
         self, query: torch.LongTensor
     ) -> Tuple[List[List[Document]], torch.Tensor]:
         """
-        Retrieves relevant documents for the query (the conversation context).
-        This method conducts three main steps that are flagged in the main code as well.
-        step 1: generate search queries for the conversation context batch. This
-        step uses the query generator model (self.query_generator). step 2: use the
-        search client to retrieve documents. This step uses retrieval API agent
-        (self.search_client) step 3: generate the list of Document objects from the
-        retrieved content. Here if the documents too long, the code splits them and
-        chooses a chunk based on the selected `doc_chunks_ranker` in the opt.
+        Retrieves relevant documents for the query (the conversation context). This
+        method conducts three main steps that are flagged in the main code as well.
+        Step 1: generate search queries for the conversation context batch.This step
+        uses the query generator model (self.query_generator).
+        Step 2: use the search client to retrieve documents.This step uses retrieval
+        API agent (self.search_client)
+        Step 3: generate the list of Document objects from the
+        retrieved content. Here if the documents too long, the code splits them and
+        chooses a chunk based on the selected `doc_chunks_ranker` in the opt.
         """
         # step 1
         search_queries = self.generate_search_query(query)
 
         # step 2
-        search_results_batach = self.search_client.retrieve(search_queries, self.n_docs)
+        search_results_batch = self.search_client.retrieve(search_queries, self.n_docs)
 
         # step 3
         top_docs = []
         top_doc_scores = []
-        for sq, search_results in zip(search_queries, search_results_batach):
+        for sq, search_results in zip(search_queries, search_results_batch):
             if not search_results:
                 search_results = self._empty_docs(self.n_docs)
             elif len(search_results) < self.n_docs:

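To make the three documented steps easier to follow outside the diff, here is a minimal, self-contained sketch of the same flow. It is an illustration only, not ParlAI's implementation: the Document dataclass, the fake search client, the query generator callable, and the first-chunk "ranking" are simplified stand-ins invented for this sketch; only the step structure (generate queries, call the search client, build per-query document lists) mirrors the method in the hunk above.

# Illustrative sketch only -- NOT ParlAI's retrievers.py. Document, the search
# client, the query generator, and the chunk "ranking" below are simplified
# stand-ins; only the three-step structure mirrors retrieve_and_score above.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Document:
    docid: str
    title: str
    text: str


class SketchSearchRetriever:
    def __init__(
        self,
        search_client,                          # stand-in for the retrieval API agent
        query_generator: Callable[[str], str],  # stand-in for the query generator model
        n_docs: int = 5,
        chunk_size: int = 200,
    ):
        self.search_client = search_client
        self.query_generator = query_generator
        self.n_docs = n_docs
        self.chunk_size = chunk_size

    def retrieve_and_score(
        self, contexts: List[str]
    ) -> Tuple[List[List[Document]], List[List[float]]]:
        # step 1: one search query per conversation context in the batch
        search_queries = [self.query_generator(ctx) for ctx in contexts]

        # step 2: ask the search client for up to n_docs results per query
        search_results_batch = self.search_client.retrieve(search_queries, self.n_docs)

        # step 3: build Document objects, chunking overly long texts
        top_docs, top_doc_scores = [], []
        for sq, search_results in zip(search_queries, search_results_batch):
            docs, scores = [], []
            for rank, result in enumerate(search_results[: self.n_docs]):
                text = result["content"]
                if len(text) > self.chunk_size:
                    # toy chunk "ranker": keep only the first chunk
                    text = text[: self.chunk_size]
                docs.append(Document(docid=str(rank), title=result.get("title", sq), text=text))
                scores.append(1.0 / (rank + 1))  # toy score: earlier results score higher
            top_docs.append(docs)
            top_doc_scores.append(scores)
        return top_docs, top_doc_scores


class FakeSearchClient:
    """Returns one canned result per query so the sketch runs without a real API."""

    def retrieve(self, queries: List[str], n_docs: int) -> List[List[Dict[str, str]]]:
        return [[{"title": q, "content": f"canned result text for: {q}"}] for q in queries]


if __name__ == "__main__":
    retriever = SketchSearchRetriever(
        FakeSearchClient(), query_generator=lambda ctx: ctx.splitlines()[-1]
    )
    docs, scores = retriever.retrieve_and_score(["hello\nwho wrote King Lear?"])
    print(docs[0][0].title, scores[0][0])

The real method additionally pads short or empty result lists (see the self._empty_docs call in the hunk) and returns torch.Tensor scores rather than the toy rank-based floats used here.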