Add regressions for DeepImpact and uniCOIL on MS MARCO passage (#1633)
Showing 12 changed files with 482 additions and 5 deletions.

# Anserini: Regressions for DeepImpact on [MS MARCO Passage](https://github.com/microsoft/MSMARCO-Passage-Ranking)

This page documents regression experiments, integrated into Anserini's regression testing framework, for DeepImpact on the MS MARCO Passage Ranking Task.
DeepImpact is described in the following paper:

> Antonio Mallia, Omar Khattab, Nicola Tonellotto, and Torsten Suel. [Learning Passage Impacts for Inverted Indexes.](https://dl.acm.org/doi/10.1145/3404835.3463030) _SIGIR 2021_.

For more complete instructions on how to run end-to-end experiments, refer to [this page](experiments-msmarco-passage-deepimpact.md).

The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/msmarco-passage-deepimpact.yaml).
Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/msmarco-passage-deepimpact.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.

## Indexing

Typical indexing command:

```
nohup sh target/appassembler/bin/IndexCollection -collection JsonVectorCollection \
 -input /path/to/msmarco-passage-deepimpact \
 -index indexes/lucene-index.msmarco-passage-deepimpact.raw \
 -generator DefaultLuceneDocumentGenerator \
 -threads 16 -impact -pretokenized -storeRaw \
 >& logs/log.msmarco-passage-deepimpact &
```

The directory `/path/to/msmarco-passage-deepimpact/` should contain the compressed `jsonl` files that comprise the corpus.
See [this page](experiments-msmarco-passage-deepimpact.md) for additional details.

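Each line of those `jsonl` files is a single JSON document with pretokenized terms and precomputed impact weights, which is why the indexer is invoked with `-impact -pretokenized`. As a quick sanity check on a downloaded corpus, you can peek at the first record; this is only a sketch: the shard name `docs00.json.gz` and the field layout in the comments are assumptions based on Anserini's `JsonVectorCollection`, not something this page specifies.

```bash
# Inspect the first document of one corpus shard (the shard name is illustrative).
zcat /path/to/msmarco-passage-deepimpact/docs00.json.gz | head -1 | python -m json.tool
# Roughly expected shape (field names assumed, terms and weights made up):
# {
#   "id": "0",
#   "contents": "...",
#   "vector": {"impact": 3, "passage": 7, ...}
# }
```
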
For additional details, see the explanation of [common indexing options](common-indexing-options.md).

## Retrieval

Topics and qrels are stored in [`src/main/resources/topics-and-qrels/`](../src/main/resources/topics-and-qrels/).
The regression experiments here evaluate on the 6980 dev set questions; see [this page](experiments-msmarco-passage.md) for more details.

After indexing has completed, you should be able to perform retrieval as follows:

```
nohup target/appassembler/bin/SearchCollection -index indexes/lucene-index.msmarco-passage-deepimpact.raw \
 -topicreader TsvInt -topics src/main/resources/topics-and-qrels/topics.msmarco-passage.dev-subset.deepimpact.tsv.gz \
 -output runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz \
 -impact -pretokenized &
```

Evaluation can be performed using `trec_eval`:

```
tools/eval/trec_eval.9.0.4/trec_eval -m map -c -m recip_rank -c -m recall.1000 -c src/main/resources/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz
```

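The run produced by `SearchCollection` is a plain-text run in standard TREC format, which is why `trec_eval` above can read it directly; the `.tsv.gz` ending is simply inherited from the topics filename. A quick way to eyeball the output (a sketch; the ids, scores, and run tag shown in the comment are illustrative):

```bash
# Show the first few result lines of the run.
head -3 runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz
# Expected columns: <query_id> Q0 <doc_id> <rank> <score> <run_tag>
```
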
## Effectiveness

With the above commands, you should be able to reproduce the following results:

MAP                                      | DeepImpact|
:----------------------------------------|-----------|
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.3334    |

MRR                                      | DeepImpact|
:----------------------------------------|-----------|
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.3386    |

R@1000                                   | DeepImpact|
:----------------------------------------|-----------|
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.9476    |

The above runs are in TREC output format and evaluated with `trec_eval`.
In order to reproduce results reported in the paper, we need to convert the run to MS MARCO output format and then evaluate:

```bash
python tools/scripts/msmarco/convert_trec_to_msmarco_run.py \
   --input runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz \
   --output runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz.msmarco --quiet

python tools/scripts/msmarco/msmarco_passage_eval.py \
   collections/msmarco-passage/qrels.dev.small.tsv \
   runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz.msmarco
```

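The converted `.msmarco` file is in the tab-separated format that `msmarco_passage_eval.py` expects, one `<query_id> <doc_id> <rank>` triple per line. A minimal check of the conversion before evaluating (the actual ids will differ):

```bash
# Each line should contain: query_id <TAB> doc_id <TAB> rank
head -3 runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz.msmarco
```
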
The results should be as follows:

```
#####################
MRR @10: 0.3252764133351524
QueriesRanked: 6980
#####################
```

The final evaluation metric is very close to the one reported in the paper (0.326).

# Anserini: Regressions for uniCOIL on [MS MARCO Passage](https://github.com/microsoft/MSMARCO-Passage-Ranking)

This page documents regression experiments, integrated into Anserini's regression testing framework, for uniCOIL on the MS MARCO Passage Ranking Task.
The uniCOIL model is described in the following paper:

> Jimmy Lin and Xueguang Ma. [A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques.](https://arxiv.org/abs/2106.14807) _arXiv:2106.14807_.

For more complete instructions on how to run end-to-end experiments, refer to [this page](experiments-msmarco-passage-unicoil.md).

The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/msmarco-passage-unicoil.yaml).
Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/msmarco-passage-unicoil.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.

## Indexing

Typical indexing command:

```
nohup sh target/appassembler/bin/IndexCollection -collection JsonVectorCollection \
 -input /path/to/msmarco-passage-unicoil \
 -index indexes/lucene-index.msmarco-passage-unicoil.raw \
 -generator DefaultLuceneDocumentGenerator \
 -threads 16 -impact -pretokenized -storeRaw \
 >& logs/log.msmarco-passage-unicoil &
```

The directory `/path/to/msmarco-passage-unicoil/` should contain the compressed `jsonl` files that comprise the corpus.
See [this page](experiments-msmarco-passage-unicoil.md) for additional details.

For additional details, see the explanation of [common indexing options](common-indexing-options.md).

## Retrieval

Topics and qrels are stored in [`src/main/resources/topics-and-qrels/`](../src/main/resources/topics-and-qrels/).
The regression experiments here evaluate on the 6980 dev set questions; see [this page](experiments-msmarco-passage.md) for more details.

After indexing has completed, you should be able to perform retrieval as follows:

```
nohup target/appassembler/bin/SearchCollection -index indexes/lucene-index.msmarco-passage-unicoil.raw \
 -topicreader TsvInt -topics src/main/resources/topics-and-qrels/topics.msmarco-passage.dev-subset.unicoil.tsv.gz \
 -output runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz \
 -impact -pretokenized &
```

Evaluation can be performed using `trec_eval`:

```
tools/eval/trec_eval.9.0.4/trec_eval -m map -c -m recip_rank -c -m recall.1000 -c src/main/resources/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz
```

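Note that `recip_rank` above is computed over the full ranking, whereas the official MS MARCO metric is MRR@10. As a rough cross-check directly in `trec_eval`, you can truncate each ranking before scoring; this is a sketch, and because of evaluation details (for example, tie handling) the value may not match the MS MARCO script exactly:

```bash
# Approximate MRR@10 by evaluating only the top 10 hits per query (-M 10).
tools/eval/trec_eval.9.0.4/trec_eval -c -M 10 -m recip_rank \
  src/main/resources/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt \
  runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz
```
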
## Effectiveness

With the above commands, you should be able to reproduce the following results:

MAP                                      | uniCOIL   |
:----------------------------------------|-----------|
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.3574    |

MRR                                      | uniCOIL   |
:----------------------------------------|-----------|
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.3625    |

R@1000                                   | uniCOIL   |
:----------------------------------------|-----------|
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.9582    |

The above runs are in TREC output format and evaluated with `trec_eval`.
In order to reproduce results reported in the paper, we need to convert the run to MS MARCO output format and then evaluate:

```bash
python tools/scripts/msmarco/convert_trec_to_msmarco_run.py \
   --input runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz \
   --output runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz.msmarco --quiet

python tools/scripts/msmarco/msmarco_passage_eval.py \
   tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt \
   runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz.msmarco
```

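Before reading off the score, it is worth confirming that the converted run covers all 6980 dev queries, matching the `QueriesRanked` figure below; a minimal sanity check, assuming the `.msmarco` file is tab-separated with the query id in the first column:

```bash
# Count distinct query ids in the converted run; this should print 6980.
cut -f1 runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz.msmarco | sort -u | wc -l
```
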
The results should be as follows:

```
#####################
MRR @10: 0.35155222404147896
QueriesRanked: 6980
#####################
```

This corresponds to the effectiveness reported in the paper.

`src/main/resources/docgen/templates/msmarco-passage-deepimpact.template`: 71 additions, 0 deletions

# Anserini: Regressions for DeepImpact on [MS MARCO Passage](https://github.com/microsoft/MSMARCO-Passage-Ranking)

This page documents regression experiments, integrated into Anserini's regression testing framework, for DeepImpact on the MS MARCO Passage Ranking Task.
DeepImpact is described in the following paper:

> Antonio Mallia, Omar Khattab, Nicola Tonellotto, and Torsten Suel. [Learning Passage Impacts for Inverted Indexes.](https://dl.acm.org/doi/10.1145/3404835.3463030) _SIGIR 2021_.

For more complete instructions on how to run end-to-end experiments, refer to [this page](experiments-msmarco-passage-deepimpact.md).

The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/msmarco-passage-deepimpact.yaml).
Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/msmarco-passage-deepimpact.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.

## Indexing

Typical indexing command:

```
${index_cmds}
```

The directory `/path/to/msmarco-passage-deepimpact/` should contain the compressed `jsonl` files that comprise the corpus.
See [this page](experiments-msmarco-passage-deepimpact.md) for additional details.

For additional details, see the explanation of [common indexing options](common-indexing-options.md).

## Retrieval

Topics and qrels are stored in [`src/main/resources/topics-and-qrels/`](../src/main/resources/topics-and-qrels/).
The regression experiments here evaluate on the 6980 dev set questions; see [this page](experiments-msmarco-passage.md) for more details.

After indexing has completed, you should be able to perform retrieval as follows:

```
${ranking_cmds}
```

Evaluation can be performed using `trec_eval`:

```
${eval_cmds}
```

## Effectiveness

With the above commands, you should be able to reproduce the following results:

${effectiveness}

The above runs are in TREC output format and evaluated with `trec_eval`.
In order to reproduce results reported in the paper, we need to convert the run to MS MARCO output format and then evaluate:

```bash
python tools/scripts/msmarco/convert_trec_to_msmarco_run.py \
   --input runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz \
   --output runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz.msmarco --quiet

python tools/scripts/msmarco/msmarco_passage_eval.py \
   collections/msmarco-passage/qrels.dev.small.tsv \
   runs/run.msmarco-passage-deepimpact.deepimpact.topics.msmarco-passage.dev-subset.deepimpact.tsv.gz.msmarco
```

The results should be as follows:

```
#####################
MRR @10: 0.3252764133351524
QueriesRanked: 6980
#####################
```

The final evaluation metric is very close to the one reported in the paper (0.326).

`src/main/resources/docgen/templates/msmarco-passage-unicoil.template`: 71 additions, 0 deletions

# Anserini: Regressions for uniCOIL on [MS MARCO Passage](https://github.com/microsoft/MSMARCO-Passage-Ranking)

This page documents regression experiments, integrated into Anserini's regression testing framework, for uniCOIL on the MS MARCO Passage Ranking Task.
The uniCOIL model is described in the following paper:

> Jimmy Lin and Xueguang Ma. [A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques.](https://arxiv.org/abs/2106.14807) _arXiv:2106.14807_.

For more complete instructions on how to run end-to-end experiments, refer to [this page](experiments-msmarco-passage-unicoil.md).

The exact configurations for these regressions are stored in [this YAML file](../src/main/resources/regression/msmarco-passage-unicoil.yaml).
Note that this page is automatically generated from [this template](../src/main/resources/docgen/templates/msmarco-passage-unicoil.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.

## Indexing

Typical indexing command:

```
${index_cmds}
```

The directory `/path/to/msmarco-passage-unicoil/` should contain the compressed `jsonl` files that comprise the corpus.
See [this page](experiments-msmarco-passage-unicoil.md) for additional details.

For additional details, see the explanation of [common indexing options](common-indexing-options.md).

## Retrieval

Topics and qrels are stored in [`src/main/resources/topics-and-qrels/`](../src/main/resources/topics-and-qrels/).
The regression experiments here evaluate on the 6980 dev set questions; see [this page](experiments-msmarco-passage.md) for more details.

After indexing has completed, you should be able to perform retrieval as follows:

```
${ranking_cmds}
```

Evaluation can be performed using `trec_eval`:

```
${eval_cmds}
```

## Effectiveness

With the above commands, you should be able to reproduce the following results:

${effectiveness}

The above runs are in TREC output format and evaluated with `trec_eval`.
In order to reproduce results reported in the paper, we need to convert the run to MS MARCO output format and then evaluate:

```bash
python tools/scripts/msmarco/convert_trec_to_msmarco_run.py \
   --input runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz \
   --output runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz.msmarco --quiet

python tools/scripts/msmarco/msmarco_passage_eval.py \
   tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt \
   runs/run.msmarco-passage-unicoil.unicoil.topics.msmarco-passage.dev-subset.unicoil.tsv.gz.msmarco
```

The results should be as follows:

```
#####################
MRR @10: 0.35155222404147896
QueriesRanked: 6980
#####################
```

This corresponds to the effectiveness reported in the paper.