Models: various bag-of-words approaches on complete documents with WordPiece tokenization
This page documents regression experiments on the MS MARCO document ranking task, which is integrated into Anserini's regression testing framework. Here we are using WordPiece tokenization (i.e., from BERT) on the entire document. In general, effectiveness is lower than with "standard" Lucene tokenization for two reasons: (1) we're losing stemming, and (2) some terms are chopped into less meaningful subwords.
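To make the subword effect concrete, here is a small sketch using HuggingFace's `transformers` (the library and model name are assumptions on this page's part; any WordPiece implementation behaves similarly):

```python
from transformers import BertTokenizer

# Load BERT's WordPiece tokenizer (assumed here; any WordPiece vocab works similarly).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Common words usually survive intact, but rarer terms are split into
# '##'-prefixed subwords. Note also that no stemming is applied: "running"
# stays "running", whereas Lucene's default analyzer would stem it to "run".
print(tokenizer.tokenize("running a marathon with immunodeficiency"))
```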
The exact configurations for these regressions are stored in this YAML file. Note that this page is automatically generated from this template as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
```bash
python src/main/python/run_regression.py --index --verify --search --regression msmarco-doc-wp
```
Typical indexing command:
```bash
target/appassembler/bin/IndexCollection \
  -collection JsonCollection \
  -input /path/to/msmarco-doc-wp \
  -index indexes/lucene-index.msmarco-doc-wp/ \
  -generator DefaultLuceneDocumentGenerator \
  -threads 7 -storePositions -storeDocvectors -storeRaw -pretokenized \
  >& logs/log.msmarco-doc-wp &
```
The directory `/path/to/msmarco-doc-wp/` should be a directory containing the document corpus in Anserini's jsonl format.
See this page for how to prepare the corpus.
For additional details, see explanation of common indexing options.
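For orientation, Anserini's jsonl format is one JSON object per line with `id` and `contents` fields; for this regression, the `contents` hold the space-joined WordPiece tokens (hence `-pretokenized` at indexing time). A minimal sketch of producing one such line, with a hypothetical document id and text:

```python
import json
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Hypothetical document; real corpus prep would stream over the raw MS MARCO docs.
raw_id, raw_text = "D0000000", "Communication amid scientific minds was equally important."
doc = {"id": raw_id, "contents": " ".join(tokenizer.tokenize(raw_text))}
print(json.dumps(doc))  # one line of the jsonl corpus
```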
Topics and qrels are stored in `src/main/resources/topics-and-qrels/`.
The regression experiments here evaluate on the 5193 dev set questions.
After indexing has completed, you should be able to perform retrieval as follows:
```bash
target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.msmarco-doc-wp/ \
  -topics src/main/resources/topics-and-qrels/topics.msmarco-doc.dev.wp.tsv.gz \
  -topicreader TsvInt \
  -output runs/run.msmarco-doc-wp.bm25-default.topics.msmarco-doc.dev.wp.txt \
  -bm25 -pretokenized &
```
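Note that `-pretokenized` is needed here because the topics file already contains WordPiece-tokenized queries. A sketch of how such a topics TSV could be generated (file names are hypothetical, and the shipped file is additionally gzipped):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Input: qid<TAB>raw query text, one query per line (hypothetical file names).
with open("topics.msmarco-doc.dev.txt") as inp, \
        open("topics.msmarco-doc.dev.wp.tsv", "w") as out:
    for line in inp:
        qid, query = line.rstrip("\n").split("\t")
        out.write(f"{qid}\t{' '.join(tokenizer.tokenize(query))}\n")
```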
Evaluation can be performed using `trec_eval`:
```bash
tools/eval/trec_eval.9.0.4/trec_eval -c -m map src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/run.msmarco-doc-wp.bm25-default.topics.msmarco-doc.dev.wp.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m recip_rank src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/run.msmarco-doc-wp.bm25-default.topics.msmarco-doc.dev.wp.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/run.msmarco-doc-wp.bm25-default.topics.msmarco-doc.dev.wp.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/run.msmarco-doc-wp.bm25-default.topics.msmarco-doc.dev.wp.txt
```
With the above commands, you should be able to reproduce the following results:
| MS MARCO Doc: Dev | BM25 (default) |
|:------------------|---------------:|
| AP@1000           | 0.2234         |
| RR@100            | 0.2227         |
| R@100             | 0.7031         |
| R@1000            | 0.8743         |
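As a rough sanity check, the RR@100 figure can also be recomputed directly from the run and qrels files. The sketch below assumes standard TREC formats and is not a replacement for `trec_eval`:

```python
from collections import defaultdict

def mean_rr_at_100(qrels_path, run_path):
    # qrels lines: qid 0 docid rel
    relevant = defaultdict(set)
    with open(qrels_path) as f:
        for line in f:
            qid, _, docid, rel = line.split()
            if int(rel) > 0:
                relevant[qid].add(docid)

    # run lines: qid Q0 docid rank score tag
    rankings = defaultdict(list)
    with open(run_path) as f:
        for line in f:
            qid, _, docid, rank, _, _ = line.split()
            rankings[qid].append((int(rank), docid))

    total = 0.0
    for qid, docs in rankings.items():
        # Consider only the top 100 documents per query (ranks start at 1).
        for rank, docid in sorted(docs)[:100]:
            if docid in relevant.get(qid, set()):
                total += 1.0 / rank
                break
    # Average over all judged topics; topics missing from the run count as zero.
    return total / len(relevant)

print(mean_rr_at_100(
    "src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt",
    "runs/run.msmarco-doc-wp.bm25-default.topics.msmarco-doc.dev.wp.txt"))
```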