Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. It is designed for monolingual retrieval, specifically to evaluate ranking with learned dense representations. Mr. TyDi is licensed under the Apache License 2.0.
The dataset (v1.1) can be downloaded per language below:
Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th
The dataset (v1.1) is also available on Hugging Face Datasets:
castorini/mr-tydi (topics and qrels) | Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th
castorini/mr-tydi-corpus (document collection) | Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th
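As a quick check, the per-language splits can be loaded with the Hugging Face datasets library. A minimal sketch, assuming the configuration names match the full language names used elsewhere in this README (e.g. 'arabic'):

```bash
# Minimal sketch: load one language of Mr. TyDi via the Hugging Face
# datasets library. Assumes configuration names like 'arabic'.
pip install datasets
python -c "
from datasets import load_dataset
queries = load_dataset('castorini/mr-tydi', 'arabic')        # topics + qrels
corpus = load_dataset('castorini/mr-tydi-corpus', 'arabic')  # document collection
print(queries)
print(corpus)
"
```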
Previous Versions
v1.0: Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th
The one-command reproduction (on v1.1) requires a recent development version of Pyserini. Please follow this guide to set up a development installation of Pyserini.
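For reference, a development install usually amounts to cloning the repository and installing it in editable mode. A hedged sketch; the linked guide is authoritative:

```bash
# Sketch of an editable (development) install of Pyserini; follow the
# official Pyserini guide for the exact, up-to-date steps.
git clone https://github.com/castorini/pyserini.git
cd pyserini
pip install -e .
```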
This page only covers the scripts that reproduce searching. The indexes are all handled within Pyserini; that is, you won't need to manually download any indexes or models to run the following scripts. For the scripts that reproduce the sparse and dense indexes, please refer to the Pyserini documentation (a rough sketch of sparse indexing follows the table below):
Model | Documentation Links |
---|---|
Sparse Index | Ar, Bn, En, Fi, Id, Ja, Ko, Ru, Sw, Te, Th |
Dense Index | Ar, Bn, En, Fi, Id, Ja, Ko, Ru, Sw, Te, Th |
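For orientation only, building a sparse (Lucene) index for one language typically looks like the following. This is a hedged sketch, not the official script: the paths are placeholders, and it assumes the corpus has been converted to Pyserini's JsonCollection jsonl format, with ${lang} and ${lang_abbr} set as in the retrieval snippet below.

```bash
# Hedged sketch of sparse indexing; paths are placeholders and the official
# per-language documentation linked above is authoritative.
python -m pyserini.index -collection JsonCollection \
    -generator DefaultLuceneDocumentGenerator \
    -threads 8 \
    -input corpus/mrtydi-v1.1-${lang} \
    -index indexes/lucene-index.mrtydi-v1.1-${lang} \
    -language ${lang_abbr} \
    -storePositions -storeDocvectors -storeRaw
```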
lang=arabic # one of {'arabic', 'bengali', 'english', 'finnish', 'indonesian', 'japanese', 'korean', 'russian', 'swahili', 'telugu', 'thai'}
lang_abbr=ar # one of {'ar', 'bn', 'en', 'fi', 'id', 'ja', 'ko', 'ru', 'sw', 'te', 'th'}
set_name=test # one of {'train', 'dev', 'test'}
runfile=runs/run.bm25.mrtydi-v1.1-${lang}.${set_name}.txt
python -m pyserini.search --bm25 \
--language ${lang_abbr} \
--topics mrtydi-v1.1-${lang}-${set_name} \
--index mrtydi-v1.1-${lang} \
--output ${runfile}
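A small convenience sketch, not from the original README: run BM25 over all eleven languages for one split. It reuses only the flags shown above and requires bash 4+ for associative arrays.

```bash
# Map full language names to their two-letter abbreviations, then run
# BM25 retrieval for each language on the chosen split.
declare -A abbr=([arabic]=ar [bengali]=bn [english]=en [finnish]=fi
                 [indonesian]=id [japanese]=ja [korean]=ko [russian]=ru
                 [swahili]=sw [telugu]=te [thai]=th)
set_name=test
mkdir -p runs
for lang in "${!abbr[@]}"; do
    python -m pyserini.search --bm25 \
        --language "${abbr[$lang]}" \
        --topics mrtydi-v1.1-${lang}-${set_name} \
        --index mrtydi-v1.1-${lang} \
        --output runs/run.bm25.mrtydi-v1.1-${lang}.${set_name}.txt
done
```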
lang=arabic # one of {'arabic', 'bengali', 'english', 'finnish', 'indonesian', 'japanese', 'korean', 'russian', 'swahili', 'telugu', 'thai'}
set_name=test # one of {'train', 'dev', 'test'}
runfile=runs/run.mdpr.mrtydi-v1.1-${lang}.${set_name}.txt
python -m pyserini.dsearch \
--topics mrtydi-v1.1-${lang}-${set_name} \
--index mrtydi-v1.1-${lang}-mdpr-nq \
--encoder castorini/mdpr-question-nq \
--batch-size 36 \
--threads 12 \
--output ${runfile}
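The hybrid step below reads one sparse and one dense run. With the naming used above, the two run files are (a small convenience definition, not in the original scripts):

```bash
# Run files produced by the BM25 and mDPR steps above; consumed by
# scripts/hybrid.py as --sparse and --dense.
bm25_runfile=runs/run.bm25.mrtydi-v1.1-${lang}.${set_name}.txt
dense_runfile=runs/run.mdpr.mrtydi-v1.1-${lang}.${set_name}.txt
```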
To combine the sparse and dense runs with scripts/hybrid.py, using the preset best alpha for each language:
lang=arabic # one of {'arabic', 'bengali', 'english', 'finnish', 'indonesian', 'japanese', 'korean', 'russian', 'swahili', 'telugu', 'thai'}
runfile=runs/run.hybrid.mrtydi-v1.1-${lang}.${set_name}.txt # hybrid output; name is illustrative
python scripts/hybrid.py --lang ${lang} \
--sparse ${bm25_runfile} \
--dense ${dense_runfile} \
--output ${runfile} \
--weight-on-dense \
--normalization
Or, to run the hybrid with an arbitrary alpha:
alpha=0.5
python scripts/hybrid.py --alpha ${alpha} \
--sparse ${bm25_runfile} \
--dense ${dense_runfile} \
--output ${runfile} \
--weight-on-dense \
--normalization
where `bm25_runfile` and `dense_runfile` are the run files produced by the sparse and dense retrieval steps above.
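If you want to pick your own per-language weight rather than the preset one, a hypothetical dev-set sweep would look like the following; the grid and output names are illustrative, not from the original scripts:

```bash
# Hypothetical alpha sweep on the dev split: generate one hybrid run per
# alpha, evaluate each (see below), and keep the best-scoring value.
# Assumes bm25_runfile and dense_runfile here point at dev-split runs.
for alpha in 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9; do
    python scripts/hybrid.py --alpha ${alpha} \
        --sparse ${bm25_runfile} \
        --dense ${dense_runfile} \
        --output runs/run.hybrid.alpha-${alpha}.mrtydi-v1.1-${lang}.dev.txt \
        --weight-on-dense \
        --normalization
done
```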
python -m pyserini.eval.trec_eval -c -mrecip_rank -mrecall.100 ${qrels} ${runfile}
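Here `${qrels}` holds the relevance judgments for the chosen language and split. A hedged example: the label below assumes your Pyserini version registers Mr. TyDi qrels under names mirroring the topics labels used above; if it does not, point `${qrels}` at a local qrels file instead.

```bash
# Assumption: the qrels label mirrors the topics label used above; if your
# Pyserini version does not register it, use a local qrels file path.
qrels=mrtydi-v1.1-${lang}-${set_name}
python -m pyserini.eval.trec_eval -c -mrecip_rank -mrecall.100 ${qrels} ${runfile}
```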
Here we present the MRR@100 and Recall@100 scores after fixing a bug related to using multi-lingual models in Pyserini 0.13.0. The sparse scores are unaffected, whereas the mDPR and hybrid scores are higher than the originally reported ones to varying degrees. We also put the updated figures under the figures/ directory.
MRR@100:

 | Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th | avg |
---|---|---|---|---|---|---|---|---|---|---|---|---|
BM25 (default) | 0.368 | 0.418 | 0.140 | 0.284 | 0.376 | 0.211 | 0.285 | 0.313 | 0.389 | 0.343 | 0.401 | 0.321 |
BM25 (tuned) | 0.366 | 0.413 | 0.150 | 0.287 | 0.382 | 0.217 | 0.280 | 0.329 | 0.396 | 0.424 | 0.416 | 0.333 |
mDPR (NQ) | 0.291 | 0.291 | 0.291 | 0.206 | 0.271 | 0.213 | 0.235 | 0.283 | 0.189 | 0.111 | 0.172 | 0.226 |
Hybrid | 0.500 | 0.555 | 0.328 | 0.377 | 0.481 | 0.360 | 0.361 | 0.455 | 0.415 | 0.418 | 0.507 | 0.426 |
Recall@100:

 | Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th | avg |
---|---|---|---|---|---|---|---|---|---|---|---|---|
BM25 (default) | 0.793 | 0.869 | 0.537 | 0.719 | 0.843 | 0.645 | 0.619 | 0.648 | 0.764 | 0.758 | 0.853 | 0.732 |
BM25 (tuned) | 0.800 | 0.874 | 0.551 | 0.725 | 0.846 | 0.656 | 0.797 | 0.660 | 0.764 | 0.813 | 0.853 | 0.758 |
mDPR (NQ) | 0.650 | 0.779 | 0.678 | 0.568 | 0.685 | 0.584 | 0.533 | 0.647 | 0.528 | 0.366 | 0.515 | 0.594 |
Hybrid | 0.871 | 0.946 | 0.793 | 0.827 | 0.900 | 0.794 | 0.718 | 0.815 | 0.808 | 0.823 | 0.883 | 0.834 |
If you find our paper useful or use the dataset in your work, please cite our paper and the TyDi QA paper:
@article{mrtydi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}

@article{tydiqa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics},
}
If you have any questions or suggestions regarding the dataset, code, or publication, please contact Xinyu Zhang (x978zhan[at]uwaterloo.ca).