Anserini is an open-source information retrieval toolkit built on Lucene that aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
A low-effort way to try out Anserini is to look at our online notebooks, which will allow you to get started with just a few clicks. For convenience, we've pre-built a few common indexes, available to download here.
If you want to build Anserini itself, then start by verifying the main dependencies:
- Anserini was upgraded to Java 11 at commit `17b702d` (7/11/2019) from Java 8. Maven 3.3+ is also required.
- Anserini was upgraded to Lucene 8.0 as of commit `75e36f9` (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency is much improved in Lucene 8. As a result of this upgrade, the results of all regressions have changed slightly. To replicate the old results from Lucene 7.6, use v0.5.1.
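A quick sanity check that both dependencies are in place (a minimal sketch; exact version strings will vary by platform):

```bash
java -version   # should report version 11
mvn -version    # should report Maven 3.3 or later
```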
After cloning our repo, build using Maven:
```bash
mvn clean package appassembler:assemble
```
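After the build completes, the launcher scripts under `target/appassembler/bin/` drive indexing and retrieval via the `IndexCollection` and `SearchCollection` drivers. As a rough sketch (the input path is a placeholder and the parameter settings are illustrative; consult the per-collection regression docs for exact invocations):

```bash
# Build a Lucene index over TREC Disks 4 & 5 (input path is a placeholder).
sh target/appassembler/bin/IndexCollection \
  -collection TrecCollection \
  -input /path/to/disk45 \
  -index indexes/lucene-index.robust04 \
  -generator DefaultLuceneDocumentGenerator \
  -threads 16 -storePositions -storeDocvectors -storeRaw

# Rank the Robust04 topics with BM25, writing a TREC-format run file.
sh target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.robust04 \
  -topicreader Trec \
  -topics src/main/resources/topics-and-qrels/topics.robust04.txt \
  -output run.robust04.bm25.txt \
  -bm25
```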
The `eval/` directory contains evaluation tools and scripts, including `trec_eval`, `gdeval.pl`, and `ndeval`. Before using `trec_eval`, unpack and compile it, as follows:

```bash
tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make
```

Before using `ndeval`, compile it as follows:

```bash
cd ndeval && make
```
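Once compiled, `trec_eval` scores a TREC-format run file against relevance judgments. For example, to evaluate the hypothetical Robust04 run from the sketch above (`-m map -m P.30` are standard `trec_eval` metric options):

```bash
eval/trec_eval.9.0.4/trec_eval -m map -m P.30 \
  src/main/resources/topics-and-qrels/qrels.robust04.txt \
  run.robust04.bm25.txt
```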
Anserini is designed to support experiments on various standard IR test collections out of the box.
The following experiments are backed by rigorous end-to-end regression tests with `run_regression.py` and the Anserini replicability promise (see the sketch after this list).
For the most part, these runs are based on default parameter settings.
- Regressions for Disks 1 & 2
- Regressions for Disks 4 & 5 (Robust04)
- Regressions for AQUAINT (Robust05)
- Regressions for the New York Times (Core17)
- Regressions for the Washington Post (Core18)
- Regressions for Wt10g
- Regressions for Gov2
- Regressions for ClueWeb09 (Category B)
- Regressions for ClueWeb12-B13
- Regressions for ClueWeb12
- Regressions for Tweets2011 (MB11 & MB12)
- Regressions for Tweets2013 (MB13 & MB14)
- Regressions for Complex Answer Retrieval v1.5 (CAR17)
- Regressions for Complex Answer Retrieval v2.0 (CAR17)
- Regressions for Complex Answer Retrieval v2.0 (CAR17) with doc2query expansion
- Regressions for the MS MARCO Passage Retrieval Task
- Regressions for the MS MARCO Passage Retrieval Task with doc2query expansion
- Regressions for the MS MARCO Passage Retrieval Task with docTTTTTquery expansion
- Regressions for the MS MARCO Document Retrieval Task
- Regressions for the TREC 2019 Deep Learning Track (Passage Ranking Task)
- Regressions for the TREC 2019 Deep Learning Track (Document Ranking Task)
- Regressions for the TREC 2018 News Track (Background Linking Task)
- Regressions for the TREC 2019 News Track (Background Linking Task)
- Regressions for NTCIR-8 ACLIA (IR4QA subtask, Monolingual Chinese)
- Regressions for CLEF 2006 Monolingual French
- Regressions for TREC 2002 Monolingual Arabic
- Regressions for FIRE 2012 Monolingual Bengali
- Regressions for FIRE 2012 Monolingual Hindi
- Regressions for FIRE 2012 Monolingual English
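As a minimal sketch of the regression workflow, `run_regression.py` drives indexing, retrieval, and evaluation from a per-collection configuration. The exact command-line options have changed across versions, so treat the flag below as an assumption and check the script's `--help`:

```bash
# Hypothetical invocation: run the end-to-end Robust04 regression.
# The --collection flag is an assumption; consult --help on your checkout.
python src/main/python/run_regression.py --collection robust04
```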
The experiments described below are not associated with rigorous end-to-end regression testing and thus provide a lower standard of replicability. For the most part, replicating our results requires manually copying and pasting commands into a shell:
- Working with AI2's COVID-19 Open Research Dataset
- Baselines for the TREC-COVID Challenge
- Replicating "Neural Hype" Experiments
- Guide to running BM25 baselines on the MS MARCO Passage Retrieval Task
- Guide to running BM25 baselines on the MS MARCO Document Retrieval Task
- Guide to replicating doc2query results
- Guide to replicating docTTTTTquery results
- Guide to running experiments on the AI2 Open Research Corpus
- Experiments from Yang et al. (JDIQ 2018)
- Runbooks for TREC 2018: [Anserini group] [h2oloo group]
- Runbook for ECIR 2019 paper on axiomatic semantic term matching
- Runbook for ECIR 2019 paper on cross-collection relevance feedback
See this page for additional documentation.
- Use Anserini in Python via Pyserini
- Anserini integrates with SolrCloud via Solrini
- Anserini integrates with Elasticsearch via Elasterini
- Anserini supports approximate nearest-neighbor search on arbitrary dense vectors with Lucene
If you've found Anserini to be helpful, we have a simple request for you to contribute back. In the course of replicating baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the Robust04 page. Replicability is important to us, and we'd like to know about successes as well as failures. Since the regression documentation is auto-generated, pull requests should be sent against the raw templates. In turn, you'll be recognized as a contributor.
Beyond that, there are always open issues we would appreciate help on!
- v0.9.1: May 6, 2020 [Release Notes]
- v0.9.0: April 18, 2020 [Release Notes]
- v0.8.1: March 22, 2020 [Release Notes]
- v0.8.0: March 11, 2020 [Release Notes]
- v0.7.2: January 25, 2020 [Release Notes]
- v0.7.1: January 9, 2020 [Release Notes]
- v0.7.0: December 13, 2019 [Release Notes]
- v0.6.0: September 6, 2019 [Release Notes][Known Issues]
- v0.5.1: June 11, 2019 [Release Notes]
- v0.5.0: June 5, 2019 [Release Notes]
- v0.4.0: March 4, 2019 [Release Notes]
- v0.3.0: December 16, 2018 [Release Notes]
- v0.2.0: September 10, 2018 [Release Notes]
- v0.1.0: July 4, 2018 [Release Notes]
- Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig Macdonald, Sebastiano Vigna. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality, 10(4), Article 16, 2018.
- Wei Yang, Haotian Zhang, and Jimmy Lin. Simple Applications of BERT for Ad Hoc Document Retrieval. arXiv:1903.10972, March 2019.
- Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. Document Expansion by Query Prediction. arXiv:1904.08375, April 2019.
- Peilin Yang and Jimmy Lin. Reproducing and Generalizing Semantic Term Matching in Axiomatic Information Retrieval. ECIR 2019.
- Ruifan Yu, Yuhao Xie, and Jimmy Lin. Simple Techniques for Cross-Collection Relevance Transfer. ECIR 2019.
- Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. End-to-End Open-Domain Question Answering with BERTserini. NAACL-HLT 2019 Demos.
- Ryan Clancy, Toke Eskildsen, Nick Ruest, and Jimmy Lin. Solr Integration in the Anserini Information Retrieval Toolkit. SIGIR 2019.
- Ryan Clancy, Jaejun Lee, Zeynep Akkalyoncu Yilmaz, and Jimmy Lin. Information Retrieval Meets Scalable Text Analytics: Solr Integration with Spark. SIGIR 2019.
- Jimmy Lin and Peilin Yang. The Impact of Score Ties on Repeatability in Document Ranking. SIGIR 2019.
- Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. Critically Examining the "Neural Hype": Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models. SIGIR 2019.
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.