A tool for computing optimal paper-reviewer matches for peer review, subject to constraints and affinity scores. Comes with a simple web server designed for integration with the OpenReview server application.
Clone the GitHub repository and install with pip:
git clone https://github.com/openreview/openreview-matcher.git
pip install ./openreview-matcher
The matcher can be run from the command line. For example:
python -m matcher \
--scores affinity_scores.txt \
--weights 1 \
--min_papers 1 \
--max_papers 10 \
--num_reviewers 3 \
--num_alternates 3
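The arguments above describe an optimization problem: choose reviewers for each paper to maximize total affinity, subject to load constraints. The real matcher formulates this as a flow/LP problem with many more options; the toy sketch below (not the matcher's actual implementation, and with made-up scores) just illustrates the shape of the problem by brute force.

```python
from itertools import combinations, product

# Toy affinity scores: scores[r][p] = affinity of reviewer r for paper p.
# These values are invented purely for illustration.
scores = [
    [0.9, 0.1],   # reviewer 0
    [0.6, 0.7],   # reviewer 1
    [0.2, 0.8],   # reviewer 2
]
num_reviewers_per_paper = 2   # analogous to --num_reviewers
max_papers_per_reviewer = 2   # analogous to --max_papers

def best_assignment(scores, k, max_load):
    """Brute-force the highest-affinity assignment of k reviewers per paper,
    with no reviewer assigned more than max_load papers."""
    n_rev = len(scores)
    n_pap = len(scores[0])
    best, best_score = None, float('-inf')
    # Enumerate every way to pick k reviewers for each paper (fine at toy sizes).
    for assign in product(combinations(range(n_rev), k), repeat=n_pap):
        loads = [0] * n_rev
        for revs in assign:
            for r in revs:
                loads[r] += 1
        if max(loads) > max_load:
            continue  # violates the per-reviewer paper limit
        total = sum(scores[r][p] for p, revs in enumerate(assign) for r in revs)
        if total > best_score:
            best, best_score = assign, total
    return best, best_score

assignment, total = best_assignment(scores, num_reviewers_per_paper,
                                    max_papers_per_reviewer)
print(assignment, round(total, 2))
```

Enumerating all assignments is exponential in the number of papers, which is why the matcher solves this with proper optimization machinery rather than brute force.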
Run the module with the --help flag to learn about the arguments:
python -m matcher --help
The server is implemented in Flask and can be started from the command line:
python -m matcher.service --host localhost --port 5000
By default, the app will run on http://localhost:5000. The endpoint /match/test should show a simple page indicating that Flask is running.
Configuration files are located in /matcher/service/config. When started, the server will search for a .cfg file in /matcher/service/config that matches the environment variable FLASK_ENV, and will default to the values in default.cfg.
For example, with the file /matcher/service/config/development.cfg:
# development.cfg
LOG_FILE='development.log'
OPENREVIEW_USERNAME='OpenReview.net'
OPENREVIEW_PASSWORD='1234'
OPENREVIEW_BASEURL='http://localhost:3000'
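The selection behavior described above can be sketched as follows. This is an assumed model of the lookup, not the server's actual code: pick <FLASK_ENV>.cfg if it exists in the config directory, otherwise fall back to default.cfg.

```python
import os

def select_config(config_dir, env=None, available=None):
    """Return the config filename the server would load: <env>.cfg if present,
    otherwise default.cfg. `available` can be passed explicitly for testing;
    otherwise the directory is listed."""
    env = env or os.environ.get('FLASK_ENV', '')
    if available is None:
        available = os.listdir(config_dir)
    candidate = f'{env}.cfg'
    return candidate if candidate in available else 'default.cfg'

print(select_config('matcher/service/config',
                    env='development',
                    available=['default.cfg', 'development.cfg']))
# development.cfg
```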
Start the server with development.cfg:
FLASK_ENV=development python -m matcher.service
The /tests directory contains unit tests and integration tests (i.e., tests that communicate with an instance of the OpenReview server application), written with pytest.
Running the tests requires MongoDB and Redis to support the OpenReview server instance used in the integration tests.
Before running the integration tests, ensure that mongod and redis-server are running, and that no existing OpenReview instances are active.
Also ensure that OpenReview environment variables are unset:
unset OPENREVIEW_USERNAME
unset OPENREVIEW_PASSWORD
unset OPENREVIEW_BASEURL
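As an optional sanity check, confirm that none of the variables remain set:

```shell
# Lists any OPENREVIEW-prefixed variables still set; prints the message if none remain.
env | grep '^OPENREVIEW' || echo "no OpenReview variables set"
```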
Integration tests use the test_context pytest fixture, which starts a clean, empty OpenReview instance and creates a mock conference.
The entire test suite can be run with the following commands from the top-level project directory:
export OPENREVIEW_HOME=<path_to_openreview>
python -m pytest tests
Individual test modules can be run by passing the module file as an argument:
export OPENREVIEW_HOME=<path_to_openreview>
python -m pytest tests/test_integration.py