This repository provides a command-line tool (`exabench`) to benchmark hardware for distributed computing.
We use this tool to benchmark cloud provider systems for the cases listed below, which are relevant for materials modeling:
- High-Performance Linpack (HPL)
- Vienna Ab initio Simulation Package (VASP)
- GROMACS
More information about the test cases for each application, including the input sources, is provided inside the corresponding directories.
The latest benchmark results are maintained at this Google Spreadsheet. Please note that the data stored there is preliminary/raw and so might not be accurate.
Readers are welcome to submit their contributions for other hardware and software configurations following the guidelines in the Contribution section below.
It is assumed that the benchmarks are executed on a computing cluster with a resource management system (RMS) such as Torque/PBS, which is supported by default. To support other RMS such as SLURM or LSF, follow the explanation in the Configuration section below.
By default, Environment Modules are used to load the software applications needed by the benchmarks. For systems where Environment Modules are not available, follow the explanation in the Configuration section below.
- Install git-lfs in order to get the files stored on Git LFS.
- Clone the repository into the user home directory:

  ```sh
  git clone git@github.com:Exabyte-io/exabyte-benchmarks.git
  ```
- Install python virtualenv if it is not already installed:

  ```sh
  pip install virtualenv
  ```
- Install the required python packages:

  ```sh
  cd exabyte-benchmarks
  virtualenv env
  source env/bin/activate
  pip install -r requirements.txt
  ```
- Add the `exabench` command to `PATH`:

  ```sh
  export PATH=`pwd`:${PATH}
  ```
- Adjust the job.rms template as necessary. Please note that the template is written for PBS/Torque; to support other resource managers, adjust the RMS directives (`#PBS`) accordingly, as sketched below.
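  For illustration, the kind of directive changes involved might look as follows (a sketch only: the actual contents of job.rms and the resource values here are assumptions):

  ```sh
  # PBS/Torque directives (supported by default), e.g.:
  #PBS -N hpl-01
  #PBS -l nodes=2:ppn=16
  #PBS -l walltime=01:00:00

  # The equivalent SLURM directives would be:
  #SBATCH --job-name=hpl-01
  #SBATCH --nodes=2 --ntasks-per-node=16
  #SBATCH --time=01:00:00
  ```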
- Set the site name and location in settings.py. These settings are important to uniquely identify the sites.
- Adjust the RMS settings in settings.py as necessary, e.g. set `PPN` to the maximum number of cores per node; see the sketch below.
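  A sketch of these settings with illustrative values (`PPN` is referenced above; the site-related key names are assumptions):

  ```python
  # settings.py (illustrative values)
  SITE_NAME = "AWS-NHT"        # unique site identifier (assumed key name)
  SITE_LOCATION = "us-east-1"  # site location (assumed key name)
  PPN = 16                     # maximum number of cores per node
  ```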
- Adjust the `MODULES` settings in settings.py to load the software applications needed by the benchmarks; see the sketch below. If Environment Modules are not installed, adjust the `command` inside the benchmark configs to load the required libraries instead.
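  A sketch of what this setting might look like; the module names, and the assumption that `MODULES` maps each benchmark type to a list of modulefiles, are illustrative only:

  ```python
  # settings.py (hypothetical module names; adjust to the target system)
  MODULES = {
      "hpl": ["intel/2017", "openmpi/2.0.1"],
      "vasp": ["intel/2017", "vasp/5.4.4"],
      "gromacs": ["gcc/4.9.3", "gromacs/5.1.4"],
  }
  ```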
- Adjust the HPL configs. Use the links below to generate the initial configs.
- Prepare the benchmark cases. This creates the case directories, job script files, and case input files:

  ```sh
  exabench --prepare                               # prepares all cases
  exabench --prepare --type hpl --type vasp        # prepares only hpl and vasp cases
  exabench --prepare --name hpl-01 --name hpl-02   # prepares only hpl-{01,02} cases
  ```
- Run the cases and wait for them to finish. Use the `qstat` command to monitor the progress if available:

  ```sh
  exabench --execute                 # execute all cases
  exabench --execute --type hpl      # execute only hpl cases
  exabench --execute --name hpl-01   # execute only hpl-01 case
  ```
- Store and publish the results:

  ```sh
  exabench --results                 # store all results
  exabench --results --type hpl      # store only hpl results
  exabench --results --name hpl-01   # store only hpl-01 results
  ```
- Plot the results:

  ```sh
  exabench --plot --metric PerformancePerCore                                       # compare all sites
  exabench --plot --metric SpeedupRatio --site-name AWS-NHT --site-name AZURE-IB-H  # compare given sites
  ```
Benchmark results are stored in the local cache and are also published to this Google Spreadsheet in the format specified in the schema file (a one-level dictionary with no nested keys).
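For illustration, a published record might look like the following flat dictionary; the field names here are hypothetical, and only the one-level structure is prescribed by the schema:

```json
{
    "siteName": "AWS-NHT",
    "caseName": "hpl-01",
    "type": "hpl",
    "nodes": 2,
    "ppn": 16,
    "metricValue": 845.2
}
```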
Set `PUBLISH_RESULTS` to `False` in settings.py to disable publishing the results to the Google Spreadsheet.
This is an open-source repository and we welcome contributions for other test cases. We suggest forking this repository and introducing the adjustments there. The changes in the fork can then be considered for merging into this repository through a pull request, as is common on GitHub. This process is explained in more detail here.
This section explains how to add new benchmark cases.
- Open the HPL cases file and add the HPL config for the new case; a hypothetical example is shown below.
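  A sketch of what such an entry might look like, modeled on the VASP and GROMACS configs below (the `reference` path and input template name are assumptions):

  ```json
  {
      "name": "hpl-01-16",
      "type": "hpl",
      "reference": "benchmarks.hpl.HPLCase",
      "config": {
          "nodes": 1,
          "ppn": 16,
          "inputs": [
              {
                  "name": "HPL.dat",
                  "template": "benchmarks/hpl/templates/hpl-01.dat"
              }
          ]
      }
  }
  ```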
- Put the POSCAR into the POSCARS directory or reuse an existing one.
- Put the INCAR into the templates directory or reuse an existing one.
- Create a config as below and add it to the VASP `CASES`:

  ```json
  {
      "name": "vasp-elb-01-04",
      "type": "vasp",
      "reference": "benchmarks.vasp.VASPCase",
      "config": {
          "nodes": 1,
          "ppn": 4,
          "kgrid": {
              "dimensions": [1, 1, 1],
              "shifts": [0, 0, 0]
          },
          "inputs": [
              {
                  "name": "INCAR",
                  "template": "benchmarks/vasp/templates/ELB-INCAR"
              },
              {
                  "name": "POSCAR",
                  "template": "benchmarks/vasp/POSCARS/Ba25Bi15O54"
              },
              {
                  "name": "KPOINTS",
                  "template": "benchmarks/vasp/templates/KPOINTS"
              }
          ],
          "pseudos": [
              "/export/share/pseudo/ba/gga/pbe/vasp/5.2/paw/sv/POTCAR",
              "/export/share/pseudo/bi/gga/pbe/vasp/5.2/paw/default/POTCAR",
              "/export/share/pseudo/o/gga/pbe/vasp/5.2/paw/default/POTCAR"
          ]
      }
  }
  ```
- Adjust `inputs` according to step 1 and step 2.
- Adjust the `pseudos` accordingly. It contains a list of absolute paths to the pseudopotential files, ordered by the elements in the POSCAR file, which are concatenated together to form the POTCAR.
Adjust
kgrid
as necessary. The object is passed toKPOINTS
template specified ininputs
to create KPOINTS file. AdjustKPOINTS
template or add new ones for extra parameters.
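For reference, a sketch of what the KPOINTS template could look like, assuming the `kgrid` object's `dimensions` and `shifts` fill Python-format-style placeholders (the exact templating syntax used by the tool is an assumption):

```text
Automatic mesh
0
Monkhorst-Pack
{dimensions[0]} {dimensions[1]} {dimensions[2]}
{shifts[0]} {shifts[1]} {shifts[2]}
```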
- Put the tpr file into the inputs directory or reuse an existing one.
- Create a config as below and add it to the GROMACS `CASES`:

  ```json
  {
      "name": "model-1-01-04",
      "type": "gromacs",
      "reference": "benchmarks.gromacs.GROMACSCase",
      "config": {
          "nodes": 1,
          "ppn": 4,
          "inputs": [
              {
                  "name": "md.tpr",
                  "template": "benchmarks/gromacs/inputs/model-1/md.tpr"
              }
          ],
          "command": "source GMXRC.bash; mpirun -np $PBS_NP gmx_mpi_d mdrun -ntomp 1 -s md.tpr -deffnm md"
      }
  }
  ```
- Create a class inside the metrics package and inherit it from the base Metric class, e.g. PerformancePerCore.
- Implement the `config` and `plot` methods accordingly.
- Register the new metric inside `METRICS_REGISTRY`; a minimal sketch of these steps is given below.
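  A minimal sketch, assuming the base class lives in the metrics package (the import path, method bodies, and registration style are assumptions):

  ```python
  # metrics/performance_per_core.py -- illustrative only
  from metrics.base import Metric  # assumed location of the base Metric class


  class PerformancePerCore(Metric):
      def config(self):
          # Plot configuration for this metric, e.g. title and axis labels.
          return {
              "title": "Performance Per Core",
              "yAxisTitle": "GFLOPS per core",
          }

      def plot(self):
          # Build the chart from the stored results for the selected sites.
          ...


  # Register the new metric so that `exabench --plot --metric PerformancePerCore`
  # can resolve it (registry location and shape are assumptions):
  METRICS_REGISTRY = {
      "PerformancePerCore": PerformancePerCore,
  }
  ```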