Computational efficiency is essential to simulate complex neuronal networks and study long-term effects such as learning.
The scaling performance of neuronal network simulators on high-performance computing systems can be assessed with benchmark simulations.
However, maintaining comparability of benchmark results across different systems, software environments, network models, and researchers from potentially different labs poses a challenge.
beNNch tackles this challenge by implementing a unified, modular workflow for configuring, executing, and analyzing such benchmarks.
The software framework is built around the JUBE Benchmarking Environment; it installs simulation software, provides an interface to benchmark models, automates data and metadata annotation, and takes care of storing and presenting the results.
For more details on the conceptual ideas behind beNNch, refer to our preprint (https://arxiv.org/abs/2112.09018):
"A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations"
Jasper Albers, Jari Pronold, Anno Kurth, Stine Brekke Vennemo, Kaveh Haghighi Mood, Alexander Patronis, Dennis Terhorst, Jakob Jordan, Susanne Kunkel, Tom Tetzlaff, Markus Diesmann, Johanna Senk (2021)
Example beNNch output (Figure 5C of Albers et al., 2021):
Strong-scaling performance of the multi-area model simulated with the neuronal network simulator NEST on JURECA-DC.
The left graph shows the absolute wall-clock time measured with Python-level timers for both network construction and state propagation.
Error bars indicate variability across three simulation repeats with different random seeds.
The top right graph displays the real-time factor defined as wall-clock time normalized by the model time.
Built-in timers resolve four different phases of the state propagation: update, collocation, communication, and delivery.
Pink error bars show the same variability of the state propagation as in the left graph.
The lower right graph shows the relative contribution of these phases to the state-propagation time.
See also the accompanying GitHub Page for further beNNch results in flip-book format.
directory | description |
---|---|
analysis | scripts for data and metadata analysis |
benchmarks | JUBE benchmark scripts for select neuroscientific models |
config | templates for user configuration files to be copied and adapted |
flipbook | script for generating a comparative flip book |
helpers | JUBE helper functions and parameter sets |
models | git submodule; the linked repository (https://github.com/INM-6/beNNch-models) contains NEST network models adapted to work with beNNch |
plot | git submodule; the linked repository (https://github.com/INM-6/beNNch-plot) contains predefined plotting routines designed to process the performance results and provide a standardized plotting format |
results | git submodule; the repository linked by default (https://gin.g-node.org/nest/beNNch-results.git) is private. To change this link to your own results repository, see the optional step in Initialization. Make sure your repository works with git-annex. |
- Download the git submodules:
git submodule init
git submodule update
- optional: if you want to change the URL of any of the submodules (requires git v2.25.0 or later; an example for the results submodule is given after this list):
git submodule set-url -- <submodule> <new_url>
git submodule update --remote
- Install benchplot as a Python module:
pip install -e plot --user
- git-annex can, for example, be installed via:
wget 'http://downloads.kitenet.net/git-annex/linux/current/git-annex-standalone-amd64.tar.gz'
tar -xzf git-annex-standalone-amd64.tar.gz
export PATH=$PATH:<install_path>/git-annex.linux
- JUBE
Note that if you are using the JUBE version 2.4.1 or lower, the following export command is required for executing benchmarks due to a known bug. Once the bug is fixed, the export will become unnecessary and the documentation here will be updated accordingly.
export JUBE_INCLUDE_PATH="<PATH_TO_REPO>config/:helpers/"
- Builder: see the Builder documentation for an installation guide
- Python 3.X
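For instance, if you want the results submodule (see the optional set-url step above) to point to your own repository, the commands could look as follows; the URL is a placeholder and has to be replaced by your actual repository address:

```bash
# Hypothetical example: redirect the "results" submodule to your own repository
# and fetch its contents (replace the URL with your repository's address).
git submodule set-url -- results git@gin.g-node.org:/<your_user>/beNNch-results.git
git submodule update --remote results
```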
For the following network models, there is currently a NEST implementation in the models submodule and a corresponding JUBE benchmark script in the benchmarks/ folder:
- Multi-Area Model: multi-area-model_2 for usage with NEST 2, multi-area-model_3 for usage with NEST 3
- Microcircuit: microcircuit
- HPC Benchmark: hpc_benchmark_2 for usage with NEST 2, hpc_benchmark_3 for usage with NEST 3.0, hpc_benchmark_31 for usage with NEST 3.1
Make a copy of the template config file for user parameters and fill it in:
cp config/templates/user_config_template.yaml config/user_config.yaml
Also copy and fill in the parameter file with model-specific parameters:
cp config/templates/<model>_config_template.yaml config/<model>_config.yaml
In the model config file, you can specify the software (i.e. the simulator), its version, and a variant (allowing the software to be installed with different dependencies) that you want to benchmark. For example: software = nest-simulator, version = 3.0, variant = gcc9.3. For convenience, you may also add a suffix.
To install software for which a plan file does not yet exist (e.g. a new dependency or simulator), you need to configure Builder by adding a common file, which spells out the installation steps shared between all variants, to
<path/to/Builder>/plans/<software>/commons
Note that Builder already provides a common file for nest-simulator.
If the common file for the simulator or software you wish to install already exists and you only want to add a new version or variant, add both a plan file and a module file template to
<path/to/Builder>/plans/<software>/<version>/<{variant, variant.module}>
In the variant file, you state the source location of the software as well as the chosen dependencies. If you use the module system for loading dependencies, add the corresponding module load commands to this file. See, as an example, the nest-simulator plan files that Builder ships with.
Specific to NEST benchmarking: don't forget to include -Dwith-detailed-timers=ON in the CMAKEFLAGS if you want to have access to C++-level timers.
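The exact syntax of plan files is defined by Builder, so the following is only a rough, hypothetical sketch of the kind of information a variant file carries (all module names and the source URL are assumptions for illustration): the dependency modules, the source location of the software, and the CMake flags including the detailed-timers switch mentioned above.

```bash
# Rough, hypothetical sketch of a nest-simulator variant plan file; not Builder's
# actual syntax -- consult the nest-simulator plans shipped with Builder.
module load GCC/9.3.0 ParaStationMPI CMake Python                               # example dependency modules
SOURCE="https://github.com/nest/nest-simulator/archive/refs/tags/v3.0.tar.gz"   # example source location
CMAKEFLAGS="-Dwith-mpi=ON -Dwith-detailed-timers=ON"                            # enables the C++-level timers
```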
The JUBE benchmarking scripts can be found in benchmarks/.
To run a benchmark, execute:
jube run benchmarks/<model>.yaml
JUBE displays a table summarizing the submitted job(s) and the corresponding job id.
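For example, to benchmark the microcircuit model (assuming its config file has been created and filled in as described above):

```bash
# Run the microcircuit benchmark; note the job id in JUBE's summary table,
# as it is needed for the analysis step described below.
jube run benchmarks/microcircuit.yaml
```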
First, create a new instance of the analysis configuration with
cp config/templates/analysis_config_template.yaml config/analysis_config.yaml
Here, fill in:
- whether the scaling benchmark runs across threads or nodes. This sets up a quick, glanceable plot of the benchmark to confirm that no substantial errors occurred. beNNch provides defaults for plotting timers across nodes and threads, but alternatives can be readily implemented by adding to analysis/plot_helpers.py.
- the path to the JUBE output (usually the same as the outpath of the <benchmark> in benchmarks/<model>).
To start the analysis, execute
cd results
- optional: initialize for the first time
git pull origin main
git checkout main
git annex init
git annex sync
python ../analysis/analysis.py <id>
where <id> is the job id of the benchmark you want to analyze.
For sharing, upload the results to the central repository via
git annex sync
To fetch results from a remote repository:
cd results
- optional: add a new remote via git remote add <name> <location>, e.g. git remote add jureca <username>@jureca.fz-juelich.de:<PATH/TO/REPO>/results
git fetch <name>
git annex get
To compare and plot results, change into the results directory again:
cd results
First, filter which benchmarks to plot using the following syntax:
git annex view <common_metadata>="<value_of_metadata>" <differing_metadata>="*"
Here, common_metadata refers to a key that should have the same value for all benchmarks, e.g. the "machine". A list of all available metadata keys can be obtained via git annex metadata <uuidgen_hash>.csv. One can also use * here, e.g. to filter for all runs that include simulations done on 10 nodes via num_nodes='*,10*', or for all machines that have jureca in their name via machine='*jureca*'. To specify multiple numerical values, use keyword={value1,value2}.
- example: git annex view machine="jureca" model_name="microcircuit" nest="*" num_vps="*"
Note that this changes the local file structure; the values corresponding to the <differing_metadata> determine the names of the folders in a hierarchical fashion. In the example above, the top level would consist of folders named after the values of nest (e.g. nest-simulator/2.14.1, nest-simulator/2.20.2 and nest-simulator/3.1), with each of those containing folders named after the number of virtual processes of the simulations (e.g. 4, 8, 16). Rearranging the order of <differing_metadata> in the command above also reorders the hierarchical file structure.
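To inspect the hierarchy such a view creates, you can simply list the directories; the sketched output below reuses the example folder names from the text and is purely illustrative:

```bash
# List the folder hierarchy created by the metadata view (names are illustrative)
find . -type d -not -path '*/.git*'
# ./nest-simulator/2.14.1/4
# ./nest-simulator/2.14.1/8
# ./nest-simulator/2.14.1/16
# ./nest-simulator/2.20.2/...
# ./nest-simulator/3.1/...
```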
To "go back" a view, execute
git annex vpop
After choosing which benchmarks to display via filtering above and ordering them via <differing_metadata>, you can create a flip book of all plots with
python ../flipbook/flipbook.py <scaling_type> <bullet_1> <bullet_2> ...
with an arbitrarily long list of bullet items (consisting of metadata keys) that appear as bullet points on the slides for comparison. <scaling_type> defines the style of plotting, cf. the section on analyzing benchmarks above.
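A hypothetical invocation, reusing the metadata keys from the filtering example above (whether nodes is a valid <scaling_type> depends on the defaults defined in analysis/plot_helpers.py):

```bash
# Hypothetical call: node-scaling plots, with machine, NEST version, and number
# of virtual processes shown as bullet points on each slide for comparison.
python ../flipbook/flipbook.py nodes machine nest num_vps
```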
- error jinja2.exceptions.TemplateNotFound: index.html.j2: this is an issue with recent versions of nbconvert; try installing version 5.6.1 instead (e.g. pip install nbconvert==5.6.1 --user)
benchmarks/template.yaml provides a template for a JUBE benchmarking script and can be used as a starting point for adding a new model. Here, only the marked section needs to be adapted. As a reference, see the implementation of the microcircuit in benchmarks/microcircuit.yaml.
In addition, minor modifications to a regular network model need to be made in order to comply with beNNch's standards. In particular, this concerns how JUBE feeds the configuration parameters to the network and how JUBE reads the performance measurement output.
A new model needs to be able to receive input from JUBE for setting parameters. Following the substituteset defined in benchmarks/template.yaml, all listed source keys need to be initialized. In addition, the corresponding target keys need to be defined in a config file. Use models/Potjans_2014/run_bm_microcircuit.py as a reference for the former and config/templates/microcircuit_config_template.yaml for the latter.
As current releases of NEST (including 2.14.1, 2.20.2 and 3.0+) include timers on the C++ level for measuring simulation performance, the model only needs to output this information in a way compliant with beNNch. This can be done by adding a call to the logging function defined in models/Potjans_2014/bm_helpers.py. Note that this also provides optional functionality to include Python-level timers as well as memory information.
We recommend providing a link to this repository together with the hash of the respective commit.