- `src` : Load-Balancing Simulator code
- `doc` : Research papers and related documents
- `data` : Various data inputs or outputs
- `tests` : Unit tests and acceptance tests
Please refer to our documentation for more details.
LBAF currently supports Python 3.8 - 3.11. You can download Python here.
To create and activate a virtual environment:
python -m venv venv
source venv/bin/activate
Note: You can create separate virtual environments for different development branches. For example, a Python 3.8 environment for branch 125 could be named `venv38-branch-125`. Within this environment, you can install `lbaf` as an editable package (see below).
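For instance, a minimal sketch of that workflow, assuming a `python3.8` interpreter is available on your PATH and the repository has been cloned as shown below:
# create and activate a branch-specific Python 3.8 environment
python3.8 -m venv venv38-branch-125
source venv38-branch-125/bin/activate
# then install lbaf in editable mode (the clone command is shown in the next section)
pip install -e LB-analysis-framework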
LBAF can be installed in two ways:
1. Install the LBAF Package (recommended)
Users can easily install the latest release of LBAF with:
pip install lbaf
Developers should clone the repo and install the package in editable mode:
git clone git@github.com:DARMA-tasking/LB-analysis-framework.git
pip install -e LB-analysis-framework
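To confirm that the editable install is visible in your environment, an optional check is:
# should report the lbaf version and its install location
pip show lbaf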
2. Install Dependencies
If you do not wish to install LBAF as a package, simply clone the repo and install dependencies:
git clone git@github.com:DARMA-tasking/LB-analysis-framework.git
pip install -r LB-analysis-framework/requirements.txt
Begin by installing the test dependencies in `requirements.txt`:
pip install tox coverage pylint pytest anybadge
Then, to run all tests locally:
cd <project-path>
tox -e py<x>
where `<x>` is `38`, `39`, `310`, or `311`, depending on your Python version. For example, in an environment with Python 3.8: `tox -e py38`.
The `tox` command will:
- run all tests defined in `tox.ini`
- create the `artifacts` directory in the main project path
- create an HTML coverage report and a pylint report within the `artifacts` directory
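Putting it together, a typical local test run (assuming the repository was cloned into `LB-analysis-framework` and you are using Python 3.10) could look like:
cd LB-analysis-framework
tox -e py310
# coverage and pylint reports are written under the artifacts directory
ls artifacts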
If the `lbaf` package is installed, LBAF can be run using the following command:
lbaf -c <config-file-path>
If dependencies were installed instead, LBAF must be run from source:
cd <project-path>
python src/lbaf/Applications/LBAF_app.py -c <config-file-path>
The configuration file is a YAML file that specifies how LBAF will run.
`<config-file-path>` can be an absolute path or a relative path to your configuration file. A description of each parameter in the configuration file can be found here, and sample configurations can be found in the `config` directory.
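For example, a run using the sample configuration `config/conf.yaml` mentioned later in this document (assuming the `lbaf` package is installed and the command is issued from the project root):
# run LBAF with the sample configuration
lbaf -c config/conf.yaml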
LBAF can optionally leverage `vt-tv`, a DARMA-tasking tool built on top of VTK, to visualize the work-to-rank mappings, communications, and memory usage of a run. To get started, you will need to build VTK (instructions here).
Then, clone the `vt-tv` repository and install the Python bindings:
git clone https://github.com/DARMA-tasking/vt-tv.git
VTK_DIR=/path/to/vtk/build pip install vt-tv
Once `vt-tv` has been installed, you may include visualization parameters in the configuration file. Sample parameters can be found (commented out) at the bottom of `config/conf.yaml`.
For more instructions on building and using `vt-tv`, refer to the documentation.
To print a list of all Quantities of Interest (QOI) supported by LBAF, add a verbosity argument to the run command:
cd <project-path>
lbaf -c <config-file-name> -v <verbosity-level>
or
cd <project-path>
python src/lbaf/Applications/LBAF_app.py -c <config-file-name> -v <verbosity-level>
To output only the Rank QOI, use `-v 1`. To print both Rank and Object QOI, use `-v 2`.
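For example, to print both Rank and Object QOI with the sample configuration (an illustrative invocation, assuming an installed `lbaf` package and a run from the project root):
# list Rank and Object QOI supported by LBAF
lbaf -c config/conf.yaml -v 2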
JSON data files validator
The JSON data files validator validates vt data files against the defined schema. It is located in the vt repository and can be found here.
If the `lbaf` package is installed, run:
lbaf-vt-data-files-validator-loader
Otherwise, run from source:
cd <project-path>
python src/lbaf/Utils/lbsJSONDataFilesValidatorLoader.py
The script will be saved to `<project-path>/src/lbaf/imported/JSON_data_files_validator.py`.
If the `lbaf` package is installed, run:
lbaf-vt-data-files-validator
Otherwise, run from source:
cd <project-path>
python src/lbaf/imported/JSON_data_files_validator.py
Note: This command automatically downloads the `JSON_data_files_validator.py` script if needed.
These commands assume that LBAF was installed as a package. When running from source, replace the run command as noted above.
For a single file:
# With relative path
lbaf-vt-data-files-validator --file_path=../../../data/nolb-8color-16nodes-data/data.0.json
# With absolute path
lbaf-vt-data-files-validator --file_path=<project-path>/data/nolb-8color-16nodes-data/data.0.json
For many files in the same directory:
# With relative path
lbaf-vt-data-files-validator --dir_path=../../../data/nolb-8color-16nodes-data
# With absolute path
lbaf-vt-data-files-validator --dir_path=<project-path>/data/nolb-8color-16nodes-data
# Optionally, one can pass --file_prefix and/or --file_suffix;
# only files with the given prefix and/or suffix will be validated.
# When no prefix or suffix is given, the validator finds the most common
# prefix and suffix in the directory and uses them for validation.
lbaf-vt-data-files-validator --dir_path=../../data/nolb-8color-16nodes-data --file_prefix=data --file_suffix=json
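For instance, the same directory validation can be run from source once the loader has downloaded the script (the data directory below is the sample one used above):
cd <project-path>
# validate all files in the sample data directory using the downloaded script
python src/lbaf/imported/JSON_data_files_validator.py --dir_path=data/nolb-8color-16nodes-data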
vt Data Extractor
The vt Data Extractor extracts phases from vt stats files.
To run using the lbaf package:
lbaf-vt-data-extractor
To run from source:
cd <project-path>
python src/lbaf/Utils/lbsVTDataExtractor.py
Parameters:
- `input_data_dir`: str - path to the directory with files to extract, e.g. `"./data/<dir-with-files>"`
- `output_data_dir`: str - path to the directory where files should be saved, e.g. `"./output"` (created if it doesn't exist)
- `phases_to_extract`: list - list of phases `[int or str]`, e.g. `[0, 1, "2-4"]` will extract phases `[0, 1, 2, 3, 4]`
- `file_prefix`: str - data file prefix, e.g. if the filename is `stats.0.json`, the prefix should be set to `"stats"`
- `file_suffix`: str - data file suffix, e.g. if the filename is `stats.0.json`, the suffix should be set to `"json"`
- `compressed`: bool - when True, `brotli` must be importable and the output data will be compressed
- `schema_type`: str - either `"LBDatafile"` or `"LBStatsfile"`, depending on the input data. Only `"LBStatsfile"` is supported
- `check_schema`: bool - when True, validates the schema (more time-consuming)
vt Data Maker
The vt Data Maker generates a dataset of JSON files that may be used throughout the DARMA-tasking organization. The generated files are compatible with LBAF, `vt-tv`, and `vt`.
If the `lbaf` package is installed, run with:
lbaf-vt-data-files-maker <args>
Otherwise, run:
python src/lbaf/Utils/lbsJSONDataFilesMaker.py <args>
The program can be run interactively with the `--interactive` argument. Otherwise, it accepts a pre-written specification file (`--spec-file`) and the file stem for the resulting data files (`--data-stem`).
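A hypothetical invocation (the specification file name and data stem below are placeholders, not files shipped with the repository):
# generate data files from a pre-written specification (placeholder file names)
lbaf-vt-data-files-maker --spec-file=my_spec.yaml --data-stem=output/data
# or build the specification step by step
lbaf-vt-data-files-maker --interactive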
Further documentation, including usage and examples, can be found within the script itself.
"This paper explores dynamic load balancing algorithms used by asynchronous many-task (AMT), or ‘task-based’, programming models to optimize task placement for scientific applications with dynamic workload imbalances."