AgResearch Scientific Compute Platform Benchmark Suite

This repository contains benchmarks that will be used to assess the fitness for purpose and to measure the performance of the Scientific Compute Platform at AgResearch. Benchmarks are classified into the following categories:

  • Science Workflow - based on science workflows that are frequently run, or are expected to be run, on the platform. These benchmarks exercise the platform's capability and capacity to support strategically important workflows. A workflow is a series of runs of science applications;

  • Storage Performance - synthetic benchmarks that measure the theoretical storage performance of the platform, independent of any particular science workload.

Each individual benchmark has its own README file, which describes the purpose of the benchmark, how to run it, and how to verify its output(s).

This benchmark suite uses binary distributions from the Conda repositories to deploy benchmark programs. In such cases, a Conda environment specification file is included in the benchmark's subdirectory; please follow the benchmark's README file to deploy the program. Some benchmark programs will need to be built from source. Please use the Conda environment specification file included with the benchmark to create a Conda environment for building and running such a program. This approach ensures a stable, although not necessarily optimal, build and execution environment for benchmarking. If the target platform does not have Conda installed, follow the instructions here to install it on the platform.
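
As an illustration, deploying a benchmark from its Conda environment specification file typically involves steps like the following. The subdirectory, file, and environment names below are placeholders; use the names given in the benchmark's own README.

$ cd $BENCHMARK_ROOT/<benchmark-subdirectory>
$ conda env create -f environment.yml
$ conda activate <environment-name>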

Environment Variables

Please update the BENCHMARK_ROOT environment variable in the benchmark.env file included in this repository to match the target platform's local environment. This file must be sourced before deploying and running this benchmark suite.

$ source benchmark.env
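
For example, the relevant line in benchmark.env might look like the following; the path shown is illustrative only, and should point to wherever the benchmark suite is checked out on your platform.

export BENCHMARK_ROOT=/path/to/benchmark/suite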

Getting and Preparing Input Data

To obtain all input data required to execute this benchmark suite, please email a Linux system administrator at AgResearch Ltd and request access to the Globus share from which the input data can be downloaded.

The input data is packaged in a tarball (benchmark_input_data.20190913.taz). Please place the downloaded tarball in the root directory of the benchmark suite, then use the following commands to extract the data:

$ cd $BENCHMARK_ROOT
$ tar xzf benchmark_input_data.20190913.taz
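
If needed, the tarball's contents can be listed to verify that the archive downloaded intact (a minimal sanity check; the expected contents of each input data set are described in the corresponding benchmark's README):

$ tar tzf benchmark_input_data.20190913.taz | head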
