Copy the `actions` folder to the `.github` folder of the futhark repository.
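Assuming this repository is checked out next to a checkout of futhark (the paths below are illustrative), this amounts to something like:

```sh
# Illustrative; adjust the paths to your own checkouts.
cp -r actions /path/to/futhark/.github/
```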
The intended way to use `slurmbench.py` is through the existing composite action. An example of this can be seen below.
```yaml
benchmark-titanrtx-opencl:
  runs-on: [titanrtx, opencl]
  needs: [build-linux-nix]
  if: github.repository == 'diku-dk/futhark' && !github.event.pull_request.draft
  steps:
    - uses: actions/checkout@v3
      with:
        submodules: recursive
        fetch-depth: 0
    - uses: ./.github/actions/benchmark
      with:
        backend: opencl
        system: titanrtx
        gpu: titanrtx:1
```
The job will run on any runner that has both of the labels `titanrtx` and `opencl`. If Slurm is installed, `srun` will be used; otherwise the benchmarks will be executed directly on the system without Slurm. The arguments that can be specified are:
- `gpu`: the GPU and the number of GPUs to use. You may write just `gpu: GPU`, in which case it defaults to one GPU. If `gpu: none` is specified, or the input is omitted, no GPU will be requested from Slurm. It is important that the GPU matches the name of a GPU available on the Slurm cluster.
- `backend`: the backend to be used by the Futhark compiler; this can be `cuda`, `opencl`, or `multicore`.
- `system`: the name of the system.
- `options`: extra options to pass to the Futhark compiler.
- `slurm`: extra options to pass to `srun`.
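For reference, a hypothetical step setting every input could look like the following; the system name, GPU, and option values are made up for illustration:

```yaml
- uses: ./.github/actions/benchmark
  with:
    backend: cuda
    system: a100         # illustrative system name
    gpu: a100:1          # must match a GPU available on the Slurm cluster
    options: --some --futhark --flags   # illustrative extra compiler options
    slurm: --some --slurm --flags       # illustrative extra srun options
```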
The workings of `slurmbench.py` are best understood by looking at the code. An example use of `slurmbench.py` is:
```sh
python3.9 slurmbench.py \
  --gpu=gpu \
  --futhark=path/to/futhark/compiler \
  --slurm-options='--some --slurm --flags' \
  --futhark-options='--some --futhark --flags' \
  --json=path/to/result.json \
  --benchmarks=path/to/futhark-benchmarks
```
This script will start a Slurm job using `srun` with the specified flags, running the benchmarks in the `path/to/futhark-benchmarks` folder. The `--gpu` flag must specify a valid GPU and number of GPUs. The benchmarks will be run using the compiler binary at `path/to/futhark/compiler` with the specified flags. The `--json` flag gives the path where the benchmark results will be written.
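To make the flow concrete, here is a minimal sketch of roughly what such an invocation amounts to. This is an illustration under stated assumptions, not the actual implementation: the mapping of the `gpu` value to Slurm's `--gres` flag and the exact `futhark bench` command line are guesses; the real logic lives in `slurmbench.py` itself.

```python
# Illustrative sketch only; see slurmbench.py for the real logic.
# ASSUMPTIONS: the gpu value maps to Slurm's --gres flag, and the
# benchmarks are run via `futhark bench --json`.
import shutil
import subprocess

def run_benchmarks(gpu, futhark, slurm_options, futhark_options, json_path, benchmarks):
    cmd = []
    if shutil.which("srun") is not None:
        # Slurm is available: wrap the benchmark run in srun.
        cmd += ["srun"] + slurm_options.split()
        if gpu != "none":
            # e.g. gpu="titanrtx:1" becomes --gres=gpu:titanrtx:1 (assumed mapping).
            cmd.append(f"--gres=gpu:{gpu}")
    # Run the benchmarks and write the results to a JSON file.
    cmd += [futhark, "bench", "--json", json_path]
    cmd += futhark_options.split()
    cmd.append(benchmarks)
    subprocess.run(cmd, check=True)

run_benchmarks(
    gpu="titanrtx:1",
    futhark="path/to/futhark/compiler",
    slurm_options="--some --slurm --flags",
    futhark_options="--some --futhark --flags",
    json_path="path/to/result.json",
    benchmarks="path/to/futhark-benchmarks",
)
```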
Inside the benchmark example there is a job `benchmark-results` that collects the results. Even if one or more of the benchmark jobs fail, this job will still run and try to upload the results, because of `${{ always() }}`.
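As a rough sketch, such a job could have the following shape; the `always()` condition is what keeps it running after failures. The job body here (the artifact collection step) is an assumption for illustration, not the actual workflow:

```yaml
benchmark-results:
  runs-on: ubuntu-22.04
  needs: [benchmark-titanrtx-opencl]   # plus any other benchmark jobs
  if: ${{ always() }}                  # run even if some benchmark jobs failed
  steps:
    - uses: actions/download-artifact@v3  # illustrative collection step
      with:
        path: results
    # ... upload or publish the collected JSON results here ...
```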