diff --git a/.gitignore b/.gitignore
index b5d2970509..45fbca1d33 100644
--- a/.gitignore
+++ b/.gitignore
@@ -16,4 +16,5 @@ STRU_READIN_ADJUST.cif
 *.egg-info
 build
 dist
-.idea
\ No newline at end of file
+.idea
+toolchain.tar.gz
diff --git a/docs/quick_start/easy_install.md b/docs/quick_start/easy_install.md
index 18fc410cea..fe6136278e 100644
--- a/docs/quick_start/easy_install.md
+++ b/docs/quick_start/easy_install.md
@@ -39,6 +39,18 @@ We recommend [Intel® oneAPI toolkit](https://software.intel.com/content/www/us/
 
 Please refer to our [guide](https://github.com/deepmodeling/abacus-develop/wiki/Building-and-Running-ABACUS) on installing requirements.
 
+## Install requirements by toolchain
+
+We offer a set of [toolchain](https://github.com/deepmodeling/abacus-develop/toolchain)
+scripts that compile and install all of the requirements automatically,
+tailored to the characteristics of your machine, in either an online or an offline way.
+The toolchain ships with the ABACUS repository, is easy to use, and makes
+installation in HPC environments convenient with both the `GNU` and `Intel-oneAPI` toolchains.
+A toolchain-based installation of ABACUS can sometimes also deliver highly efficient performance.
+
+> Notice: the toolchain is still under development; please report any problem you encounter while using it.
+
+
 ## Get ABACUS source code
 
 Of course a copy of ABACUS source code is required, which can be obtained via one of the following choices:
diff --git a/toolchain/Details.md b/toolchain/Details.md
new file mode 100644
index 0000000000..631d8b8bdf
--- /dev/null
+++ b/toolchain/Details.md
@@ -0,0 +1,304 @@
+# The ABACUS Toolchain
+
+## Author
+QuantumMisaka (Zhaoqing Liu) @PKU @AISI
+
+Inspired by the cp2k-toolchain, and still being improved.
+
+## Options
+
+Before you use the toolchain installer, you SHOULD put it under the ABACUS source directory:
+
+```shell
+> mv abacus_toolchain path/to/abacus/
+```
+
+To use the ABACUS toolchain installer, you may want to first follow
+the instructions given in the installer help message:
+
+```shell
+> ./install_ABACUS_toolchain.sh --help
+```
+
+## Basic usage
+
+If you are new to ABACUS and just want a basic ABACUS binary, then simply calling
+
+```shell
+> ./install_ABACUS_toolchain.sh
+```
+
+may be enough. This will use your system gcc and MPI library (if
+present), build scalapack, fftw, openblas (MKL will be used
+instead if the MKLROOT env variable is found) and libxc from scratch,
+and give you compiled libraries that allow you to compile ABACUS.
+
+
+## Complete toolchain build
+
+For a complete toolchain build, with everything installed from
+scratch, use:
+
+```shell
+> ./install_ABACUS_toolchain.sh --install-all
+```
+
+### Package settings
+
+One can then change the settings for individual packages by adding
+`--with-PKG` options after the `--install-all` option, e.g.:
+
+```shell
+> ./install_ABACUS_toolchain.sh --install-all --with-mkl=system
+```
+
+will make the script look for a system MKL library to link, while
+compiling the other packages from scratch.
+
+
+### MPI implementation choice
+
+If you do not have an MPI installation, by default the `--install-all`
+option will install MPICH for you. You can change this default
+behavior by setting `--mpi-mode` after the `--install-all` option.
+
+## Troubleshooting
+
+Below are solutions to some of the common problems you may encounter when
+running this script.
+
+### The script terminated with an error message
+
+Look at the error message.
+If it does not indicate the reason for failure, then it is likely
+that some error occurred during compilation of the package. You can
+look at the compiler log in the file `make.log` in the source
+directory of the package in `./build`.
+
+One of the causes on some systems may be that too many parallel make
+processes were initiated. By default the script tries to use all of
+the processors on your node. You can override this behavior using
+the `-j` option.
+
+### The script failed at a tarball downloading stage
+
+Try running again with the `--no-check-certificate` option. See the
+help section for this option for details.
+
+### I've used --with-XYZ=system but the XYZ library cannot be found
+
+The installation script in "system" mode will try to find a library
+in the following system PATHS: `LD_LIBRARY_PATH`, `LD_RUN_PATH`,
+`LIBRARY_PATH`, `/usr/local/lib64`, `/usr/local/lib`, `/usr/lib64`,
+`/usr/lib`.
+
+For MKL libraries, the installation script will look for the
+MKLROOT environment variable.
+
+You can use:
+
+```shell
+> module show XYZ
+```
+
+to see exactly what happens when the module XYZ is loaded into your
+system. Sometimes a module will define its own PATHS and
+environment variables that are not in the default installation
+script search path. As a result, the given library will likely
+not be found.
+
+The simplest solution is perhaps to find where the root
+installation directory of the library or package is, and then use
+`--with-XYZ=/some/location/to/XYZ` to tell the script exactly where
+to look for the library.
+
+## Licenses
+
+The toolchain only downloads and installs packages that are
+[compatible with the GPL](https://www.gnu.org/licenses/gpl-faq.html#WhatDoesCompatMean).
+The following table lists the licenses of all those packages. While the toolchain
+does support linking proprietary software packages, e.g. MKL, these have to
+be installed separately by the user.
+
+| Package   | License                                                                          | GPL Compatible |
+| --------- | -------------------------------------------------------------------------------- | -------------- |
+| cmake     | [BSD 3-Clause](https://gitlab.kitware.com/cmake/cmake/raw/master/Copyright.txt)   | Yes |
+| elpa      | [LGPL](https://gitlab.mpcdf.mpg.de/elpa/elpa/blob/master/LICENSE)                 | Yes |
+| fftw      | [GPL](http://www.fftw.org/doc/License-and-Copyright.html)                         | Yes |
+| gcc       | [GPL](https://gcc.gnu.org/git/?p=gcc.git;a=blob_plain;f=COPYING;hb=HEAD)          | Yes |
+| libxc     | [MPL](https://gitlab.com/libxc/libxc/blob/master/COPYING)                         | Yes |
+| mpich     | [MPICH](https://github.com/pmodels/mpich/blob/master/COPYRIGHT)                   | [Yes](https://enterprise.dejacode.com/licenses/public/mpich/#license-conditions) |
+| openblas  | [BSD 3-Clause](https://github.com/xianyi/OpenBLAS/blob/develop/LICENSE)           | Yes |
+| openmpi   | [BSD 3-Clause](https://github.com/open-mpi/ompi/blob/master/LICENSE)              | Yes |
+| scalapack | [BSD 3-Clause](http://www.netlib.org/scalapack/LICENSE)                           | Yes |
+
+## For Developers
+
+### Structure of the toolchain scripts
+
+- `install_ABACUS_toolchain.sh` is the main script that will call all
+  other scripts. It contains the default flag settings, the user input
+  parser, the calls to each package installation script, and the
+  generator of the ABACUS arch files.
+
+- `script/install_*.sh` are the installation scripts for individual
+  packages. They are relatively independent, in the sense that
+  running `script/install_PKG.sh` should install the package on its own.
+  However, in practice, due to dependencies on other libraries, a
+  package installed this way may require other libraries to be
+  already installed and the correct environment variables to be set.
+  At the end of each script, it should write to __two__ files:
+  `build/setup_PKG` and `install/setup`.
+
+  - The `build/setup_PKG` file contains all the instructions to set
+    the variables used by the `install_ABACUS_toolchain.sh` and other
+    `script/install_PKG.sh` scripts in order for them to correctly
+    compile the toolchain and set the correct library flags for the
+    arch files.
+  - The `install/setup` file contains all the instructions for setting
+    up the correct environment before the user can compile and/or
+    run ABACUS.
+
+- `script/toolkit.sh` contains all the macros that may be used by all
+  of the scripts, and provides functionalities such as prepending a
+  path, checking if a library exists, etc.
+
+- `script/common_var.sh` contains all of the common variables used by
+  each installation script. All of the variables in the file should
+  have a default value, but allow the environment to set the values,
+  using: `VAR=${VAR:-default_value}`.
+
+- `script/parse_if.py` is a Python script for parsing the `IF_XYZ(A|B)`
+  constructs in the scripts. Nested structures will be parsed
+  correctly. See
+  [`IF_XYZ` constructs](./README_FOR_DEVELOPERS.md#the-if_xyz-constructs) below.
+
+- `checksums.sha256` contains the pre-calculated SHA256 checksums for
+  the tarballs of all of the packages. This is used by the
+  `download_pkg` macro in `script/toolkit.sh`.
+
+- `arch_base.tmpl` contains the template skeleton structure for the
+  arch files. The `install_ABACUS_toolchain` script will set all the
+  variables used in the template file, and then do an eval to expand
+  all of the `${VARIABLE}` items in `arch_base.tmpl` to give the ABACUS
+  arch files.
+
+### `enable-FEATURE` options
+
+The `enable-FEATURE` options control whether a FEATURE is enabled or disabled.
+Possible values are:
+
+- `yes` (equivalent to using the option-keyword alone)
+- `no`
+
+### `with_PKG` and `PKG_MODE` variables
+
+The `with_PKG` option controls how a package is going to be installed:
+
+- either compiled and installed from downloaded sources
+  (`install`, or the option-keyword alone),
+- or linked to locations provided by system search paths (`system`),
+- or linked to locations provided by the user (`<path>`, a path to some directory),
+- or not used at all (`no`).
+
+For most packages the `with_PKG` variables act like a switch for
+turning support for the package on or off. However, for packages
+serving the same purpose, of which the installer needs only one, an
+extra variable `PKG_MODE` (e.g. `MPI_MODE`) is used as a selector.
+In this case, while `with_PKG` controls the installation method,
+the `PKG_MODE` variable picks which package to actually use.
+This provides more flexibility.
+
+### The IF_XYZ constructs
+
+Because `install_ABACUS_toolchain.sh` needs to produce several
+different versions of the arch files (`psmp`, `pdbg`, `ssmp`, `sdbg`,
+etc.), it has to resolve different flags for different arch file
+versions.
+
+The solution used by this script is the syntax construct:
+
+```shell
+IF_XYZ(A | B)
+```
+
+A parser will then resolve this expression to *A* if *XYZ* is passed
+to the parser (`python parse_if.py filename XYZ`), and to *B* if *XYZ*
+is not passed as a command line option (`python parse_if.py filename`).
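+
+For example, with an (illustrative) template file `arch.tmpl` containing the
+line `FCFLAGS = IF_MPI(-D__parallel|) -O2`, the construct would be resolved
+roughly as follows; the invocation form is the one described above, while the
+shown results are only a sketch of the behaviour:
+
+```shell
+> python parse_if.py arch.tmpl MPI   # IF_MPI picks the left branch:  FCFLAGS = -D__parallel -O2
+> python parse_if.py arch.tmpl       # IF_MPI picks the right branch: FCFLAGS =  -O2
+```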
+
+The `IF_XYZ(A|B)` construct can be nested, so things like:
+
+```shell
+IF_XYZ(IF_ABC(flag1|flag2) | flag3)
+```
+
+will parse to *flag1* if both *XYZ* and *ABC* are present in the command
+line arguments of `parse_if.py`, to *flag2* if only *XYZ* is present,
+and to *flag3* if nothing is present.
+
+### To ensure portability
+
+- one should always pass compiler flags through the
+  `allowed_gcc_flags` and `allowed_gfortran_flags` filters in
+  `scripts/toolkit.sh` to omit any flags that are not supported by
+  the gcc version used (or installed by this script).
+
+- note that `allowed_gcc_flags` and `allowed_gfortran_flags` do not work
+  with `IF_XYZ` constructs. So if you have something like:
+
+```shell
+FCFLAGS="IF_XYZ(flag1 flag2 | flag3 flag4)"
+```
+
+then you should break this into:
+
+```shell
+XYZ_TRUE_FLAGS="flags1 flags2"
+XYZ_FALSE_FLAGS="flags3 flags4"
+# do filtering
+XYZ_TRUE_FLAGS="$(allowed_gcc_flags $XYZ_TRUE_FLAGS)"
+XYZ_FALSE_FLAGS="$(allowed_gcc_flags $XYZ_FALSE_FLAGS)"
+```
+
+so that:
+
+```shell
+FCFLAGS="IF_XYZ($XYZ_TRUE_FLAGS | $XYZ_FALSE_FLAGS)"
+```
+
+- For any intrinsic Fortran modules that may be used, it is best to
+  check with the `check_gfortran_module` macro defined in
+  `script/tool_kit.sh`. Depending on the gcc version, some intrinsic
+  modules may not exist.
+
+- Avoid hard coding as much as possible: e.g., instead of setting:
+
+```shell
+./configure --prefix=some_dir CC=mpicc FC=mpif90
+```
+
+use the common variables:
+
+```shell
+./configure --prefix=some_dir CC=${MPICC} FC=${MPIFC}
+```
+
+## To keep maintainability, it is recommended to follow these practices
+
+- Reuse as much functionality as possible from the macros defined in
+  `script/toolkit.sh`.
+
+- When the existing macros in `script/toolkit.sh` do not provide the
+  functionality you want, it is better to write the new
+  functionality as a macro in `script/toolkit.sh`, and then use the
+  macro (repeatedly if required) in the actual installation
+  script. This keeps the installation scripts uncluttered and more
+  readable.
+
+- All packages should install into their own directories, with a
+  lock file created in their respective directory to indicate that
+  the installation has been successful. This allows the script to skip
+  the compilation stages of already installed packages if the
+  user terminated the toolchain script in the middle of a run and
+  then restarted it.
diff --git a/toolchain/README.md b/toolchain/README.md
new file mode 100644
index 0000000000..3c35262784
--- /dev/null
+++ b/toolchain/README.md
@@ -0,0 +1,113 @@
+# The ABACUS Toolchain
+Version 2023.3
+
+## Author
+[QuantumMisaka](https://github.com/QuantumMisaka)
+(Zhaoqing Liu) @PKU @AISI
+
+Inspired by the cp2k-toolchain, and still being improved.
+
+## Introduction
+
+This toolchain will help you easily compile and install,
+or link to, the libraries ABACUS depends on,
+in an ONLINE or OFFLINE way,
+and generate setup files that you can use to compile ABACUS.
+
+## Todo
+- [x] `gnu-openblas` toolchain support for `openmpi` and `mpich`.
+- [x] `intel-mkl-mpi` toolchain support using `icc` or `icx` (the `icx` build of ABACUS still has some problems).
+- [x] `intel-mkl-mpich` toolchain support (needs more testing).
+- [x] Automatic installation of [CEREAL](https://github.com/USCiLab/cereal) and [LIBNPY](https://github.com/llohse/libnpy) (from github.com).
+- [x] Support for [LibRI](https://github.com/abacusmodeling/LibRI) in the `intel-mkl` toolchain
+  (LibRI does not support `gnu`).
+- [ ] A better mirror site for all packages, especially for CEREAL and LIBNPY.
+- [ ] A better README and Details markdown file.
+- [ ] Automatic installation of [DEEPMD](https://github.com/deepmodeling/deepmd-kit).
+- [ ] A better compilation method for ABACUS-DEEPMD and ABACUS-DEEPKS.
+- [ ] A better `setup` and toolchain code structure.
+- [ ] Modulefile generation scripts.
+- [ ] Support for the `acml` toolchain (scripts are already partly in the toolchain).
+- [ ] Support for GPU compilation.
+
+
+## Usage Online & Offline
+
+The main script is `install_abacus_toolchain.sh`,
+which uses the scripts in the `scripts` directory
+to compile and install the dependencies of ABACUS.
+
+```shell
+> ./install_ABACUS_toolchain.sh
+```
+
+All packages will be downloaded from [cp2k-static/download](https://www.cp2k.org/static/downloads) by `wget`,
+then compiled and installed into the `install` directory by the toolchain scripts,
+except for `cereal`, which will be downloaded from [CEREAL](https://github.com/USCiLab/cereal),
+and `libnpy`, which will be downloaded from [LIBNPY](https://github.com/llohse/libnpy).
+
+If one wants to install ABACUS via the toolchain OFFLINE,
+one can manually download all the packages, put them in the `build` directory,
+and then run this toolchain.
+All packages will be detected and installed automatically.
+One can also install some packages OFFLINE and the others ONLINE
+with this toolchain:
+
+```shell
+# for OFFLINE installation
+# in toolchain directory
+> mkdir build
+> cp ***.tar.gz build/
+```
+
+There are also pre-configured scripts that run `install_abacus_toolchain.sh`
+for the `gnu-openblas` and `intel-mkl` toolchain dependencies.
+
+```shell
+# for gnu-openblas
+> ./toolchain_gnu.sh
+# for intel-mkl
+> ./toolchain_intel.sh
+# for intel-mkl-mpich
+> ./toolchain_intel-mpich.sh
+```
+
+Users can easily compile and install the dependencies of ABACUS
+by running these scripts after loading the `gcc` or `intel-mkl-mpi`
+environment.
+
+The toolchain installation process can be interrupted at any time;
+just re-run `install_abacus_toolchain.sh` and the toolchain will usually
+recover and continue on its own.
+
+If compilation is successful, a message like this will be shown:
+
+```shell
+> Done!
+> To use the installed tools and libraries and ABACUS version
+> compiled with it you will first need to execute at the prompt:
+> source ./install/setup
+> To build ABACUS by gnu-toolchain, just use:
+> ./build_abacus_gnu.sh
+> To build ABACUS by intel-toolchain, just use:
+> ./build_abacus_intel.sh
+> or you can modify the builder scripts to suit your needs.
+```
+
+Then, after `source path/to/install/setup`, one can simply
+run the builder scripts to build the ABACUS binary.
+
+If users want to use the toolchain but lack some system library
+dependencies, the `install_requirements.sh` script will help.
+
+If users want to re-install all the packages, just do:
+```shell
+> rm -rf install/*
+```
+or, to do it more thoroughly:
+```shell
+> rm -rf install/* build/*/* build/OpenBLAS*/
+```
+
+Users can get the help message by simply running:
+```shell
+> ./install_abacus_toolchain.sh -h # or --help
+```
+
+
+## More
+
+More information can be found in `Details.md`,
+which is a lightly adapted version of the cp2k-toolchain README.
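+
+To recap the steps above, a typical `gnu-openblas` workflow might look like
+the following sketch (assuming a suitable `gcc` environment is already loaded;
+adapt the commands to your own machine):
+
+```shell
+# in the toolchain directory
+> ./toolchain_gnu.sh          # compile and install the dependencies
+> source ./install/setup      # load the generated environment
+> ./build_abacus_gnu.sh       # build ABACUS with the gnu toolchain
+```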
\ No newline at end of file diff --git a/toolchain/build_abacus_gnu.sh b/toolchain/build_abacus_gnu.sh new file mode 100755 index 0000000000..c05a85192f --- /dev/null +++ b/toolchain/build_abacus_gnu.sh @@ -0,0 +1,58 @@ +#!/bin/bash +#SBATCH -J build +#SBATCH -N 1 +#SBATCH -n 16 +#SBATCH -o build_abacus.log +#SBATCH -e build_abacus.err +# install ABACUS with libxc and deepks +# JamesMisaka in 2023.08.31 + +# Build ABACUS by gnu-toolchain + +#rm -rf ../build +# module load openmpi + +TOOL=$(pwd) +ABACUS_DIR=.. +source ./install/setup +cd $ABACUS_DIR + +PREFIX=. +BUILD_DIR=build_abacus +LAPACK=$TOOL/install/openblas-0.3.23/lib +SCALAPACK=$TOOL/install/scalapalack-2.2.1/lib +ELPA=$TOOL/install/elpa-2021.11.002/cpu +FFTW3=$TOOL/install/fftw-3.3.10 +CEREAL=$TOOL/install/cereal-1.3.2/include/cereal +LIBXC=$TOOL/install/libxc-6.2.2 +# LIBTORCH=$TOOL/install/libtorch-2.0.1/share/cmake/Torch +# LIBNPY=$TOOL/install/libnpy-0.1.0/include +# DEEPMD=$HOME/apps/anaconda3/envs/deepmd + +cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \ + -DCMAKE_CXX_COMPILER=g++ \ + -DMPI_CXX_COMPILER=mpicxx \ + -DLAPACK_DIR=$LAPACK \ + -DSCALAPACK_DIR=$SCALAPACK \ + -DELPA_DIR=$ELPA \ + -DFFTW3_DIR=$FFTW3 \ + -DCEREAL_INCLUDE_DIR=$CEREAL \ + -DLibxc_DIR=$LIBXC \ + -DENABLE_LCAO=ON \ + -DENABLE_LIBXC=ON \ + -DUSE_OPENMP=ON \ + -DENABLE_ASAN=OFF \ + -DUSE_ELPA=ON \ + | tee configure.log +# -DENABLE_LIBRI=ON \ +# -DENABLE_DEEPKS=1 \ +# -DTorch_DIR=$LIBTORCH \ +# -Dlibnpy_INCLUDE_DIR=$LIBNPY \ +# -DDeePMD_DIR=$DEEPMD \ +# -DTensorFlow_DIR=$DEEPMD \ + +# # add mkl env for libtorch to link +# module load mkl + +cmake --build $BUILD_DIR -j `nproc` | tee build.log +cmake --install $BUILD_DIR | tee install.log diff --git a/toolchain/build_abacus_intel-mpich.sh b/toolchain/build_abacus_intel-mpich.sh new file mode 100755 index 0000000000..4243b312c2 --- /dev/null +++ b/toolchain/build_abacus_intel-mpich.sh @@ -0,0 +1,51 @@ +#!/bin/bash +#SBATCH -J build +#SBATCH -N 1 +#SBATCH -n 16 +#SBATCH -o build_abacus.log +#SBATCH -e build_abacus.err +# install ABACUS with libxc and deepks +# JamesMisaka in 2023.08.31 + +# Build ABACUS by intel-toolchain with mpich + +#rm -rf ../build_abacus +# module load mkl compiler +# source path/to/vars.sh + +TOOL=$(pwd) +ABACUS_DIR=.. +source ./install/setup # include mpich +cd $ABACUS_DIR + +PREFIX=. 
+BUILD_DIR=build_abacus +ELPA=$TOOL/install/elpa-2021.11.002/cpu +CEREAL=$TOOL/install/cereal-1.3.2/include/cereal +LIBXC=$TOOL/install/libxc-6.2.2 +LIBTORCH=$TOOL/install/libtorch-2.0.1/share/cmake/Torch +LIBNPY=$TOOL/install/libnpy-0.1.0/include +#DEEPMD=$HOME/apps/anaconda3/envs/deepmd + +cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \ + -DCMAKE_CXX_COMPILER=icpc \ + -DMPI_CXX_COMPILER=mpicxx \ + -DMKLROOT=$MKLROOT \ + -DELPA_DIR=$ELPA \ + -DCEREAL_INCLUDE_DIR=$CEREAL \ + -DLibxc_DIR=$LIBXC \ + -DENABLE_LCAO=ON \ + -DENABLE_LIBXC=ON \ + -DENABLE_LIBRI=ON \ + -DUSE_OPENMP=ON \ + -DENABLE_ASAN=OFF \ + -DUSE_ELPA=ON \ + -DENABLE_DEEPKS=1 \ + -DTorch_DIR=$LIBTORCH \ + -Dlibnpy_INCLUDE_DIR=$LIBNPY \ + | tee configure.log + # -DDeePMD_DIR=$DEEPMD \ + # -DTensorFlow_DIR=$DEEPMD \ + +cmake --build $BUILD_DIR -j `nproc` | tee build.log +cmake --install $BUILD_DIR | tee install.log diff --git a/toolchain/build_abacus_intel.sh b/toolchain/build_abacus_intel.sh new file mode 100755 index 0000000000..321233e07c --- /dev/null +++ b/toolchain/build_abacus_intel.sh @@ -0,0 +1,53 @@ +#!/bin/bash +#SBATCH -J build +#SBATCH -N 1 +#SBATCH -n 16 +#SBATCH -o build_abacus.log +#SBATCH -e build_abacus.err +# install ABACUS with libxc and deepks +# JamesMisaka in 2023.08.22 + +# Build ABACUS by intel-toolchain + +#rm -rf ../build_abacus +#rm -rf ../build_abacus +# module load mkl compiler mpi +# source path/to/vars.sh + +TOOL=$(pwd) +ABACUS_DIR=.. +source ./install/setup +cd $ABACUS_DIR + +PREFIX=. +BUILD_DIR=build_abacus +ELPA=$TOOL/install/elpa-2021.11.002/cpu +CEREAL=$TOOL/install/cereal-1.3.2/include/cereal +LIBXC=$TOOL/install/libxc-6.2.2 +LIBTORCH=$TOOL/install/libtorch-2.0.1/share/cmake/Torch +LIBNPY=$TOOL/install/libnpy-0.1.0/include +# DEEPMD=$HOME/apps/anaconda3/envs/deepmd + +# if use deepks and deepmd +cmake -B $BUILD_DIR -DCMAKE_INSTALL_PREFIX=$PREFIX \ + -DCMAKE_CXX_COMPILER=icpc \ + -DMPI_CXX_COMPILER=mpiicpc \ + -DMKLROOT=$MKLROOT \ + -DELPA_DIR=$ELPA \ + -DCEREAL_INCLUDE_DIR=$CEREAL \ + -DLibxc_DIR=$LIBXC \ + -DENABLE_LCAO=ON \ + -DENABLE_LIBXC=ON \ + -DENABLE_LIBRI=ON \ + -DUSE_OPENMP=ON \ + -DENABLE_ASAN=OFF \ + -DUSE_ELPA=ON \ + -DENABLE_DEEPKS=1 \ + -DTorch_DIR=$LIBTORCH \ + -Dlibnpy_INCLUDE_DIR=$LIBNPY \ + | tee configure.log +# -DDeePMD_DIR=$DEEPMD \ +# -DTensorFlow_DIR=$DEEPMD \ + +cmake --build $BUILD_DIR -j `nproc` | tee build.log +cmake --install $BUILD_DIR | tee build.log diff --git a/toolchain/install_abacus_toolchain.sh b/toolchain/install_abacus_toolchain.sh new file mode 100755 index 0000000000..d95e80656c --- /dev/null +++ b/toolchain/install_abacus_toolchain.sh @@ -0,0 +1,824 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. 
+# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")" && pwd -P)" + +# +---------------------------------------------------------------------------+ +# | ABACUS: (Atomic-orbital Based Ab-initio Computation at UStc) | +# | -- an open-source package based on density functional theory(DFT) | +# | Copyright 2004-2022 ABACUS developers group | +# | | +# | | +# | SPDX-License-Identifier: GPL-2.0-or-later | +# +---------------------------------------------------------------------------+ +# +# +# ***************************************************************************** +#> \brief This script will compile and install or link existing tools and +#> libraries ABACUS depends on and generate setup files that +#> can be used to compile and use ABACUS +#> \history Created on Friday, 2023/08/18 +# Update for Intel (18.08.2023, MK) +#> \author Zhaoqing Liu quanmisaka@stu.pku.edu.cn +#> with the reference of Lianheng Tong (ltong) lianheng.tong@kcl.ac.uk +# ***************************************************************************** + +# ------------------------------------------------------------------------ +# Work directories and used files +# ------------------------------------------------------------------------ +export ROOTDIR="${PWD}" +export SCRIPTDIR="${ROOTDIR}/scripts" +export BUILDDIR="${ROOTDIR}/build" +export INSTALLDIR="${ROOTDIR}/install" +export SETUPFILE="${INSTALLDIR}/setup" +export SHA256_CHECKSUM="${SCRIPTDIR}/checksums.sha256" +export ARCH_FILE_TEMPLATE="${SCRIPTDIR}/arch_base.tmpl" + +# ------------------------------------------------------------------------ +# Make a copy of all options for $SETUPFILE +# ------------------------------------------------------------------------ +TOOLCHAIN_OPTIONS="$@" + +# ------------------------------------------------------------------------ +# Load common variables and tools +# ------------------------------------------------------------------------ +source "${SCRIPTDIR}"/common_vars.sh +source "${SCRIPTDIR}"/tool_kit.sh + +# ------------------------------------------------------------------------ +# Documentation +# ------------------------------------------------------------------------ +show_help() { + cat << EOF +This script will help you compile and install, +or link libraries ABACUS depends on, +and give setup files that you can use to compile ABACUS. + +USAGE: + +$(basename $SCRIPT_NAME) [options] + +OPTIONS: + +-h, --help Show this message. +-j Number of processors to use for compilation, if + this option is not present, then the script + automatically tries to determine the number of + processors you have and try to use all of the + processors. +--no-check-certificate If you encounter "certificate verification" errors + from wget or ones saying that "common name doesn't + match requested host name" while at tarball downloading + stage, then the recommended solution is to install + the newest wget release. Alternatively, you can use + this option to bypass the verification and proceed with + the download. Security wise this should still be okay + as the installation script will check file checksums + after every tarball download. Nevertheless use this + option at your own risk. +--install-all This option will set value of all --with-PKG + options to "install". You can selectively set + --with-PKG to another value again by having the + --with-PKG option placed AFTER this option on the + command line. 
+--mpi-mode Selects which MPI flavour to use. Available options + are: mpich, openmpi, intelmpi, and no. By selecting "no", + MPI is not supported and disabled. By default the script + will try to determine the flavour based on the MPI library + currently available in your system path. For CRAY (CLE) + systems, the default flavour is mpich. Note that explicitly + setting --with-mpich, --with-openmpi or --with-intelmpi + options to values other than no will also switch --mpi-mode + to the respective mode. +--math-mode Selects which core math library to use. Available options + are: acml, cray, mkl, and openblas. The option "cray" + corresponds to cray libsci, and is the default for CRAY + (CLE) systems. For non-CRAY systems, if env variable MKLROOT + exists then mkl will be default, otherwise openblas is the + default option. Explicitly setting --with-acml, --with-mkl, + or --with-openblas options will switch --math-mode to the + respective modes. +--gpu-ver Selects the GPU architecture for which to compile. Available + options are: K20X, K40, K80, P100, V100, Mi50, Mi100, Mi250, + and no. + This setting determines the value of nvcc's '-arch' flag. + Default = no. +--log-lines Number of log file lines dumped in case of a non-zero exit code. + Default = 200 +--target-cpu Compile for the specified target CPU (e.g. haswell or generic), i.e. + do not optimize for the actual host system which is the default (native) +--no-arch-files Do not generate arch files +--dry-run Write only config files, but don't actually build packages. + +The --enable-FEATURE options follow the rules: + --enable-FEATURE=yes Enable this particular feature + --enable-FEATURE=no Disable this particular feature + --enable-FEATURE The option keyword alone is equivalent to + --enable-FEATURE=yes + + --enable-cuda Turn on GPU (CUDA) support (can be combined + with --enable-opencl). + Default = no + --enable-hip Turn on GPU (HIP) support. + Default = no + --enable-opencl Turn on OpenCL (GPU) support. Requires the OpenCL + development packages and runtime. If combined with + --enable-cuda, OpenCL alongside of CUDA is used. + Default = no + --enable-cray Turn on or off support for CRAY Linux Environment + (CLE) manually. By default the script will automatically + detect if your system is CLE, and provide support + accordingly. + +The --with-PKG options follow the rules: + --with-PKG=install Will download the package in \$PWD/build and + install the library package in \$PWD/install. + --with-PKG=system The script will then try to find the required + libraries of the package from the system path + variables such as PATH, LD_LIBRARY_PATH and + CPATH etc. + --with-PKG=no Do not use the package. + --with-PKG= The package will be assumed to be installed in + the given , and be linked accordingly. + --with-PKG The option keyword alone will be equivalent to + --with-PKG=install + + --with-gcc The GCC compiler to use to compile ABACUS. + Default = system + --with-intel Use the Intel compiler to compile ABACUS. + Default = system + --with-intel-classic Use the classic Intel compiler to compile ABACUS. + Default = no + --with-cmake Cmake utilities + Default = install + --with-openmpi OpenMPI, important if you want a parallel version of ABACUS. + Default = system + --with-mpich MPICH, MPI library like OpenMPI. one should + use only one of OpenMPI, MPICH or Intel MPI. + Default = system + --with-mpich-device Select the MPICH device, implies the use of MPICH as MPI library + Default = ch4 + --with-intelmpi Intel MPI, MPI library like OpenMPI. 
one should + use only one of OpenMPI, MPICH or Intel MPI. + Default = system + --with-libxc libxc, exchange-correlation library. Needed for + QuickStep DFT and hybrid calculations. + Default = install + --with-fftw FFTW3, library for fast fourier transform + Default = install + --with-acml AMD core maths library, which provides LAPACK and BLAS + Default = system + --with-mkl Intel Math Kernel Library, which provides LAPACK, and BLAS. + If MKL's FFTW3 interface is suitable (no FFTW-MPI support), + it replaces the FFTW library. If the ScaLAPACK component is + found, it replaces the one specified by --with-scalapack. + Default = system + --with-openblas OpenBLAS is a free high performance LAPACK and BLAS library, + the successor to GotoBLAS. + Default = install + --with-scalapack Parallel linear algebra library, needed for parallel + calculations. + Default = install + --with-cereal Enable cereal for ABACUS LCAO + Default = install + --with-elpa Eigenvalue SoLvers for Petaflop-Applications library. + Fast library for large parallel jobs. + Default = install + --with-libtorch Enable libtorch the machine learning framework needed for DeePKS + Default = no + --with-libnpy Enable libnpy the machine learning framework needed for DeePKS + Default = no + +FURTHER INSTRUCTIONS + +All packages to be installed locally will be downloaded and built inside +./build, and then installed into package specific directories inside +./install. + +Both ./build and ./install are safe to delete, as they contain +only the files and directories that are generated by this script. However, +once all the packages are installed, and you compile ABACUS using the arch +files provided by this script, then you must keep ./install in exactly +the same location as it was first created, as it contains tools and libraries +your version of ABACUS binary will depend on. + +It should be safe to terminate running of this script in the middle of a +build process. The script will know if a package has been successfully +installed, and will just carry on and recompile and install the last +package it is working on. This is true even if you lose the content of +the entire ./build directory. + + +----------------------------------------------------------------+ + | YOU SHOULD ALWAYS SOURCE ./install/setup BEFORE YOU RUN ABACUS | + | COMPILED WITH THIS TOOLCHAIN | + +----------------------------------------------------------------+ + +EOF +} + +# ------------------------------------------------------------------------ +# PACKAGE LIST: register all new dependent tools and libs here. 
Order +# is important, the first in the list gets installed first +# ------------------------------------------------------------------------ +tool_list="gcc intel cmake" +mpi_list="mpich openmpi intelmpi" +math_list="mkl acml openblas" +lib_list="fftw libxc scalapack elpa cereal libtorch libnpy" +package_list="${tool_list} ${mpi_list} ${math_list} ${lib_list}" +# ------------------------------------------------------------------------ + +# first set everything to __DONTUSE__ +for ii in ${package_list}; do + eval with_${ii}="__DONTUSE__" +done + +# ------------------------------------------------------------------------ +# Work out default settings +# ------------------------------------------------------------------------ + +# tools to turn on by default: +with_gcc="__SYSTEM__" + +# libs to turn on by default, the math and mpi libraries are chosen by there respective modes: +with_fftw="__INSTALL__" +with_libxc="__INSTALL__" +with_scalapack="__INSTALL__" +# default math library settings, MATH_MODE picks the math library +# to use, and with_* defines the default method of installation if it +# is picked. For non-CRAY systems defaults to mkl if $MKLROOT is +# available, otherwise defaults to openblas +if [ "${MKLROOT}" ]; then + export MATH_MODE="mkl" + with_mkl="__SYSTEM__" +else + export MATH_MODE="openblas" +fi +with_acml="__SYSTEM__" +with_openblas="__INSTALL__" +with_elpa="__INSTALL__" +with_cereal="__INSTALL__" +with_libtorch="__DONTUSE__" +with_libnpy="__DONTUSE__" +# for MPI, we try to detect system MPI variant +if (command -v mpiexec > /dev/null 2>&1); then + # check if we are dealing with openmpi, mpich or intelmpi + if (mpiexec --version 2>&1 | grep -s -q "HYDRA"); then + echo "MPI is detected and it appears to be MPICH" + export MPI_MODE="mpich" + with_mpich="__SYSTEM__" + elif (mpiexec --version 2>&1 | grep -s -q "OpenRTE"); then + echo "MPI is detected and it appears to be OpenMPI" + export MPI_MODE="openmpi" + with_openmpi="__SYSTEM__" + elif (mpiexec --version 2>&1 | grep -s -q "Intel"); then + echo "MPI is detected and it appears to be Intel MPI" + with_gcc="__DONTUSE__" + with_intel="__SYSTEM__" + with_intelmpi="__SYSTEM__" + export MPI_MODE="intelmpi" + else # default to mpich + echo "MPI is detected and defaults to MPICH" + export MPI_MODE="mpich" + with_mpich="__SYSTEM__" + fi +else + report_warning $LINENO "No MPI installation detected (ignore this message in Cray Linux Environment or when MPI installation was requested)." 
+ export MPI_MODE="no" +fi + +# default enable options +dry_run="__FALSE__" +no_arch_files="__FALSE__" +enable_tsan="__FALSE__" +enable_opencl="__FALSE__" +enable_cuda="__FALSE__" +enable_hip="__FALSE__" +export intel_classic="yes" +# no, then icc->icx, icpc->icpx, +# which cannot compile elpa-2021 and fftw.3.3.10 in some place +# due to some so-called cross-compile problem +# and will lead to problem in force calculation +# but icx is recommended by intel compiler +# option: --with-intel-classic can change it to yes/no +# zhaoqing by 2023.08.31 +export GPUVER="no" +export MPICH_DEVICE="ch4" +export TARGET_CPU="native" + +# default for log file dump size +export LOG_LINES="200" + +# defaults for CRAY Linux Environment +if [ "${CRAY_LD_LIBRARY_PATH}" ]; then + enable_cray="__TRUE__" + export MATH_MODE="cray" + # Default MPI used by CLE is assumed to be MPICH, in any case + # do not use the installers for the MPI libraries + with_mpich="__DONTUSE__" + with_openmpi="__DONTUSE__" + with_intelmpi="__DONTUSE__" + export MPI_MODE="mpich" + # set default value for some installers appropriate for CLE + with_gcc="__DONTUSE__" + with_intel="__DONTUSE__" + with_fftw="__SYSTEM__" + with_scalapack="__DONTUSE__" +else + enable_cray="__FALSE__" +fi + +# ------------------------------------------------------------------------ +# parse user options +# ------------------------------------------------------------------------ +while [ $# -ge 1 ]; do + case ${1} in + -j) + case "${2}" in + -*) + export NPROCS_OVERWRITE="$(get_nprocs)" + ;; + [0-9]*) + shift + export NPROCS_OVERWRITE="${1}" + ;; + *) + report_error ${LINENO} \ + "The -j flag can only be followed by an integer number, found ${2}." + exit 1 + ;; + esac + ;; + -j[0-9]*) + export NPROCS_OVERWRITE="${1#-j}" + ;; + --no-check-certificate) + export DOWNLOADER_FLAGS="--no-check-certificate" + ;; + --install-all) + # set all package to the default installation status + for ii in ${package_list}; do + if [ "${ii}" != "intel" ] && [ "${ii}" != "intelmpi" ]; then + eval with_${ii}="__INSTALL__" + fi + done + # Use MPICH as default + export MPI_MODE="mpich" + ;; + --mpi-mode=*) + user_input="${1#*=}" + case "$user_input" in + mpich) + export MPI_MODE="mpich" + ;; + openmpi) + export MPI_MODE="openmpi" + ;; + intelmpi) + export MPI_MODE="intelmpi" + ;; + no) + export MPI_MODE="no" + ;; + *) + report_error ${LINENO} \ + "--mpi-mode currently only supports openmpi, mpich, intelmpi and no as options" + exit 1 + ;; + esac + ;; + --math-mode=*) + user_input="${1#*=}" + case "$user_input" in + cray) + export MATH_MODE="cray" + ;; + mkl) + export MATH_MODE="mkl" + ;; + acml) + export MATH_MODE="acml" + ;; + openblas) + export MATH_MODE="openblas" + ;; + *) + report_error ${LINENO} \ + "--math-mode currently only supports mkl, acml, and openblas as options" + ;; + esac + ;; + --gpu-ver=*) + user_input="${1#*=}" + case "${user_input}" in + K20X | K40 | K80 | P100 | V100 | A100 | Mi50 | Mi100 | Mi250 | no) + export GPUVER="${user_input}" + ;; + *) + report_error ${LINENO} \ + "--gpu-ver currently only supports K20X, K40, K80, P100, V100, A100, Mi50, Mi100, Mi250, and no as options" + exit 1 + ;; + esac + ;; + --target-cpu=*) + user_input="${1#*=}" + export TARGET_CPU="${user_input}" + ;; + --log-lines=*) + user_input="${1#*=}" + export LOG_LINES="${user_input}" + ;; + --no-arch-files) + no_arch_files="__TRUE__" + ;; + --dry-run) + dry_run="__TRUE__" + ;; + --enable-tsan*) + enable_tsan=$(read_enable $1) + if [ "${enable_tsan}" = "__INVALID__" ]; then + report_error 
"invalid value for --enable-tsan, please use yes or no" + exit 1 + fi + ;; + --enable-cuda*) + enable_cuda=$(read_enable $1) + if [ $enable_cuda = "__INVALID__" ]; then + report_error "invalid value for --enable-cuda, please use yes or no" + exit 1 + fi + ;; + --enable-hip*) + enable_hip=$(read_enable $1) + if [ "${enable_hip}" = "__INVALID__" ]; then + report_error "invalid value for --enable-hip, please use yes or no" + exit 1 + fi + ;; + --enable-opencl*) + enable_opencl=$(read_enable $1) + if [ $enable_opencl = "__INVALID__" ]; then + report_error "invalid value for --enable-opencl, please use yes or no" + exit 1 + fi + ;; + --enable-cray*) + enable_cray=$(read_enable $1) + if [ "${enable_cray}" = "__INVALID__" ]; then + report_error "invalid value for --enable-cray, please use yes or no" + exit 1 + fi + ;; + --with-gcc*) + with_gcc=$(read_with "${1}") + ;; + --with-cmake*) + with_cmake=$(read_with "${1}") + ;; + --with-mpich-device=*) + user_input="${1#*=}" + export MPICH_DEVICE="${user_input}" + export MPI_MODE=mpich + ;; + --with-mpich*) + with_mpich=$(read_with "${1}") + if [ "${with_mpich}" != "__DONTUSE__" ]; then + export MPI_MODE=mpich + fi + ;; + --with-openmpi*) + with_openmpi=$(read_with "${1}") + if [ "${with_openmpi}" != "__DONTUSE__" ]; then + export MPI_MODE=openmpi + fi + ;; + --with-intelmpi*) + with_intelmpi=$(read_with "${1}" "__SYSTEM__") + if [ "${with_intelmpi}" != "__DONTUSE__" ]; then + export MPI_MODE=intelmpi + fi + ;; + --with-intel-classic*) + intel_classic=$(read_with "${1}" "yes") # default yes + ;; + --with-intel*) + with_intel=$(read_with "${1}" "__SYSTEM__") + ;; + --with-libxc*) + with_libxc=$(read_with "${1}") + ;; + --with-fftw*) + with_fftw=$(read_with "${1}") + ;; + --with-mkl*) + with_mkl=$(read_with "${1}" "__SYSTEM__") + if [ "${with_mkl}" != "__DONTUSE__" ]; then + export MATH_MODE="mkl" + fi + ;; + --with-acml*) + with_acml=$(read_with "${1}") + if [ "${with_acml}" != "__DONTUSE__" ]; then + export MATH_MODE="acml" + fi + ;; + --with-openblas*) + with_openblas=$(read_with "${1}") + if [ "${with_openblas}" != "__DONTUSE__" ]; then + export MATH_MODE="openblas" + fi + ;; + --with-scalapack*) + with_scalapack=$(read_with "${1}") + ;; + --with-elpa*) + with_elpa=$(read_with "${1}") + ;; + --with-libtorch*) + with_libtorch=$(read_with "${1}") + ;; + --with-cereal*) + with_cereal=$(read_with "${1}") + ;; + --with-libnpy*) + with_libnpy=$(read_with "${1}") + ;; + --help*) + show_help + exit 0 + ;; + -h*) + show_help + exit 0 + ;; + *) + report_error "Unknown flag: $1" + exit 1 + ;; + esac + shift +done + +# consolidate settings after user input +export ENABLE_TSAN="${enable_tsan}" +export ENABLE_CUDA="${enable_cuda}" +export ENABLE_HIP="${enable_hip}" +export ENABLE_OPENCL="${enable_opencl}" +export ENABLE_CRAY="${enable_cray}" + +# ------------------------------------------------------------------------ +# Check and solve known conflicts before installations proceed +# ------------------------------------------------------------------------ +# Compiler conflicts +if [ "${with_intel}" != "__DONTUSE__" ] && [ "${with_gcc}" = "__INSTALL__" ]; then + echo "You have chosen to use the Intel compiler, therefore the installation of the GCC compiler will be skipped." + with_gcc="__SYSTEM__" +fi +# MPI library conflicts +if [ "${MPI_MODE}" = "no" ]; then + if [ "${with_scalapack}" != "__DONTUSE__" ]; then + echo "Not using MPI, so scalapack is disabled." 
+ with_scalapack="__DONTUSE__" + fi + if [ "${with_elpa}" != "__DONTUSE__" ]; then + echo "Not using MPI, so ELPA is disabled." + with_elpa="__DONTUSE__" + fi +else + # if gcc is installed, then mpi needs to be installed too + if [ "${with_gcc}" = "__INSTALL__" ]; then + echo "You have chosen to install the GCC compiler, therefore MPI libraries have to be installed too" + case ${MPI_MODE} in + mpich) + with_mpich="__INSTALL__" + with_openmpi="__DONTUSE__" + ;; + openmpi) + with_mpich="__DONTUSE__" + with_openmpi="__INSTALL__" + ;; + esac + echo "and the use of the Intel compiler and Intel MPI will be disabled." + with_intel="__DONTUSE__" + with_intelmpi="__DONTUSE__" + fi + # Enable only one MPI implementation + case ${MPI_MODE} in + mpich) + with_openmpi="__DONTUSE__" + with_intelmpi="__DONTUSE__" + ;; + openmpi) + with_mpich="__DONTUSE__" + with_intelmpi="__DONTUSE__" + ;; + intelmpi) + with_mpich="__DONTUSE__" + with_openmpi="__DONTUSE__" + ;; + esac +fi + +# If CUDA or HIP are enabled, make sure the GPU version has been defined. +if [ "${ENABLE_CUDA}" = "__TRUE__" ] || [ "${ENABLE_HIP}" = "__TRUE__" ]; then + if [ "${GPUVER}" = "no" ]; then + report_error "Please choose GPU architecture to compile for with --gpu-ver" + exit 1 + fi +fi + +# several packages require cmake. +if [ "${with_scalapack}" = "__INSTALL__" ]; then + [ "${with_cmake}" = "__DONTUSE__" ] && with_cmake="__INSTALL__" +fi + + +# ------------------------------------------------------------------------ +# Preliminaries +# ------------------------------------------------------------------------ + +mkdir -p ${INSTALLDIR} + +# variables used for generating ABACUS ARCH file +export CP_DFLAGS="" +export CP_LIBS="" +export CP_CFLAGS="" +export CP_LDFLAGS="-Wl,--enable-new-dtags" + +# ------------------------------------------------------------------------ +# Start writing setup file +# ------------------------------------------------------------------------ +cat << EOF > "$SETUPFILE" +#!/bin/bash +source "${SCRIPTDIR}/tool_kit.sh" +export ABACUS_TOOLCHAIN_OPTIONS="${TOOLCHAIN_OPTIONS}" +EOF + +# ------------------------------------------------------------------------ +# Special settings for CRAY Linux Environment (CLE) +# TODO: CLE should be handle like gcc or Intel using a with_cray flag and +# this section should be moved to a separate file install_cray. +# ------------------------------------------------------------------------ +if [ "${ENABLE_CRAY}" = "__TRUE__" ]; then + echo "------------------------------------------------------------------------" + echo "CRAY Linux Environment (CLE) is detected" + echo "------------------------------------------------------------------------" + # add cray paths to system search path + export LIB_PATHS="CRAY_LD_LIBRARY_PATH ${LIB_PATHS}" + # set compilers to CLE wrappers + check_command cc + check_command ftn + check_command CC + export CC="cc" + export CXX="CC" + export FC="ftn" + export F90="${FC}" + export F77="${FC}" + export MPICC="${CC}" + export MPICXX="${CXX}" + export MPIFC="${FC}" + export MPIFORT="${MPIFC}" + export MPIF77="${MPIFC}" + # CRAY libsci should contains core math libraries, scalapack + # doesn't need LDFLAGS or CFLAGS, nor do the one need to + # explicitly link the math and scalapack libraries, as all is + # taken care of by the cray compiler wrappers. 
+ if [ "$with_scalapack" = "__DONTUSE__" ]; then + export CP_DFLAGS="${CP_DFLAGS} IF_MPI(-D__SCALAPACK|)" + fi + case $MPI_MODE in + mpich) + if [ "$MPICH_DIR" ]; then + cray_mpich_include_path="$MPICH_DIR/include" + cray_mpich_lib_path="$MPICH_DIR/lib" + export INCLUDE_PATHS="$INCLUDE_PATHS cray_mpich_include_path" + export LIB_PATHS="$LIB_PATHS cray_mpich_lib_path" + fi + if [ "$with_mpich" = "__DONTUSE__" ]; then + add_include_from_paths MPI_CFLAGS "mpi.h" $INCLUDE_PATHS + add_include_from_paths MPI_LDFLAGS "libmpi.*" $LIB_PATHS + export MPI_CFLAGS + export MPI_LDFLAGS + export MPI_LIBS=" " + export CP_DFLAGS="${CP_DFLAGS} IF_MPI(-D__parallel|)" + fi + ;; + openmpi) + if [ "$with_openmpi" = "__DONTUSE__" ]; then + add_include_from_paths MPI_CFLAGS "mpi.h" $INCLUDE_PATHS + add_include_from_paths MPI_LDFLAGS "libmpi.*" $LIB_PATHS + export MPI_CFLAGS + export MPI_LDFLAGS + export MPI_LIBS="-lmpi -lmpi_cxx" + export CP_DFLAGS="${CP_DFLAGS} IF_MPI(-D__parallel|)" + fi + ;; + intelmpi) + if [ "$with_intelmpi" = "__DONTUSE__" ]; then + with_gcc="__DONTUSE__" + with_intel="__SYSTEM__" + add_include_from_paths MPI_CFLAGS "mpi.h" $INCLUDE_PATHS + add_include_from_paths MPI_LDFLAGS "libmpi.*" $LIB_PATHS + export MPI_CFLAGS + export MPI_LDFLAGS + export MPI_LIBS="-lmpi -lmpi_cxx" + export CP_DFLAGS="${CP_DFLAGS} IF_MPI(-D__parallel|)" + fi + ;; + esac + check_lib -lz + check_lib -ldl + export CRAY_EXTRA_LIBS="-lz -ldl" + # the space is intentional, so that the variable is non-empty and + # can pass require_env checks + export SCALAPACK_LDFLAGS=" " + export SCALAPACK_LIBS=" " +fi + +# ------------------------------------------------------------------------ +# Installing tools required for building ABACUS and associated libraries +# ------------------------------------------------------------------------ + +echo "Compiling with $(get_nprocs) processes for target ${TARGET_CPU}." + +# Select the correct compute number based on the GPU architecture +case ${GPUVER} in + K20X) + export ARCH_NUM="35" + ;; + K40) + export ARCH_NUM="35" + ;; + K80) + export ARCH_NUM="37" + ;; + P100) + export ARCH_NUM="60" + ;; + V100) + export ARCH_NUM="70" + ;; + A100) + export ARCH_NUM="80" + ;; + Mi50) + # TODO: export ARCH_NUM= + ;; + Mi100) + # TODO: export ARCH_NUM= + ;; + Mi250) + # TODO: export ARCH_NUM= + ;; + no) + export ARCH_NUM="no" + ;; + *) + report_error ${LINENO} \ + "--gpu-ver currently only supports K20X, K40, K80, P100, V100, A100, Mi50, Mi100, Mi250, and no as options" + exit 1 + ;; +esac + +write_toolchain_env ${INSTALLDIR} + +# write toolchain config +echo "tool_list=\"${tool_list}\"" > ${INSTALLDIR}/toolchain.conf +for ii in ${package_list}; do + install_mode="$(eval echo \${with_${ii}})" + echo "with_${ii}=\"${install_mode}\"" >> ${INSTALLDIR}/toolchain.conf +done + +# ------------------------------------------------------------------------ +# Build packages unless dry-run mode is enabled. +# ------------------------------------------------------------------------ +if [ "${dry_run}" = "__TRUE__" ]; then + echo "Wrote only configuration files (--dry-run)." +else + echo "# Leak suppressions" > ${INSTALLDIR}/lsan.supp + ./scripts/stage0/install_stage0.sh + ./scripts/stage1/install_stage1.sh + ./scripts/stage2/install_stage2.sh + ./scripts/stage3/install_stage3.sh + ./scripts/stage4/install_stage4.sh +fi + +cat << EOF +========================== usage ========================= +Done! 
+To use the installed tools and libraries and ABACUS version +compiled with it you will first need to execute at the prompt: + source ${SETUPFILE} +To build ABACUS by gnu-toolchain, just use: + ./build_abacus_gnu.sh +To build ABACUS by intel-toolchain, just use: + ./build_abacus_intel.sh +or you can modify the builder scripts to suit your needs. +""" +EOF + + +#EOF diff --git a/toolchain/install_requirements.sh b/toolchain/install_requirements.sh new file mode 100755 index 0000000000..4e25b56844 --- /dev/null +++ b/toolchain/install_requirements.sh @@ -0,0 +1,23 @@ +#!/bin/bash -e + +# author: Ole Schuett + +if (($# != 1)); then + echo "Usage: install_requirements.sh " + exit 1 +fi + +BASE_IMAGE=$1 + +if [[ ${BASE_IMAGE} == *ubuntu* ]]; then + ./install_requirements_ubuntu.sh + +elif [[ ${BASE_IMAGE} == *fedora* ]]; then + ./install_requirements_fedora.sh + +else + echo "Unknown base image: ${BASE_IMAGE}" + exit 1 +fi + +#EOF diff --git a/toolchain/install_requirements_fedora.sh b/toolchain/install_requirements_fedora.sh new file mode 100755 index 0000000000..8393fe043f --- /dev/null +++ b/toolchain/install_requirements_fedora.sh @@ -0,0 +1,37 @@ +#!/bin/bash -e + +# author: Ole Schuett + +# Install Fedora packages required for the toolchain. + +echo "Installing Fedora packages..." + +dnf -qy install \ + autoconf \ + autogen \ + automake \ + bzip2 \ + ca-certificates \ + diffutils \ + g++ \ + gcc \ + gfortran \ + git \ + less \ + libtool \ + make \ + nano \ + patch \ + perl-open \ + perl-FindBin \ + pkg-config \ + python3 \ + unzip \ + vim-common \ + wget \ + which \ + zlib-devel + +dnf clean -q all + +#EOF diff --git a/toolchain/install_requirements_ubuntu.sh b/toolchain/install_requirements_ubuntu.sh new file mode 100755 index 0000000000..d5238f4398 --- /dev/null +++ b/toolchain/install_requirements_ubuntu.sh @@ -0,0 +1,40 @@ +#!/bin/bash -e + +# author: Ole Schuett + +# Install Ubuntu packages required for the toolchain. + +echo "Installing Ubuntu packages..." + +export DEBIAN_FRONTEND=noninteractive +export DEBCONF_NONINTERACTIVE_SEEN=true + +apt-get update -qq + +apt-get install -qq --no-install-recommends \ + autoconf \ + autogen \ + automake \ + autotools-dev \ + bzip2 \ + ca-certificates \ + g++ \ + gcc \ + gfortran \ + git \ + less \ + libtool \ + libtool-bin \ + make \ + nano \ + patch \ + pkg-config \ + python3 \ + unzip \ + wget \ + xxd \ + zlib1g-dev + +rm -rf /var/lib/apt/lists/* + +#EOF diff --git a/toolchain/scripts/VERSION b/toolchain/scripts/VERSION new file mode 100644 index 0000000000..c71480cb08 --- /dev/null +++ b/toolchain/scripts/VERSION @@ -0,0 +1,2 @@ +# version file to force a rebuild of the entire toolchain +VERSION="2023.3" diff --git a/toolchain/scripts/arch_base.tmpl b/toolchain/scripts/arch_base.tmpl new file mode 100644 index 0000000000..804b6b0178 --- /dev/null +++ b/toolchain/scripts/arch_base.tmpl @@ -0,0 +1,18 @@ +CC = ${CC_arch} +CXX = ${CXX_arch} +AR = ar -r +FC = ${FC_arch} +LD = ${LD_arch} +# +DFLAGS = ${DFLAGS} +# +WFLAGS = ${WFLAGS} +# +FCDEBFLAGS = ${FCDEBFLAGS} +CFLAGS = ${CFLAGS} +FCFLAGS = ${FCFLAGS} +CXXFLAGS = ${CXXFLAGS} +# +LDFLAGS = ${LDFLAGS} +LDFLAGS_C = ${LDFLAGS_C} +LIBS = ${LIBS} diff --git a/toolchain/scripts/common_vars.sh b/toolchain/scripts/common_vars.sh new file mode 100755 index 0000000000..b7a6cd3fa6 --- /dev/null +++ b/toolchain/scripts/common_vars.sh @@ -0,0 +1,34 @@ +# Common variables used by the installation scripts + +# TODO: Review and if possible fix shellcheck errors. 
+# shellcheck disable=all +# shellcheck shell=bash + +# directories and files used by the installer +ROOTDIR=${ROOTDIR:-"$(pwd -P)"} +SCRIPTDIR=${SCRIPTDIR:-"${ROOTDIR}/scripts"} +INSTALLDIR=${INSTALLDIR:-"${ROOTDIR}/install"} +BUILDDIR=${BUILDDIR:-"${ROOTDIR}/build"} +SETUPFILE=${SETUPFILE:-"${INSTALLDIR}/setup"} +ARCH_FILE_TEMPLATE=${ARCH_FILE_TEMPLATE:-"${SCRIPTDIR}/arch_base.tmpl"} +VERSION_FILE=${VERSION_FILE:-"${SCRIPTDIR}/VERSION"} + +# system arch gotten from OpenBLAS prebuild +OPENBLAS_ARCH=${OPENBLAS_ARCH:-"x86_64"} +OPENBLAS_LIBCORE=${OPENBLAS_LIBCORE:-''} + +# search paths +SYS_INCLUDE_PATH=${SYS_INCLUDE_PATH:-'/usr/local/include:/usr/include'} +SYS_LIB_PATH=${SYS_LIB_PATHS:-'/usr/local/lib64:/usr/local/lib:/usr/lib64:/usr/lib:/lib64:/lib'} +INCLUDE_PATHS=${INCLUDE_PATHS:-"CPATH SYS_INCLUDE_PATH"} +LIB_PATHS=${LIB_PATHS:-'LD_LIBRARY_PATH LIBRARY_PATH LD_RUN_PATH SYS_LIB_PATH'} + +# mode flags +ENABLE_OMP=${ENABLE_OMP:-"__TRUE__"} +ENABLE_CUDA=${ENABLE_CUDA:-"__FALSE__"} +ENABLE_HIP=${ENABLE_HIP:-"__FALSE__"} +ENABLE_CRAY=${ENABLE_CRAY:-"__FALSE__"} +MPI_MODE=${MPI_MODE:-openmpi} +MATH_MODE=${MATH_MODE:-openblas} + +export NVCC=${NVCC:-nvcc} diff --git a/toolchain/scripts/generate_arch_files.sh b/toolchain/scripts/generate_arch_files.sh new file mode 100755 index 0000000000..022087196c --- /dev/null +++ b/toolchain/scripts/generate_arch_files.sh @@ -0,0 +1,490 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")" && pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +# ------------------------------------------------------------------------ +# generate arch file for compiling ABACUS +# ------------------------------------------------------------------------ + +echo "==================== generating arch files ====================" +echo "arch files can be found in the ${INSTALLDIR}/arch subdirectory" +! 
[ -f "${INSTALLDIR}/arch" ] && mkdir -p ${INSTALLDIR}/arch +cd ${INSTALLDIR}/arch + +# ------------------------- +# set compiler flags +# ------------------------- + +# need to switch between FC and MPICC etc in arch file, but cannot use +# same variable names, so use _arch suffix +CC_arch="IF_MPI(${MPICC}|${CC})" +CXX_arch="IF_MPI(${MPICXX}|${CXX})" +FC_arch="IF_MPI(${MPIFC}|${FC})" +LD_arch="IF_MPI(${MPIFC}|${FC})" + +# we always want good line information and backtraces +if [ "${with_intel}" != "__DONTUSE__" ]; then + if [ "${TARGET_CPU}" = "native" ]; then + BASEFLAGS="-fPIC -fp-model=precise -g -qopenmp -qopenmp-simd -traceback -xHost" + elif [ "${TARGET_CPU}" = "generic" ]; then + BASEFLAGS="-fPIC -fp-model=precise -g -mtune=$(TARGET_CPU) -qopenmp -qopenmp-simd -traceback" + else + BASEFLAGS="-fPIC -fp-model=precise -g -march=${TARGET_CPU} -mtune=$(TARGET_CPU) -qopenmp -qopenmp-simd -traceback" + fi + OPT_FLAGS="-O2 -funroll-loops" + LDFLAGS_C="-nofor-main" +else + BASEFLAGS="-fno-omit-frame-pointer -fopenmp -g -mtune=${TARGET_CPU} IF_ASAN(-fsanitize=address|)" + OPT_FLAGS="-O3 -funroll-loops" + LDFLAGS_C="" +fi + +NOOPT_FLAGS="-O1" + +# those flags that do not influence code generation are used always, the others if debug +if [ "${with_intel}" != "__DONTUSE__" ]; then + FCDEB_FLAGS="" + FCDEB_FLAGS_DEBUG="" +else + FCDEB_FLAGS="-fbacktrace -ffree-form -fimplicit-none -std=f2008" + FCDEB_FLAGS_DEBUG="-fsanitize=leak -fcheck=all,no-array-temps -ffpe-trap=invalid,zero,overflow -finit-derived -finit-real=snan -finit-integer=-42 -Werror=realloc-lhs -finline-matmul-limit=0" +fi + +# code coverage generation flags +COVERAGE_FLAGS="-O1 -coverage -fkeep-static-functions" +COVERAGE_DFLAGS="-D__NO_ABORT" + +# profile based optimization, see https://www.ABACUS.org/howto:pgo +PROFOPT_FLAGS="\$(PROFOPT)" + +# special flags for gfortran +# https://gcc.gnu.org/onlinedocs/gfortran/Error-and-Warning-Options.html +# we error out for these warnings (-Werror=uninitialized -Wno-maybe-uninitialized -> error on variables that must be used uninitialized) +WFLAGS_ERROR="-Werror=aliasing -Werror=ampersand -Werror=c-binding-type -Werror=intrinsic-shadow -Werror=intrinsics-std -Werror=line-truncation -Werror=tabs -Werror=target-lifetime -Werror=underflow -Werror=unused-but-set-variable -Werror=unused-variable -Werror=unused-dummy-argument -Werror=unused-parameter -Werror=unused-label -Werror=conversion -Werror=zerotrip -Wno-maybe-uninitialized" +# we just warn for those (that eventually might be promoted to WFLAGSERROR). It is useless to put something here with 100s of warnings. 
+WFLAGS_WARN="-Wuninitialized -Wuse-without-only" +# while here we collect all other warnings, some we'll ignore +# TODO: -Wpedantic with -std2008 requires an upgrade of the MPI interfaces from mpi to mpi_f08 +WFLAGS_WARNALL="-Wno-pedantic -Wall -Wextra -Wsurprising -Warray-temporaries -Wcharacter-truncation -Wconversion-extra -Wimplicit-interface -Wimplicit-procedure -Wreal-q-constant -Walign-commons -Wfunction-elimination -Wrealloc-lhs -Wcompare-reals -Wzerotrip" + +# IEEE_EXCEPTIONS dependency +IEEE_EXCEPTIONS_DFLAGS="-D__HAS_IEEE_EXCEPTIONS" + +# check all of the above flags, filter out incompatible flags for the +# current version of gcc in use +if [ "${with_intel}" == "__DONTUSE__" ]; then + OPT_FLAGS=$(allowed_gfortran_flags $OPT_FLAGS) + NOOPT_FLAGS=$(allowed_gfortran_flags $NOOPT_FLAGS) + FCDEB_FLAGS=$(allowed_gfortran_flags $FCDEB_FLAGS) + FCDEB_FLAGS_DEBUG=$(allowed_gfortran_flags $FCDEB_FLAGS_DEBUG) + COVERAGE_FLAGS=$(allowed_gfortran_flags $COVERAGE_FLAGS) + WFLAGS_ERROR=$(allowed_gfortran_flags $WFLAGS_ERROR) + WFLAGS_WARN=$(allowed_gfortran_flags $WFLAGS_WARN) + WFLAGS_WARNALL=$(allowed_gfortran_flags $WFLAGS_WARNALL) +else + WFLAGS_ERROR="" + WFLAGS_WARN="" + WFLAGS_WARNALL="" +fi + +# check if ieee_exeptions module is available for the current version +# of gfortran being used +if ! (check_gfortran_module ieee_exceptions); then + IEEE_EXCEPTIONS_DFLAGS="" +fi + +# concatenate the above flags into WFLAGS, FCDEBFLAGS, DFLAGS and +# finally into FCFLAGS and CFLAGS +WFLAGS="$WFLAGS_ERROR $WFLAGS_WARN IF_WARNALL(${WFLAGS_WARNALL}|)" +FCDEBFLAGS="$FCDEB_FLAGS IF_DEBUG($FCDEB_FLAGS_DEBUG|)" +DFLAGS="${CP_DFLAGS} IF_DEBUG($IEEE_EXCEPTIONS_DFLAGS -D__CHECK_DIAG|) IF_COVERAGE($COVERAGE_DFLAGS|)" +# language independent flags +G_CFLAGS="$BASEFLAGS" +G_CFLAGS="$G_CFLAGS IF_COVERAGE($COVERAGE_FLAGS|IF_DEBUG($NOOPT_FLAGS|$OPT_FLAGS))" +G_CFLAGS="$G_CFLAGS IF_DEBUG(|$PROFOPT_FLAGS)" +G_CFLAGS="$G_CFLAGS $CP_CFLAGS" +if [ "${with_intel}" == "__DONTUSE__" ]; then + # FCFLAGS, for gfortran + FCFLAGS="$G_CFLAGS \$(FCDEBFLAGS) \$(WFLAGS) \$(DFLAGS)" + FCFLAGS+=" IF_MPI($(allowed_gfortran_flags "-fallow-argument-mismatch")|)" +else + FCFLAGS="$G_CFLAGS \$(FCDEBFLAGS) \$(WFLAGS) \$(DFLAGS)" +fi +# CFLAGS, special flags for gcc + +# TODO: Remove -Wno-vla-parameter after upgrade to gcc 11.3. 
+# https://gcc.gnu.org/bugzilla//show_bug.cgi?id=101289 +if [ "${with_intel}" == "__DONTUSE__" ]; then + CFLAGS="$G_CFLAGS -std=c11 -Wall -Wextra -Werror -Wno-vla-parameter -Wno-deprecated-declarations \$(DFLAGS)" +else + CXXFLAGS="IF_MPI(-cxx=${I_MPI_CXX}|) $G_CFLAGS -std=c11 -Wall \$(DFLAGS)" + CFLAGS="IF_MPI(-cc=${I_MPI_CC}|) $G_CFLAGS -std=c11 -Wall \$(DFLAGS)" + FCFLAGS="IF_MPI(-fc=${I_MPI_FC}|) $FCFLAGS -diag-disable=8291 -diag-disable=8293 -fpp -fpscomp logicals -free" +fi + +# Linker flags +# About --whole-archive see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52590 +STATIC_FLAGS="-static -Wl,--whole-archive -lpthread -Wl,--no-whole-archive" +# Get unfortunately ignored: -static-libgcc -static-libstdc++ -static-libgfortran +LDFLAGS="IF_STATIC(${STATIC_FLAGS}|) \$(FCFLAGS) ${CP_LDFLAGS}" + +# Library flags +# add standard libs +LIBS="${CP_LIBS} -lstdc++" + +if [ "${with_intel}" == "__DONTUSE__" ]; then + CXXFLAGS+=" --std=c++14 \$(DFLAGS) -Wno-deprecated-declarations" +else + CXXFLAGS+=" ${G_CFLAGS} --std=c++14 \$(DFLAGS)" +fi +# CUDA handling +if [ "${ENABLE_CUDA}" = __TRUE__ ] && [ "${GPUVER}" != no ]; then + CUDA_LIBS="-lcudart -lnvrtc -lcuda -lcufft -lcublas -lrt IF_DEBUG(-lnvToolsExt|)" + CUDA_DFLAGS="-D__OFFLOAD_CUDA -D__DBCSR_ACC IF_DEBUG(-D__OFFLOAD_PROFILING|)" + if [ "${with_cusolvermp}" != "__DONTUSE__" ]; then + CUDA_LIBS+=" -lcusolverMp -lcusolver -lcal -lnvidia-ml" + CUDA_DFLAGS+=" -D__CUSOLVERMP" + fi + LIBS="${LIBS} IF_CUDA(${CUDA_LIBS}|)" + DFLAGS="IF_CUDA(${CUDA_DFLAGS}|) ${DFLAGS}" + NVFLAGS="-g -arch sm_${ARCH_NUM} -O3 -allow-unsupported-compiler -Xcompiler='-fopenmp -Wall -Wextra -Werror' --std=c++11 \$(DFLAGS)" + check_command nvcc "cuda" + check_lib -lcudart "cuda" + check_lib -lnvrtc "cuda" + check_lib -lcuda "cuda" + check_lib -lcufft "cuda" + check_lib -lcublas "cuda" + + # Set include flags + CUDA_FLAGS="" + add_include_from_paths CUDA_FLAGS "cuda.h" $INCLUDE_PATHS + NVFLAGS+=" ${CUDA_FLAGS}" + NVCC_TOPDIR="$(dirname $(command -v nvcc))/.." 
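+  # descriptive note: the next line resolves the CUDA root by falling back through an already-set CUDA_PATH, then CUDA_HOME, then the nvcc install prefix; the -I include flags below are derived from it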
+ CUDA_PATH="${CUDA_PATH:-${CUDA_HOME:-${NVCC_TOPDIR:-/CUDA_HOME-notset}}}" + CFLAGS+=" IF_CUDA(-I${CUDA_PATH}/include|)" + CXXFLAGS+=" IF_CUDA(-I${CUDA_PATH}/include|)" + + # Set LD-flags + CUDA_LDFLAGS="" + add_lib_from_paths CUDA_LDFLAGS "libcudart.*" $LIB_PATHS + add_lib_from_paths CUDA_LDFLAGS "libnvrtc.*" $LIB_PATHS + add_lib_from_paths CUDA_LDFLAGS "libcuda.*" $LIB_PATHS + add_lib_from_paths CUDA_LDFLAGS "libcufft.*" $LIB_PATHS + add_lib_from_paths CUDA_LDFLAGS "libcublas.*" $LIB_PATHS + export CUDA_LDFLAGS="${CUDA_LDFLAGS}" + LDFLAGS+=" IF_CUDA(${CUDA_LDFLAGS}|)" +fi + +# HIP handling +if [ "${ENABLE_HIP}" = __TRUE__ ] && [ "${GPUVER}" != no ]; then + check_command hipcc "hip" + check_lib -lhipblas "hip" + add_lib_from_paths HIP_LDFLAGS "libhipblas.*" $LIB_PATHS + check_lib -lhipfft "hip" + add_lib_from_paths HIP_LDFLAGS "libhipfft.*" $LIB_PATHS + + HIP_INCLUDES="-I${ROCM_PATH}/include" + case "${GPUVER}" in + Mi50) + check_lib -lamdhip64 "hip" + add_lib_from_paths HIP_LDFLAGS "libamdhip64.*" $LIB_PATHS + check_lib -lhipfft "hip" + add_lib_from_paths HIP_LDFLAGS "libhipfft.*" $LIB_PATHS + check_lib -lrocblas "hip" + add_lib_from_paths HIP_LDFLAGS "librocblas.*" $LIB_PATHS + check_lib -lroctx64 "hip" + add_lib_from_paths HIP_LDFLAGS "libroctx64.*" $LIB_PATHS + check_lib -lroctracer64 "hip" + add_lib_from_paths HIP_LDFLAGS "libroctracer64.*" $LIB_PATHS + HIP_FLAGS+="-fPIE -D__HIP_PLATFORM_AMD__ -g --offload-arch=gfx906 -O3 --std=c++11 -Wall -Wextra -Werror \$(DFLAGS)" + LIBS+=" IF_HIP(-lamdhip64 -lhipfft -lhipblas -lrocblas IF_DEBUG(-lroctx64 -lroctracer64|)|)" + DFLAGS+=" IF_HIP(-D__HIP_PLATFORM_AMD__ -D__OFFLOAD_HIP IF_DEBUG(-D__OFFLOAD_PROFILING|)|) -D__DBCSR_ACC" + CXXFLAGS+=" -fopenmp -Wall -Wextra -Werror" + ;; + Mi100) + check_lib -lamdhip64 "hip" + add_lib_from_paths HIP_LDFLAGS "libamdhip64.*" $LIB_PATHS + check_lib -lhipfft "hip" + add_lib_from_paths HIP_LDFLAGS "libhipfft.*" $LIB_PATHS + check_lib -lrocblas "hip" + add_lib_from_paths HIP_LDFLAGS "librocblas.*" $LIB_PATHS + check_lib -lroctx64 "hip" + add_lib_from_paths HIP_LDFLAGS "libroctx64.*" $LIB_PATHS + check_lib -lroctracer64 "hip" + add_lib_from_paths HIP_LDFLAGS "libroctracer64.*" $LIB_PATHS + HIP_FLAGS+="-fPIE -D__HIP_PLATFORM_AMD__ -g --offload-arch=gfx908 -O3 --std=c++11 -Wall -Wextra -Werror \$(DFLAGS)" + LIBS+=" IF_HIP(-lamdhip64 -lhipfft -lhipblas -lrocblas IF_DEBUG(-lroctx64 -lroctracer64|)|)" + DFLAGS+=" IF_HIP(-D__HIP_PLATFORM_AMD__ -D__OFFLOAD_HIP IF_DEBUG(-D__OFFLOAD_PROFILING|)|) -D__DBCSR_ACC" + CXXFLAGS+=" -fopenmp -Wall -Wextra -Werror" + ;; + Mi250) + check_lib -lamdhip64 "hip" + add_lib_from_paths HIP_LDFLAGS "libamdhip64.*" $LIB_PATHS + check_lib -lhipfft "hip" + add_lib_from_paths HIP_LDFLAGS "libhipfft.*" $LIB_PATHS + check_lib -lrocblas "hip" + add_lib_from_paths HIP_LDFLAGS "librocblas.*" $LIB_PATHS + check_lib -lroctx64 "hip" + add_lib_from_paths HIP_LDFLAGS "libroctx64.*" $LIB_PATHS + check_lib -lroctracer64 "hip" + add_lib_from_paths HIP_LDFLAGS "libroctracer64.*" $LIB_PATHS + HIP_FLAGS+="-fPIE -D__HIP_PLATFORM_AMD__ -g --offload-arch=gfx90a -O3 --std=c++11 -Wall -Wextra -Werror \$(DFLAGS)" + LIBS+=" IF_HIP(-lamdhip64 -lhipfft -lhipblas -lrocblas IF_DEBUG(-lroctx64 -lroctracer64|)|)" + DFLAGS+=" IF_HIP(-D__HIP_PLATFORM_AMD__ -D__OFFLOAD_HIP IF_DEBUG(-D__OFFLOAD_PROFILING|)|) -D__DBCSR_ACC" + CXXFLAGS+=" -fopenmp -Wall -Wextra -Werror" + ;; + *) + check_command nvcc "cuda" + check_lib -lcudart "cuda" + check_lib -lnvrtc "cuda" + check_lib -lcuda "cuda" + check_lib -lcufft "cuda" + check_lib 
-lcublas "cuda" + DFLAGS+=" IF_HIP(-D__HIP_PLATFORM_NVIDIA__ -D__HIP_PLATFORM_NVCC__ -D__OFFLOAD_HIP |) -D__DBCSR_ACC" + HIP_FLAGS+=" -g -arch sm_${ARCH_NUM} -O3 -Xcompiler='-fopenmp -Wall -Wextra -Werror' --std=c++11 \$(DFLAGS)" + add_include_from_paths CUDA_CFLAGS "cuda.h" $INCLUDE_PATHS + HIP_INCLUDES+=" -I${CUDA_PATH:-${CUDA_HOME:-/CUDA_HOME-notset}}/include" + # GCC issues lots of warnings for hip/nvidia_detail/hip_runtime_api.h + CFLAGS+=" -Wno-error ${CUDA_CFLAGS}" + CXXFLAGS+=" -Wno-error ${CUDA_CFLAGS}" + # Set LD-flags + # Multiple definition because of hip/include/hip/nvidia_detail/nvidia_hiprtc.h + LDFLAGS+=" -Wl,--allow-multiple-definition" + LIBS+=" -lhipfft -lhipblas -lhipfft -lnvrtc -lcudart -lcufft -lcublas -lcuda" + add_lib_from_paths HIP_LDFLAGS "libcudart.*" $LIB_PATHS + add_lib_from_paths HIP_LDFLAGS "libnvrtc.*" $LIB_PATHS + add_lib_from_paths HIP_LDFLAGS "libcuda.*" $LIB_PATHS + add_lib_from_paths HIP_LDFLAGS "libcufft.*" $LIB_PATHS + add_lib_from_paths HIP_LDFLAGS "libcublas.*" $LIB_PATHS + ;; + esac + + LDFLAGS+=" ${HIP_LDFLAGS}" + CFLAGS+=" ${HIP_INCLUDES}" + CXXFLAGS+=" ${HIP_INCLUDES}" +fi + +# OpenCL handling (GPUVER is not a prerequisite) +if [ "${ENABLE_OPENCL}" = __TRUE__ ]; then + OPENCL_DFLAGS="-D__DBCSR_ACC" + # avoid duplicating FLAGS + if [[ "${GPUVER}" == no || ("${ENABLE_CUDA}" != __TRUE__ && "${ENABLE_HIP}" != __TRUE__) ]]; then + OPENCL_FLAGS="${CFLAGS} ${OPENCL_DFLAGS} ${DFLAGS}" + DFLAGS="IF_OPENCL(${OPENCL_DFLAGS} ${DFLAGS}|)" + # Set include flags + OPENCL_INCLUDES="" + add_include_from_paths -p OPENCL_INCLUDES "CL" $INCLUDE_PATHS + if [ -e "${OPENCL_INCLUDES}/CL/cl.h" ]; then + OPENCL_FLAGS+=" ${OPENCL_INCLUDES}" + fi + fi + # Append OpenCL library to LIBS + LIBOPENCL=$(ldconfig -p 2> /dev/null | grep -m1 OpenCL | rev | cut -d' ' -f1 | rev) + if [ -e "${LIBOPENCL}" ]; then + echo "Found library ${LIBOPENCL}" + LIBS+=" IF_OPENCL(${LIBOPENCL}|)" + else + LIBS+=" IF_OPENCL(-lOpenCL|)" + fi +fi + +# ------------------------- +# generate the arch files +# ------------------------- + +# generator for ABACUS ARCH files +gen_arch_file() { + # usage: gen_arch_file file_name flags + # + # If the flags are present they are assumed to be on, otherwise + # they switched off + require_env ARCH_FILE_TEMPLATE + local __filename=$1 + shift + local __flags=$@ + local __full_flag_list="MPI DEBUG CUDA WARNALL COVERAGE" + local __flag="" + for __flag in $__full_flag_list; do + eval "local __${__flag}=off" + done + for __flag in $__flags; do + eval "__${__flag}=on" + done + # generate initial arch file + cat $ARCH_FILE_TEMPLATE > $__filename + # add additional parts + if [ "$__CUDA" = "on" ]; then + cat << EOF >> $__filename +# +GPUVER = \${GPUVER} +OFFLOAD_CC = \${NVCC} +OFFLOAD_FLAGS = \${NVFLAGS} +OFFLOAD_TARGET = cuda +EOF + fi + + if [ "$__HIP" = "on" ]; then + cat << EOF >> $__filename +# +GPUVER = \${GPUVER} +OFFLOAD_CC = \${ROCM_PATH}/hip/bin/hipcc +OFFLOAD_FLAGS = \${HIP_FLAGS} \${HIP_INCLUDES} +OFFLOAD_TARGET = hip +EOF + fi + + if [ "$__OPENCL" = "on" ]; then + cat << EOF >> $__filename +# +override DBCSR_USE_ACCEL = opencl +EOF + if [ "${OPENCL_FLAGS}" ]; then + cat << EOF >> $__filename +OFFLOAD_FLAGS = \${OPENCL_FLAGS} +EOF + fi + fi + + if [ "$__WARNALL" = "on" ]; then + cat << EOF >> $__filename +# +SHELL := bash +FC := set -o pipefail && \\\${FC} +CC := set -o pipefail && \\\${CC} +CXX := set -o pipefail && \\\${CXX} +LD := set -o pipefail && \\\${LD} +FCLOGPIPE = 2>&1 | tee \\\$(notdir \\\$<).warn +EOF + fi + if [ "$with_gcc" != "__DONTUSE__" ]; 
then + cat << EOF >> $__filename +# +FYPPFLAGS = -n --line-marker-format=gfortran5 +EOF + fi + if [ "${with_intel}" != "__DONTUSE__" ]; then + cat << EOF >> $__filename +# +# Required due to memory leak that occurs if high optimisations are used +mp2_optimize_ri_basis.o: mp2_optimize_ri_basis.F + \\\$(FC) -c \\\$(subst -O2,-O0,\\\$(FCFLAGS)) \\\$< +# Required due to SEGFAULTS occurring for higher optimisation levels +paw_basis_types.o: paw_basis_types.F + \\\$(FC) -c \\\$(subst -O2,-O1,\\\$(FCFLAGS)) \\\$< +# Reduce compilation time +hfx_contraction_methods.o: hfx_contraction_methods.F + \\\$(FC) -c \\\$(subst -O2,-O1,\\\$(FCFLAGS)) \\\$< +EOF + fi + # replace variable values in output file using eval + local __TMPL=$(cat $__filename) + eval "printf \"${__TMPL}\n\"" > $__filename + # pass this to parsers to replace all of the IF_XYZ statements + "${SCRIPTDIR}/parse_if.py" -i -f "${__filename}" $__flags + echo "Wrote ${INSTALLDIR}/arch/$__filename" +} + +rm -f ${INSTALLDIR}/arch/local* +# normal production arch files +if [ "${with_intel}" != "__DONTUSE__" ]; then + gen_arch_file "local.ssmp" + gen_arch_file "local.sdbg" DEBUG +else + gen_arch_file "local.ssmp" + gen_arch_file "local_static.ssmp" STATIC + gen_arch_file "local.sdbg" DEBUG + gen_arch_file "local_asan.ssmp" ASAN + gen_arch_file "local_coverage.sdbg" COVERAGE +fi +arch_vers="ssmp sdbg" + +if [ "$MPI_MODE" != no ]; then + if [ "${with_intel}" != "__DONTUSE__" ]; then + gen_arch_file "local.psmp" MPI + gen_arch_file "local.pdbg" MPI DEBUG + else + gen_arch_file "local.psmp" MPI + gen_arch_file "local.pdbg" MPI DEBUG + gen_arch_file "local_asan.psmp" MPI ASAN + gen_arch_file "local_static.psmp" MPI STATIC + gen_arch_file "local_warn.psmp" MPI WARNALL + gen_arch_file "local_coverage.pdbg" MPI COVERAGE + fi + arch_vers="${arch_vers} psmp pdbg" +fi + +# opencl enabled arch files +if [ "$ENABLE_OPENCL" = __TRUE__ ]; then + gen_arch_file "local_opencl.ssmp" OPENCL + gen_arch_file "local_opencl.sdbg" OPENCL DEBUG + if [ "$MPI_MODE" != no ]; then + gen_arch_file "local_opencl.psmp" OPENCL MPI + gen_arch_file "local_opencl.pdbg" OPENCL MPI DEBUG + gen_arch_file "local_opencl_warn.psmp" OPENCL MPI WARNALL + gen_arch_file "local_coverage_opencl.pdbg" OPENCL MPI COVERAGE + fi + DBCSR_OPENCL=OPENCL +fi + +# cuda enabled arch files +if [ "$ENABLE_CUDA" = __TRUE__ ]; then + gen_arch_file "local_cuda.ssmp" CUDA ${DBCSR_OPENCL} + gen_arch_file "local_cuda.sdbg" CUDA ${DBCSR_OPENCL} DEBUG + if [ "$MPI_MODE" != no ]; then + gen_arch_file "local_cuda.psmp" CUDA ${DBCSR_OPENCL} MPI + gen_arch_file "local_cuda.pdbg" CUDA ${DBCSR_OPENCL} MPI DEBUG + gen_arch_file "local_cuda_warn.psmp" CUDA ${DBCSR_OPENCL} MPI WARNALL + gen_arch_file "local_coverage_cuda.pdbg" CUDA ${DBCSR_OPENCL} MPI COVERAGE + fi +fi + +# hip enabled arch files +if [ "$ENABLE_HIP" = __TRUE__ ]; then + gen_arch_file "local_hip.ssmp" HIP + gen_arch_file "local_hip.sdbg" HIP DEBUG + if [ "$MPI_MODE" != no ]; then + gen_arch_file "local_hip.psmp" HIP MPI + gen_arch_file "local_hip.pdbg" HIP MPI DEBUG + gen_arch_file "local_hip_warn.psmp" HIP MPI WARNALL + gen_arch_file "local_coverage_hip.pdbg" HIP MPI COVERAGE + fi +fi + +cd "${ROOTDIR}" + +# ------------------------- +# print out user instructions +# ------------------------- + +cat << EOF +========================== usage ========================= +Done! 
+Now copy the generated arch files to the ABACUS arch/ directory: + cp ${INSTALLDIR}/arch/* ABACUS/arch/ +To use the installed tools and libraries, and the ABACUS version +compiled with them, first execute at the prompt: + source ${SETUPFILE} +To build ABACUS, change into the ABACUS directory and run make: + cd ABACUS/ + make -j $(get_nprocs) ARCH=local VERSION="${arch_vers}" + +arch files for GPU-enabled CUDA versions are named "local_cuda.*" +arch files for GPU-enabled HIP versions are named "local_hip.*" +arch files for OpenCL (GPU) versions are named "local_opencl.*" +arch files for coverage versions are named "local_coverage.*" + +Note that these pre-built arch files target the GNU compiler; users have to adapt them for other compilers. +The provided ABACUS arch files can be used as guidance. +EOF + +#EOF diff --git a/toolchain/scripts/get_openblas_arch.sh b/toolchain/scripts/get_openblas_arch.sh new file mode 100755 index 0000000000..17b93eeb62 --- /dev/null +++ b/toolchain/scripts/get_openblas_arch.sh @@ -0,0 +1,61 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")" && pwd -P)" + +openblas_ver="0.3.23" # Keep in sync with install_openblas.sh +openblas_sha256="5d9491d07168a5d00116cdc068a40022c3455bf9293c7cb86a65b1054d7e5114" +openblas_pkg="OpenBLAS-${openblas_ver}.tar.gz" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +find_openblas_dir() { + local __dir='' + for __dir in *OpenBLAS*; do + if [ -d "$__dir" ]; then + echo "$__dir" + return 0 + fi + done + echo '' +} + +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +echo "==================== Getting proc arch info using OpenBLAS tools ====================" +# find an existing OpenBLAS source dir +openblas_dir="$(find_openblas_dir)" +# if no OpenBLAS source directory is found, try to download one +if ! [ "$openblas_dir" ]; then + if [ -f ${openblas_pkg} ]; then + echo "${openblas_pkg} is found" + else + download_pkg_from_ABACUS_org "${openblas_sha256}" "${openblas_pkg}" + fi + tar -xzf ${openblas_pkg} + openblas_dir="$(find_openblas_dir)" +fi +openblas_conf="${openblas_dir}/Makefile.conf" +# try to find Makefile.conf; if it is missing, generate it with make lapack_prebuild +if ! [ -f "$openblas_conf" ]; then + cd "$openblas_dir" + make lapack_prebuild + cd ..
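+  # at this point Makefile.conf should exist; it contains lines such as "LIBCORE=haswell" and "ARCH=x86_64" (illustrative values; the greps below extract whatever the build host reports)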
+fi +OPENBLAS_LIBCORE="$(grep 'LIBCORE=' $openblas_conf | cut -f2 -d=)" +OPENBLAS_ARCH="$(grep 'ARCH=' $openblas_conf | cut -f2 -d=)" +echo "OpenBLAS detected LIBCORE = $OPENBLAS_LIBCORE" +echo "OpenBLAS detected ARCH = $OPENBLAS_ARCH" +# output setup file +cat << EOF > "${BUILDDIR}/openblas_arch" +export OPENBLAS_LIBCORE="${OPENBLAS_LIBCORE}" +export OPENBLAS_ARCH="${OPENBLAS_ARCH}" +EOF diff --git a/toolchain/scripts/parse_if.py b/toolchain/scripts/parse_if.py new file mode 100755 index 0000000000..32c34ff022 --- /dev/null +++ b/toolchain/scripts/parse_if.py @@ -0,0 +1,168 @@ +#!/usr/bin/env python3 + + +import sys +import argparse + + +class Parser: + "Parser for files with IF_XYZ(A|B) constructs" + + def __init__(self, switches): + self.mSwitches = switches + + def Switches(self): + "outputs the switches used by parser" + return self.mSwitches + + def SetSwitch(self, key, val): + "Set or add a (key,val) switch" + self.mSwitches[key] = val + + def ParseSingleIf(self, string, switch): + """ + switch should be a tuple (key, val) + ParseSingleIf(string, switch) will replace in string the + first occurrence of IF_key(A|B) with A if val = True; + B if val = False + """ + init = string.find("IF_" + switch[0]) + start = init + end = len(string) + mark = end + counter = 0 + ind = start + # determine the correct location for '|' + for cc in string[init:]: + if cc == "(": + if counter == 0: + start = ind + counter += 1 + elif cc == ")": + counter -= 1 + if counter == 0: + end = ind + break + elif cc == "|" and counter == 1: + mark = ind + ind += 1 + # resolve the option + if switch[1]: + result = ( + string[0:init] + + string[start + 1 : mark] + + string[end + 1 : len(string)] + ) + else: + result = ( + string[0:init] + string[mark + 1 : end] + string[end + 1 : len(string)] + ) + return result + + def ParseIf(self, string, switch): + """ + ParseIf(string, switch) will replace in string recursively + occurance of IF_key(A|B) statements with A if val = True; + B if val = False + """ + result = string + while result.find("IF_" + switch[0]) > -1: + result = self.ParseSingleIf(result, switch) + return result + + def ParseString(self, string): + """ + ParseString(string) will parse in string recursively + all of IF_key(A|B) statements for all (key, val) pairs + in dictionary self.mSwitches + """ + result = string + for switch in self.mSwitches.items(): + result = self.ParseIf(result, switch) + return result + + def ParseDocument(self, input_file, output_file): + """ + ParseDocument(input_file, output_fiel) will replace recursively + all of IF_key(A|B) statements for all (key, val) pairs + in dictionary self.mSwitches in the input_file stream and output + to the output_file stream. 
+ """ + output = [] + for line in input_file: + output.append(self.ParseString(line)) + + if input_file == output_file: + output_file.seek(0) + output_file.truncate() + + # write to the same file + for line in output: + output_file.write(line) + + +# ------------------------------------------------------------------------ +# main program +# ------------------------------------------------------------------------ +if __name__ == "__main__": + argparser = argparse.ArgumentParser( + description="Resolve IF_*() macros based on given tags" + ) + argparser.add_argument( + "-f", + "--file", + metavar="FILENAME", + type=str, + help="read from given file instead of stdin", + ) + argparser.add_argument( + "-i", + "--inplace", + action="store_true", + help="do in-place replacement for the given file instead of echo to stdout", + ) + argparser.add_argument("--selftest", action="store_true", help="run self test") + argparser.add_argument( + "flags", + metavar="FLAG", + type=str, + nargs="*", + help="specified flags are set to true", + ) + + args = argparser.parse_args() + + # default list of switches used by the parser + switches = { + "MPI": False, + "CUDA": False, + "HIP": False, + "OPENCL": False, + "WARNALL": False, + "DEBUG": False, + "ASAN": False, + "STATIC": False, + "VALGRIND": False, + "COVERAGE": False, + } + + parser = Parser(switches) + + # set the list of switches given on the command line to True + for flag in args.flags: + parser.SetSwitch(flag, True) + + if args.selftest: + sys.exit(0) # TODO implement selftest + + # do parsing + + if not args.file: + parser.ParseDocument(sys.stdin, sys.stdout) + else: + if args.inplace: + with open(args.file, mode="r+") as fhandle: + parser.ParseDocument(fhandle, fhandle) + else: + with open(args.file, mode="r") as fhandle: + parser.ParseDocument(fhandle, sys.stdout) diff --git a/toolchain/scripts/signal_trap.sh b/toolchain/scripts/signal_trap.sh new file mode 100755 index 0000000000..8fd7f94260 --- /dev/null +++ b/toolchain/scripts/signal_trap.sh @@ -0,0 +1,7 @@ +# signal trapping + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all +# shellcheck shell=bash + +trap 'error_handler ${LINENO}' ERR diff --git a/toolchain/scripts/stage0/install_cmake.sh b/toolchain/scripts/stage0/install_cmake.sh new file mode 100755 index 0000000000..816c2eba17 --- /dev/null +++ b/toolchain/scripts/stage0/install_cmake.sh @@ -0,0 +1,78 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_cmake" ] && rm "${BUILDDIR}/setup_cmake" + +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_cmake}" in + __INSTALL__) + echo "==================== Installing CMake ====================" + cmake_ver="3.26.3" + if [ "${OPENBLAS_ARCH}" = "arm64" ]; then + cmake_arch="linux-aarch64" + cmake_sha256="b002c22b926aacd6fefe64bcf08620216088eb72f55ac532b7bcfd4d93443d50" + elif [ "${OPENBLAS_ARCH}" = "x86_64" ]; then + cmake_arch="linux-x86_64" + cmake_sha256="8ec0ef24375a1d0e78de2f790b4545d0718acc55fd7e2322ecb8e135696c77fe" + else + report_error ${LINENO} \ + "cmake installation for ARCH=${ARCH} is not supported. 
You can try to use the system installation using the flag --with-cmake=system instead." + exit 1 + fi + pkg_install_dir="${INSTALLDIR}/cmake-${cmake_ver}" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "cmake-${cmake_ver} is already installed, skipping it." + else + if [ -f cmake-${cmake_ver}-${cmake_arch}.sh ]; then + echo "cmake-${cmake_ver}-${cmake_arch}.sh is found" + else + download_pkg_from_ABACUS_org "${cmake_sha256}" "cmake-${cmake_ver}-${cmake_arch}.sh" + fi + echo "Installing from scratch into ${pkg_install_dir}" + mkdir -p ${pkg_install_dir} + /bin/sh cmake-${cmake_ver}-${cmake_arch}.sh --prefix=${pkg_install_dir} --skip-license > install.log 2>&1 || tail -n ${LOG_LINES} install.log + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage0/$(basename ${SCRIPT_NAME})" + fi + ;; + __SYSTEM__) + echo "==================== Finding CMake from system paths ====================" + check_command cmake "cmake" + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking CMake to user paths ====================" + pkg_install_dir="$with_cmake" + check_dir "${with_cmake}/bin" + ;; +esac +if [ "${with_cmake}" != "__DONTUSE__" ]; then + if [ "${with_cmake}" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_cmake" +prepend_path PATH "${pkg_install_dir}/bin" +export PATH="${pkg_install_dir}/bin":${PATH} +EOF + cat "${BUILDDIR}/setup_cmake" >> $SETUPFILE + fi +fi + +load "${BUILDDIR}/setup_cmake" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "cmake" diff --git a/toolchain/scripts/stage0/install_gcc.sh b/toolchain/scripts/stage0/install_gcc.sh new file mode 100755 index 0000000000..5c8a12766d --- /dev/null +++ b/toolchain/scripts/stage0/install_gcc.sh @@ -0,0 +1,227 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +gcc_ver="13.1.0" +gcc_sha256="bacd4c614d8bd5983404585e53478d467a254249e0f1bb747c8bc6d787bd4fa2" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_gcc" ] && rm "${BUILDDIR}/setup_gcc" + +GCC_LDFLAGS="" +GCC_CFLAGS="" +TSANFLAGS="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_gcc}" in + __INSTALL__) + echo "==================== Installing GCC ====================" + pkg_install_dir="${INSTALLDIR}/gcc-${gcc_ver}" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "gcc-${gcc_ver} is already installed, skipping it." + else + if [ -f gcc-${gcc_ver}.tar.gz ]; then + echo "gcc-${gcc_ver}.tar.gz is found" + else + download_pkg_from_ABACUS_org "${gcc_sha256}" "gcc-${gcc_ver}.tar.gz" + fi + [ -d gcc-${gcc_ver} ] && rm -rf gcc-${gcc_ver} + tar -xzf gcc-${gcc_ver}.tar.gz + + echo "Installing GCC from scratch into ${pkg_install_dir}" + cd gcc-${gcc_ver} + + # Download prerequisites from cp2k.org because gcc.gnu.org returns 403 when queried from GCP.
+ sed -i 's|http://gcc.gnu.org/pub/gcc/infrastructure/|https://cp2k.org/static/downloads/|' ./contrib/download_prerequisites + ./contrib/download_prerequisites > prereq.log 2>&1 || tail -n ${LOG_LINES} prereq.log + GCCROOT=${PWD} + mkdir obj + cd obj + # TODO: Maybe use --disable-libquadmath-support to improve static linking: + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=46539 + # + # TODO: Maybe use --disable-gnu-unique-object to improve static linking: + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60348#c13 + # https://stackoverflow.com/questions/11931420 + # + # TODO: Unfortunately, we can not simply use --disable-shared, because + # it would break OpenBLAS build and probably others too. + COMMON_FLAGS="-O2 -fPIC -fno-omit-frame-pointer -fopenmp -g" + CFLAGS="${COMMON_FLAGS} -std=gnu99" + CXXFLAGS="${CFLAGS}" + FCFLAGS="${COMMON_FLAGS} -fbacktrace" + ${GCCROOT}/configure --prefix="${pkg_install_dir}" \ + --libdir="${pkg_install_dir}/lib" \ + --enable-languages=c,c++,fortran \ + --disable-multilib --disable-bootstrap \ + --enable-lto \ + --enable-plugins \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) \ + CFLAGS="${CFLAGS}" \ + CXXFLAGS="${CXXFLAGS}" \ + FCFLAGS="${FCFLAGS}" \ + > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make -j $(get_nprocs) install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + # thread sanitizer + if [ ${ENABLE_TSAN} = "__TRUE__" ]; then + # now the tricky bit... we need to recompile in particular + # libgomp with -fsanitize=thread.. there is not configure + # option for this (as far as I know). we need to go in + # the build tree and recompile / reinstall with proper + # options... this is likely to break for later version of + # gcc, tested with 5.1.0 based on + # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55374#c10 + cd x86_64*/libgfortran + make clean > clean.log 2>&1 || tail -n ${LOG_LINES} clean.log + CFLAGS="${CFLAGS} -fsanitize=thread" + CXXFLAGS="${CXXFLAGS} -fsanitize=thread" + FCFLAGS="${FCFLAGS} -fsanitize=thread" + make -j $(get_nprocs) \ + CFLAGS="${CFLAGS}" \ + CXXFLAGS="${CXXFLAGS}" \ + FCFLAGS="${FCFLAGS}" \ + LDFLAGS="-B$(pwd)/../libsanitizer/tsan/.libs/ -Wl,-rpath,$(pwd)/../libsanitizer/tsan/.libs/ -fsanitize=thread" \ + > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd ../libgomp + make clean > clean.log 2>&1 || tail -n ${LOG_LINES} clean.log + make -j $(get_nprocs) \ + CFLAGS="${CFLAGS}" \ + CXXFLAGS="${CXXFLAGS}" \ + FCFLAGS="${FCFLAGS}" \ + LDFLAGS="-B$(pwd)/../libsanitizer/tsan/.libs/ -Wl,-rpath,$(pwd)/../libsanitizer/tsan/.libs/ -fsanitize=thread" \ + > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd ${GCCROOT}/obj/ + fi + cd ../.. 
+ write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage0/$(basename ${SCRIPT_NAME})" + fi + check_install ${pkg_install_dir}/bin/gcc "gcc" && CC="${pkg_install_dir}/bin/gcc" || exit 1 + check_install ${pkg_install_dir}/bin/g++ "gcc" && CXX="${pkg_install_dir}/bin/g++" || exit 1 + check_install ${pkg_install_dir}/bin/gfortran "gcc" && FC="${pkg_install_dir}/bin/gfortran" || exit 1 + F90="${FC}" + F77="${FC}" + GCC_CFLAGS="-I'${pkg_install_dir}/include'" + GCC_LDFLAGS="-L'${pkg_install_dir}/lib64' -L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib64' -Wl,-rpath,'${pkg_install_dir}/lib64'" + ;; + __SYSTEM__) + echo "==================== Finding GCC from system paths ====================" + check_command gcc "gcc" && CC="$(command -v gcc)" || exit 1 + check_command g++ "gcc" && CXX="$(command -v g++)" || exit 1 + check_command gfortran "gcc" && FC="$(command -v gfortran)" || exit 1 + F90="${FC}" + F77="${FC}" + add_include_from_paths -p GCC_CFLAGS "c++" ${INCLUDE_PATHS} + add_lib_from_paths GCC_LDFLAGS "libgfortran.*" ${LIB_PATHS} + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking GCC to user paths ====================" + pkg_install_dir="${with_gcc}" + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/lib64" + check_dir "${pkg_install_dir}/include" + check_command ${pkg_install_dir}/bin/gcc "gcc" && CC="${pkg_install_dir}/bin/gcc" || exit 1 + check_command ${pkg_install_dir}/bin/g++ "gcc" && CXX="${pkg_install_dir}/bin/g++" || exit 1 + check_command ${pkg_install_dir}/bin/gfortran "gcc" && FC="${pkg_install_dir}/bin/gfortran" || exit 1 + F90="${FC}" + F77="${FC}" + GCC_CFLAGS="-I'${pkg_install_dir}/include'" + GCC_LDFLAGS="-L'${pkg_install_dir}/lib64' -L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib64' -Wl,-rpath,'${pkg_install_dir}/lib64'" + ;; +esac +if [ "${ENABLE_TSAN}" = "__TRUE__" ]; then + TSANFLAGS="-fsanitize=thread" +else + TSANFLAGS="" +fi +if [ "${with_gcc}" != "__DONTUSE__" ]; then + cat << EOF > "${BUILDDIR}/setup_gcc" +export CC="${CC}" +export CXX="${CXX}" +export FC="${FC}" +export F90="${F90}" +export F77="${F77}" +EOF + if [ "${with_gcc}" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_gcc" +# needs full path for mpich/openmpi builds, triggers openblas bug +prepend_path PATH "${pkg_install_dir}/bin" +prepend_path LD_LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path LD_LIBRARY_PATH "${pkg_install_dir}/lib64" +prepend_path LD_RUN_PATH "${pkg_install_dir}/lib" +prepend_path LD_RUN_PATH "${pkg_install_dir}/lib64" +prepend_path LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path LIBRARY_PATH "${pkg_install_dir}/lib64" +prepend_path CPATH "${pkg_install_dir}/include" +export LD_LIBRARY_PATH="${pkg_install_dir}/lib":$LD_LIBRARY_PATH +export LD_LIBRARY_PATH="${pkg_install_dir}/lib64":$LD_LIBRARY_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib":$LD_RUN_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib64":$LD_RUN_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib":$LIBRARY_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib64":$LIBRARY_PATH +export CPATH="${pkg_install_dir}/include":$CPATH +EOF + fi + cat << EOF >> "${BUILDDIR}/setup_gcc" +export GCC_CFLAGS="${GCC_CFLAGS}" +export GCC_LDFLAGS="${GCC_LDFLAGS}" +export TSANFLAGS="${TSANFLAGS}" +EOF + cat "${BUILDDIR}/setup_gcc" >> ${SETUPFILE} +fi + +# ---------------------------------------------------------------------- +# Suppress reporting of known leaks +# 
---------------------------------------------------------------------- + +# this might need to be adjusted for the versions of the software +# employed +cat << EOF >> ${INSTALLDIR}/lsan.supp +# known leak either related to mpi or scalapack (e.g. showing randomly for Fist/regtest-7-2/UO2-2x2x2-genpot_units.inp) +leak:__cp_fm_types_MOD_cp_fm_write_unformatted +# leak related to mpi or scalapack triggers sometimes for regtest-kp-2/cc2.inp +leak:Cblacs_gridmap +leak:blacs_gridmap_ +# leak due to compiler bug triggered by combination of OOP and ALLOCATABLE +leak:__dbcsr_tensor_types_MOD___copy_dbcsr_tensor_types_Dbcsr_tas_dist_t +leak:__dbcsr_tensor_types_MOD___copy_dbcsr_tensor_types_Dbcsr_tas_blk_size_t +EOF +cat << EOF >> ${INSTALLDIR}/tsan.supp +# tsan bugs likely related to gcc +# PR66756 +deadlock:_gfortran_st_open +mutex:_gfortran_st_open +# bugs related to removing/filtering blocks in DBCSR.. to be fixed +race:__dbcsr_block_access_MOD_dbcsr_remove_block +race:__dbcsr_operations_MOD_dbcsr_filter_anytype +race:__dbcsr_transformations_MOD_dbcsr_make_untransposed_blocks +EOF + +# need to also link to the .supp file in setup file +cat << EOF >> ${SETUPFILE} +export LSAN_OPTIONS=suppressions=${INSTALLDIR}/lsan.supp +export TSAN_OPTIONS=suppressions=${INSTALLDIR}/tsan.supp +EOF + +load "${BUILDDIR}/setup_gcc" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "gcc" diff --git a/toolchain/scripts/stage0/install_intel.sh b/toolchain/scripts/stage0/install_intel.sh new file mode 100755 index 0000000000..f0053a4360 --- /dev/null +++ b/toolchain/scripts/stage0/install_intel.sh @@ -0,0 +1,105 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=${0} +SCRIPT_DIR="$(cd "$(dirname "${SCRIPT_NAME}")/.." 
&& pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_intel" ] && rm "${BUILDDIR}/setup_intel" + +INTEL_CFLAGS="" +INTEL_LDFLAGS="" +INTEL_LIBS="" +mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_intel}" in + __INSTALL__) + echo "==================== Installing the Intel compiler ====================" + echo "__INSTALL__ is not supported; please install the Intel compiler manually" + exit 1 + ;; + __SYSTEM__) + echo "==================== Finding Intel compiler from system paths ====================" + if [ "${intel_classic}" = "yes" ]; then + check_command icc "intel" && CC="$(realpath $(command -v icc))" || exit 1 + check_command icpc "intel" && CXX="$(realpath $(command -v icpc))" || exit 1 + check_command ifort "intel" && FC="$(realpath $(command -v ifort))" || exit 1 + else + check_command icx "intel" && CC="$(realpath $(command -v icx))" || exit 1 + check_command icpx "intel" && CXX="$(realpath $(command -v icpx))" || exit 1 + check_command ifort "intel" && FC="$(realpath $(command -v ifort))" || exit 1 + fi + F90="${FC}" + F77="${FC}" + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking Intel compiler to user paths ====================" + pkg_install_dir="${with_intel}" + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + if [ "${intel_classic}" = "yes" ]; then + check_command ${pkg_install_dir}/bin/icc "intel" && CC="${pkg_install_dir}/bin/icc" || exit 1 + check_command ${pkg_install_dir}/bin/icpc "intel" && CXX="${pkg_install_dir}/bin/icpc" || exit 1 + check_command ${pkg_install_dir}/bin/ifort "intel" && FC="${pkg_install_dir}/bin/ifort" || exit 1 + else + # abacus do not need icx, the key is mkl + check_command ${pkg_install_dir}/bin/icx "intel" && CC="${pkg_install_dir}/bin/icx" || exit 1 + check_command ${pkg_install_dir}/bin/icpx "intel" && CXX="${pkg_install_dir}/bin/icpx" || exit 1 + check_command ${pkg_install_dir}/bin/ifort "intel" && FC="${pkg_install_dir}/bin/ifort" || exit 1 + fi + F90="${FC}" + F77="${FC}" + INTEL_CFLAGS="-I'${pkg_install_dir}/include'" + INTEL_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "${with_intel}" != "__DONTUSE__" ]; then + echo "CC is ${CC}" + echo "CXX is ${CXX}" + echo "FC is ${FC}" + cat << EOF > "${BUILDDIR}/setup_intel" +export CC="${CC}" +export CXX="${CXX}" +export FC="${FC}" +export F90="${F90}" +export F77="${F77}" +EOF + if [ "${with_intel}" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_intel" +prepend_path PATH "${pkg_install_dir}/bin" +prepend_path LD_LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path LD_RUN_PATH "${pkg_install_dir}/lib" +prepend_path LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path CPATH "${pkg_install_dir}/include" +export PATH="${pkg_install_dir}/bin":$PATH +export LD_LIBRARY_PATH="${pkg_install_dir}/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib":$LD_RUN_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib":$LIBRARY_PATH +export CPATH="${pkg_install_dir}/include":$CPATH +EOF + fi + cat << EOF >> "${BUILDDIR}/setup_intel" +export INTEL_CFLAGS="${INTEL_CFLAGS}" +export INTEL_LDFLAGS="${INTEL_LDFLAGS}" +export INTEL_LIBS="${INTEL_LIBS}" +EOF + cat "${BUILDDIR}/setup_intel" >> ${SETUPFILE} +fi + +load "${BUILDDIR}/setup_intel" +write_toolchain_env 
"${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "intel" diff --git a/toolchain/scripts/stage0/install_stage0.sh b/toolchain/scripts/stage0/install_stage0.sh new file mode 100755 index 0000000000..a398fdc0fa --- /dev/null +++ b/toolchain/scripts/stage0/install_stage0.sh @@ -0,0 +1,11 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +./scripts/stage0/install_gcc.sh +./scripts/stage0/install_intel.sh +./scripts/stage0/setup_buildtools.sh +./scripts/stage0/install_cmake.sh + +#EOF diff --git a/toolchain/scripts/stage0/setup_buildtools.sh b/toolchain/scripts/stage0/setup_buildtools.sh new file mode 100755 index 0000000000..1d6c7ed569 --- /dev/null +++ b/toolchain/scripts/stage0/setup_buildtools.sh @@ -0,0 +1,72 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "${SCRIPT_NAME}")/.." && pwd -P)" + +source ${SCRIPT_DIR}/common_vars.sh +source ${SCRIPT_DIR}/tool_kit.sh +source ${SCRIPT_DIR}/signal_trap.sh +source ${INSTALLDIR}/toolchain.conf +source ${INSTALLDIR}/toolchain.env + +for ii in $tool_list; do + load "${BUILDDIR}/setup_${ii}" +done + +# ------------------------------------------------------------------------ +# Install or compile packages using newly installed tools +# ------------------------------------------------------------------------ + +# Setup compiler flags, leading to nice stack traces on crashes but still optimised +if [ "${with_intel}" != "__DONTUSE__" ]; then + CFLAGS="-O2 -fPIC -fp-model=precise -funroll-loops -g -qopenmp -qopenmp-simd -traceback" + if [ "${TARGET_CPU}" = "native" ]; then + CFLAGS="${CFLAGS} -xHost" + elif [ "${TARGET_CPU}" = "generic" ]; then + CFLAGS="${CFLAGS} -mtune=${TARGET_CPU}" + else + CFLAGS="${CFLAGS} -march=${TARGET_CPU} -mtune=${TARGET_CPU}" + fi + FFLAGS="${CFLAGS}" +else + CFLAGS="-O2 -fPIC -fno-omit-frame-pointer -fopenmp -g" + if [ "${TARGET_CPU}" = "generic" ]; then + CFLAGS="${CFLAGS} -mtune=generic ${TSANFLAGS}" + else + CFLAGS="${CFLAGS} -march=${TARGET_CPU} -mtune=${TARGET_CPU} ${TSANFLAGS}" + fi + FFLAGS="${CFLAGS} -fbacktrace" +fi +CXXFLAGS="${CFLAGS}" +F77FLAGS="${FFLAGS}" +F90FLAGS="${FFLAGS}" +FCFLAGS="${FFLAGS}" + +if [ "${with_intel}" == "__DONTUSE__" ]; then + export CFLAGS="$(allowed_gcc_flags ${CFLAGS})" + export FFLAGS="$(allowed_gfortran_flags ${FFLAGS})" + export F77FLAGS="$(allowed_gfortran_flags ${F77FLAGS})" + export F90FLAGS="$(allowed_gfortran_flags ${F90FLAGS})" + export FCFLAGS="$(allowed_gfortran_flags ${FCFLAGS})" + export CXXFLAGS="$(allowed_gxx_flags ${CXXFLAGS})" +else + # TODO Check functions for allowed Intel compiler flags + export CFLAGS + export FFLAGS + export F77FLAGS + export F90FLAGS + export FCFLAGS + export CXXFLAGS +fi +export LDFLAGS="${TSANFLAGS}" + +# get system arch information using OpenBLAS prebuild +${SCRIPTDIR}/get_openblas_arch.sh +load "${BUILDDIR}/openblas_arch" + +write_toolchain_env "${INSTALLDIR}" + +#EOF diff --git a/toolchain/scripts/stage1/install_intelmpi.sh b/toolchain/scripts/stage1/install_intelmpi.sh new file mode 100755 index 0000000000..ba7586d079 --- /dev/null +++ b/toolchain/scripts/stage1/install_intelmpi.sh @@ -0,0 +1,129 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ ${MPI_MODE} != "intelmpi" ] && exit 0 +rm -f "${BUILDDIR}/setup_intelmpi" + +INTELMPI_CFLAGS="" +INTELMPI_LDFLAGS="" +INTELMPI_LIBS="" +mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_intelmpi}" in + __INSTALL__) + echo "==================== Installing Intel MPI ====================" + echo '__INSTALL__ is not supported; please manually install Intel MPI' + exit 1 + ;; + __SYSTEM__) + echo "==================== Finding Intel MPI from system paths ====================" + check_command mpiexec "intelmpi" && MPIRUN="$(realpath $(command -v mpiexec))" + if [ "${with_intel}" != "__DONTUSE__" ]; then + check_command mpiicc "intelmpi" && MPICC="$(realpath $(command -v mpiicc))" || exit 1 + check_command mpiicpc "intelmpi" && MPICXX="$(realpath $(command -v mpiicpc))" || exit 1 + check_command mpiifort "intelmpi" && MPIFC="$(realpath $(command -v mpiifort))" || exit 1 + else + echo "The use of Intel MPI is only supported with the Intel compiler" + exit 1 + fi + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + # include path is already handled by compiler wrapper scripts (can cause wrong mpi.mod with GNU Fortran) + # add_include_from_paths INTELMPI_CFLAGS "mpi.h" $INCLUDE_PATHS + add_lib_from_paths INTELMPI_LDFLAGS "libmpi.*" $LIB_PATHS + check_lib -lmpi "intelmpi" + check_lib -lmpicxx "intelmpi" + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking INTELMPI to user paths ====================" + pkg_install_dir="${with_intelmpi}" + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + check_command ${pkg_install_dir}/bin/mpiexec "intel" && MPIRUN="${pkg_install_dir}/bin/mpiexec" || exit 1 + if [ "${with_intel}" != "__DONTUSE__" ]; then + check_command ${pkg_install_dir}/bin/mpiicc "intel" && MPICC="${pkg_install_dir}/bin/mpiicc" || exit 1 + check_command ${pkg_install_dir}/bin/mpiicpc "intel" && MPICXX="${pkg_install_dir}/bin/mpiicpc" || exit 1 + check_command ${pkg_install_dir}/bin/mpiifort "intel" && MPIFC="${pkg_install_dir}/bin/mpiifort" || exit 1 + else + echo "The use of Intel MPI is only supported with the Intel compiler" + exit 1 + fi + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + # include path is already handled by compiler wrapper scripts (can cause wrong mpi.mod with GNU Fortran) + #INTELMPI_CFLAGS="-I'${pkg_install_dir}/include'" + INTELMPI_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "${with_intelmpi}" != "__DONTUSE__" ]; then + if [ "${intel_classic}" = "yes" ]; then + I_MPI_CXX="icpc" + I_MPI_CC="icc" + I_MPI_FC="ifort" + else + I_MPI_CXX="icpx" + I_MPI_CC="icx" + I_MPI_FC="ifort" + fi + INTELMPI_LIBS="-lmpi -lmpicxx" + echo "I_MPI_CXX is ${I_MPI_CXX}" + echo "I_MPI_CC is ${I_MPI_CC}" + echo "I_MPI_FC is ${I_MPI_FC}" + echo "MPICXX is ${MPICXX}" + echo "MPICC is ${MPICC}" + echo "MPIFC is ${MPIFC}" + cat << EOF > "${BUILDDIR}/setup_intelmpi" +export I_MPI_CXX="${I_MPI_CXX}" +export I_MPI_CC="${I_MPI_CC}" +export I_MPI_FC="${I_MPI_FC}" +export MPI_MODE="${MPI_MODE}" +export MPIRUN="${MPIRUN}" +export MPICC="${MPICC}" +export MPICXX="${MPICXX}" +export MPIFC="${MPIFC}" +export MPIFORT="${MPIFORT}" +export MPIF77="${MPIF77}" +export INTELMPI_CFLAGS="${INTELMPI_CFLAGS}" +export INTELMPI_LDFLAGS="${INTELMPI_LDFLAGS}" +export 
INTELMPI_LIBS="${INTELMPI_LIBS}" +export MPI_CFLAGS="${INTELMPI_CFLAGS}" +export MPI_LDFLAGS="${INTELMPI_LDFLAGS}" +export MPI_LIBS="${INTELMPI_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__parallel -D__MPI_F08|)" +export CP_CFLAGS="\${CP_CFLAGS} IF_MPI(${INTELMPI_CFLAGS}|)" +export CP_LDFLAGS="\${CP_LDFLAGS} IF_MPI(${INTELMPI_LDFLAGS}|)" +export CP_LIBS="\${CP_LIBS} IF_MPI(${INTELMPI_LIBS}|)" +EOF + if [ "${with_intelmpi}" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_intelmpi" +export PATH="${pkg_install_dir}/bin":$PATH +export LD_LIBRARY_PATH="${pkg_install_dir}/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib":$LD_RUN_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib":$LIBRARY_PATH +export CPATH="${pkg_install_dir}/include":$CPATH +EOF + fi + cat "${BUILDDIR}/setup_intelmpi" >> ${SETUPFILE} +fi + +load "${BUILDDIR}/setup_intelmpi" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "intelmpi" diff --git a/toolchain/scripts/stage1/install_mpich.sh b/toolchain/scripts/stage1/install_mpich.sh new file mode 100755 index 0000000000..569a747ed1 --- /dev/null +++ b/toolchain/scripts/stage1/install_mpich.sh @@ -0,0 +1,168 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +# mpich_ver="4.0.3" +# mpich_sha256="17406ea90a6ed4ecd5be39c9ddcbfac9343e6ab4f77ac4e8c5ebe4a3e3b6c501" +mpich_ver="4.1.2" +mpich_sha256="3492e98adab62b597ef0d292fb2459b6123bc80070a8aa0a30be6962075a12f0" +mpich_pkg="mpich-${mpich_ver}.tar.gz" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ ${MPI_MODE} != "mpich" ] && exit 0 +[ -f "${BUILDDIR}/setup_mpich" ] && rm "${BUILDDIR}/setup_mpich" + +MPICH_CFLAGS="" +MPICH_LDFLAGS="" +MPICH_LIBS="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_mpich}" in + __INSTALL__) + echo "==================== Installing MPICH ====================" + pkg_install_dir="${INSTALLDIR}/mpich-${mpich_ver}" + #pkg_install_dir="${HOME}/apps/mpich/${mpich_ver}-intel" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "mpich-${mpich_ver} is already installed, skipping it." + else + if [ -f ${mpich_pkg} ]; then + echo "${mpich_pkg} is found" + else + download_pkg_from_ABACUS_org "${mpich_sha256}" "${mpich_pkg}" + fi + echo "Installing from scratch into ${pkg_install_dir} for MPICH device ${MPICH_DEVICE}" + [ -d mpich-${mpich_ver} ] && rm -rf mpich-${mpich_ver} + tar -xzf ${mpich_pkg} + cd mpich-${mpich_ver} + unset F90 + unset F90FLAGS + + # workaround for compilation with GCC >= 10, until properly fixed: + # https://github.com/pmodels/mpich/issues/4300 + if ("${FC}" --version | grep -q 'GNU'); then + compat_flag=$(allowed_gfortran_flags "-fallow-argument-mismatch") + fi + ./configure \ + --prefix="${pkg_install_dir}" \ + --libdir="${pkg_install_dir}/lib" \ + MPICC="" \ + FFLAGS="${FCFLAGS} ${compat_flag}" \ + FCFLAGS="${FCFLAGS} ${compat_flag}" \ + --with-device=${MPICH_DEVICE} \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd .. 
+ write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage1/$(basename ${SCRIPT_NAME})" + fi + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + check_install ${pkg_install_dir}/bin/mpiexec "mpich" && MPIRUN="${pkg_install_dir}/bin/mpiexec" || exit 1 + check_install ${pkg_install_dir}/bin/mpicc "mpich" && MPICC="${pkg_install_dir}/bin/mpicc" || exit 1 + check_install ${pkg_install_dir}/bin/mpicxx "mpich" && MPICXX="${pkg_install_dir}/bin/mpicxx" || exit 1 + check_install ${pkg_install_dir}/bin/mpifort "mpich" && MPIFC="${pkg_install_dir}/bin/mpifort" || exit 1 + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + MPICH_CFLAGS="-I'${pkg_install_dir}/include'" + MPICH_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; + __SYSTEM__) + echo "==================== Finding MPICH from system paths ====================" + check_command mpiexec "mpich" && MPIRUN="$(command -v mpiexec)" + check_command mpicc "mpich" && MPICC="$(command -v mpicc)" || exit 1 + if [ $(command -v mpic++ > /dev/null 2>&1) ]; then + check_command mpic++ "mpich" && MPICXX="$(command -v mpic++)" || exit 1 + else + check_command mpicxx "mpich" && MPICXX="$(command -v mpicxx)" || exit 1 + fi + check_command mpifort "mpich" && MPIFC="$(command -v mpifort)" || exit 1 + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + check_lib -lmpifort "mpich" + check_lib -lmpicxx "mpich" + check_lib -lmpi "mpich" + add_include_from_paths MPICH_CFLAGS "mpi.h" ${INCLUDE_PATHS} + add_lib_from_paths MPICH_LDFLAGS "libmpi.*" ${LIB_PATHS} + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking MPICH to user paths ====================" + pkg_install_dir="${with_mpich}" + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + check_command ${pkg_install_dir}/bin/mpiexec "mpich" && MPIRUN="${pkg_install_dir}/bin/mpiexec" || exit 1 + check_command ${pkg_install_dir}/bin/mpicc "mpich" && MPICC="${pkg_install_dir}/bin/mpicc" || exit 1 + check_command ${pkg_install_dir}/bin/mpicxx "mpich" && MPICXX="${pkg_install_dir}/bin/mpicxx" || exit 1 + check_command ${pkg_install_dir}/bin/mpifort "mpich" && MPIFC="${pkg_install_dir}/bin/mpifort" || exit 1 + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + MPICH_CFLAGS="-I'${pkg_install_dir}/include'" + MPICH_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "${with_mpich}" != "__DONTUSE__" ]; then + if [ "${with_mpich}" != "__SYSTEM__" ]; then + mpi_bin="${pkg_install_dir}/bin/mpiexec" + else + mpi_bin="mpiexec" + fi + MPICH_LIBS="-lmpifort -lmpicxx -lmpi" + cat << EOF > "${BUILDDIR}/setup_mpich" +export MPI_MODE="${MPI_MODE}" +export MPIRUN="${MPIRUN}" +export MPICC="${MPICC}" +export MPICXX="${MPICXX}" +export MPIFC="${MPIFC}" +export MPIFORT="${MPIFORT}" +export MPIF77="${MPIF77}" +export MPICH_CFLAGS="${MPICH_CFLAGS}" +export MPICH_LDFLAGS="${MPICH_LDFLAGS}" +export MPICH_LIBS="${MPICH_LIBS}" +export MPI_CFLAGS="${MPICH_CFLAGS}" +export MPI_LDFLAGS="${MPICH_LDFLAGS}" +export MPI_LIBS="${MPICH_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__parallel|)" +export CP_CFLAGS="\${CP_CFLAGS} IF_MPI(${MPICH_CFLAGS}|)" +export CP_LDFLAGS="\${CP_LDFLAGS} IF_MPI(${MPICH_LDFLAGS}|)" +export CP_LIBS="\${CP_LIBS} IF_MPI(${MPICH_LIBS}|)" +EOF + if [ "${with_mpich}" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_mpich" +export PATH="${pkg_install_dir}/bin":$PATH +export 
LD_LIBRARY_PATH="${pkg_install_dir}/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib":$LD_RUN_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib":$LIBRARY_PATH +export CPATH="${pkg_install_dir}/include":$CPATH + +EOF + fi + cat "${BUILDDIR}/setup_mpich" >> ${SETUPFILE} +fi + +# Update leak suppression file +cat << EOF >> ${INSTALLDIR}/lsan.supp +# MPICH 3.3.2 with GCC 10.3.0 +leak:MPIR_Find_local_and_external +leak:MPIU_Find_local_and_external +EOF + +load "${BUILDDIR}/setup_mpich" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "mpich" diff --git a/toolchain/scripts/stage1/install_openmpi.sh b/toolchain/scripts/stage1/install_openmpi.sh new file mode 100755 index 0000000000..3fc5ab85bb --- /dev/null +++ b/toolchain/scripts/stage1/install_openmpi.sh @@ -0,0 +1,242 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +openmpi_ver="4.1.5" +openmpi_sha256="c018b127619d2a2a30c1931f316fc8a245926d0f5b4ebed4711f9695e7f70925" +openmpi_pkg="openmpi-${openmpi_ver}.tar.gz" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ ${MPI_MODE} != "openmpi" ] && exit 0 +[ -f "${BUILDDIR}/setup_openmpi" ] && rm "${BUILDDIR}/setup_openmpi" + +OPENMPI_CFLAGS="" +OPENMPI_LDFLAGS="" +OPENMPI_LIBS="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_openmpi}" in + __INSTALL__) + echo "==================== Installing OpenMPI ====================" + pkg_install_dir="${INSTALLDIR}/openmpi-${openmpi_ver}" + #pkg_install_dir="${HOME}/apps/openmpi/${openmpi_ver}-gcc8" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "openmpi-${openmpi_ver} is already installed, skipping it." + else + if [ -f ${openmpi_pkg} ]; then + echo "${openmpi_pkg} is found" + else + download_pkg_from_ABACUS_org "${openmpi_sha256}" "${openmpi_pkg}" + fi + echo "Installing from scratch into ${pkg_install_dir}" + [ -d openmpi-${openmpi_ver} ] && rm -rf openmpi-${openmpi_ver} + tar -xzf ${openmpi_pkg} + cd openmpi-${openmpi_ver} + if [ "${OPENBLAS_ARCH}" = "x86_64" ]; then + # can have issue with older glibc libraries, in which case + # we need to add the -fgnu89-inline to CFLAGS. We can check + # the version of glibc using ldd --version, as ldd is part of + # glibc package + glibc_version=$(ldd --version | awk '/ldd/{print $NF}') + glibc_major_ver=${glibc_version%%.*} + glibc_minor_ver=${glibc_version##*.} + if [ $glibc_major_ver -lt 2 ] || + [ $glibc_major_ver -eq 2 -a $glibc_minor_ver -lt 12 ]; then + CFLAGS="${CFLAGS} -fgnu89-inline" + fi + fi + if [ $(command -v srun) ]; then + echo "Slurm installation found. OpenMPI will be configured with --with-pmi." + EXTRA_CONFIGURE_FLAGS="--with-pmi" + else + EXTRA_CONFIGURE_FLAGS="" + fi + # We still require MPI-1.0-compatability for PTSCOTCH + ./configure CFLAGS="${CFLAGS}" \ + --prefix=${pkg_install_dir} \ + --libdir="${pkg_install_dir}/lib" \ + --enable-mpi1-compatibility \ + ${EXTRA_CONFIGURE_FLAGS} \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make -j $(get_nprocs) install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd .. 
+ write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage1/$(basename ${SCRIPT_NAME})" + fi + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + check_install ${pkg_install_dir}/bin/mpiexec "openmpi" && MPIRUN="${pkg_install_dir}/bin/mpiexec" || exit 1 + check_install ${pkg_install_dir}/bin/mpicc "openmpi" && MPICC="${pkg_install_dir}/bin/mpicc" || exit 1 + check_install ${pkg_install_dir}/bin/mpicxx "openmpi" && MPICXX="${pkg_install_dir}/bin/mpicxx" || exit 1 + check_install ${pkg_install_dir}/bin/mpifort "openmpi" && MPIFC="${pkg_install_dir}/bin/mpifort" || exit 1 + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + OPENMPI_CFLAGS="-I'${pkg_install_dir}/include'" + OPENMPI_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; + __SYSTEM__) + echo "==================== Finding OpenMPI from system paths ====================" + check_command mpiexec "openmpi" && MPIRUN="$(command -v mpiexec)" + check_command mpicc "openmpi" && MPICC="$(command -v mpicc)" || exit 1 + check_command mpic++ "openmpi" && MPICXX="$(command -v mpic++)" || exit 1 + check_command mpifort "openmpi" && MPIFC="$(command -v mpifort)" || exit 1 + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + # Fortran code in ABACUS is built via the mpifort wrapper, but we may need additional + # libraries and linker flags for C/C++-based MPI codepaths, pull them in at this point. + OPENMPI_CFLAGS="$(mpicxx --showme:compile)" + OPENMPI_LDFLAGS="$(mpicxx --showme:link)" + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking OpenMPI to user paths ====================" + pkg_install_dir="${with_openmpi}" + check_dir "${pkg_install_dir}/bin" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + check_command ${pkg_install_dir}/bin/mpiexec "openmpi" && MPIRUN="${pkg_install_dir}/bin/mpiexec" || exit 1 + check_command ${pkg_install_dir}/bin/mpicc "openmpi" && MPICC="${pkg_install_dir}/bin/mpicc" || exit 1 + check_command ${pkg_install_dir}/bin/mpic++ "openmpi" && MPICXX="${pkg_install_dir}/bin/mpic++" || exit 1 + check_command ${pkg_install_dir}/bin/mpifort "openmpi" && MPIFC="${pkg_install_dir}/bin/mpifort" || exit 1 + MPIFORT="${MPIFC}" + MPIF77="${MPIFC}" + OPENMPI_CFLAGS="-I'${pkg_install_dir}/include'" + OPENMPI_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "${with_openmpi}" != "__DONTUSE__" ]; then + if [ "${with_openmpi}" != "__SYSTEM__" ]; then + mpi_bin="${pkg_install_dir}/bin/mpiexec" + mpicxx_bin="${pkg_install_dir}/bin/mpicxx" + else + mpi_bin="mpiexec" + mpicxx_bin="mpicxx" + fi + # check openmpi version as reported by mpiexec + raw_version=$(${mpi_bin} --version 2>&1 | + grep "(Open MPI)" | awk '{print $4}') + major_version=$(echo ${raw_version} | cut -d '.' -f 1) + minor_version=$(echo ${raw_version} | cut -d '.' 
-f 2) + OPENMPI_LIBS="" + # grab additional runtime libs (for C/C++) from the mpicxx wrapper, + # and remove them from the LDFLAGS if present + for lib in $("${mpicxx_bin}" --showme:libs); do + OPENMPI_LIBS+=" -l${lib}" + OPENMPI_LDFLAGS="${OPENMPI_LDFLAGS//-l${lib}/}" + done + cat << EOF > "${BUILDDIR}/setup_openmpi" +export MPI_MODE="${MPI_MODE}" +export MPIRUN="${MPIRUN}" +export MPICC="${MPICC}" +export MPICXX="${MPICXX}" +export MPIFC="${MPIFC}" +export MPIFORT="${MPIFORT}" +export MPIF77="${MPIF77}" +export OPENMPI_CFLAGS="${OPENMPI_CFLAGS}" +export OPENMPI_LDFLAGS="${OPENMPI_LDFLAGS}" +export OPENMPI_LIBS="${OPENMPI_LIBS}" +export MPI_CFLAGS="${OPENMPI_CFLAGS}" +export MPI_LDFLAGS="${OPENMPI_LDFLAGS}" +export MPI_LIBS="${OPENMPI_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__parallel|)" +# For proper mpi_f08 support, we need at least GCC version 9 (asynchronous keyword) +# Other compilers should work + if ! [ "$(gfortran -dumpversion | cut -d. -f1)" -lt 9 ]; then + export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__MPI_F08|)" + fi +export CP_CFLAGS="\${CP_CFLAGS} IF_MPI(${OPENMPI_CFLAGS}|)" +export CP_LDFLAGS="\${CP_LDFLAGS} IF_MPI(${OPENMPI_LDFLAGS}|)" +export CP_LIBS="\${CP_LIBS} IF_MPI(${OPENMPI_LIBS}|)" +EOF + if [ "${with_openmpi}" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_openmpi" +export PATH="${pkg_install_dir}/bin":$PATH +export LD_LIBRARY_PATH="${pkg_install_dir}/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib":$LD_RUN_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib":$LIBRARY_PATH +export CPATH="${pkg_install_dir}/include":$CPATH +export MANPATH"${pkg_install_dir}/share/man":$MANPATH +EOF + fi + cat "${BUILDDIR}/setup_openmpi" >> ${SETUPFILE} +fi + +# ---------------------------------------------------------------------- +# Suppress reporting of known leaks +# ---------------------------------------------------------------------- +cat << EOF >> ${INSTALLDIR}/valgrind.supp +{ + + Memcheck:Leak + ... + fun:*alloc + ... + fun:ompi_mpi_init +} +{ + + Memcheck:Leak + ... + fun:*alloc + ... + fun:ompi_mpi_finalize +} +{ + + Memcheck:Leak + ... + fun:malloc + fun:opal_free_list_grow_st + ... + fun:mpi_alloc_mem +} +{ + + Memcheck:Leak + ... + fun:malloc + ... + fun:progress_engine + ... + fun:clone +} +{ + + Memcheck:Leak + ... + fun:malloc + ... + fun:query_2_0_0 + ... + fun:ompi_comm_activate +} +EOF +cat << EOF >> ${INSTALLDIR}/lsan.supp +# leaks related to OpenMPI +leak:query_2_0_0 +leak:ompi_init_f +leak:ompi_finalize_f +leak:ompi_file_open_f +leak:progress_engine +leak:__GI___strdup +EOF + +load "${BUILDDIR}/setup_openmpi" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "openmpi" diff --git a/toolchain/scripts/stage1/install_stage1.sh b/toolchain/scripts/stage1/install_stage1.sh new file mode 100755 index 0000000000..e6d0e9fc21 --- /dev/null +++ b/toolchain/scripts/stage1/install_stage1.sh @@ -0,0 +1,10 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +./scripts/stage1/install_mpich.sh +./scripts/stage1/install_openmpi.sh +./scripts/stage1/install_intelmpi.sh + +#EOF diff --git a/toolchain/scripts/stage2/install_acml.sh b/toolchain/scripts/stage2/install_acml.sh new file mode 100755 index 0000000000..7b0bd8a721 --- /dev/null +++ b/toolchain/scripts/stage2/install_acml.sh @@ -0,0 +1,74 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. 
+# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_acml" ] && rm "${BUILDDIR}/setup_acml" + +ACML_CFLAGS='' +ACML_LDFLAGS='' +ACML_LIBS='' +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "$with_acml" in + __INSTALL__) + echo "==================== Installing ACML ====================" + report_error $LINENO "To install ACML you should either contact your system administrator or go to https://developer.amd.com/tools-and-sdks/archive/amd-core-math-library-acml/acml-downloads-resources/ and download the correct version for your system." + exit 1 + ;; + __SYSTEM__) + echo "==================== Finding ACML from system paths ====================" + check_lib -lacml "ACML" + add_include_from_paths ACML_CFLAGS "acml.h" $INCLUDE_PATHS + add_lib_from_paths ACML_LDFLAGS "libacml.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + + *) + echo "==================== Linking ACML to user paths ====================" + pkg_install_dir="$with_acml" + check_dir "${pkg_install_dir}/lib" + ACML_CFLAGS="-I'${pkg_install_dir}/include'" + ACML_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "$with_acml" != "__DONTUSE__" ]; then + ACML_LIBS="-lacml" + if [ "$with_acml" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_acml" +prepend_path LD_LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path LD_RUN_PATH "$pkg_install_dir/lib" +prepend_path LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path CPATH "$pkg_install_dir/include" +export LD_LIBRARY_PATH="$pkg_install_dir/lib":${LD_LIBRARY_PATH} +export LD_RUN_PATH="$pkg_install_dir/lib":${LD_RUN_PATH} +export LIBRARY_PATH="$pkg_install_dir/lib":${LIBRARY_PATH} +export CPATH="$pkg_install_dir/include":${CPATH} +EOF + cat "${BUILDDIR}/setup_acml" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_acml" +export ACML_CFLAGS="${ACML_CFLAGS}" +export ACML_LDFLAGS="${ACML_LDFLAGS}" +export ACML_LIBS="${ACML_LIBS}" +export MATH_CFLAGS="\${MATH_CFLAGS} ${ACML_CFLAGS}" +export MATH_LDFLAGS="\${MATH_LDFLAGS} ${ACML_LDFLAGS}" +export MATH_LIBS="\${MATH_LIBS} ${ACML_LIBS}" +EOF +fi + +load "${BUILDDIR}/setup_acml" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "acml" diff --git a/toolchain/scripts/stage2/install_mathlibs.sh b/toolchain/scripts/stage2/install_mathlibs.sh new file mode 100755 index 0000000000..b4ed3df22b --- /dev/null +++ b/toolchain/scripts/stage2/install_mathlibs.sh @@ -0,0 +1,48 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +export MATH_CFLAGS='' +export MATH_LDFLAGS='' +export MATH_LIBS='' + +write_toolchain_env "${INSTALLDIR}" + +case "$MATH_MODE" in + mkl) + "${SCRIPTDIR}"/stage2/install_mkl.sh "${with_mkl}" + load "${BUILDDIR}/setup_mkl" + ;; + acml) + "${SCRIPTDIR}"/stage2/install_acml.sh "${with_acml}" + load "${BUILDDIR}/setup_acml" + ;; + openblas) + "${SCRIPTDIR}"/stage2/install_openblas.sh "${with_openblas}" + load "${BUILDDIR}/setup_openblas" + ;; + cray) + # note the space is intentional so that the variable is + # non-empty and can pass require_env checks + export MATH_LDFLAGS="${MATH_LDFLAGS} " + export MATH_LIBS="${MATH_LIBS} ${CRAY_EXTRA_LIBS}" + ;; +esac + +export CP_CFLAGS="${CP_CFLAGS} ${MATH_CFLAGS}" +export CP_LDFLAGS="${CP_LDFLAGS} ${MATH_LDFLAGS}" +export CP_LIBS="${CP_LIBS} ${MATH_LIBS}" + +write_toolchain_env "${INSTALLDIR}" + +#EOF diff --git a/toolchain/scripts/stage2/install_mkl.sh b/toolchain/scripts/stage2/install_mkl.sh new file mode 100755 index 0000000000..d6ad016afa --- /dev/null +++ b/toolchain/scripts/stage2/install_mkl.sh @@ -0,0 +1,134 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_mkl" ] && rm "${BUILDDIR}/setup_mkl" + +MKL_CFLAGS="" +MKL_LDFLAGS="" +MKL_LIBS="" +MKL_FFTW="yes" + +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_mkl}" in + __INSTALL__) + echo "==================== Installing MKL ====================" + report_error ${LINENO} "To install MKL, please contact your system administrator." + exit 1 + ;; + __SYSTEM__) + echo "==================== Finding MKL from system paths ====================" + if ! [ -z "${MKLROOT}" ]; then + echo "MKLROOT is found to be ${MKLROOT}" + else + report_error ${LINENO} "Cannot find env variable MKLROOT, the script relies on it being set. Please check in MKL installation and use --with-mkl= to pass the path to MKL root directory to this script." + exit 1 + fi + check_lib -lm + check_lib -ldl + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking MKL to user paths ====================" + check_dir "${with_mkl}" + MKLROOT="${with_mkl}" + ;; +esac +if [ "${with_mkl}" != "__DONTUSE__" ]; then + case ${OPENBLAS_ARCH} in + x86_64) + mkl_arch_dir="intel64" + MKL_CFLAGS="-m64" + ;; + i386) + mkl_arch_dir="ia32" + MKL_CFLAGS="-m32" + ;; + *) + report_error $LINENO "MKL only supports intel64 (x86_64) and ia32 (i386) at the moment, and your arch obtained from OpenBLAS prebuild is $OPENBLAS_ARCH" + exit 1 + ;; + esac + mkl_lib_dir="${MKLROOT}/lib/${mkl_arch_dir}" + # check we have required libraries + mkl_required_libs="libmkl_gf_lp64.so libmkl_sequential.so libmkl_core.so" + for ii in $mkl_required_libs; do + if [ ! 
-f "$mkl_lib_dir/${ii}" ]; then + report_error $LINENO "missing MKL library ${ii}" + exit 1 + fi + done + + case ${MPI_MODE} in + intelmpi | mpich) + mkl_scalapack_lib="IF_MPI(-lmkl_scalapack_lp64|)" + mkl_blacs_lib="IF_MPI(-lmkl_blacs_intelmpi_lp64|)" + ;; + openmpi) + mkl_scalapack_lib="IF_MPI(-lmkl_scalapack_lp64|)" + mkl_blacs_lib="IF_MPI(-lmkl_blacs_openmpi_lp64|)" + ;; + *) + echo "Not using MKL provided ScaLAPACK and BLACS" + mkl_scalapack_lib="" + mkl_blacs_lib="" + ;; + esac + + # set the correct lib flags from MLK link adviser + MKL_LIBS="-L${mkl_lib_dir} -Wl,-rpath,${mkl_lib_dir} ${mkl_scalapack_lib}" + MKL_LIBS+=" -Wl,--start-group -lmkl_gf_lp64 -lmkl_sequential -lmkl_core" + MKL_LIBS+=" ${mkl_blacs_lib} -Wl,--end-group -lpthread -lm -ldl" + # setup_mkl disables using separate FFTW library (see below) + MKL_CFLAGS="${MKL_CFLAGS} -I${MKLROOT}/include" + if [ "${MKL_FFTW}" != "no" ]; then + MKL_CFLAGS+=" -I${MKLROOT}/include/fftw" + fi + + # write setup files + cat << EOF > "${BUILDDIR}/setup_mkl" +export MKLROOT="${MKLROOT}" +export MKL_CFLAGS="${MKL_CFLAGS}" +export MKL_LIBS="${MKL_LIBS}" +export MATH_CFLAGS="\${MATH_CFLAGS} ${MKL_CFLAGS}" +export MATH_LIBS="\${MATH_LIBS} ${MKL_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} -D__MKL -D__FFTW3 IF_COVERAGE(IF_MPI(|-U__FFTW3)|)" +EOF + if [ -n "${mkl_scalapack_lib}" ]; then + cat << EOF >> "${BUILDDIR}/setup_mkl" +export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__SCALAPACK|)" +export with_scalapack="__DONTUSE__" +EOF + fi + if [ "${MKL_FFTW}" != "no" ]; then + cat << EOF >> "${BUILDDIR}/setup_mkl" +export with_fftw="__DONTUSE__" +export FFTW3_INCLUDES="${MKL_CFLAGS}" +export FFTW3_LIBS="${MKL_LIBS}" +export FFTW_CFLAGS="${MKL_CFLAGS}" +export FFTW_LDFLAGS="${MKL_LDFLAGS}" +export FFTW_LIBS="${MKL_LIBS}" +EOF + fi + cat "${BUILDDIR}/setup_mkl" >> ${SETUPFILE} +fi + +load "${BUILDDIR}/setup_mkl" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "mkl" diff --git a/toolchain/scripts/stage2/install_openblas.sh b/toolchain/scripts/stage2/install_openblas.sh new file mode 100755 index 0000000000..fd11d4dc2a --- /dev/null +++ b/toolchain/scripts/stage2/install_openblas.sh @@ -0,0 +1,185 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +openblas_ver="0.3.23" # Keep in sync with get_openblas_arch.sh +openblas_sha256="5d9491d07168a5d00116cdc068a40022c3455bf9293c7cb86a65b1054d7e5114" +openblas_pkg="OpenBLAS-${openblas_ver}.tar.gz" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_openblas" ] && rm "${BUILDDIR}/setup_openblas" + +OPENBLAS_CFLAGS="" +OPENBLAS_LDFLAGS="" +OPENBLAS_LIBS="" +OPENBLAS_ROOT="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_openblas}" in + __INSTALL__) + echo "==================== Installing OpenBLAS ====================" + pkg_install_dir="${INSTALLDIR}/openblas-${openblas_ver}" + #pkg_install_dir="${HOME}/lib/openblas/${openblas_ver}-gcc8" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "openblas-${openblas_ver} is already installed, skipping it." 
+ else + if [ -f ${openblas_pkg} ]; then + echo "${openblas_pkg} is found" + else + download_pkg_from_ABACUS_org "${openblas_sha256}" "${openblas_pkg}" + fi + + echo "Installing from scratch into ${pkg_install_dir}" + [ -d OpenBLAS-${openblas_ver} ] && rm -rf OpenBLAS-${openblas_ver} + tar -zxf ${openblas_pkg} + cd OpenBLAS-${openblas_ver} + + # First attempt to make openblas using auto detected + # TARGET, if this fails, then make with forced + # TARGET=NEHALEM + # + # wrt NUM_THREADS=64: this is what the most common Linux distros seem to choose atm + # for a good compromise between memory usage and scalability + # + # Unfortunately, NO_SHARED=1 breaks ScaLAPACK build. + case "${TARGET_CPU}" in + "generic") + TARGET="NEHALEM" + ;; + "native") + TARGET=${OPENBLAS_LIBCORE} + ;; + "broadwell" | "skylake") + TARGET="HASWELL" + ;; + "skylake-avx512") + TARGET="SKYLAKEX" + ;; + *) + TARGET=${TARGET_CPU} + ;; + esac + TARGET=$(echo ${TARGET} | tr '[:lower:]' '[:upper:]') + echo "Installing OpenBLAS library for target ${TARGET}" + ( + make -j $(get_nprocs) \ + MAKE_NB_JOBS=0 \ + TARGET=${TARGET} \ + NUM_THREADS=64 \ + USE_THREAD=1 \ + USE_OPENMP=1 \ + NO_AFFINITY=1 \ + CC="${CC}" \ + FC="${FC}" \ + PREFIX="${pkg_install_dir}" \ + > make.log 2>&1 || tail -n ${LOG_LINES} make.log + ) || ( + make -j $(get_nprocs) \ + MAKE_NB_JOBS=0 \ + TARGET=NEHALEM \ + NUM_THREADS=64 \ + USE_THREAD=1 \ + USE_OPENMP=1 \ + NO_AFFINITY=1 \ + CC="${CC}" \ + FC="${FC}" \ + PREFIX="${pkg_install_dir}" \ + > make.nehalem.log 2>&1 || tail -n ${LOG_LINES} make.nehalem.log + ) + make -j $(get_nprocs) \ + MAKE_NB_JOBS=0 \ + TARGET=${TARGET} \ + NUM_THREADS=64 \ + USE_THREAD=1 \ + USE_OPENMP=1 \ + NO_AFFINITY=1 \ + CC="${CC}" \ + FC="${FC}" \ + PREFIX="${pkg_install_dir}" \ + install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd .. 
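+      # note: USE_OPENMP=1 above produces the OpenMP-threaded libopenblas, so the
+      # installed library honours OMP_NUM_THREADS at run time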
+ write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage2/$(basename ${SCRIPT_NAME})" + fi + OPENBLAS_CFLAGS="-I'${pkg_install_dir}/include'" + OPENBLAS_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + OPENBLAS_ROOT="${pkg_install_dir}" + OPENBLAS_LIBS="-lopenblas" + ;; + __SYSTEM__) + echo "==================== Finding LAPACK from system paths ====================" + # assume that system openblas is threaded + check_lib -lopenblas "OpenBLAS" + OPENBLAS_LIBS="-lopenblas" + # detect separate omp builds + check_lib -lopenblas_openmp 2> /dev/null && OPENBLAS_LIBS="-lopenblas_openmp" + check_lib -lopenblas_omp 2> /dev/null && OPENBLAS_LIBS="-lopenblas_omp" + add_include_from_paths OPENBLAS_CFLAGS "openblas_config.h" $INCLUDE_PATHS + add_lib_from_paths OPENBLAS_LDFLAGS "libopenblas.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + + *) + echo "==================== Linking LAPACK to user paths ====================" + pkg_install_dir="$with_openblas" + check_dir "${pkg_install_dir}/include" + check_dir "${pkg_install_dir}/lib" + OPENBLAS_CFLAGS="-I'${pkg_install_dir}/include'" + OPENBLAS_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + OPENBLAS_LIBS="-lopenblas" + # detect separate omp builds + (__libdir="${pkg_install_dir}/lib" LIB_PATHS="__libdir" check_lib -lopenblas_openmp 2> /dev/null) && + OPENBLAS_LIBS="-lopenblas_openmp" + (__libdir="${pkg_install_dir}/lib" LIB_PATHS="__libdir" check_lib -lopenblas_omp 2> /dev/null) && + OPENBLAS_LIBS="-lopenblas_omp" + ;; +esac +if [ "$with_openblas" != "__DONTUSE__" ]; then + if [ "$with_openblas" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_openblas" +prepend_path LD_LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path LD_RUN_PATH "$pkg_install_dir/lib" +prepend_path LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +prepend_path CPATH "$pkg_install_dir/include" +export LD_LIBRARY_PATH="$pkg_install_dir/lib:"${LD_LIBRARY_PATH} +export LD_RUN_PATH="$pkg_install_dir/lib:"${LD_RUN_PATH} +export LIBRARY_PATH="$pkg_install_dir/lib:"${LIBRARY_PATH} +export CPATH="$pkg_install_dir/include:"${CPATH} +export PKG_CONFIG_PATH="$pkg_install_dir/lib/pkgconfig:"${PKG_CONFIG_PATH} +export CMAKE_PREFIX_PATH="$pkg_install_dir:"${CMAKE_PREFIX_PATH} +export OPENBLAS_ROOT=${pkg_install_dir} +EOF + cat "${BUILDDIR}/setup_openblas" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_openblas" +export OPENBLAS_ROOT="${pkg_install_dir}" +export OPENBLAS_CFLAGS="${OPENBLAS_CFLAGS}" +export OPENBLAS_LDFLAGS="${OPENBLAS_LDFLAGS}" +export OPENBLAS_LIBS="${OPENBLAS_LIBS}" +export MATH_CFLAGS="\${MATH_CFLAGS} ${OPENBLAS_CFLAGS}" +export MATH_LDFLAGS="\${MATH_LDFLAGS} ${OPENBLAS_LDFLAGS}" +export MATH_LIBS="\${MATH_LIBS} ${OPENBLAS_LIBS}" +export PKG_CONFIG_PATH="${pkg_install_dir}/lib/pkgconfig" +export CMAKE_PREFIX_PATH="${pkg_install_dir}" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +EOF +fi + +load "${BUILDDIR}/setup_openblas" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "openblas" diff --git a/toolchain/scripts/stage2/install_stage2.sh b/toolchain/scripts/stage2/install_stage2.sh new file mode 100755 index 0000000000..650c41f0d4 --- /dev/null +++ b/toolchain/scripts/stage2/install_stage2.sh @@ -0,0 +1,8 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. 
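+# Stage 2 covers the math libraries: install_mathlibs.sh dispatches on MATH_MODE
+# (mkl, acml, openblas or cray) and loads the corresponding setup file.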
+# shellcheck disable=all + +./scripts/stage2/install_mathlibs.sh + +#EOF diff --git a/toolchain/scripts/stage3/install_cereal.sh b/toolchain/scripts/stage3/install_cereal.sh new file mode 100755 index 0000000000..ad1bb2ac8d --- /dev/null +++ b/toolchain/scripts/stage3/install_cereal.sh @@ -0,0 +1,87 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all +# CEREAL is not need any complex setting +# Only problem is the installation from github.com + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +cereal_ver="1.3.2" +cereal_sha256="16a7ad9b31ba5880dac55d62b5d6f243c3ebc8d46a3514149e56b5e7ea81f85f" +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_cereal" ] && rm "${BUILDDIR}/setup_cereal" + +CEREAL_CFLAGS="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "$with_cereal" in + __INSTALL__) + echo "==================== Installing CEREAL ====================" + pkg_install_dir="${INSTALLDIR}/cereal-${cereal_ver}" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "cereal-${cereal_ver} is already installed, skipping it." + else + url="https://github.com/USCiLab/cereal/archive/refs/tags/${cereal_ver}.tar.gz" + filename="cereal-v${cereal_ver}.tar.gz" + if [ -f $filename ]; then + echo "$filename is found" + else + # download from github.com and checksum + echo "wget ${DOWNLOADER_FLAGS} --quiet $url -O $filename" + if ! wget ${DOWNLOADER_FLAGS} --quiet $url -O $filename; then + report_error "failed to download $url" + fi + # checksum + checksum "$filename" "$cereal_sha256" + fi + echo "Installing from scratch into ${pkg_install_dir}" + [ -d cereal-${cereal_ver} ] && rm -rf cereal-${cereal_ver} + tar -xzf $filename + cp -r cereal-${cereal_ver}/ "${pkg_install_dir}/" + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage3/$(basename ${SCRIPT_NAME})" + fi + ;; + __SYSTEM__) + echo "==================== CANNOT Finding CEREAL from system paths NOW ====================" + # How to do it in cereal? 
-- Zhaoqing in 2023/08/23 + # check_lib -lxcf03 "libxc" + # check_lib -lxc "libxc" + # add_include_from_paths LIBXC_CFLAGS "xc.h" $INCLUDE_PATHS + # add_lib_from_paths LIBXC_LDFLAGS "libxc.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + *) + echo "==================== Linking CEREAL to user paths ====================" + check_dir "${pkg_install_dir}" + CEREAL_CFLAGS="-I'${pkg_install_dir}'" + ;; +esac +if [ "$with_cereal" != "__DONTUSE__" ]; then + if [ "$with_cereal" != "__SYSTEM__" ]; then + # LibRI deps should find cereal include in CPATH + cat << EOF > "${BUILDDIR}/setup_cereal" +prepend_path CPATH "$pkg_install_dir/include" +export CPATH="${pkg_install_dir}/include:"${CPATH} +EOF + cat "${BUILDDIR}/setup_cereal" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_cereal" +export CEREAL_CFLAGS="${CEREAL_CFLAGS}" +export CEREAL_ROOT="$pkg_install_dir" +EOF +fi + +load "${BUILDDIR}/setup_cereal" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "cereal" diff --git a/toolchain/scripts/stage3/install_elpa.sh b/toolchain/scripts/stage3/install_elpa.sh new file mode 100755 index 0000000000..50e36728b5 --- /dev/null +++ b/toolchain/scripts/stage3/install_elpa.sh @@ -0,0 +1,221 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +# From https://elpa.mpcdf.mpg.de/software/tarball-archive/ELPA_TARBALL_ARCHIVE.html +elpa_ver="2021.11.002" +elpa_sha256="576f1caeed7883b81396640fda0f504183866cf6cbd4bc71d1383ba2208f1f97" # 2021.11.002, by sha256sum +#elpa_ver="2022.11.001" +# elpa_sha256="35e397d7c0af95bb43bc7bef7fff29425c1da400fa0cd86ae8d3bd2ff2f9d999" # 2022.11.001 + + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_elpa" ] && rm "${BUILDDIR}/setup_elpa" + +ELPA_CFLAGS='' +ELPA_LDFLAGS='' +ELPA_LIBS='' +elpa_dir_openmp="_openmp" + +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +# elpa only works with MPI switched on +if [ $MPI_MODE = no ]; then + report_warning $LINENO "MPI is disabled, skipping elpa installation" + cat << EOF > "${BUILDDIR}/setup_elpa" +with_elpa="__FALSE__" +EOF + exit 0 +fi + +case "$with_elpa" in + __INSTALL__) + echo "==================== Installing ELPA ====================" + pkg_install_dir="${INSTALLDIR}/elpa-${elpa_ver}" + #pkg_install_dir="${HOME}/lib/elpa/${elpa_ver}-gcc8" + install_lock_file="$pkg_install_dir/install_successful" + enable_openmp="yes" + + # specific settings needed on CRAY Linux Environment + if [ "$ENABLE_CRAY" = "__TRUE__" ]; then + if [ ${CRAY_PRGENVCRAY} ]; then + # extra LDFLAGS needed + cray_ldflags="-dynamic" + fi + enable_openmp="no" + fi + + if verify_checksums "${install_lock_file}"; then + echo "elpa-${elpa_ver} is already installed, skipping it." 
+ else + require_env MATH_LIBS + if [ -f elpa-${elpa_ver}.tar.gz ]; then + echo "elpa-${elpa_ver}.tar.gz is found" + else + download_pkg_from_ABACUS_org "${elpa_sha256}" "elpa-${elpa_ver}.tar.gz" + fi + [ -d elpa-${elpa_ver} ] && rm -rf elpa-${elpa_ver} + tar -xzf elpa-${elpa_ver}.tar.gz + + # elpa expect FC to be an mpi fortran compiler that is happy + # with long lines, and that a bunch of libs can be found + cd elpa-${elpa_ver} + + # ELPA-2017xxxx enables AVX2 by default, switch off if machine doesn't support it. + AVX_flag="" + AVX512_flags="" + FMA_flag="" + SSE4_flag="" + config_flags="--disable-avx --disable-avx2 --disable-avx512 --disable-sse --disable-sse-assembly" + if [ "${TARGET_CPU}" = "native" ]; then + if [ -f /proc/cpuinfo ] && [ "${OPENBLAS_ARCH}" = "x86_64" ]; then + has_AVX=$(grep '\bavx\b' /proc/cpuinfo 1> /dev/null && echo 'yes' || echo 'no') + [ "${has_AVX}" = "yes" ] && AVX_flag="-mavx" || AVX_flag="" + has_AVX2=$(grep '\bavx2\b' /proc/cpuinfo 1> /dev/null && echo 'yes' || echo 'no') + [ "${has_AVX2}" = "yes" ] && AVX_flag="-mavx2" + has_AVX512=$(grep '\bavx512f\b' /proc/cpuinfo 1> /dev/null && echo 'yes' || echo 'no') + [ "${has_AVX512}" = "yes" ] && AVX512_flags="-mavx512f" + FMA_flag=$(grep '\bfma\b' /proc/cpuinfo 1> /dev/null && echo '-mfma' || echo '-mno-fma') + SSE4_flag=$(grep '\bsse4_1\b' /proc/cpuinfo 1> /dev/null && echo '-msse4' || echo '-mno-sse4') + grep '\bavx512dq\b' /proc/cpuinfo 1> /dev/null && AVX512_flags+=" -mavx512dq" + grep '\bavx512cd\b' /proc/cpuinfo 1> /dev/null && AVX512_flags+=" -mavx512cd" + grep '\bavx512bw\b' /proc/cpuinfo 1> /dev/null && AVX512_flags+=" -mavx512bw" + grep '\bavx512vl\b' /proc/cpuinfo 1> /dev/null && AVX512_flags+=" -mavx512vl" + config_flags="--enable-avx=${has_AVX} --enable-avx2=${has_AVX2} --enable-avx512=${has_AVX512}" + fi + fi + for TARGET in "cpu" "nvidia"; do + [ "$TARGET" = "nvidia" ] && [ "$ENABLE_CUDA" != "__TRUE__" ] && continue + echo "Installing from scratch into ${pkg_install_dir}/${TARGET}" + + mkdir -p "build_${TARGET}" + cd "build_${TARGET}" + ../configure --prefix="${pkg_install_dir}/${TARGET}/" \ + --libdir="${pkg_install_dir}/${TARGET}/lib" \ + --enable-openmp=${enable_openmp} \ + --enable-shared=yes \ + --enable-static=yes \ + --disable-c-tests \ + --disable-cpp-tests \ + ${config_flags} \ + --enable-nvidia-gpu=$([ "$TARGET" = "nvidia" ] && echo "yes" || echo "no") \ + --with-cuda-path=${CUDA_PATH:-${CUDA_HOME:-/CUDA_HOME-notset}} \ + --with-NVIDIA-GPU-compute-capability=$([ "$TARGET" = "nvidia" ] && echo "sm_$ARCH_NUM" || echo "sm_35") \ + CUDA_CFLAGS="-std=c++14 -allow-unsupported-compiler" \ + OMPI_MCA_plm_rsh_agent=/bin/false \ + FC=${MPIFC} \ + CC=${MPICC} \ + CXX=${MPICXX} \ + CPP="cpp -E" \ + FCFLAGS="${FCFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} -ffree-line-length-none ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + CFLAGS="${CFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + CXXFLAGS="${CXXFLAGS} ${MATH_CFLAGS} ${SCALAPACK_CFLAGS} ${AVX_flag} ${FMA_flag} ${SSE4_flag} ${AVX512_flags} -fno-lto" \ + LDFLAGS="-Wl,--allow-multiple-definition -Wl,--enable-new-dtags ${MATH_LDFLAGS} ${SCALAPACK_LDFLAGS} ${cray_ldflags}" \ + LIBS="${SCALAPACK_LIBS} $(resolve_string "${MATH_LIBS}" "MPI")" \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd .. 
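+      # configure.log and make.log for this TARGET remain under build_${TARGET} for inspection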
+ # link elpa + link=${pkg_install_dir}/${TARGET}/include/elpa + if [ ! -f $link ]; then + ln -s ${pkg_install_dir}/${TARGET}/include/elpa_openmp-${elpa_ver}/elpa $link + fi + done + cd .. + + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage3/$(basename ${SCRIPT_NAME})" + fi + [ "$enable_openmp" != "yes" ] && elpa_dir_openmp="" + ELPA_CFLAGS="-I'${pkg_install_dir}/IF_CUDA(nvidia|cpu)/include/elpa${elpa_dir_openmp}-${elpa_ver}/modules' -I'${pkg_install_dir}/IF_CUDA(nvidia|cpu)/include/elpa${elpa_dir_openmp}-${elpa_ver}/elpa'" + ELPA_LDFLAGS="-L'${pkg_install_dir}/IF_CUDA(nvidia|cpu)/lib' -Wl,-rpath,'${pkg_install_dir}/IF_CUDA(nvidia|cpu)/lib'" + ;; + __SYSTEM__) + echo "==================== Finding ELPA from system paths ====================" + check_lib -lelpa_openmp "ELPA" + # get the include paths + elpa_include="$(find_in_paths "elpa_openmp-*" $INCLUDE_PATHS)" + if [ "$elpa_include" != "__FALSE__" ]; then + echo "ELPA include directory threaded version is found to be $elpa_include" + ELPA_CFLAGS="-I'$elpa_include/modules' -I'$elpa_include/elpa'" + else + echo "Cannot find elpa_openmp-${elpa_ver} from paths $INCLUDE_PATHS" + exit 1 + fi + # get the lib paths + add_lib_from_paths ELPA_LDFLAGS "libelpa.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + + *) + echo "==================== Linking ELPA to user paths ====================" + pkg_install_dir="$with_elpa" + check_dir "${pkg_install_dir}/include" + check_dir "${pkg_install_dir}/lib" + user_include_path="$pkg_install_dir/include" + elpa_include="$(find_in_paths "elpa_openmp-*" user_include_path)" + if [ "$elpa_include" != "__FALSE__" ]; then + echo "ELPA include directory threaded version is found to be $elpa_include/modules" + check_dir "$elpa_include/modules" + ELPA_CFLAGS="-I'$elpa_include/modules' -I'$elpa_include/elpa'" + else + echo "Cannot find elpa_openmp-* from path $user_include_path" + exit 1 + fi + ELPA_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "$with_elpa" != "__DONTUSE__" ]; then + ELPA_LIBS="-lelpa${elpa_dir_openmp}" + cat << EOF > "${BUILDDIR}/setup_elpa" +prepend_path CPATH "$elpa_include" +EOF + if [ "$with_elpa" != "__SYSTEM__" ]; then + cat << EOF >> "${BUILDDIR}/setup_elpa" +prepend_path PATH "$pkg_install_dir/bin" +prepend_path LD_LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path CPATH "$pkg_install_dir/include" +prepend_path LD_RUN_PATH "$pkg_install_dir/lib" +prepend_path LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +export PATH="$pkg_install_dir/bin":$PATH +export LD_LIBRARY_PATH="$pkg_install_dir/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="$pkg_install_dir/lib":$LD_RUN_PATH +export LIBRARY_PATH="$pkg_install_dir/lib":$LIBRARY_PATH +export CPATH="$pkg_install_dir/include":$CPATH +export PKG_CONFIG_PATH="$pkg_install_dir/lib/pkgconfig":$PKG_CONFIG_PATH +export CMAKE_PREFIX_PATH="$pkg_install_dir":$CMAKE_PREFIX_PATH +export ELPA_ROOT="$pkg_install_dir" +EOF + fi + cat "${BUILDDIR}/setup_elpa" >> $SETUPFILE + cat << EOF >> "${BUILDDIR}/setup_elpa" +export ELPA_CFLAGS="${ELPA_CFLAGS}" +export ELPA_LDFLAGS="${ELPA_LDFLAGS}" +export ELPA_LIBS="${ELPA_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__ELPA IF_CUDA(-D__ELPA_NVIDIA_GPU|)|)" +export CP_CFLAGS="\${CP_CFLAGS} IF_MPI(${ELPA_CFLAGS}|)" +export CP_LDFLAGS="\${CP_LDFLAGS} IF_MPI(${ELPA_LDFLAGS}|)" +export CP_LIBS="IF_MPI(${ELPA_LIBS}|) \${CP_LIBS}" +export ELPA_ROOT="$pkg_install_dir" +export 
ELPA_VERSION="${elpa_ver}" +EOF + +fi + +load "${BUILDDIR}/setup_elpa" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "elpa" diff --git a/toolchain/scripts/stage3/install_fftw.sh b/toolchain/scripts/stage3/install_fftw.sh new file mode 100755 index 0000000000..2451b67873 --- /dev/null +++ b/toolchain/scripts/stage3/install_fftw.sh @@ -0,0 +1,153 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +fftw_ver="3.3.10" +fftw_sha256="56c932549852cddcfafdab3820b0200c7742675be92179e59e6215b340e26467" +fftw_pkg="fftw-${fftw_ver}.tar.gz" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_fftw" ] && rm "${BUILDDIR}/setup_fftw" + +FFTW_CFLAGS='' +FFTW_LDFLAGS='' +FFTW_LIBS='' +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "$with_fftw" in + __INSTALL__) + require_env MPI_LIBS + echo "==================== Installing FFTW ====================" + pkg_install_dir="${INSTALLDIR}/fftw-${fftw_ver}" + #pkg_install_dir="${HOME}/lib/fftw/${fftw_ver}-gcc8" + install_lock_file="$pkg_install_dir/install_successful" + + if verify_checksums "${install_lock_file}"; then + echo "fftw-${fftw_ver} is already installed, skipping it." + else + if [ -f ${fftw_pkg} ]; then + echo "${fftw_pkg} is found" + else + download_pkg_from_ABACUS_org "${fftw_sha256}" "${fftw_pkg}" + fi + echo "Installing from scratch into ${pkg_install_dir}" + [ -d fftw-${fftw_ver} ] && rm -rf fftw-${fftw_ver} + tar -xzf ${fftw_pkg} + cd fftw-${fftw_ver} + FFTW_FLAGS="--enable-openmp --enable-shared --enable-static" + # fftw has mpi support but not compiled by default. so compile it if we build with mpi. + # it will create a second library to link with if needed + [ "${MPI_MODE}" != "no" ] && FFTW_FLAGS="--enable-mpi ${FFTW_FLAGS}" + if [ "${TARGET_CPU}" = "native" ]; then + if [ -f /proc/cpuinfo ]; then + grep '\bavx\b' /proc/cpuinfo 1> /dev/null && FFTW_FLAGS="${FFTW_FLAGS} --enable-avx" + grep '\bavx2\b' /proc/cpuinfo 1> /dev/null && FFTW_FLAGS="${FFTW_FLAGS} --enable-avx2" + grep '\bavx512f\b' /proc/cpuinfo 1> /dev/null && FFTW_FLAGS="${FFTW_FLAGS} --enable-avx512" + fi + fi + # ABACUS need float version and double version fftw at the same time + # install float version fftw + echo "install float version fftw" + ./configure --prefix=${pkg_install_dir} --libdir="${pkg_install_dir}/lib" ${FFTW_FLAGS} --enable-float \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + # install double version fftw + echo "clean" + make distclean > /dev/null 2>&1 || true + echo "install double version fftw" + ./configure --prefix=${pkg_install_dir} --libdir="${pkg_install_dir}/lib" ${FFTW_FLAGS} \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd .. 
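+      # both precisions now share ${pkg_install_dir}/lib: the float pass installs
+      # libfftw3f* and the double pass installs libfftw3*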
+ write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage3/$(basename ${SCRIPT_NAME})" + fi + FFTW_CFLAGS="-I'${pkg_install_dir}/include'" + FFTW_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; + __SYSTEM__) + echo "==================== Finding FFTW from system paths ====================" + check_lib -lfftw3 "FFTW" + check_lib -lfftw3_omp "FFTW" + [ "${MPI_MODE}" != "no" ] && check_lib -lfftw3_mpi "FFTW" + add_include_from_paths FFTW_CFLAGS "fftw3.h" FFTW_INC ${INCLUDE_PATHS} + add_lib_from_paths FFTW_LDFLAGS "libfftw3.*" ${LIB_PATHS} + ;; + __DONTUSE__) + # Nothing to do + ;; + *) + echo "==================== Linking FFTW to user paths ====================" + pkg_install_dir="$with_fftw" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + FFTW_CFLAGS="-I'${pkg_install_dir}/include'" + FFTW_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "$with_fftw" != "__DONTUSE__" ]; then + [ "$MPI_MODE" != "no" ] && FFTW_LIBS="IF_MPI(-lfftw3_mpi|)" + FFTW_LIBS+="-lfftw3 -lfftw3_omp" + if [ "$with_fftw" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_fftw" +prepend_path LD_LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path LD_RUN_PATH "$pkg_install_dir/lib" +prepend_path LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path CPATH "$pkg_install_dir/include" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +export LD_LIBRARY_PATH="$pkg_install_dir/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="$pkg_install_dir/lib":$LD_RUN_PATH +export LIBRARY_PATH="$pkg_install_dir/lib":$LIBRARY_PATH +export CPATH="$pkg_install_dir/include":$CPATH +export PKG_CONFIG_PATH="$pkg_install_dir/lib/pkgconfig":$PKG_CONFIG_PATH +export CMAKE_PREFIX_PATH="$pkg_install_dir":$CMAKE_PREFIX_PATH +EOF + fi + # we may also want to cover FFT_SG + cat << EOF >> "${BUILDDIR}/setup_fftw" +export FFTW3_INCLUDES="${FFTW_CFLAGS}" +export FFTW3_LIBS="${FFTW_LIBS}" +export FFTW_CFLAGS="${FFTW_CFLAGS}" +export FFTW_LDFLAGS="${FFTW_LDFLAGS}" +export FFTW_LIBS="${FFTW_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} -D__FFTW3 IF_COVERAGE(IF_MPI(|-U__FFTW3)|)" +export CP_CFLAGS="\${CP_CFLAGS} ${FFTW_CFLAGS}" +export CP_LDFLAGS="\${CP_LDFLAGS} ${FFTW_LDFLAGS}" +export CP_LIBS="${FFTW_LIBS} \${CP_LIBS}" +export FFTW_ROOT=${FFTW_ROOT:-${pkg_install_dir}} +export FFTW3_ROOT=${pkg_install_dir} +EOF + cat "${BUILDDIR}/setup_fftw" >> $SETUPFILE +fi +cd "${ROOTDIR}" + +# ---------------------------------------------------------------------- +# Suppress reporting of known leaks +# ---------------------------------------------------------------------- +cat << EOF >> ${INSTALLDIR}/valgrind.supp +{ + + Memcheck:Addr32 + fun:cdot + ... + fun:invoke_solver + fun:search0 +} +EOF + +load "${BUILDDIR}/setup_fftw" +write_toolchain_env "${INSTALLDIR}" + +report_timing "fftw" diff --git a/toolchain/scripts/stage3/install_libxc.sh b/toolchain/scripts/stage3/install_libxc.sh new file mode 100755 index 0000000000..10e4e299de --- /dev/null +++ b/toolchain/scripts/stage3/install_libxc.sh @@ -0,0 +1,108 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)" + +libxc_ver="6.2.2" +libxc_sha256="a0f6f1bba7ba5c0c85b2bfe65aca1591025f509a7f11471b4cd651a79491b045" +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_libxc" ] && rm "${BUILDDIR}/setup_libxc" + +LIBXC_CFLAGS="" +LIBXC_LDFLAGS="" +LIBXC_LIBS="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "$with_libxc" in + __INSTALL__) + echo "==================== Installing LIBXC ====================" + pkg_install_dir="${INSTALLDIR}/libxc-${libxc_ver}" + #pkg_install_dir="${HOME}/lib/libxc/${libxc_ver}-gcc8" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "libxc-${libxc_ver} is already installed, skipping it." + else + if [ -f libxc-${libxc_ver}.tar.gz ]; then + echo "libxc-${libxc_ver}.tar.gz is found" + else + download_pkg_from_ABACUS_org "${libxc_sha256}" "libxc-${libxc_ver}.tar.gz" + fi + echo "Installing from scratch into ${pkg_install_dir}" + [ -d libxc-${libxc_ver} ] && rm -rf libxc-${libxc_ver} + tar -xzf libxc-${libxc_ver}.tar.gz + cd libxc-${libxc_ver} + # using cmake method to install libxc is neccessary for abacus + mkdir build && cd build + cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=${pkg_install_dir} \ + -DBUILD_SHARED_LIBS=YES -DCMAKE_INSTALL_LIBDIR=lib -DENABLE_FORTRAN=ON -DENABLE_PYTHON=OFF -DBUILD_TESTING=NO .. \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install > install.log 2>&1 || tail -n ${LOG_LINES} install.log + cd ../.. + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage3/$(basename ${SCRIPT_NAME})" + fi + LIBXC_CFLAGS="-I'${pkg_install_dir}/include'" + LIBXC_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; + __SYSTEM__) + echo "==================== Finding LIBXC from system paths ====================" + check_lib -lxcf03 "libxc" + check_lib -lxc "libxc" + add_include_from_paths LIBXC_CFLAGS "xc.h" $INCLUDE_PATHS + add_lib_from_paths LIBXC_LDFLAGS "libxc.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + *) + echo "==================== Linking LIBXC to user paths ====================" + pkg_install_dir="$with_libxc" + check_dir "${pkg_install_dir}/lib" + check_dir "${pkg_install_dir}/include" + LIBXC_CFLAGS="-I'${pkg_install_dir}/include'" + LIBXC_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "$with_libxc" != "__DONTUSE__" ]; then + LIBXC_LIBS="-lxcf03 -lxc" + if [ "$with_libxc" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_libxc" +prepend_path LD_LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path LD_RUN_PATH "$pkg_install_dir/lib" +prepend_path LIBRARY_PATH "$pkg_install_dir/lib" +prepend_path CPATH "$pkg_install_dir/include" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +export LD_LIBRARY_PATH="$pkg_install_dir/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="$pkg_install_dir/lib":$LD_RUN_PATH +export LIBRARY_PATH="$pkg_install_dir/lib":$LIBRARY_PATH +export CPATH="$pkg_install_dir/include":$CPATH +export PKG_CONFIG_PATH="$pkg_install_dir/lib/pkgconfig":$PKG_CONFIG_PATH +export CMAKE_PREFIX_PATH="$pkg_install_dir":$CMAKE_PREFIX_PATH +EOF + cat "${BUILDDIR}/setup_libxc" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_libxc" +export 
LIBXC_CFLAGS="${LIBXC_CFLAGS}" +export LIBXC_LDFLAGS="${LIBXC_LDFLAGS}" +export LIBXC_LIBS="${LIBXC_LIBS}" +export CP_DFLAGS="\${CP_DFLAGS} -D__LIBXC" +export CP_CFLAGS="\${CP_CFLAGS} ${LIBXC_CFLAGS}" +export CP_LDFLAGS="\${CP_LDFLAGS} ${LIBXC_LDFLAGS}" +export CP_LIBS="${LIBXC_LIBS} \${CP_LIBS}" +export LIBXC_ROOT="$pkg_install_dir" +EOF +fi + +load "${BUILDDIR}/setup_libxc" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "libxc" diff --git a/toolchain/scripts/stage3/install_scalapack.sh b/toolchain/scripts/stage3/install_scalapack.sh new file mode 100755 index 0000000000..b920cafa34 --- /dev/null +++ b/toolchain/scripts/stage3/install_scalapack.sh @@ -0,0 +1,123 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +scalapack_ver="2.2.1" +scalapack_sha256="4aede775fdb28fa44b331875730bcd5bab130caaec225fadeccf424c8fcb55aa" +scalapack_pkg="scalapack-${scalapack_ver}.tgz" + +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_scalapack" ] && rm "${BUILDDIR}/setup_scalapack" + +SCALAPACK_CFLAGS='' +SCALAPACK_LDFLAGS='' +SCALAPACK_LIBS='' +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "$with_scalapack" in + __INSTALL__) + echo "==================== Installing ScaLAPACK ====================" + pkg_install_dir="${INSTALLDIR}/scalapack-${scalapack_ver}" + #pkg_install_dir="${HOME}/lib/scalapack/${scalapack_ver}-gcc8" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "scalapack-${scalapack_ver} is already installed, skipping it." + else + require_env MATH_LIBS + if [ -f ${scalapack_pkg} ]; then + echo "${scalapack_pkg} is found" + else + download_pkg_from_ABACUS_org "${scalapack_sha256}" "${scalapack_pkg}" + fi + echo "Installing from scratch into ${pkg_install_dir}" + [ -d scalapack-${scalapack_ver} ] && rm -rf scalapack-${scalapack_ver} + tar -xzf ${scalapack_pkg} + + mkdir -p "scalapack-${scalapack_ver}/build" + pushd "scalapack-${scalapack_ver}/build" > /dev/null + + flags="" + if ("${FC}" --version | grep -q 'GNU'); then + flags=$(allowed_gfortran_flags "-fallow-argument-mismatch") + fi + FFLAGS=$flags cmake -DCMAKE_FIND_ROOT_PATH="$ROOTDIR" \ + -DCMAKE_INSTALL_PREFIX="${pkg_install_dir}" \ + -DCMAKE_INSTALL_LIBDIR="lib" \ + -DBUILD_SHARED_LIBS=YES \ + -DCMAKE_BUILD_TYPE=Release .. 
\ + -DBUILD_TESTING=NO \ + -DSCALAPACK_BUILD_TESTS=NO \ + > configure.log 2>&1 || tail -n ${LOG_LINES} configure.log + make -j $(get_nprocs) > make.log 2>&1 || tail -n ${LOG_LINES} make.log + make install >> make.log 2>&1 || tail -n ${LOG_LINES} make.log + + popd > /dev/null + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage3/$(basename ${SCRIPT_NAME})" + fi + SCALAPACK_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; + __SYSTEM__) + echo "==================== Finding ScaLAPACK from system paths ====================" + check_lib -lscalapack "ScaLAPACK" + add_lib_from_paths SCALAPACK_LDFLAGS "libscalapack.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + + *) + echo "==================== Linking ScaLAPACK to user paths ====================" + pkg_install_dir="$with_scalapack" + check_dir "${pkg_install_dir}/lib" + SCALAPACK_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath,'${pkg_install_dir}/lib'" + ;; +esac +if [ "$with_scalapack" != "__DONTUSE__" ]; then + SCALAPACK_LIBS="-lscalapack" + if [ "$with_scalapack" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_scalapack" +prepend_path LD_LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path LD_RUN_PATH "${pkg_install_dir}/lib" +prepend_path LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +export LD_LIBRARY_PATH="$pkg_install_dir/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="$pkg_install_dir/lib":$LD_RUN_PATH +export LIBRARY_PATH="$pkg_install_dir/lib":$LIBRARY_PATH +export PKG_CONFIG_PATH="$pkg_install_dir/lib/pkgconfig":$PKG_CONFIG_PATH +export CMAKE_PREFIX_PATH="$pkg_install_dir":$CMAKE_PREFIX_PATH +export SCALAPACK_ROOT="${pkg_install_dir}" +EOF + cat "${BUILDDIR}/setup_scalapack" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_scalapack" +export SCALAPACK_LDFLAGS="${SCALAPACK_LDFLAGS}" +export SCALAPACK_LIBS="${SCALAPACK_LIBS}" +export SCALAPACK_ROOT="${pkg_install_dir}" +export CP_DFLAGS="\${CP_DFLAGS} IF_MPI(-D__SCALAPACK|)" +export CP_LDFLAGS="\${CP_LDFLAGS} IF_MPI(${SCALAPACK_LDFLAGS}|)" +export CP_LIBS="IF_MPI(-lscalapack|) \${CP_LIBS}" +EOF +fi +cd "${ROOTDIR}" + +# ---------------------------------------------------------------------- +# Suppress reporting of known leaks +# ---------------------------------------------------------------------- +cat << EOF >> ${INSTALLDIR}/lsan.supp +# leaks related to SCALAPACK +leak:pdpotrf_ +EOF + +load "${BUILDDIR}/setup_scalapack" +write_toolchain_env "${INSTALLDIR}" + +report_timing "scalapack" diff --git a/toolchain/scripts/stage3/install_stage3.sh b/toolchain/scripts/stage3/install_stage3.sh new file mode 100755 index 0000000000..415343f0b6 --- /dev/null +++ b/toolchain/scripts/stage3/install_stage3.sh @@ -0,0 +1,12 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +./scripts/stage3/install_cereal.sh +./scripts/stage3/install_fftw.sh +./scripts/stage3/install_libxc.sh +./scripts/stage3/install_scalapack.sh +./scripts/stage3/install_elpa.sh + +# EOF diff --git a/toolchain/scripts/stage4/install_libnpy.sh b/toolchain/scripts/stage4/install_libnpy.sh new file mode 100755 index 0000000000..c88e0410a2 --- /dev/null +++ b/toolchain/scripts/stage4/install_libnpy.sh @@ -0,0 +1,86 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. 
+# shellcheck disable=all +# libnpy is not need any complex setting +# Only problem is the installation from github.com + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." && pwd -P)" + +libnpy_ver="0.1.0" +libnpy_sha256="2fae61694df5acbd750a1fe1bf106e9df705873258aaa5bc6aa49b30b3a21f98" +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_libnpy" ] && rm "${BUILDDIR}/setup_libnpy" + +LIBNPY_CFLAGS="" +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "$with_libnpy" in + __INSTALL__) + echo "==================== Installing LIBNPY ====================" + pkg_install_dir="${INSTALLDIR}/libnpy-${libnpy_ver}" + install_lock_file="$pkg_install_dir/install_successful" + if verify_checksums "${install_lock_file}"; then + echo "libnpy-${libnpy_ver} is already installed, skipping it." + else + url="https://github.com/llohse/libnpy/archive/refs/tags/${libnpy_ver}.tar.gz" + filename="libnpy-v${libnpy_ver}.tar.gz" + if [ -f $filename ]; then + echo "$filename is found" + else + # download from github.com and checksum + echo "wget ${DOWNLOADER_FLAGS} --quiet $url -O $filename" + if ! wget ${DOWNLOADER_FLAGS} --quiet $url -O $filename; then + report_error "failed to download $url" + fi + # checksum + checksum "$filename" "$libnpy_sha256" + fi + echo "Installing from scratch into ${pkg_install_dir}" + [ -d libnpy-${libnpy_ver} ] && rm -rf libnpy-${libnpy_ver} + tar -xzf $filename + cp -r libnpy-${libnpy_ver} "${pkg_install_dir}/" + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage4/$(basename ${SCRIPT_NAME})" + fi + ;; + __SYSTEM__) + echo "==================== CANNOT Finding LIBNPY from system paths NOW ====================" + # How to do it in libnpy? -- Zhaoqing in 2023/08/23 + # check_lib -lxcf03 "libxc" + # check_lib -lxc "libxc" + # add_include_from_paths LIBXC_CFLAGS "xc.h" $INCLUDE_PATHS + # add_lib_from_paths LIBXC_LDFLAGS "libxc.*" $LIB_PATHS + ;; + __DONTUSE__) ;; + *) + echo "==================== Linking LIBNPY to user paths ====================" + check_dir "${pkg_install_dir}" + LIBNPY_CFLAGS="-I'${pkg_install_dir}'" + ;; +esac +if [ "$with_libnpy" != "__DONTUSE__" ]; then + if [ "$with_libnpy" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_libnpy" +prepend_path CPATH "$pkg_install_dir/include" +export CPATH="${pkg_install_dir}/include":${CPATH} +EOF + cat "${BUILDDIR}/setup_libnpy" >> $SETUPFILE + fi + cat << EOF >> "${BUILDDIR}/setup_libnpy" +export LIBNPY_CFLAGS="${LIBNPY_CFLAGS}" +export LIBNPY_ROOT="$pkg_install_dir" +EOF +fi + +load "${BUILDDIR}/setup_libnpy" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "libnpy" diff --git a/toolchain/scripts/stage4/install_libtorch.sh b/toolchain/scripts/stage4/install_libtorch.sh new file mode 100755 index 0000000000..a57c7011f8 --- /dev/null +++ b/toolchain/scripts/stage4/install_libtorch.sh @@ -0,0 +1,121 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. + +# shellcheck disable=all + +[ "${BASH_SOURCE[0]}" ] && SCRIPT_NAME="${BASH_SOURCE[0]}" || SCRIPT_NAME=$0 +SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_NAME")/.." 
&& pwd -P)" + +# From https://pytorch.org/get-started/locally/ +libtorch_ver="1.12.1" # stable +libtorch_sha256="82c7be80860f2aa7963f8700004a40af8205e1d721298f2e09b700e766a9d283" +#libtorch_ver="2.0.1" # newest, but will lead to lots of warning during build process +#libtorch_sha256="137a842d1cf1e9196b419390133a1623ef92f8f84dc7a072f95ada684f394afd" + +# shellcheck source=/dev/null +source "${SCRIPT_DIR}"/common_vars.sh +source "${SCRIPT_DIR}"/tool_kit.sh +source "${SCRIPT_DIR}"/signal_trap.sh +source "${INSTALLDIR}"/toolchain.conf +source "${INSTALLDIR}"/toolchain.env + +[ -f "${BUILDDIR}/setup_libtorch" ] && rm "${BUILDDIR}/setup_libtorch" + +! [ -d "${BUILDDIR}" ] && mkdir -p "${BUILDDIR}" +cd "${BUILDDIR}" + +case "${with_libtorch}" in + __INSTALL__) + echo "==================== Installing libtorch ====================" + pkg_install_dir="${INSTALLDIR}/libtorch-${libtorch_ver}" + #pkg_install_dir="${HOME}/lib/libtorch/${libtorch_ver}" + install_lock_file="${pkg_install_dir}/install_successful" + archive_file="libtorch-cxx11-abi-shared-with-deps-${libtorch_ver}+cpu.zip" + + if verify_checksums "${install_lock_file}"; then + echo "libtorch-${libtorch_ver} is already installed, skipping it." + else + if [ -f ${archive_file} ]; then + echo "${archive_file} is found" + else + download_pkg_from_ABACUS_org "${libtorch_sha256}" "${archive_file}" + fi + + echo "Installing from scratch into ${pkg_install_dir}" + [ -d libtorch ] && rm -rf libtorch + [ -d ${pkg_install_dir} ] && rm -rf ${pkg_install_dir} + unzip -q ${archive_file} + mv libtorch ${pkg_install_dir} + + write_checksums "${install_lock_file}" "${SCRIPT_DIR}/stage4/$(basename "${SCRIPT_NAME}")" + fi + LIBTORCH_CXXFLAGS="-I${pkg_install_dir}/include" + LIBTORCH_LDFLAGS="-L'${pkg_install_dir}/lib' -Wl,-rpath='${pkg_install_dir}/lib'" + ;; + __SYSTEM__) + echo "==================== Finding libtorch from system paths ====================" + check_lib -ltorch "libtorch" + add_include_from_paths LIBTORCH_CXXFLAGS "libtorch.h" $INCLUDE_PATHS + add_lib_from_paths LIBTORCH_LDFLAGS "libtorch.*" "$LIB_PATHS" + ;; + __DONTUSE__) ;; + + *) + echo "==================== Linking libtorch to user paths ====================" + pkg_install_dir="${with_libtorch}" + + # use the lib64 directory if present (multi-abi distros may link lib/ to lib32/ instead) + LIBTORCH_LIBDIR="${pkg_install_dir}/lib" + [ -d "${pkg_install_dir}/lib64" ] && LIBTORCH_LIBDIR="${pkg_install_dir}/lib64" + + check_dir "${LIBTORCH_LIBDIR}" + LIBTORCH_CXXFLAGS="-I${pkg_install_dir}/include" + if [ "$ENABLE_CUDA" = "__TRUE__" ]; then + LIBTORCH_LDFLAGS="-Wl,--no-as-needed,-L'${LIBTORCH_LIBDIR}' -Wl,--no-as-needed,-rpath='${LIBTORCH_LIBDIR}'" + else + LIBTORCH_LDFLAGS="-L'${LIBTORCH_LIBDIR}' -Wl,-rpath='${LIBTORCH_LIBDIR}'" + fi + ;; +esac + +if [ "$with_libtorch" != "__DONTUSE__" ]; then + if [ "$with_libtorch" != "__SYSTEM__" ]; then + cat << EOF > "${BUILDDIR}/setup_libtorch" +prepend_path LD_LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path LD_RUN_PATH "${pkg_install_dir}/lib" +prepend_path LIBRARY_PATH "${pkg_install_dir}/lib" +prepend_path PKG_CONFIG_PATH "$pkg_install_dir/lib/pkgconfig" +prepend_path CMAKE_PREFIX_PATH "$pkg_install_dir" +export LD_LIBRARY_PATH="${pkg_install_dir}/lib":$LD_LIBRARY_PATH +export LD_RUN_PATH="${pkg_install_dir}/lib":$LD_RUN_PATH +export LIBRARY_PATH="${pkg_install_dir}/lib":$LIBRARY_PATH +export CPATH="${pkg_install_dir}/include":$CPATH +export PKG_CONFIG_PATH="${pkg_install_dir}/lib/pkgconfig":$PKG_CONFIG_PATH +export 
CMAKE_PREFIX_PATH="${pkg_install_dir}":$CMAKE_PREFIX_PATH +EOF + fi + if [ "$ENABLE_CUDA" = "__TRUE__" ]; then + cat << EOF >> "${BUILDDIR}/setup_libtorch" +export CP_DFLAGS="\${CP_DFLAGS} -D__LIBTORCH" +export CXXFLAGS="\${CXXFLAGS} ${LIBTORCH_CXXFLAGS}" +export CP_LDFLAGS="\${CP_LDFLAGS} ${LIBTORCH_LDFLAGS}" +export CP_LIBS="\${CP_LIBS} -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch" +EOF + cat "${BUILDDIR}/setup_libtorch" >> "${SETUPFILE}" + else + cat << EOF >> "${BUILDDIR}/setup_libtorch" +export CP_DFLAGS="\${CP_DFLAGS} -D__LIBTORCH" +export CXXFLAGS="\${CXXFLAGS} ${LIBTORCH_CXXFLAGS}" +export CP_LDFLAGS="\${CP_LDFLAGS} ${LIBTORCH_LDFLAGS}" +export CP_LIBS="\${CP_LIBS} -lc10 -ltorch_cpu -ltorch" +EOF + cat "${BUILDDIR}/setup_libtorch" >> "${SETUPFILE}" + fi +fi + +load "${BUILDDIR}/setup_libtorch" +write_toolchain_env "${INSTALLDIR}" + +cd "${ROOTDIR}" +report_timing "libtorch" diff --git a/toolchain/scripts/stage4/install_stage4.sh b/toolchain/scripts/stage4/install_stage4.sh new file mode 100755 index 0000000000..97fa4d3a20 --- /dev/null +++ b/toolchain/scripts/stage4/install_stage4.sh @@ -0,0 +1,9 @@ +#!/bin/bash -e + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all + +./scripts/stage4/install_libtorch.sh +./scripts/stage4/install_libnpy.sh + +# EOF diff --git a/toolchain/scripts/tool_kit.sh b/toolchain/scripts/tool_kit.sh new file mode 100755 index 0000000000..82dce2df20 --- /dev/null +++ b/toolchain/scripts/tool_kit.sh @@ -0,0 +1,688 @@ +# A set of tools used in the toolchain installer, intended to be used + +# TODO: Review and if possible fix shellcheck errors. +# shellcheck disable=all +# shellcheck shell=bash + +# by sourcing this file inside other scripts. + +SYS_INCLUDE_PATH=${SYS_INCLUDE_PATH:-"/usr/local/include:/usr/include"} +SYS_LIB_PATH=${SYS_LIB_PATH:-"/usr/local/lib64:/usr/local/lib:/usr/lib64:/usr/lib:/lib64:/lib"} +INCLUDE_PATHS=${INCLUDE_PATHS:-"CPATH SYS_INCLUDE_PATH"} +LIB_PATHS=${LIB_PATHS:-"LIBRARY_PATH LD_LIBRARY_PATH LD_RUN_PATH SYS_LIB_PATH"} +time_start=$(date +%s) + +# report timing +report_timing() { + time_stop=$(date +%s) + printf "Step %s took %0.2f seconds.\n" $1 $((time_stop - time_start)) +} + +# report a warning message with script name and line number +report_warning() { + if [ $# -gt 1 ]; then + local __lineno=", line $1" + local __message="$2" + else + local __lineno='' + local __message="$1" + fi + echo "WARNING: (${SCRIPT_NAME}${__lineno}) $__message" >&2 +} + +# report an error message with script name and line number +report_error() { + if [ $# -gt 1 ]; then + local __lineno=", line $1" + local __message="$2" + else + local __lineno='' + local __message="$1" + fi + echo "ERROR: (${SCRIPT_NAME}${__lineno}) $__message" >&2 +} + +# error handler for line trap from set -e +error_handler() { + local __lineno="$1" + report_error $1 "Non-zero exit code detected." + exit 1 +} + +# source a file if it exists, otherwise do nothing +load() { + if [ -f "$1" ]; then + source "$1" + fi +} + +# A more portable command that will give the full path, removing +# symlinks, of a given path. 
This is more portable than readlink -f +# which does not work on Mac OS X +realpath() { + local __path="$1" + if [ "x$__path" = x ]; then + return 0 + fi + local __basename=$(basename "$__path") + if [ -e "$__path" ]; then + echo $( + cd "$(dirname "$__path")" + pwd -P + )/"$__basename" + return 0 + else + return 1 + fi +} + +# given a list, outputs a list with duplicated items filtered out +unique() ( + # given a list, outputs a list with duplicated items filtered out. + # If -d option exists, then output the list delimited + # by ; note that this option does not effect the input. + local __result='' + local __delimiter=' ' + local __item='' + if [ "$1" = "-d" ]; then + shift + __delimiter="$1" + shift + fi + # It is essential that we quote $@, which makes it equivalent to + # "$1" "$2" ... So this works if any of the arguments contains + # space. And we use \n to separate the fields in the + # __result for now, so that fields that contain spaces are + # correctly grepped. + for __item in "$@"; do + if [ x"$__result" = x ]; then + __result="${__item}" + # Note that quoting $__result after echo is essential to + # retain the \n in the variable from the output of echo. Also + # remember grep only works on a line by line basis, so if + # items are delimited by newlines, then for grep search it + # should be delimited by ^ and $ (beginning and end of line) + elif ! (echo "$__result" | + grep -s -q -e "^$__item\$"); then + __result="${__result} +${__item}" + fi + done + __result="$(echo "$__result" | paste -s -d "$__delimiter" -)" + # quoting $__result below is again essential for correct + # behaviour if IFS is set to be the same $__delimiter in the + # parent shell calling this macro + echo "$__result" +) + +# reverse a list +reverse() ( + # given a list, output a list with reversed order. If -d + # option exists, then output the list delimited by + # ; note that this option does not effect the input. + local __result='' + local __delimiter=' ' + local __item='' + if [ "$1" = "-d" ]; then + shift + __delimiter="$1" + shift + fi + for __item in "$@"; do + if [ x"$__result" = x ]; then + __result="$__item" + else + __result="${__item}${__delimiter}${__result}" + fi + done + echo "$__result" +) + +# get the number of processes available for compilation +get_nprocs() { + if [ -n "${NPROCS_OVERWRITE}" ]; then + echo ${NPROCS_OVERWRITE} | sed 's/^0*//' + elif $(command -v nproc > /dev/null 2>&1); then + echo $(nproc --all) + elif $(command -v sysctl > /dev/null 2>&1); then + echo $(sysctl -n hw.ncpu) + else + echo 1 + fi +} + +# convert a list of paths to -L ... used by ld +paths_to_ld() { + # need to define the POSIX default IFS values here, cannot just do + # __ifs=$IFS first, because IFS can be unset, and if so __ifs will + # becomes an empty string (null) and NOT unset, so later when IFS + # is set to __ifs it becomes null rather than unset, and thus + # causing wrong behaviour. So if IFS is unset, __ifs should be + # the POSIX default value. Further more, due to shell + # automatically remove the tailing "\n" in a string during + # variable assignment, we need to add x after \n and then remove + # it. 
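+ # As an illustration (not part of the original logic): with the default
+ # SYS_LIB_PATH defined at the top of this file, a call like
+ #   paths_to_ld SYS_LIB_PATH
+ # would echo roughly
+ #   -L'/usr/local/lib64' -L'/usr/local/lib' -L'/usr/lib64' -L'/usr/lib' -L'/lib64' -L'/lib'
+ # i.e. every directory of each named path variable becomes a quoted -L entry,
+ # with duplicates filtered out.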
+ local __paths=$@ + local __name='' + local __raw_path='' + local __dir='' + local __lib_dirs='' + # set default IFS first + local __ifs=$(printf " \t\nx") + __ifs="${__ifs%x}" + [ "$IFS" ] && __ifs="$IFS" + for __name in $__paths; do + eval __raw_path=\$"$__name" + # change internal field separator to : + IFS=':' + # loop over all dirs in path, and filter out duplications + for __dir in $__raw_path; do + if ! [ x"$__dir" = x ]; then + if ! [[ "$__lib_dirs" =~ (^|[[:space:]])"-L'$__dir'"($|[[:space:]]) ]]; then + __lib_dirs="$__lib_dirs -L'$__dir'" + fi + fi + done + IFS="$__ifs" + done + echo $__lib_dirs +} + +# Find a file from directories given in a list of paths, each has the +# same format as env variable PATH. If the file is found, then echoes +# the full path of the file. If the file is not found, then echoes +# __FALSE__. The file name can also contain wildcards that are +# acceptable for bash, and in that case the full path of the first +# matching file will be echoed. +find_in_paths() { + local __target=$1 + shift + local __paths=$@ + local __name='' + local __raw_path='' + local __dir='' + local __file='' + local __files='' + # use the IFS variable to take care of possible spaces in file/dir names + local __ifs="$(printf " \t\nx")" + __ifs="${__ifs%x}" + [ "$IFS" ] && __ifs="$IFS" + for __name in $__paths; do + eval __raw_path=\$"$__name" + # fields in paths are separated by : + IFS=':' + for __dir in $__raw_path; do + # files in possible glob expansion are to be delimited by "\n\b" + IFS="$(printf "\nx")" + IFS="${IFS%x}" + for __file in $__dir/$__target; do + if [ -e "$__file" ]; then + echo $(realpath "$__file") + # must remember to change IFS back when exiting + IFS="$__ifs" + return 0 + fi + done + IFS=':' + done + IFS=$__ifs + done + echo "__FALSE__" +} + +# search through a list of given paths, try to find the required file +# or directory, and if found then add full path of dirname file, or +# directory, to the -I include list for CFLAGS and append to a user +# specified variable (__cflags_name). If not found, then nothing is +# done. If the option -p is present, then if the search target is a +# directory, then the parent directory of the directory is used for -I +# instead. The search target accepts bash wildcards, and in this case +# the first match will be used. +add_include_from_paths() { + local __parent_dir_only=false + if [ $1 = "-p" ]; then + __parent_dir_only=true + shift + fi + local __cflags_name=$1 + shift + local __search_target=$1 + shift + local __paths=$@ + local __found_target="" + local __cflags="" + __found_target="$(find_in_paths "$__search_target" \ + $__paths)" + if [ "$__found_target" != "__FALSE__" ]; then + if [ -f "$__found_target" ] || $__parent_dir_only; then + __found_target="$(dirname "$__found_target")" + fi + echo "Found include directory $__found_target" + eval __cflags=\$"${__cflags_name}" + __cflags="${__cflags} -I'${__found_target}'" + # remove possible duplicates + __cflags="$(unique $__cflags)" + # must escape all quotes again before the last eval, as + # otherwise all quotes gets interpreted by the shell when + # assigning to variable because eval will reduce one escape + # level + __cflags="${__cflags//'/\\'/}" + eval $__cflags_name=\"$__cflags\" + fi +} + +# search through a list of given paths, try to find the required file +# or directory, and if found then add full path of dirname file, or +# directory, to the -L library list (including -Wl,-rpath) for LDFLAGS +# and append to a user specified variable (__ldflags_name). 
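+# (Illustrative sketch, mirroring the call made in install_libtorch.sh above:
+#    add_lib_from_paths LIBTORCH_LDFLAGS "libtorch.*" "$LIB_PATHS"
+#  if a matching file such as /opt/libtorch/lib/libtorch.so were found -- the
+#  /opt path is only an assumed example -- this would append
+#  -L'/opt/libtorch/lib' -Wl,-rpath,'/opt/libtorch/lib' to LIBTORCH_LDFLAGS.)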
If not +# found, then nothing is done. If the option -p is present, then if +# the search target is a directory, then the parent directory of the +# directory is used for -L instead. The search target accepts bash +# wildcards, and in this case the first match will be used. +add_lib_from_paths() { + local __parent_dir_only=false + if [ $1 = "-p" ]; then + __parent_dir_only=true + shift + fi + local __ldflags_name=$1 + shift + local __search_target=$1 + shift + local __paths=$@ + local __found_target="" + local __ldflags="" + __found_target="$(find_in_paths "$__search_target" \ + $__paths)" + if [ "$__found_target" != "__FALSE__" ]; then + if [ -f "$__found_target" ] || $__parent_dir_only; then + __found_target="$(dirname "$__found_target")" + fi + echo "Found lib directory $__found_target" + eval __ldflags=\$"${__ldflags_name}" + __ldflags="${__ldflags} -L'${__found_target}' -Wl,-rpath,'${__found_target}'" + # remove possible duplicates + __ldflags="$(unique $__ldflags)" + # must escape all quotes again before the last eval, as + # otherwise all quotes get interpreted by the shell when + # assigning to variable because eval will reduce one escape + # level + __ldflags="${__ldflags//'/\\'/}" + eval $__ldflags_name=\"$__ldflags\" + fi +} + +# check if environment variable is assigned and non-empty +# https://serverfault.com/questions/7503/how-to-determine-if-a-bash-variable-is-empty +require_env() { + local __env_var_name=$1 + local __env_var="$(eval echo \"\$$__env_var_name\")" + if [ -z "${__env_var}" ]; then + report_error "requires environment variable $__env_var_name to work" + return 1 + fi +} + +resolve_string() { + local __to_resolve=$1 + shift + local __flags=$@ + + echo $("${SCRIPTDIR}/parse_if.py" $__flags <<< "${__to_resolve}") +} + +# check if a command is available +check_command() { + local __command=${1} + if [ $# -eq 1 ]; then + local __package=${1} + elif [ $# -gt 1 ]; then + local __package=${2} + fi + if $(command -v ${__command} > /dev/null 2>&1); then + echo "path to ${__command} is $(realpath $(command -v ${__command}))" + else + report_error "Cannot find ${__command}, please check if the package ${__package} is installed or in system search path" + return 1 + fi +} + +# check if directory exists +check_dir() { + local __dir=$1 + if [ -d "$__dir" ]; then + echo "Found directory $__dir" + else + report_error "Cannot find $__dir" + return 1 + fi +} + +# check if a command has been installed correctly +check_install() { + local __command=${1} + if [ $# -eq 1 ]; then + local __package=${1} + elif [ $# -gt 1 ]; then + local __package=${2} + fi + if $(command -v ${__command} > /dev/null 2>&1); then + echo "$(basename ${__command}) is installed as $(command -v ${__command})" + else + report_error "cannot find ${__command}, please check if the package ${__package} has been installed correctly" + return 1 + fi +} + +# check if a library can be found by ld, library names should be in the +# format -lname, which would then refer to libname.a or libname.so +# by ld +check_lib() { + local __libname="${1#-l}" + if [ $# -eq 1 ]; then + local __package=lib"$__libname" + elif [ $# -gt 1 ]; then + local __package=$2 + fi + # Note that LD_LIBRARY_PATH is NOT used by ld during linking + # stage, and is only used for searching for the shared libraries + # required by the executable AFTER it has already been compiled, to + # override its internal search paths built into the binary when it + # was compiled. 
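+ # (For example, a shared library that lives only in a directory listed in
+ #  LD_LIBRARY_PATH will be found when an already-built binary runs, but a
+ #  bare `ld -lfoo` at link time would not see it; "foo" is just an assumed
+ #  name used for illustration.)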
Here, we explicitly include the commonly defined + # library search paths---including LD_LIBRARY_PATH---in the -L + # search paths of ld. This is the only way ld can include + # non-standard directories in its search path. If we use gcc + # instead of ld as the linker, then we can use LIBRARY_PATH, which IS + # used during the link stage. However, I think using ld is more + # general, as in most systems LIBRARY_PATH is rarely defined, and + # we would have to rely on gcc. + local __search_engine="ld -o /dev/null" + local __search_paths="$LIB_PATHS" + # convert a list of paths to -L list used by ld + __search_engine="$__search_engine $(paths_to_ld $__search_paths)" + # needed the eval to interpret the quoted directories correctly (somehow) + if (eval $__search_engine -l$__libname 2>&1 | grep -q -s "\-l$__libname"); then + # if library not found, ld will return error message + # containing the library name + report_error \ + "ld cannot find -l$__libname, please check if $__package is installed or in system search path" + return 1 + else + # if library is found, then ld will return error message about + # not able to find _start or _main symbol + echo "lib$__libname is found in ld search path" + fi +} + +# check if a module is available for the current version of gfortran, +# returns 0 if available and 1 if not +check_gfortran_module() { + local __module_name=$1 + local __FC=${FC:-gfortran} + cat << EOF | $__FC -c -o /dev/null -xf95 -ffree-form - > /dev/null 2>&1 +PROGRAM check_gfortran_module +USE ${__module_name} +IMPLICIT NONE +PRINT *, "PASS" +END PROGRAM check_gfortran_module +EOF +} + +# check if a flag is allowed for the current version of +# gfortran. returns 0 if allowed and 1 if not +check_gfortran_flag() { + local __flag=$1 + local __FC=${FC:-gfortran} + # no need to do a full compilation, just -E -cpp would do for + # checking flags + cat << EOF | $__FC -E -cpp $__flag -xf95 -ffree-form - > /dev/null 2>&1 +PROGRAM test_code + IMPLICIT NONE + PRINT *, "PASS" +END PROGRAM test_code +EOF +} + +# check if a flag is allowed for the current version of +# gcc. returns 0 if allowed and 1 if not +check_gcc_flag() { + local __flag=$1 + local __CC=${CC:-gcc} + # no need to do a full compilation, just -E -cpp would do for + # checking flags + cat << EOF | $__CC -E -cpp $__flag -xc - > /dev/null 2>&1 +#include <stdio.h> +int main() { + printf("PASS\n"); +} +EOF +} + +# check if a flag is allowed for the current version of +# g++. 
returns 0 if allowed and 1 if not +check_gxx_flag() { + local __flag=$1 + local __CXX=${CXX:-g++} + # no need to do a full compilation, just -E -cpp would do for + # checking flags + cat << EOF | $__CXX -E -cpp $__flag -xc - > /dev/null 2>&1 +#include <stdio.h> +int main() { + printf("PASS\n"); +} +EOF +} + +# given a list of flags, only print out what is allowed by the current +# version of gfortran +allowed_gfortran_flags() { + local __flags=$@ + local __flag='' + local __result='' + for __flag in $__flags; do + if (check_gfortran_flag $__flag); then + [ -z "$__result" ] && __result="$__flag" || __result="$__result $__flag" + fi + done + echo $__result +} + +# given a list of flags, only print out what is allowed by the current +# version of gcc +allowed_gcc_flags() { + local __flags=$@ + local __flag='' + local __result='' + for __flag in $__flags; do + if (check_gcc_flag $__flag); then + [ -z "$__result" ] && __result="$__flag" || __result="$__result $__flag" + fi + done + echo $__result +} + +# given a list of flags, only print out what is allowed by the current +# version of g++ +allowed_gxx_flags() { + local __flags=$@ + local __flag='' + local __result='' + for __flag in $__flags; do + if (check_gxx_flag $__flag); then + [ -z "$__result" ] && __result="$__flag" || __result="$__result $__flag" + fi + done + echo $__result +} + +# remove a directory from a given path +remove_path() { + local __path_name=$1 + local __directory=$2 + local __path="$(eval echo \$$__path_name)" + # must remove all the middle ones first before treating the two ends, + # otherwise there can be cases where not all occurrences of __directory are + # removed. + __path=${__path//:$__directory:/:} + __path=${__path#$__directory:} + __path=${__path%:$__directory} + __path=$(echo "$__path" | sed "s:^$__directory\$::g") + eval $__path_name=\"$__path\" + export $__path_name +} + +# prepend a directory to a given path +prepend_path() { + # prepend directory to $path_name and then export path_name. If + # the directory already exists in path, bring the directory to the + # front of the list. + # $1 is path name + # $2 is directory + remove_path "$1" "$2" + eval $1=\"$2\${$1:+\":\$$1\"}\" + eval export $1 +} + +# append a directory to a given path +append_path() { + # append directory to $path_name and then export path_name. If + # the directory already exists in path, bring the directory to the + # back of the list. 
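+ # (Hypothetical example: starting from CPATH=/usr/include,
+ #    append_path CPATH /opt/foo/include
+ #  leaves CPATH=/usr/include:/opt/foo/include and exports it, while
+ #  prepend_path would place /opt/foo/include at the front instead;
+ #  the /opt/foo path is only an assumed illustration.)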
+ # $1 is path name + # $2 is directory + remove_path "$1" "$2" + eval $1=\"\${$1:+\"\$$1:\"}$2\" + eval export $1 +} + +# helper routine for reading --enable=* input options +read_enable() { + local __input_var="${1#*=}" + case $__input_var in + "$1") + # if there is no "=" then treat as "yes" + echo "__TRUE__" + ;; + yes) + echo "__TRUE__" + ;; + no) + echo "__FALSE__" + ;; + *) + echo "__INVALID__" + ;; + esac +} + +# helper routine for reading --with=* input options +read_with() { + local __input_var="${1#--with*=}" + case $__input_var in + "${1}") + # if there is no "=" then treat as "install" + if [ ${#} -gt 1 ]; then + echo "${2}" + else + echo "__INSTALL__" + fi + ;; + install) + echo "__INSTALL__" + ;; + system) + echo "__SYSTEM__" + ;; + no) + echo "__DONTUSE__" + ;; + *) + echo "${__input_var//\~/$HOME}" + ;; + esac +} + +# helper routine to check integrity of downloaded files +checksum() { + local __filename=$1 + local __sha256=$2 + local __shasum_command='sha256sum' + # check if we have sha256sum command, Mac OS X does not have + # sha256sum, but has an equivalent with shasum -a 256 + command -v "$__shasum_command" > /dev/null 2>&1 || + __shasum_command="shasum -a 256" + if echo "$__sha256 $__filename" | ${__shasum_command} --check; then + echo "Checksum of $__filename Ok" + else + rm -v ${__filename} + report_error "Checksum of $__filename could not be verified, abort." + return 1 + fi +} + +# downloader for the package tars, includes checksum +download_pkg_from_ABACUS_org() { + # usage: download_pkg_from_cp2k_org sha256 filename + local __sha256="$1" + local __filename="$2" + local __url="https://www.cp2k.org/static/downloads/$__filename" + # download + echo "wget ${DOWNLOADER_FLAGS} --quiet $__url" + if ! wget ${DOWNLOADER_FLAGS} --quiet $__url; then + report_error "failed to download $__url" + return 1 + fi + # checksum + checksum "$__filename" "$__sha256" +} + +# verify the checksums inside the given checksum file +verify_checksums() { + local __checksum_file=$1 + local __shasum_command='sha256sum' + + # check if we have sha256sum command, Mac OS X does not have + # sha256sum, but has an equivalent with shasum -a 256 + command -v "$__shasum_command" > /dev/null 2>&1 || + __shasum_command="shasum -a 256" + + ${__shasum_command} --check "${__checksum_file}" > /dev/null 2>&1 +} + +# write a checksum file $1 containing checksums for each given file $2, $3, ... 
(plus the $VERSION_FILE) +write_checksums() { + local __checksum_file=$1 + shift # remove output file from arguments to be able to pass them along properly quoted + local __shasum_command='sha256sum' + + # check if we have sha256sum command, Mac OS X does not have + # sha256sum, but has an equivalent with shasum -a 256 + command -v "$__shasum_command" > /dev/null 2>&1 || + __shasum_command="shasum -a 256" + + ${__shasum_command} "${VERSION_FILE}" "$@" > "${__checksum_file}" +} + +# generate a filtered toolchain.env +write_toolchain_env() { + local __installdir=$1 + + # run the following in a subshell to not affect the currently running shell + # we do not need to achieve complete filtering, it is sufficient to + # remove problematic variables (TERM/TERMCAP/COLORTERM) which may trigger + # 'too many arguments' (since the environment vars are stored in the same memory block as command line arguments) + # or which may not be valid anymore the next time the user runs the toolchain scripts, + # like the proxy vars which may affect fetching tarballs + ( + unset COLORTERM DISPLAY EDITOR LESS LESSOPEN LOGNAME LS_COLORS PAGER + unset TERM TERMCAP USER + unset ftp_proxy http_proxy no_proxy + unset GPG_AGENT_INFO SSH_AGENT_PID SSH_AUTH_SOCK SSH_CLIENT SSH_CONNECTION SSH_TTY + unset LS_COLORS LS_OPTIONS + unset STY WINDOW XAUTHORITY + unset XDG_CURRENT_DESKTOP XDG_RUNTIME_DIR XDG_SEAT XDG_SESSION_CLASS XDG_SESSION_DESKTOP XDG_SESSION_ID XDG_SESSION_TYPE XDG_VTNR XDG_CONFIG_DIRS XDG_DATA_DIRS + unset DBUS_SESSION_BUS_ADDRESS + + export -p + ) > "${__installdir}/toolchain.env" +} diff --git a/toolchain/toolchain_gnu.sh b/toolchain/toolchain_gnu.sh new file mode 100755 index 0000000000..6565f23ba1 --- /dev/null +++ b/toolchain/toolchain_gnu.sh @@ -0,0 +1,21 @@ +#!/bin/bash +#SBATCH -J install +#SBATCH -N 1 +#SBATCH -n 64 + +# JamesMisaka in 2023-08-31 +# install abacus by gnu-toolchain +# one can use mpich or openmpi +# libtorch and libnpy are for deepks support, which can be =no + +./install_abacus_toolchain.sh --with-openmpi=install \ +--with-intel=no --with-gcc=system \ +--with-cmake=install \ +--with-scalapack=install \ +--with-libxc=install \ +--with-fftw=install \ +--with-elpa=install \ +--with-cereal=install \ +--with-libtorch=no \ +--with-libnpy=no \ +| tee compile.log \ No newline at end of file diff --git a/toolchain/toolchain_intel-mpich.sh b/toolchain/toolchain_intel-mpich.sh new file mode 100755 index 0000000000..280432f6d4 --- /dev/null +++ b/toolchain/toolchain_intel-mpich.sh @@ -0,0 +1,26 @@ +#!/bin/bash +#SBATCH -J install +#SBATCH -N 1 +#SBATCH -n 64 + +# JamesMisaka in 2023-08-25 +# install abacus by intel-toolchain +# use mkl , and mpich instead of intelmpi +# libtorch and libnpy are for deepks support, which can be =no +# can support deepmd + +# module load mkl compiler + +./install_abacus_toolchain.sh \ +--with-intel=system --math-mode=mkl \ +--with-gcc=no --with-mpich=install \ +--with-cmake=install \ +--with-scalapack=no \ +--with-libxc=install \ +--with-fftw=no \ +--with-elpa=install \ +--with-cereal=install \ +--with-libtorch=install \ +--with-libnpy=install \ +--with-intel-classic=yes \ +| tee compile.log \ No newline at end of file diff --git a/toolchain/toolchain_intel.sh b/toolchain/toolchain_intel.sh new file mode 100755 index 0000000000..c35b1d5c0b --- /dev/null +++ b/toolchain/toolchain_intel.sh @@ -0,0 +1,26 @@ +#!/bin/bash +#SBATCH -J install +#SBATCH -N 1 +#SBATCH -n 16 + +# JamesMisaka in 2023-08-31 +# install abacus by intel-toolchain +# use mkl and intelmpi +# but 
mpich and openmpi can also be tried +# libtorch and libnpy are for deepks support, which can be =no + +# module load mkl mpi compiler + +./install_abacus_toolchain.sh \ +--with-intel=system --math-mode=mkl \ +--with-gcc=no --with-intelmpi=system \ +--with-cmake=install \ +--with-scalapack=no \ +--with-libxc=install \ +--with-fftw=no \ +--with-elpa=install \ +--with-cereal=install \ +--with-libtorch=install \ +--with-libnpy=install \ +--with-intel-classic=yes \ +| tee compile.log \ No newline at end of file