Merged
101 commits
25ddfe4
Start adding a new class, GIST_PME, that will inherit from
drroe Mar 25, 2021
a4b24c9
Add recip and self6 calcs
drroe Mar 25, 2021
767e796
Add long range correction for gist
drroe Mar 25, 2021
b225b45
Add direct space sum routine.
drroe Mar 25, 2021
ca54133
Bring HelPME up to PR #54
drroe Mar 25, 2021
4b50682
Expose sumq and vdw_recip_term via functions
drroe Mar 25, 2021
432bf91
Add some per-atom vdw stuff needed for PME_GIST
drroe Mar 25, 2021
87a6925
Expose more of Ewald to inheriting classes
drroe Mar 25, 2021
fe35b81
Expose more of Ewald_ParticleMesh for inheriting classes. Add
drroe Mar 25, 2021
fc6dcf8
Add direct space calcs
drroe Mar 25, 2021
5b02ce2
Make switch function available to inheriting classes
drroe Mar 25, 2021
edb0972
Fix exclusion array type
drroe Mar 25, 2021
bc58629
GIST with LJPME not ready for primetime
drroe Mar 26, 2021
c6ca56a
Add PmeOptions class.
drroe Mar 26, 2021
0c6ed64
Add help keywords. Ensure all variables are initialized.
drroe Mar 26, 2021
818fd3f
Add functions to return private vars
drroe Mar 26, 2021
8c7d477
Add Init for Ewald_ParticleMesh with PmeOptions
drroe Mar 26, 2021
1896bf5
Start adding PME to GIST. Initialize some uninitialized vars
drroe Mar 26, 2021
3973c97
Create non-inlined version of adjust for GIST_PME
drroe Mar 26, 2021
0c050e4
Make LJ PME keywords a separate function for things that do not support
drroe Mar 26, 2021
35b89ff
Start adding PME GIST data sets
drroe Mar 26, 2021
4e4740f
LJ PME not yet allowed for GIST
drroe Mar 26, 2021
c5d662c
Ensure energy calc is done for any occupied voxel (occupancy > 0 instead
drroe Mar 26, 2021
83f9580
Update tip4p and tip5p tests for occupancy threshold change
drroe Mar 26, 2021
a45e21a
Finish PME init
drroe Mar 26, 2021
4059e93
Add debug level. Do PME init and setup.
drroe Mar 26, 2021
f7f15a4
Add solute/water id and solute index arrays
drroe Mar 26, 2021
1cc994c
Start adding actual PME calc. Need to save whether atom is solvent or
drroe Mar 26, 2021
94b6f4e
atom_voxel_ and atomIsSolute_ arrays will be accessed by atom #
drroe Mar 26, 2021
613ce3a
Add PME solute grid assignment
drroe Mar 26, 2021
de825a2
Un-comment the pme calc function
drroe Mar 26, 2021
25953ad
Do the order calc for CUDA as well - not sure why that was behind the…
drroe Mar 26, 2021
d7580a3
Enable pme calc
drroe Mar 26, 2021
e1b8619
Add avg voxel energy calc for pme data
drroe Mar 26, 2021
7035e3b
Print out sums - they seem to be only for debug
drroe Mar 27, 2021
27941f7
Start adding separate avg routine for non pme energy
drroe Mar 27, 2021
25ca4bb
Use new averaging routine
drroe Mar 27, 2021
f833f4d
Add nopme keywords
drroe Mar 27, 2021
87a6302
Add orthogonal pme test
drroe Mar 27, 2021
71503f0
Fix default lj pme assignment
drroe Mar 27, 2021
aee5485
Add non-orthogonal test
drroe Mar 27, 2021
1d060ff
Add info comparisons
drroe Mar 29, 2021
ce8130c
Move DEBYE_EA to Constants
drroe Mar 29, 2021
35d45b1
Add separate PME printout for testing
drroe Mar 29, 2021
47a433b
Add headers for PME output
drroe Mar 29, 2021
281dbd8
Dipole calc should be done whether or not we skip energy
drroe Mar 29, 2021
d0bf2c7
Remove duplicated code
drroe Mar 29, 2021
4b840cf
Add code docs
drroe Mar 29, 2021
f32ac27
Print options when using pme
drroe Mar 29, 2021
3d035bd
Add regular ewald options.
drroe Mar 30, 2021
93d14e1
Rename ; will use for all Ewald
drroe Mar 30, 2021
371485e
Change to EwaldOptions
drroe Mar 30, 2021
5f1dfa3
Use EwaldOptions in PME
drroe Mar 30, 2021
1dfccd5
Use Ewald_Regular
drroe Mar 30, 2021
8449acf
Use EwaldOptions
drroe Mar 30, 2021
8776400
ewcoefflj keyword can turn on LJ pme
drroe Mar 30, 2021
3c693fb
Have GIST PME use EwaldOptions. Update depends
drroe Mar 30, 2021
b9ea59b
Fix up help for energy. Pass LJ switch width for regular ewald
drroe Mar 30, 2021
3afb4c6
Remove old code
drroe Mar 30, 2021
8b31f5d
Fix printout of LJ options; now all in EwaldOptions
drroe Mar 30, 2021
575c173
Reenable some timers
drroe Mar 30, 2021
327ceed
Move var closer to where it is set
drroe Mar 30, 2021
fecd294
Start fixing openmp
drroe Mar 30, 2021
a977800
Add more internal arrays
drroe Mar 30, 2021
fc86742
Internal arrays are per atom, not voxel...
drroe Mar 30, 2021
1c6df98
Add doc
drroe Mar 30, 2021
b95fdf9
Ensure direct arrays are zeroed out.
drroe Mar 31, 2021
ac549ce
Ensure contributions from other threads are summed into 0 arrays
drroe Mar 31, 2021
f7e5f71
atom_voxel was unused
drroe Mar 31, 2021
9a280d4
Add access to internal arrays
drroe Mar 31, 2021
730d025
Add function to return energy on a specified atom
drroe Mar 31, 2021
2912889
Use reworked GIST_PME. Make numthreads a class variable
drroe Mar 31, 2021
76fbe46
Comment out some unused stuff.
drroe Mar 31, 2021
b5a00ab
Remove old code. Add Ewald timing to output
drroe Mar 31, 2021
472f860
The PME GIST grid arrays do not need to be threaded
drroe Mar 31, 2021
cf31e1f
Hide some debug info. Fix citation in output
drroe Mar 31, 2021
71e0d37
Minor version bump for GIST nw_total > 0 fix and addition of PME
drroe Mar 31, 2021
3ffb0d3
doeij does not work with PME, trap it. Also make doeij with cuda an
drroe Mar 31, 2021
fff6355
Fix spacing
drroe Mar 31, 2021
d5b0a17
Make function const
drroe Mar 31, 2021
4e307d5
Was accidentally doing the order calculation twice.
drroe Apr 1, 2021
7ab7428
Do not run the PME tests for cuda. Slightly increase the test tolerance
drroe Apr 1, 2021
ea612ea
Try to fix CUDA compile. Need a better way to determine arch flags...
drroe Apr 1, 2021
d1a36c5
Add list of cuda flags.
drroe Apr 1, 2021
35e2503
Try a better way to set up the shader model flags
drroe Apr 1, 2021
c7be209
Add shader model cuda version check
drroe Apr 1, 2021
4793d12
Remove old configure logic
drroe Apr 1, 2021
86560cd
Consolidate direct space energy calc into one function. Add adjust
drroe Apr 2, 2021
f900982
Compare PME GIST output if present. Not uploading it because it's too big
drroe Apr 2, 2021
464d3ec
Merge branch 'master' into addGistPme
drroe Apr 2, 2021
f6e0018
Make GIST PME off the default for now until the output stabilizes.
drroe Apr 2, 2021
8a346cb
Fix help option
drroe Apr 2, 2021
f77ad43
Fix GIST and energy command entries
drroe Apr 2, 2021
f307307
Protect when no LIBPME
drroe Apr 3, 2021
9c1c496
Add tolerance to the info comparisons.
drroe Apr 3, 2021
af62ed4
Break up giant apt-get install command into separate ones to make it …
drroe Apr 3, 2021
a1ca9d0
These are better described as to-do
drroe Apr 3, 2021
fa82060
Try to DL and build our own netcdf
drroe Apr 3, 2021
b082da0
Try to make sure netcdf binaries are in the PATH. Try to fix cmake
drroe Apr 3, 2021
37303b8
Make var point to actual library
drroe Apr 4, 2021
9f64683
Cmake build seems to have problems with the static netcdf compile. Try
drroe Apr 4, 2021
34 changes: 19 additions & 15 deletions .github/workflows/merge-gate.yml
@@ -41,19 +41,13 @@ jobs:
steps:
- name: Install prerequisite packages
run: |
sudo apt-get install gfortran \
libbz2-dev \
libblas-dev \
liblapack-dev \
libnetcdf-dev \
libfftw3-dev \
netcdf-bin \
clang \
openmpi-bin \
openmpi-common \
libopenmpi-dev \
cmake-data \
cmake
sudo apt-get install gfortran
sudo apt-get install libbz2-dev
sudo apt-get install libblas-dev liblapack-dev
sudo apt-get install libfftw3-dev
sudo apt-get install clang
sudo apt-get install openmpi-bin openmpi-common libopenmpi-dev
sudo apt-get install cmake-data cmake

- name: Checkout source code
uses: actions/checkout@v2
@@ -68,6 +62,15 @@ jobs:
mkdir -p include && mv AmberTools/src/sander/sander.h include
mv lib include $HOME

curl -OL ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-4.6.1.tar.gz
tar -zxf netcdf-4.6.1.tar.gz
cd netcdf-4.6.1
./configure --disable-netcdf-4 --disable-dap --disable-doxygen --prefix=$HOME
make -j2
make install
cd ..
export PATH=$HOME/bin:$PATH

if [ $USE_OPENMP = "yes" ]; then
export OPT="openmp"
export OMP_NUM_THREADS=4
@@ -92,13 +95,14 @@
cd build
cmake .. $BUILD_FLAGS -DCOMPILER=${COMPILER^^} -DINSTALL_HEADERS=FALSE \
-DCMAKE_INSTALL_PREFIX=$installdir -DCMAKE_LIBRARY_PATH=$HOME/lib \
-DPRINT_PACKAGING_REPORT=TRUE
-DPRINT_PACKAGING_REPORT=TRUE -DNetCDF_LIBRARIES_C=$HOME/lib/libnetcdf.so \
-DNetCDF_INCLUDES=$HOME/include
make -j2 install
cd ..
export PATH=$installdir/bin:$PATH
else
export LD_LIBRARY_PATH=$HOME/lib:${LD_LIBRARY_PATH}
./configure ${BUILD_FLAGS} ${COMPILER}
./configure --with-netcdf=$HOME ${BUILD_FLAGS} ${COMPILER}
make -j2 install
fi
cd test && make $TEST_TYPE
162 changes: 122 additions & 40 deletions configure
@@ -75,16 +75,21 @@ UsageFull() {
echo " NVCCFLAGS : Flags to pass to the nvcc compiler."
echo " DBGFLAGS : Any additional flags to pass to all compilers."
echo " SHADER_MODEL : (-cuda) Should be set to 'sm_XX', where XX is CUDA compute architecture."
echo " SM6.2 = GP10B"
echo " SM6.1 = GP106 = GTX-1070, GP104 = GTX-1080, GP102 = Titan-X[P]"
echo " SM6.0 = GP100 / P100 = DGX-1"
echo " SM5.3 = GM200 [Grid] = M60, M40?"
echo " SM5.2 = GM200 = GTX-Titan-X, M6000 etc."
echo " SM5.0 = GM204 = GTX980, 970 etc"
echo " SM3.7 = GK210 = K80"
echo " SM3.5 = GK110 = K20[x], K40, GTX780, GTX-Titan, GTX-Titan-Black, GTX-Titan-Z"
echo " SM3.0 = GK104 = K10, GTX680, 690 etc."
echo " SM2.0 = All GF variants = C2050, 2075, M2090, GTX480, GTX580 etc."
echo " sm_86 = GA102, 104, 106, 107"
echo " sm_80 = GA100"
echo " sm_75 = Turing"
echo " sm_72 = GV10B"
echo " sm_70 = GV100"
echo " sm_62 = GP10B"
echo " sm_61 = GP106 = GTX-1070, GP104 = GTX-1080, GP102 = Titan-X[P]"
echo " sm_60 = GP100 / P100 = DGX-1"
echo " sm_53 = GM200 [Grid] = M60, M40?"
echo " sm_52 = GM200 = GTX-Titan-X, M6000 etc."
echo " sm_50 = GM204 = GTX980, 970 etc"
echo " sm_37 = GK210 = K80"
echo " sm_35 = GK110 = K20[x], K40, GTX780, GTX-Titan, GTX-Titan-Black, GTX-Titan-Z"
echo " sm_30 = GK104 = K10, GTX680, 690 etc."
echo " sm_20 = All GF variants = C2050, 2075, M2090, GTX480, GTX580 etc."
echo " EXPERIMENTAL OPTIONS:"
echo " --compile-verbose : Turn on compile details."
echo " -profile : Use Gnu compiler profiling (>= V4.5)*"
@@ -1204,8 +1209,18 @@ SetupLibraries() {
fi
lflag=${LIB_FLAG[$i]}
else
# Lib home specified
linc="-I$lhome/include"
# Lib home specified.
# Determine include directory.
incdir="$lhome/include"
linc="-I$incdir"
if [ ! -d "$incdir" ] ; then
# include dir is not in the usual place, happens with e.g. some CUDA installs.
if [ -d "$lhome/targets/x86_64-linux/include" ] ; then
linc="-I$lhome/targets/x86_64-linux/include"
else
WrnMsg "Include dir $incdir not found. Linking ${LIB_CKEY[$i]} may fail."
fi
fi
# Check if architecture-specific lib dir exists. Use that if so.
lhdir="$lhome/lib"
ladir="$lhome/lib$NBITS"
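The include-directory fallback introduced in this hunk can be exercised on its own; a minimal sketch, with a hypothetical helper name and temporary directories standing in for a real CUDA-style install:

```shell
# Hypothetical standalone mirror of the include-directory probe above.
find_include_dir() {
  lhome=$1
  if [ -d "$lhome/include" ] ; then
    echo "$lhome/include"
  elif [ -d "$lhome/targets/x86_64-linux/include" ] ; then
    # Fallback layout seen in some CUDA installs.
    echo "$lhome/targets/x86_64-linux/include"
  fi
}

# Example: a tree without a top-level include/, so the probe
# falls back to the targets/ layout.
root=$(mktemp -d)
mkdir -p "$root/targets/x86_64-linux/include"
find_include_dir "$root"
```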
@@ -1688,6 +1703,65 @@ SetupMKL() {
fi
fi
}
# ------------------------------------------------------------------------------
# Define different shader models/compute architectures and their CUDA limits
# CUDA version
# 3.0-3.1 ...
# 3.2 .........
# 4.0-4.2 .........
# 5.X ...........................
# 6.0 ...........................
# 6.5 ...................................................
# 7.X ...................................................
# 8.X .....................................................................
# 9.X .....................................................................
# 10.X ...........................................................................
# 11.X ...........................................................................
CUDA_SM_LIST='sm_20 sm_21 sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62 sm_70 sm_72 sm_75 sm_80 sm_86'

# SetSupportedSM <major v> <minor v>
# Set Shader models supported by current cuda version
SetSupportedSM() {
if [ $1 -lt 3 ] ; then
Err "CUDA < 3 not supported."
fi
if [ $1 -eq 3 ] ; then
if [ $2 -ge 2 ] ; then
CUDA_SM_LIST='sm_20 sm_21'
else
CUDA_SM_LIST='sm_20'
fi
elif [ $1 -eq 4 ] ; then
CUDA_SM_LIST='sm_20 sm_21'
elif [ $1 -eq 5 ] ; then
CUDA_SM_LIST='sm_20 sm_21 sm_30 sm_32 sm_35'
elif [ $1 -eq 6 ] ; then
if [ $2 -ge 5 ] ; then
CUDA_SM_LIST='sm_20 sm_21 sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53'
else
CUDA_SM_LIST='sm_20 sm_21 sm_30 sm_32 sm_35'
fi
elif [ $1 -eq 7 ] ; then
CUDA_SM_LIST='sm_20 sm_21 sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53'
elif [ $1 -eq 8 ] ; then
CUDA_SM_LIST='sm_20 sm_21 sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62'
elif [ $1 -eq 9 ] ; then
CUDA_SM_LIST='sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62 sm_70 sm_72'
elif [ $1 -eq 10 ] ; then
CUDA_SM_LIST='sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62 sm_70 sm_72 sm_75'
else # >= 11
CUDA_SM_LIST='sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62 sm_70 sm_72 sm_75 sm_80 sm_86'
fi
echo " Supported shader models: $CUDA_SM_LIST"
}
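The version branching in SetSupportedSM can be spot-checked without running configure; a sketch using a hypothetical helper that echoes the list instead of assigning the `CUDA_SM_LIST` global:

```shell
# Hypothetical mirror of two SetSupportedSM branches (lists copied
# from the function above), echoing rather than setting a global.
sm_list_for_cuda() {
  major=$1
  if [ "$major" -ge 11 ] ; then
    echo 'sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62 sm_70 sm_72 sm_75 sm_80 sm_86'
  elif [ "$major" -eq 10 ] ; then
    echo 'sm_30 sm_32 sm_35 sm_37 sm_50 sm_52 sm_53 sm_60 sm_61 sm_62 sm_70 sm_72 sm_75'
  fi
}

sm_list_for_cuda 11 | wc -w   # 13 supported shader models for CUDA >= 11
```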

# SetCudaArch <sm>
# Set CUDA_ARCH variable with compute_XX value for given SM
SetCudaArch() {
smversion=${1#sm_}
CUDA_ARCH="compute_$smversion"
#echo "$1 $CUDA_ARCH"
}
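The `sm_XX` to `compute_XX` mapping in SetCudaArch relies only on POSIX parameter expansion; a standalone sketch with a hypothetical function name:

```shell
# Hypothetical standalone version of SetCudaArch that echoes the result
# instead of setting CUDA_ARCH.
to_cuda_arch() {
  smversion=${1#sm_}   # strip the leading "sm_" prefix
  echo "compute_$smversion"
}

to_cuda_arch sm_75   # prints compute_75
```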

# ------------------------------------------------------------------------------
# Check that CUDA_HOME is defined and set up flags for nvcc
@@ -1696,44 +1770,52 @@ SetupCUDA() {
Err "CUDA_HOME not set. Set CUDA_HOME to point to your NVIDIA tools installation."
fi
if [ ! -x "$CUDA_HOME/bin/nvcc" ]; then
Err "Error: nvcc cuda compiler not found in $CUDA_HOME/bin"
Err "nvcc cuda compiler not found in $CUDA_HOME/bin"
fi
if [ -z "$NVCC" ]; then NVCC="$CUDA_HOME/bin/nvcc"; fi
cuda_version=`$NVCC --version | grep 'release' | cut -d' ' -f5 | cut -d',' -f1`
cuda_major_version=`echo "$cuda_version" | awk 'BEGIN{FS=".";}{printf("%i", $1);}'`
cuda_minor_version=`echo "$cuda_version" | awk 'BEGIN{FS=".";}{printf("%i", $2);}'`
echo " CUDA version $cuda_version detected."
SM_CONFIG="Configuring for $SHADER_MODEL"
# A zero version indicates version detection failed.
if [ $cuda_major_version -lt 1 ] ; then
Err "CUDA version detection failed."
fi
SetSupportedSM $cuda_major_version $cuda_minor_version

if [ -z "$NVCCFLAGS" -a -z "$SHADER_MODEL" ] ; then
# Compile for multiple shader models
WrnMsg "SHADER_MODEL not set. Compiling for multiple architectures."
WrnMsg "To compile for a specific architecture set SHADER_MODEL"
WrnMsg "to 'sm_XX', where XX is the shader model version."
# NOTE: From AmberTools configure2
#Note at present we do not include SM3.5 or SM3.7 since they sometimes show performance
#regressions over just using SM3.0.
# TODO fix for volta?
sm70flags='-gencode arch=compute_60,code=sm_70'
sm62flags='-gencode arch=compute_62,code=sm_62'
sm61flags='-gencode arch=compute_61,code=sm_61'
sm60flags='-gencode arch=compute_60,code=sm_60'
sm53flags='-gencode arch=compute_53,code=sm_53'
sm52flags='-gencode arch=compute_52,code=sm_52'
sm50flags='-gencode arch=compute_50,code=sm_50'
sm37flags='-gencode arch=compute_37,code=sm_37'
sm35flags='-gencode arch=compute_35,code=sm_35'
sm30flags='-gencode arch=compute_30,code=sm_30'
sm20flags='-gencode arch=compute_20,code=sm_20'
if [ "$cuda_version" = '9.0' -o "$cuda_version" = '9.1' -o "$cuda_version" = '9.2' -o "$cuda_version" = "10.0" -o "$cuda_version" = "10.1" ] ; then
SM_CONFIG="Configuring for SM3.0, SM5.0, SM5.2, SM5.3, SM6.0, SM6.1, and SM7.0"
NVCCFLAGS="$sm30flags $sm50flags $sm52flags $sm53flags $sm60flags $sm61flags $sm70flags"
elif [ "$cuda_version" = '8.0' ] ; then
SM_CONFIG="Configuring for SM2.0, SM3.0, SM5.0, SM5.2, SM5.3, SM6.0 and SM6.1"
NVCCFLAGS="$sm20flags $sm30flags $sm50flags $sm52flags $sm53flags $sm60flags $sm61flags"
else
SM_CONFIG="Configuring for SM2.0, SM3.0, SM5.0, SM5.2 and SM5.3"
echo "BE AWARE: CUDA < 8.0 does not support GTX-1080, Titan-XP, DGX-1 or other Pascal based GPUs."
NVCCFLAGS="$sm20flags $sm30flags $sm50flags $sm52flags $sm53flags"
# TODO determine why Amber has arch=compute_60 for 70 and 75
SM_CONFIG="Configuring for"
NVCCFLAGS="$DBFLAG"
# Loop over supported shader models for this CUDA
for sm in $CUDA_SM_LIST ; do
SetCudaArch $sm
SM_CONFIG="$SM_CONFIG $sm"
NVCCFLAGS="$NVCCFLAGS -gencode arch=$CUDA_ARCH,code=$sm"
done
elif [ -z "$NVCCFLAGS" -a ! -z "$SHADER_MODEL" ] ; then
# Compile for single shader model
SM_CONFIG="Configuring for $SHADER_MODEL"
# See if it is supported.
sm_is_supported=0
for sm in $CUDA_SM_LIST ; do
if [ "$sm" = "$SHADER_MODEL" ] ; then
sm_is_supported=1
break
fi
done
if [ $sm_is_supported -eq 0 ] ; then
Err "Shader model $SHADER_MODEL is not supported by CUDA $cuda_version"
fi
NVCCFLAGS="$DBFLAG -arch=$SHADER_MODEL"
else
# Use specified NVCC flags
SM_CONFIG="Using NVCCFLAGS: $NVCCFLAGS"
fi
if [ -z "$NVCCFLAGS" ]; then NVCCFLAGS="$DBFLAG -arch=$SHADER_MODEL"; fi
if [ ! -z "$picflag" ] ; then
NVCCFLAGS="--compiler-options $picflag $NVCCFLAGS"
fi
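The multi-architecture NVCCFLAGS assembly above reduces to a small loop over the supported shader models; a standalone sketch, with a hypothetical helper name and an illustrative two-entry SM list:

```shell
# Hypothetical mirror of the -gencode flag loop in SetupCUDA.
build_gencode_flags() {
  flags=''
  for sm in "$@" ; do
    arch="compute_${sm#sm_}"
    flags="$flags -gencode arch=$arch,code=$sm"
  done
  echo "${flags# }"   # drop the leading space
}

build_gencode_flags sm_60 sm_70
# prints: -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70
```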