Conference call notes 20190109
Notes on the 117th EasyBuild conference call, Wednesday Jan 9th 2019 (17:00 - 18:00 CET)
Alphabetical list of attendees (7):
- Damian Alvarez (JSC, Germany)
- Mikael Öhman (Chalmers University of Technology, Sweden)
- Victor Holanda (CSCS, Switzerland)
- Kenneth Hoste (HPC-UGent, Belgium)
- Bart Oldeman (Compute Canada)
- Åke Sandgren (Umeå University, Sweden)
- Davide Vanzo (Vanderbilt University, US)
Agenda:
- updates on upcoming EasyBuild v3.8.1
- 2019a update of common toolchains
- update on porting EasyBuild to Python 3
- Q&A

Updates on upcoming EasyBuild v3.8.1:
- ETA: by EUM19 (EasyBuild User Meeting 2019)
- framework
  - https://github.com/easybuilders/easybuild-framework/milestone/62
  - highlights:
    - only two minor bug fixes merged
  - TODO:
    - no huge PRs pending for 3.8.1
    - 'eb --new' targeted for 3.9.0
- easyblocks
  - https://github.com/easybuilders/easybuild-easyblocks/milestone/53
  - highlights:
    - two minor PRs merged
  - TODO:
    - nothing major lined up, whatever is ready can go in for the 3.8.1 release
- easyconfigs
  - https://github.com/easybuilders/easybuild-easyconfigs/milestone/56
  - highlights:
    - various software updates/additions, minor software-specific bug fixes
  - TODO:
    - 2019a toolchains are considered a blocker for EasyBuild 3.8.1
2019a update of common toolchains:
- foss/2019a: https://github.com/easybuilders/easybuild-easyconfigs/pull/7371
  - OpenMPI 4.0.0 drops old MPI APIs, so not a good idea to use that; use OpenMPI 3.1.3 instead
    - OpenMPI 4.0.0 breaks ScaLAPACK 2.0.2 for example (Åke has a fix)
    - see the OpenMPI release notes
  - Victor: what about fosscuda? (CUDA is not compatible with GCC 8.2 yet)
    - Davide/Åke: no updates for now beyond fosscuda/2018a
- intel/2019a: https://github.com/easybuilders/easybuild-easyconfigs/pull/7372
  - Damian: Intel MPI 2019 update 1 is problematic with "big" jobs (>1.5k cores, i.e. 24*64)
    - main problem is with collectives (libfabric mostly deals with P2P connections)
    - can be triggered with the Intel MPI benchmarks on GitHub
    - Åke: which OFED stack + IB cards?
      - OFED 10 + Mellanox; also seen on a different system
    - Bart: there's a new libfabric release that may fix the problem
    - Damian: Intel MPI has official support for using a separate libfabric as a dependency (and it's documented); see the sketch below
  - this seems like a blocker for using impi 2019.01 in intel/2019a
    - go forward with impi 2018 update 4 instead (used in production at JSC with the 2019 Intel compilers/MKL)
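As a concrete illustration of the external-libfabric option mentioned above, here is a minimal, hypothetical easyconfig sketch. The version, toolchain and the use of modextravars are assumptions for illustration only (I_MPI_OFI_LIBRARY_INTERNAL=0 is the knob Intel documents for switching Intel MPI 2019 to an external libfabric); this is not the approach that was agreed on during the call.

```python
# Hypothetical sketch only: versions, toolchain and the modextravars approach are
# illustrative assumptions, not the solution decided on during this call.
name = 'impi'
version = '2019.1.144'

homepage = 'https://software.intel.com/en-us/mpi-library'
description = "Intel MPI Library"

toolchain = {'name': 'iccifort', 'version': '2019.1.144-GCC-8.2.0-2.31.1'}

# a newer libfabric release may fix the collectives problem discussed above
dependencies = [
    ('libfabric', '1.7.0'),  # hypothetical version
]

# Intel MPI 2019 documents I_MPI_OFI_LIBRARY_INTERNAL=0 for using an external
# libfabric instead of the one bundled with Intel MPI
modextravars = {
    'I_MPI_OFI_LIBRARY_INTERNAL': '0',
}
```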
Update on porting EasyBuild to Python 3:
- work is happening on the 4.x branch
- 1st step: ingest vsc-base [DONE]: https://github.com/easybuilders/easybuild-framework/pull/2708
- 2nd step: make the easybuild.base Python packages importable with Python 3: WIP @ https://github.com/easybuilders/easybuild-framework/compare/4.x...boegel:py3
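A trivial smoke test along those lines, shown purely as a sketch; it assumes a checkout of the WIP py3 branch is on the Python 3 interpreter's path.

```python
# Minimal sketch: check that the easybuild.base packages can at least be imported
# under Python 3 (assumes a checkout of the py3 WIP branch is on sys.path).
import importlib
import sys

print("running under Python %s" % sys.version.split()[0])

for pkg in ('easybuild', 'easybuild.base'):
    importlib.import_module(pkg)
    print("imported %s OK" % pkg)
```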
Q&A:
- the pigz PR (https://github.com/easybuilders/easybuild-easyconfigs/pull/7346) brought up a previously reported problem
  - lib64 system paths are considered before paths in $LIBRARY_PATH
    - cfr. https://github.com/easybuilders/easybuild-easyconfigs/issues/5776
    - unclear what the best solution for this is
    - a compiler wrapper that adds -L is the least bad option?
  - for pigz, this can be fixed by adding -L$EBROOTZLIB/lib to $LDFLAGS via buildopts (see the sketch below)
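A rough sketch of what that workaround could look like in the pigz easyconfig; the version, toolchain and zlib version are illustrative assumptions, and the relevant part is the buildopts line.

```python
# Illustrative fragment only: the version/toolchain/dependency values are assumptions;
# the relevant part is the buildopts line.
name = 'pigz'
version = '2.4'

homepage = 'https://zlib.net/pigz/'
description = "pigz is a parallel implementation of gzip"

toolchain = {'name': 'foss', 'version': '2018b'}

dependencies = [
    ('zlib', '1.2.11'),
]

# prepend -L$EBROOTZLIB/lib to $LDFLAGS so the linker prefers the zlib dependency
# over the copy in the system lib64 path (cfr. the discussion above)
buildopts = 'LDFLAGS="-L$EBROOTZLIB/lib $LDFLAGS"'
```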
- Mikael: update/thoughts on bringing Python down to the GCCcore level?
  - cfr. post on the EasyBuild mailing list
  - the Python interpreter should only be built once; it doesn't make sense to build it once with intel and once with foss
  - performance benchmarking of an Intel vs a GCC build of Python was done by Damian
    - significant difference for the 'exp' function, but only for scalars (cfr. https://github.com/numpy/numpy/issues/8823); a rough micro-benchmark sketch is included at the end of these notes
  - Bart has also experimented with this
    - numpy performance is hurt by libm already being in memory for libpython
      - forcing it to use libimf requires static linking or preloading libimf
    - there has been some serious tuning in recent glibc versions, making it more competitive with Intel's libimf
  - not a clear win for Intel vs GCC, and unclear how/if it actually affects real production code in practice
  - JSC is already using a "core" Python vs. a SciPy module (to build numpy with imkl)
  - Mikael's approach: preserve the user experience by defining a separate 'PythonCore' lib
  - prime issue is w.r.t. graphics libs like Qt5 at the GCCcore level (which needs Mesa, which needs Python for its bindings)
  - we should consider lowering Python to GCCcore as a community policy
    - maybe too tight (time-wise) to go through with this for the 2019a common toolchains?
    - we should try to find something that most people are happy with...
  - at JSC: shadowing of the Python interpreter when the SciPy module is loaded was done before
    - the approach now is different: SciPy is now down at the GCCcore level too (just like imkl)
  - Mikael has seen ABI compatibility problems with an approach like this
  - Mikael has a relevant PR to fix a linking issue in Python
    - cfr. framework PR
  - follow-up during the next conf call; Mikael can look into alternatives and present them
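Referenced above: a rough sketch of the kind of scalar-vs-vector exp() micro-benchmark behind the observation about the 'exp' function. The sizes and repeat counts are arbitrary choices for illustration, not the benchmark that was actually run.

```python
# Rough illustration only: compares many scalar np.exp() calls against a few vectorized
# calls, which is where the Intel vs GCC build difference was reported to show up
# (cfr. https://github.com/numpy/numpy/issues/8823).
import timeit

import numpy as np

# 1 million scalar exp() calls
scalar_time = timeit.timeit('np.exp(1.2345)', globals={'np': np}, number=1_000_000)

# 100 vectorized exp() calls over 1 million elements each
values = np.random.rand(1_000_000)
vector_time = timeit.timeit('np.exp(values)', globals={'np': np, 'values': values}, number=100)

print("scalar np.exp, 1M calls:        %.3f s" % scalar_time)
print("vector np.exp, 100 x 1M elems:  %.3f s" % vector_time)
```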