WeeklyTelcon_20170321
- Dialup Info: (Do not post to public mailing list or public wiki)
- Geoff Paulsen
- Jeff Squyres
- Artem Polyakov
- Brian Barrett
- Geoffroy Vallee (ORNL)
- David Bernholdt (ORNL)
- Howard
- Josh Hursey
- Joshua Ladd
- Ralph
- Todd Kordenbrock
Review All Open Blockers
- No news is good news.
Review Milestones v2.1.0
- Let's release 2.1.0 today. It shall be done.
- PMIx will also release 1.2.2 today
- We'll wait a week or so before we apply 2.1.1 fixes to that branch.
- We branched for v3.x, so don't forget to PR over to v3.x when PRing over to v2.x.
- Everything is off the whitelist, except PMIx.
- PMIx is the reason we're doing an accelerated v3.0.
- Discussed PMIx PR 3194 - got pushed to v3.1
- Whitelist Issue 3107
- UCX got in.
- Ralph working on job control monitoring RFC.
- Just finishing integration of this.
- The only other major piece is the messaging compatibility piece.
- Still on track.
- No status this week.
Review Master Pull Requests
Review Master MTT testing
- Looking pretty good.
- NVIDIA cluster seems to have something wrong; most unusual errors.
- Thoughts about removing Travis?
- Can turn off the Mac parts of Travis, but nervous to turn off the rest of Travis until after AWS is online.
- Should do a comparison of coverage of Travis and what others are testing.
- Howard will remove Mac OS testing from Travis... we'll keep the rest in Travis for now.
- We should begin thinking about scheduling our next face-to-face.
- Geoff will put out a Doodle for June and July and begin to nail down a schedule.
- Cisco has a site in Chicago.
- Prefer not the last week of July.
- Dallas, San Jose, Seattle
- Cisco, IBM, ORNL, UTK, NVIDIA, Amazon
- IBM - Spectrum MPI based on v2.0.2 is in the field and working well.
- ORNL set up MTT to do some testing.
- Amazon - build scripts for PMIx / hwloc - some reports of scripts not cleaning up correctly after themselves.
- Want to start using S3 instead of gatorhost for storing nightly tarballs.
- Also, eventually all tarballs will move out of the ompi repo, so our main repo will be small again.
- Release work - takes time.
- Trying to hire staff.
- Mellanox, Sandia, Intel
- LANL, Houston, IBM, Fujitsu