WeeklyTelcon_20180612
Geoffrey Paulsen edited this page Jan 15, 2019
- Dialup Info: (Do not post to public mailing list or public wiki)
- Jeff Squyres
- Geoff Paulsen
- Peter (Cisco)
- Edgar Gabriel
- David Bernholdt
- Josh Hursey
- Joshua Ladd
- Todd Kordenbrock
- Xin Zhao
- Brian
- Howard Pritchard
- Thomas Naughton
- Nathan Hjelm
- Akvenkatesh
- Ralph
- Geoffroy Vallee
- Matthew Dosanjh
- Dan Topa (LANL)
- MPI Forum is this week, so we have a bunch out.
Review All Open Blockers
Review v2.x Milestones v2.1.4
- v2.1.4 - Targeting Oct 15th.
- No compelling reason, but might pull in the date to assist with more testing on v4.0.
- Lower priority than v3.0 and v3.1.
- PR5217 changes OSHMEM logic around MPI_Initialized/Finalized.
Review v3.0.x Milestones v3.0.2
- Schedule:
- Still has not shipped.
- v3.0.2 has been tagged and built; just need 30 minutes to release.
- v3.0.3 - targeting Sept 1st (3 months out)
- No progress
- Do we want the AArch64 stuff in v3.0.3? - Up to Nathan. Sounds good.
- Helps IBM too.
Review v3.1.x Milestones v3.1.0
- No progress yet.
- Issue 5263 - Symbol issue in v3.x and master.
- common doesn't depend on components; components depend on common.
- common might be calling up to components, which would be incorrect, but the linker might be resolving it anyway.
- We don't see this issue on any other platform; if it were a general problem, you'd think we'd see it somewhere else.
- Schedule: mid-July branch, mid-Sept release.
- Still working through iWARP issues; LANL waiting for Chelsio RNICs.
- No further / substantive update since last week (4 day weekend prevented a bunch of work this past weekend).
- favor external vs internal components - hwloc and pmix and libevent.
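Preferring the external copies is a configure-time choice. A minimal sketch, assuming system installs of the three packages (the paths are illustrative placeholders; check `./configure --help` on the branch in question for the exact flag spellings):

```shell
# Prefer system-installed hwloc, libevent, and PMIx over the bundled copies.
# Paths below are placeholders, not recommendations.
./configure \
    --with-hwloc=/usr \
    --with-libevent=/usr \
    --with-pmix=/usr/local
```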
- PMIx v3.0 updates to ORTE
- Xin - OSHMEM PRs going in today for review
- Edgar - Did you want to make something default?
- Ralph merged in some PMIx v3.0
- PR5258 to master -
- PMIx
- An update as well to ORTE code
- Update to PMIX v3.0 component (PMIX branched for v3.0)
- On PRTE side of things, quite a few bugfixes that haven't been implemented in the orte code.
- Not using any PMIx v3.0 features in Open MPI yet, but Ralph is interested in pre-setting endpoints (option)
- Preliminary Debugger connection stuff.
- Not going to touch MPIR.
- New feature coming later this summer - look at network topology, and decide how to run collectives based on that.
- OMPI has an MCA framework for ORTE and a STICKY component. We added a PMIx component, where you could run MPI just on PMIx.
- With the OMPI PMIx RTE - right now it's a static framework, so you either build the PMIx RTE or ORTE; no great reason for that yet, just need to put things behind function pointers.
- Or perhaps this might be a bit moot.
- Overall Runtime Discussion (talking v5.0 timeframe, 2019)
- What is it that we want? It's changed a bit since last Face to Face.
- Getting confused about the Goal - Regardless of who and when, lets discuss what.
- What? Two Options:
- Keep going on our current path, and taking updates to ORTE, etc.
- Shuffle our code a bit (new ompi_rte framework merged with the orte_pmix framework, moved down and renamed)
- Opal used to be single process abstraction, but not as true anymore.
- The API of foo looks pretty much like the PMIx API.
- Still have PMIx v2.0, PMI2 or other components (all retooled for new framework to use PMIx)
- You just call opal_foo.spawn(), etc., and you get whatever component is underneath.
- what about mpirun? Well, PRTE comes in, it's the server side of the PMIx stuff.
- Could use their prun and wrap in a new mpirun wrapper
- PRTE doesn't just replace ORTE. PRTE and OMPI layer don't really interact with each other, they both call the same OPAL layer (which contains PMIx, and other OPAL stuff).
- prun has a lamboot-looking approach.
- Build system changes around opal, etc. Code shuffling, retooling of components.
- We want to leverage the work the PMIx community is doing correctly.
- If we do this, we still need people to do runtime work over in PRTE.
- In some ways it might be harder to get resources from management for yet another project.
- Nice to have a componentized interface, without moving runtime to a 3rd party project.
- Need to think about it.
- Concerns with the work of adding ORTE PMIx integration.
- Want to know the state of SLURM PMIx Plugin with PMIx v3.x
- It should build, and work with v3. They only implemented about 5 interfaces, and they haven't changed.
- A few are related to the OMPI-X project, talking about how much to contribute to this effort.
- How to factor in requirements of OSHMEM (who use our runtime), and they are already doing things to adapt.
- Would be nice to support both groups with a straightforward component to handle both of these.
- Thinking about how much effort this will be, and how to manage these tasks in a timely manner.
- Testing, will need to discuss how to best test all of this.
- ACTION: Lets go off and reflect and discuss at next week's Web-Ex.
- We aren't going to do this before v4.0 branches in mid-July.
- Need to be thinking about the Schedule, action items, and owners.
Review Master Pull Requests
- Decided to file PR5200 to begin the long process of deleting osc/pt2pt (by enabling all relevant RDMA BTLs so that every transport will use osc/rdma).
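As background for the osc/pt2pt deletion: which one-sided component gets used can already be steered with MCA parameters. A hedged sketch (component availability varies by build, and `./osc_test` is a placeholder program; verify what is present with `ompi_info`):

```shell
# Ask for the rdma one-sided component over the self/vader BTLs.
# Check which osc components are actually built with: ompi_info | grep osc
mpirun --mca osc rdma --mca btl self,vader -np 2 ./osc_test
```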
- Anything Jeff can help with Absoft and NAG licenses?
- waiting.
Review Master MTT testing
- Hope to have better Cisco MTT in a week or two
- Peter is going through, and he found a few failures, which some have been posted.
- One-sided - Nathan's looking at them.
- some more coming.
- osc_pt2pt will exclude itself in a multi-threaded run.
- One of Cisco's MTT runs sets an env var to turn all MPI_Init calls into MPI_Init_thread (even though it's a single-threaded run).
- Now that osc_pt2pt is ineligible, many tests fail.
- On master, this will fix itself 'soon'.
- BLOCKER for v4.0: need this work so we'll have vader and a replacement for osc_pt2pt.
- Probably an issue on v3.x also.
- OSHMEM v1.4 - cleanup work and refactoring.
- Edgar has some issues running on Omni-Path - not able to open the HFI correctly.
- Not sure if it's the OFI components.
- Mathias just updated his PR5004 and asked Jeff to review.
- libfabric related, but probably not Edgar's issue.
- Might be missing coverage here. Results from LLNL and Cray stuff; not sure what these are.
- master is failing to build on aarch64 and Cray, with AlltoAllw INTER something.
- Might be an MPI-1 casualty. Several MPI-1 cleanup PRs.
- Gilles caught some stuff
- Leave on here for one more week.
Next Face to Face?
- When? Late summer, early fall?
- Where? San Jose - Cisco, Albuquerque - Sandia
- Super computing is in Dallas this year in Nov.
- Mellanox, Sandia, Intel
- LANL, Houston, IBM, Fujitsu
- Amazon,
- Cisco, ORNL, UTK, NVIDIA