WeeklyTelcon_20190709


Open MPI Weekly Telecon


  • Dialup Info: (Do not post to public mailing list or public wiki)

Attendees (on WebEx)

  • Akshay Venkatesh (nVidia)
  • Artem Polyakov (Mellanox)
  • Brian Barrett (Amazon)
  • Edgar Gabriel (UH)
  • Geoff Paulsen (IBM)
  • Howard Pritchard (LANL)
  • Jeff Squyres (Cisco)
  • Josh Hursey (IBM)
  • Mark Allen (IBM)
  • Matthew Dosanjh (Sandia)
  • Ralph Castain (Intel)
  • Thomas Naughton
  • Todd Kordenbrock

Not there today (I keep this list for easy cut-n-paste into future notes)

  • Aravind Gopalakrishnan (Intel)
  • Arm (UTK)
  • Brandon Yates (Intel)
  • Brendan Cunningham (Intel)
  • Dan Topa (LANL)
  • David Bernhold
  • Geoffroy Vallee
  • George Bosilca (UTK)
  • Jake Hemstad
  • Joshua Ladd (Mellanox)
  • Matias Cabral
  • Michael Heinz (Intel)
  • Nathan Hjelm
  • Noah Evans (Sandia)
  • Peter Gottesman (Cisco)
  • Xin Zhao (Mellanox)
  • mohan

Agenda/New Business

  • Hwloc PRs discussion

    • PR 6755 "restoring more hwloc --cpu-set behavior from OMPI 3.x"

      • This section of code handles binding within a subset of the machine, similar to running in a cgroup. The PR restores some code from the OMPI v3.x branch to resolve it.
      • Right to fix this, but restoring the old code is not the way to do it; that code was removed for a reason.
      • Just need to properly handle the cpuset (see the cpuset sketch after this agenda list).
      • Most of the code makes sense, but get_available_cpuset() was removed deliberately.
      • The changes are spread very widely; could shrink the PR to just restore the --cpu-set behavior.
      • Don't just cut-and-paste old code; implement the cpuset behavior directly.
    • Issue 6768 "Hwloc symbol conflicts"

      • Hwloc has a nice symbol-prefixing option, but it is only appropriate when using the hwloc embedded in Open MPI (see the renaming sketch after this agenda list).
      • Proposal is to build hwloc with a symbol prefix.
      • Problem is that libhcoll statically includes hwloc symbols.
        • It's the static inclusion that's the issue.
        • Mixing hwloc 1.x and 2.x dynamically can also be an issue (they broke ABI between those series).
      • How do we draw a line between hwloc and other libs that could break ABI?
      • hwloc symbol prefixing does give us some flexibility we don't have with other libs.
      • hwloc's shared-memory topology data may not be prefixed, and might break this.
      • Right now we collect hwloc data at the PMIx layer and expose it via the dstore.
        • No: PMIx only passes the shared-memory connection points; hwloc holds the shared memory itself.
      • These hwloc trees are not small.
      • Also worried about a user using a prefixed hwloc that differs from the one PMIx uses; not sure.
      • If you ship your own copy of PMIx, built externally (from OMPI's point of view), and someone else links against another version of libpmix, those two libraries are separate, and that will mess up the PMIx reference counting.
      • This may just be
    • PR 6760 "--report-bindings on whole machine"

      • Topologies right now are collected without the whole-machine option.
      • Pro: this is generally a good thing, because the topology is much smaller, etc.
      • Con: without whole-machine, --report-bindings output is fairly unreadable.
      • This PR has rank 0 use the whole-machine topology to generate more readable output (see the gather sketch after this agenda list).
        • Traditionally --report-bindings works with non-MPI programs; this change means it only produces output when the app calls MPI_Init.
        • Used hooks to gather the bindings to rank 0 when '--report-bindings' is given.
        • Issue is that we have to consider heterogeneous systems, and need to detect them.
        • The first iteration of the PR wasn't aware of that, but it has been updated so that each host does that evaluation itself.
      • The printing part is correct.
      • But what this is revealing is an underlying issue that needs to be fixed.
      • If we don't explicitly configure with --enable-heterogeneous, then signatures aren't sufficient: signatures only report counts of resources, not their placement.
      • Discussion about mapping, but this PR is just about printing the result of mapping.
        • It is just exposing this lower-level issue.
      • Mark has tested using different cgroups and hasn't seen this issue.
      • Have to take this offline due to time.
  • Status of Scale testing

    • Issue 6786 "OMPI 4.0.1 TCP connection errors beyond 86 nodes"
    • Issue 6198 "SSH launch fails when host file has more than 64 hosts"
    • IBM is also working on something similar (for ssh launch).
      • Prefer to run this every night, instead of on each PR.
  • Issue 6799 "UVM buffers failing in cuIpcGetMemHandle?" (see the reproducer sketch below)
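
For the PR 6755 discussion above, a cpuset sketch: the intended --cpu-set behavior is to bind within the intersection of the user-requested CPUs and the CPUs the process is actually allowed to use (as a cgroup would constrain them). This is a minimal illustrative hwloc sketch, not the actual OMPI code; the "--cpu-set 0-3" request is a made-up example.

```c
/* Illustrative sketch (not the OMPI code): bind within the
 * intersection of a requested cpuset and the allowed cpuset. */
#include <hwloc.h>
#include <stdio.h>

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* CPUs the OS actually lets this process use (e.g., cgroup limits). */
    hwloc_const_bitmap_t allowed = hwloc_topology_get_allowed_cpuset(topo);

    /* Hypothetical user request, e.g. from "--cpu-set 0-3". */
    hwloc_bitmap_t requested = hwloc_bitmap_alloc();
    hwloc_bitmap_set_range(requested, 0, 3);

    /* Bind only within the allowed portion of the request. */
    hwloc_bitmap_and(requested, requested, allowed);
    if (hwloc_bitmap_iszero(requested)) {
        fprintf(stderr, "requested CPUs are not available\n");
    } else {
        hwloc_set_cpubind(topo, requested, HWLOC_CPUBIND_PROCESS);
    }

    hwloc_bitmap_free(requested);
    hwloc_topology_destroy(topo);
    return 0;
}
```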
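For the issue 6768 discussion, a renaming sketch: hwloc's symbol prefixing renames every public symbol at compile time so an embedded copy cannot clash with a system libhwloc. The sketch below is modeled on the idea behind hwloc's rename.h; the MY_* macro names and the opal_ prefix are made-up examples, and the exact macros in any given hwloc release may differ.

```c
/* Rough sketch of compile-time symbol prefixing, modeled on hwloc's
 * rename.h (exact macros in a given release may differ). Every public
 * name is #define'd to a prefixed one, so the embedded copy exports
 * e.g. opal_hwloc_topology_init instead of hwloc_topology_init. */
#include <stdio.h>

#define MY_SYM_PREFIX opal_             /* example prefix, not prescriptive */
#define MY_MUNGE2(a, b) a ## b
#define MY_MUNGE(a, b)  MY_MUNGE2(a, b)

/* Rename one public symbol; rename.h does this for every entry point. */
#define hwloc_topology_init MY_MUNGE(MY_SYM_PREFIX, hwloc_topology_init)

/* This stub definition actually creates opal_hwloc_topology_init. */
int hwloc_topology_init(void) { return 0; }

int main(void)
{
    /* Callers keep writing the unprefixed name; the linker only
     * ever sees the prefixed symbol. */
    printf("init returned %d\n", hwloc_topology_init());
    return 0;
}
```

Note that this only helps when every component in the process uses the prefixed copy; a library such as libhcoll that statically embeds unprefixed hwloc symbols still exports the clashing names, which is exactly the problem raised above.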
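For the PR 6760 discussion, a gather sketch of the approach: each rank renders its own binding from its locally loaded topology (which is what handles heterogeneous hosts), and rank 0 gathers and prints them. Illustrative only; the real PR uses OMPI-internal hooks at MPI_Init time rather than user-level MPI calls, and BINDSTR_LEN is an arbitrary buffer size chosen here.

```c
/* Illustrative sketch of PR 6760's approach: each rank renders its own
 * binding from its local topology, and rank 0 gathers and prints them. */
#include <hwloc.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define BINDSTR_LEN 256   /* arbitrary fixed-size string buffer */

int main(int argc, char **argv)
{
    int rank, size;
    char mine[BINDSTR_LEN] = "unbound";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each host evaluates its own topology (handles heterogeneity). */
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    hwloc_bitmap_t set = hwloc_bitmap_alloc();
    if (0 == hwloc_get_cpubind(topo, set, HWLOC_CPUBIND_PROCESS)) {
        hwloc_bitmap_snprintf(mine, sizeof(mine), set);
    }

    /* Gather every rank's binding string to rank 0 for printing. */
    char *all = NULL;
    if (0 == rank) {
        all = malloc((size_t)size * BINDSTR_LEN);
    }
    MPI_Gather(mine, BINDSTR_LEN, MPI_CHAR,
               all, BINDSTR_LEN, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (0 == rank) {
        for (int i = 0; i < size; i++) {
            printf("rank %d bound to cpuset %s\n",
                   i, all + (size_t)i * BINDSTR_LEN);
        }
        free(all);
    }

    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topo);
    MPI_Finalize();
    return 0;
}
```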
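For issue 6799, a reproducer sketch: CUDA IPC is not supported on managed (UVM) allocations, so cuIpcGetMemHandle fails when handed a buffer from cudaMallocManaged, which is the class of failure the issue title describes. This is a hypothetical minimal reproducer with error checking elided; the exact error code returned is up to the driver.

```c
/* Hypothetical reproducer for the issue 6799 failure mode:
 * cuIpcGetMemHandle works for a plain device allocation but fails
 * for managed (UVM) memory, which CUDA IPC does not support. */
#include <cuda.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    void *dev = NULL, *uvm = NULL;
    CUipcMemHandle handle;

    cudaMalloc(&dev, 1 << 20);                             /* device memory */
    cudaMallocManaged(&uvm, 1 << 20, cudaMemAttachGlobal); /* UVM memory */

    CUresult ok  = cuIpcGetMemHandle(&handle, (CUdeviceptr)dev);
    CUresult bad = cuIpcGetMemHandle(&handle, (CUdeviceptr)uvm);

    /* Expect CUDA_SUCCESS (0) for the device buffer and a non-zero
     * error for the managed buffer. */
    printf("device buffer: %d, managed buffer: %d\n", (int)ok, (int)bad);

    cudaFree(dev);
    cudaFree(uvm);
    return 0;
}
```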


Infrastructure

Transition website and email to AWS

  • Complete

Process enforcement bots

  • No update

Submodule prototype

  • Suggest doing just hwloc first (it is stable, without too much active development)
  • No update

Release Branches

Review v3.0.x Milestones v3.0.4

Review v3.1.x Milestones v3.1.4

  • PRs for the PMIx RC updates were merged for testing
    • Potential Mellanox CI issue that Mellanox is looking at.
      • Fixed

Review v4.0.x Milestones v4.0.2

  • New Issue 6785
    • UCX is not in all distros yet, so this is a blocker.
  • Yes, still an issue.
  • 2nd: put issue 6568 (Vader deadlocking with 4 MB transfers)
  • New Datatype work https://github.com/open-mpi/ompi/pull/6695 (master)
    • Want for v4.0.2
    • Now approved for master.
  • https://github.com/open-mpi/ompi/issues/6568 - put protocol has lost its pipelining.
    • Right now it only shows up in vader, because all the others prefer the get protocol.
    • Vader generates a bunch of 32 KB fragments, so a 4 MB transfer (4 MB / 32 KB = 128 fragments) overwhelms vader.
    • Does NOT occur with single-copy mechanisms like CMA or KNEM.
  • Issue 6789 - OMPI crashes when configured with ucx version ...
    • Issue with the UCX PML conflicting with btl_uct over memory hooks

Review Master Pull Requests

  • PRs 6556 and 6621 should go to the release branches.
    • no update
  • Good reminder that we now need to be careful about OPAL's ABI.

v5.0.0

  • When do we get rid of 32-bit support?
  • Still don't have a release manager.
    • Need to identify someone in the next few months.

Dependencies

PMIx Update

  • PMIx v3.1.3 is ready to release.
    • Put a tarball in OMPI's v4.0.x for integration testing.
    • So far looking good.
    • One issue on Mellanox CI; probably the cluster or the test config.
  • PMIx v2.2 update could be ready soon after that.
    • Doesn't have the MPIR fix.
    • Missing something else; Ralph will audit.

ORTE/PRRTE

  • Take a look at Gilles's PRRTE work. He may have done some of that already. It should all be done in the PRRTE layer, so maybe just some MPI-layer work remains.

Next face to face

  • Need people to react and do things.
  • Fall face-to-face is canceled due to lack of agenda items
    • PRTE transition still requires dedicated discussion
  • Might meet in New Mexico, at the University of Tennessee, or in Dallas (IBM)
    • Should make a meeting prep page
    • Jeff will make a Doodle poll.
    • Two days

MTT

  • IBM has some new failures.
    • Geoff will get some time to look at them this week.
  • AWS - not sure of the status of their scale testing.

Back to 2019 WeeklyTelcon-2019
