Fortran + MPI application - Caliper does not aggregate output #584

Open

kawechel opened this issue Aug 9, 2024 · 2 comments

kawechel commented Aug 9, 2024

Summary of the problem:
A minimal Fortran + MPI code fails at runtime with the Caliper error "MPI is already finalized. Cannot aggregate output".

How to reproduce:
I have modified the example Fortran code that ships with Caliper (v2.12.0-dev under examples/apps/fortran-example.f) to be a minimal MPI code. I build the code with gfortran 11.4.0 and OpenMPI 4.1.2:

program fortran_example
    use caliper_mod
    use iso_c_binding, ONLY : C_INT64_T
    use mpi

    implicit none

    type(ConfigManager)   :: mgr

    integer               :: i, count, argc, ierr
    integer(C_INT64_T)    :: loop_attribute, iter_attribute

    logical               :: ret
    character(len=:), allocatable :: errmsg
    character(len=256)    :: arg

    call mpi_init(ierr) ! MPI INIT

    ! (Optional) create a ConfigManager object to control profiling.
    ! Users can provide a configuration string (e.g., 'runtime-report')
    ! on the command line.
    mgr = ConfigManager_new()
    call mgr%set_default_parameter('aggregate_across_ranks', 'false')

!!!!! ... OMITTED FOR BREVITY

    ! End 'main'
    call cali_end_region('main')

    ! Compute and flush output for the ConfigManager profiles.
    call mgr%flush
    call ConfigManager_delete(mgr)

    call mpi_finalize(ierr) ! MPI_FINALIZE

end program fortran_example
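
(A build invocation roughly along these lines should work with the toolchain above; the install prefix and Fortran module path below are placeholders for wherever Caliper and its caliper_mod module files are installed, not details taken from this report:)

mpifort test_caliper.f90 -o test_caliper \
    -I${CALIPER_DIR}/include/caliper/fortran \
    -L${CALIPER_DIR}/lib -lcaliper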

I set both CALI_CONFIG and CALI_SERVICES_ENABLE (though I realise CALI_CONFIG takes precedence and CALI_SERVICES_ENABLE is not strictly needed):

> echo $CALI_CONFIG
runtime-report,mpi-report
> echo $CALI_SERVICES_ENABLE
aggregate,event,mpi,mpireport,timer,mpiflush

Running the test code fails as follows:

> ./test_caliper
== CALIPER: (0): default: mpireport: MPI is already finalized. Cannot aggregate output.
== CALIPER: (0): runtime-report: mpireport: MPI is already finalized. Cannot aggregate output.
== CALIPER: (0): mpi-report: mpireport: MPI is already finalized. Cannot aggregate output.

However, if I unset the CALI_XXX environment variables and set the services programmatically, everything works as expected. That is, I modify the code to add the configurations via add() and remove the section that reads the command-line arguments:

..
    mgr = ConfigManager_new()
    call mgr%set_default_parameter('aggregate_across_ranks', 'false')
    call mgr%add('runtime-report')
    call mgr%add('mpi-report')

    ! Start configured profiling channels
    call mgr%start
...

Followed by unsetting the environment variables:

unset CALI_CONFIG
unset CALI_SERVICES_ENABLE

When I run the code, the behaviour is correct:

> ./test_caliper
Path       Time (E) Time (I) Time % (E) Time % (I)
main       0.000067 0.000105   0.333531   0.524120
  init     0.000008 0.000008   0.042344   0.042344
  mainloop 0.000030 0.000030   0.148244   0.148244
Function     Count (min) Count (max) Time (min) Time (max) Time (avg) Time %
                       1           1   0.001643   0.001643   0.001643 96.864394
MPI_Comm_dup           1           1   0.000053   0.000053   0.000053  3.135606

Can you please shed some light on this? For Caliper to be a long-term solution for us, we need to be able to define profiling reports at runtime (ideally via environment variables); at the moment we are restricted to choosing them at compile time because everything has to be defined in the code. Hopefully you are able to reproduce the issue with the example above and suggest a fix.

@daboehme
Member

Hi @kawechel, thanks for the report. The fundamental issue here is that Caliper's MPI interception mechanism only supports the C MPI API, not the Fortran MPI API. That's why it doesn't intercept MPI_Finalize() and trigger a flush there as it should. The programmatic configuration works because in that case the program calls the flush explicitly.

You can add call cali_flush(0) before the MPI finalize call, which triggers a flush for Caliper's "default" channel, i.e. whatever is set through the CALI_SERVICES_ENABLE environment variable. I'm planning to update that function to also flush configurations set with the CALI_CONFIG variable, which should solve the flush problem. However, due to the lack of Fortran MPI support, functionality like mpi-report that depends on MPI interception will still be limited.
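
For reference, a minimal sketch of where that call would sit in the example above (assuming the rest of the code is unchanged):

    ! Flush the ConfigManager channels as before
    call mgr%flush
    call ConfigManager_delete(mgr)

    ! Flush Caliper's environment-variable-driven channels while MPI is
    ! still initialized, since MPI_Finalize() is not intercepted from Fortran
    call cali_flush(0)

    call mpi_finalize(ierr) ! MPI_FINALIZE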

@kawechel
Author

Thanks for the advice, that all makes sense. I managed to get things to work using command-line flags, which is good enough for our purposes. But I can confirm that inserting call cali_flush(0) just before MPI_Finalize fixes the environment-variable approach as well.
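
(With the argument-parsing section from the stock example left in place, passing the configuration on the command line looks something like:)

./test_caliper runtime-report,mpi-report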
