
problems in mpi initialization #6

Open

simonecamarri opened this issue Jun 14, 2015 · 1 comment

Comments

@simonecamarri

Dear Mikael,

I have the following problem in running a first simulation with Oasis:

simone@sc:Oasis$ python NSfracStep.py problem=DrivenCavity solver=IPCS_ABCN
*** The MPI_Allreduce() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[sc.local:42195] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!

It seems this is a problem in MPI initialization, probably intrinsic to FEniCS. I have installed pyMPI, but the problem persists. Do you have any suggestions?

Best regards,

Simone

@mikaem
Owner

mikaem commented Jun 16, 2015

Hi Simone,

I have seen that error message before, on the SciNet supercomputer at the University of Toronto. If I remember correctly it is a dolfin bug of some sort, and it was "fixed" by creating a mesh as the very first thing after doing from dolfin import *. See the top section of problems/__init__.py and uncomment line number 12. If you are using more than 400 CPUs, then increase the size of the mesh.

Mikael
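
A minimal sketch of the workaround described above, assuming the 2015-era dolfin Python API; the mesh type and resolution here are illustrative choices, not taken from Oasis itself:

    from dolfin import *

    # Constructing any mesh forces dolfin to set up MPI before other MPI calls
    # (e.g. MPI_Allreduce) are issued, avoiding the "called before MPI_INIT" abort.
    mesh = UnitSquareMesh(10, 10)  # use a larger mesh when running on many CPUs (> 400)

    # ... the rest of the problem definition follows as usual ...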
