OpenFOAM MPI in the Docker/Singularity container. #542
Sure Bryce, the issue we are facing is mainly with the momentum solver, which uses OpenFOAM.
These two issues are currently the major hurdles for running WindNinja momentum simulations. These simulations are also a high priority for future work, so I am currently looking into what the root cause could be. On the code side, the current master branch is a working implementation for diagnosing the issue. I have a set of self-contained datasets ready to work with, to check whether the container works properly with the multithreading-enabled OpenFOAM installation.
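As a concrete illustration of that check, here is a minimal sketch of a parallel OpenFOAM run driven entirely from inside the container; the image name `windninja.sif`, the case path `/data/case`, and the use of `simpleFoam` as a stand-in solver are assumptions for illustration, not taken from the repository:

```bash
# Hypothetical image, case path, and solver; substitute the actual container,
# dataset, and whatever solver WindNinja invokes for its momentum runs.

# 1. Confirm an MPI launcher is present inside the container.
apptainer exec windninja.sif mpirun --version

# 2. Run a standard OpenFOAM parallel workflow entirely inside the container:
#    decompose the case, then launch the solver under mpirun with -parallel.
#    Depending on how the image was built, the OpenFOAM environment may first
#    need to be sourced (e.g. the bashrc shipped with the installation).
apptainer exec windninja.sif bash -c '
  cd /data/case &&
  decomposePar &&
  mpirun -np 4 simpleFoam -parallel
'
```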
Meeting notes: 12/5/24
Testing required before the next meeting, limited to problem 1:
Once these are accomplished, we are ready to test: can WindNinja start a run that uses OpenFOAM MPI without connecting to the host? This is the topic of #497.
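For context on what "without connecting to the host" means here, the two launch models differ in where mpirun itself runs. A sketch, again with hypothetical names and `simpleFoam` standing in for whatever solver WindNinja actually invokes, run from inside a decomposed case directory:

```bash
# Hybrid launch: the HOST MPI starts the ranks and each rank execs into the
# container. This requires a compatible MPI installation on the host, which
# is the dependency #497 asks about removing.
mpirun -np 4 apptainer exec windninja.sif simpleFoam -parallel

# Self-contained launch: mpirun runs INSIDE the container, so no MPI
# installation (or version match) is needed on the host.
apptainer exec windninja.sif mpirun -np 4 simpleFoam -parallel
```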
Meeting notes 12/6/2024:
The issue with OpenFOAM is resolved by a change in the Dockerfile; this is now reflected in the apptainer_test branch. After further testing, this will be pulled into master.
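A sketch of how a change like this is typically verified end to end, assuming the image is built from the repository's Dockerfile and then converted to an Apptainer image; the image and tag names below are hypothetical:

```bash
# Build the Docker image from the updated Dockerfile on the apptainer_test branch.
git checkout apptainer_test
docker build -t windninja:apptainer_test .

# Convert the locally built Docker image into an Apptainer/Singularity image.
apptainer build windninja.sif docker-daemon:windninja:apptainer_test

# Quick sanity checks that MPI and the OpenFOAM tools are visible inside the
# container (the second check assumes the image puts OpenFOAM on the PATH).
apptainer exec windninja.sif mpirun --version
apptainer exec windninja.sif which decomposePar
```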
There are rumors of a problem with the containerization of WindNinja as it relates to the MPI libraries. I have been asked to help. Let's have a meeting, but prior to the meeting I'd like to ask that @sathwikreddy56 or @dgh007786 define the problem a little more closely and make available the Dockerfile that is exhibiting the problem. As we may well be pursuing solutions in our own separate environments, may I suggest that one of you make a development branch (call it, say, HPC or containerization) into which you load the exact source that is failing on the HPC cluster. We can all work off of that and merge changes back to master when we're done. To summarize, we need three things:
We can review these at the meeting, but I will be better prepared to ask questions that aren't stupid if I have the above.
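A sketch of the branch workflow suggested above, using the containerization name from the comment; the commit message and exact steps are illustrative only:

```bash
# Create the development branch and snapshot the exact source that fails on
# the HPC cluster (including the Dockerfile in question).
git checkout -b containerization
git add . && git commit -m "Snapshot of source failing on the HPC cluster"
git push -u origin containerization

# Once a fix is agreed on and tested, merge back to master.
git checkout master
git merge containerization
```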