Initial Windows Support #2919
base: develop
Conversation
Nice! I always wondered why we couldn't do this. Pre-packing a binary for Windows could be very nice for some users.
Co-authored-by: Gavin Ridley <gavin.keith.ridley@gmail.com>
@HunterBelanger Regarding the xtl/xtensor update, can you submit a separate PR just updating our submodules to the latest released version of each? That way we can consider separately (and later) whether we want to adopt FetchContent rather than lump it with the Windows-related changes here.
Yeah, I can make a separate PR to update the submodules and discuss FetchContent. For the time being, though, I figure we probably want to keep the temporary workaround that I have in this branch, just so it functions, and we can update this branch when we finally want to merge? Otherwise, this won't work for anyone trying to test it.
@HunterBelanger Yes, once the submodule update is merged separately, we can update this branch to remove the FetchContent changes (I can help with that if needed).
@HunterBelanger Thanks a lot for starting this! I made some updates on your branch. Please take a look and let me know if you have any objections or questions. See below for a few questions about which symbols are getting exported.
# Not sure why, but using multiprocessing on Windows leads to many transport
# simulations being run over each other and leads to catastrophe.
if sys.platform == 'win32':
    USE_MULTIPROCESSING = False
We ought to get to the bottom of this since it should work in principle on Windows
I completely agree, though I have not had time to look at it since. It was a while ago when I examined this, but from what I could tell I think it is a Python/Windows problem and not necessarily a bug on our side.
Just as a note, turns out this same problem exists on my student's M1 mac.
@@ -30,7 +30,7 @@ int OPENMC_E_PHYSICS {-10};
 int OPENMC_E_WARNING {1};

 // Error message
-char openmc_err_msg[256];
+char OPENMC_API openmc_err_msg[256];
We shouldn't need the export in .cpp files, no?
This is what I thought too, originally, but from what I could tell, things only work if the export is in both the .hpp and the .cpp files. I can check again, but I remember struggling with this for a while before I figured out both were needed (I am definitely not a Windows developer 😅).
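For readers following along, this is roughly how a Windows export/import macro like `OPENMC_API` is usually defined. Only `OPENMC_API` itself appears in the diff above; `OPENMC_SHARED_EXPORTS` and `OPENMC_STATIC` below are illustrative placeholders for whatever the build system actually defines, not OpenMC's exact definitions:

```cpp
// Sketch of a typical export/import macro; names other than OPENMC_API are
// illustrative assumptions.
#if defined(_WIN32) && !defined(OPENMC_STATIC)
  #ifdef OPENMC_SHARED_EXPORTS   // defined by the build while compiling the DLL
    #define OPENMC_API __declspec(dllexport)
  #else                          // consumers of the DLL import the symbols
    #define OPENMC_API __declspec(dllimport)
  #endif
#else
  #define OPENMC_API             // expands to nothing on non-Windows platforms
#endif
```

With MSVC, if the exporting declaration in the header is visible where the symbol is defined, the definition normally inherits the attribute, which is why also annotating the .cpp file is surprising; compiler differences (icx emulating MSVC) may be what makes it necessary here.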
@@ -5,7 +5,7 @@ namespace mpi {
 int rank {0};
 int n_procs {1};
-bool master {true};
+bool OPENMC_API master {true};
Shouldn't need an export?
As with the header, icx wants this variable exported, and currently I think we need the export in both the .hpp and the .cpp.
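To make the header/source question concrete, the pattern being discussed looks roughly like the following; the file names and enclosing `openmc` namespace are shown as assumptions for illustration, not quoted from the sources:

```cpp
// message_passing.h (sketch): the exported global is declared in the header.
namespace openmc {
namespace mpi {
extern OPENMC_API bool master;
} // namespace mpi
} // namespace openmc

// message_passing.cpp (sketch): the definition repeats the annotation, which
// icx appears to require even though MSVC can usually infer it from the
// header declaration.
namespace openmc {
namespace mpi {
bool OPENMC_API master {true};
} // namespace mpi
} // namespace openmc
```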
Thanks for taking a look at this, @paulromano! All of your name changes make sense to me! One thing I also forgot to mention is that since the random ray solver was merged, this no longer compiles with MSVC (due to the OpenMP features used there), but it still compiles with icx for me. This leads to pains with the Intel DLLs and loading the Python library, but it does seem to work.
I just pushed some small changes that now allow compilation with MSVC again.
@HunterBelanger This mostly looks good to me at this point. It would be nice to have a minimal CI action that builds the C++ code on the `windows-latest` runner though. Can you take a stab at that? Otherwise, I know we are at high risk of breaking the build on Windows as new features get added, since none of our devs are using Windows.
I completely agree that having CI for Windows will be necessary, as it will quickly become difficult for me to keep up if I am the only one compiling on Windows at the moment. I have no problem attacking the CI, but before I can do that, we probably want to consider how we handle the HDF5 dependency. On Windows, Python now requires that you give it explicit permission to load a DLL that is not in the Python path. Because of this, if we link the OpenMC executable to any shared library, the Python API will not work, as it won't be able to load libopenmc.dll. For a basic build on Windows, the only library we need to link against is libhdf5. Currently, I have installed HDF5 locally on my system using the HDF5 Group installer. When I build OpenMC on Windows, I pass
That'd be one way to do it. If you all look at how ParaView distributes binaries, one approach is to roll all the DLLs into one standalone folder and distribute it with OpenMC. Obviously it takes up extra space, and would not be suitable for development, but being able to just download and run a binary without Docker on Windows would be pretty epic, I think.
@gridley, once we get this in, I think it would even be reasonable to make OpenMC pip-installable at that point, greatly simplifying everything for most users. @paulromano, I forget things like vcpkg exist. I will try to set up a Windows CI using that to grab the HDF5 dependency. On Windows, do we only want to do a basic build, or do we want a limited case matrix? In theory, MPI should also work on Windows, though I haven't yet tried it, and am not sure anyone would even use it in practice.
@HunterBelanger I'm OK with just a basic build at this point. We can build up the CI one step at a time.
Sorry it has been a while, but I have been having a very hard time getting the CI Windows runners to build OpenMC. I have created a set of PowerShell scripts to help automate the process like we have for bash, and from what I can tell, they appear to work correctly on my PC. For some reason, however, it is not able to find the vcpkg install of HDF5 on the runners, leading the CMake configuration to fail. I tried a few different methods of SSHing into a runner to debug the problem, but after trying several different actions, I don't think this is possible on the Windows runners. If someone has more experience with the Windows CI runners or any ideas, I would greatly appreciate them!
Hi! I did a few runs of CI on my fork and was able to log onto the Windows machine using You can find my changes here in case they're useful:
Thanks for this, @pshriwise! I will try using your branch to see if I can figure out why vcpkg is having a hard time on the runners.
Description
This PR is an initial attempt at adding native Windows support to OpenMC. With these modifications, I have been able to compile vanilla OpenMC on Windows using both MSVC (19.38) and the Intel compilers. I have made this a draft PR for the moment due to the following changes / restrictions:

- I found that the version of xtensor we are using in the submodules has a memory bug on Windows. For the time being, I have simply commented out loading the xtensor submodule directories, and currently get more recent versions with FetchContent. If we want to merge this, we will need to update the submodules (or move to FetchContent?) (see Update xtl and xtensor submodules #2941).
- In general, I was able to get most of the Python API to work, with the exception of the cythonized bits like resonance reconstruction and fast ENDF floating-point conversion. For now, I have simply disabled them on Windows. Otherwise, it seems to work just fine; I was able to run the pin cell depletion example problem from Python.
- With the Intel compilers, OpenMP parallelism works fine. With MSVC, an experimental flag needs to be added: `/openmp:llvm`. With this, MSVC builds it in parallel, and it seems to work. This experimental OpenMP support on Windows does not like the `#pragma omp atomic write seq_cst` we have in `SharedArray`, so I have added compilation guards to use a simple `#pragma omp atomic write` when using MSVC (see the sketch after this list). In theory, this could open us up to a race condition here, but I haven't observed any problems with this yet (and in any case, using Intel avoids this problem).
- Many of the tests are failing on Windows. I have not been able to figure out why, however, as I can barely even get the tests to pass on my Linux machine half the time. The results I have obtained thus far seem to be in agreement with what I would expect. I am not sure to what extent some tests might be failing due to differences in Python dependencies on Windows.
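As a reference for the guard mentioned in the OpenMP item above, here is a minimal sketch of the idea. The function name and the `_MSC_VER`/`__INTEL_LLVM_COMPILER` check are illustrative assumptions about how one might distinguish the compilers, not the exact code in `SharedArray`:

```cpp
#include <cstdint>

// Illustrative sketch of guarding the seq_cst atomic write for MSVC's
// experimental OpenMP support; not the exact code from SharedArray.
void atomic_store_size(std::int64_t& size, std::int64_t new_size)
{
#if defined(_MSC_VER) && !defined(__INTEL_LLVM_COMPILER)
  // MSVC's /openmp:llvm support rejects the seq_cst clause, so fall back to a
  // plain atomic write and accept the (so far unobserved) risk of a race.
  #pragma omp atomic write
  size = new_size;
#else
  // GCC, Clang, and icx accept the sequentially consistent form.
  #pragma omp atomic write seq_cst
  size = new_size;
#endif
}
```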
I am very open to suggestions on how to tackle the remaining problems! I am definitely not a Windows developer, and have yet to find/understand good tools to properly examine this on Windows. For example, it will almost always crash in Debug mode because we apparently do some bad things with iterators in some spots (see the generic example below); if you compile in Release mode, these do not pose a problem. Please feel free to test it out on a Windows machine and find all the bugs I have probably missed! For now, I am not building HDF5 myself, but simply download and install the pre-built binaries from the HDF5 website.
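To illustrate the Debug-mode crashes mentioned above, this is the general class of error that MSVC's Debug-mode iterator checks (`_ITERATOR_DEBUG_LEVEL=2`) abort on while Release builds often appear to run fine. It is a generic example, not a claim about any specific spot in the OpenMC sources:

```cpp
#include <vector>

int main()
{
  std::vector<int> v {1, 2, 3};
  auto it = v.begin();   // iterator into the vector
  v.erase(v.begin());    // invalidates 'it'
  // Undefined behavior: an MSVC Debug build triggers an iterator-debugging
  // assertion here, while a Release build usually returns whatever happens to
  // be in memory and keeps going.
  return *it;
}
```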
Fixes #1243
Checklist