Incorrect node ghostRank values #525
The reason seems to be the way … Instead of a workaround, can we fix ghostRank to be correct? E.g. just keep the original value in …

Another question is how to detect these cases in general and correctly do field synchronization. I don't like the double sync flag in DofManager (and have actually removed it), because it's not the right place for it. A solver may be running explicit schemes and not using DofManager at all (and thus not checking the flag), but still suffer from this issue. It needs to be something in …
I'll take a look.
A couple of notes. If you set the plotLevel in the output block like:
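(A sketch of roughly what such an output block could look like; the element and attribute names below are assumptions on my part, not a quote of the actual input file.)

```xml
<Outputs>
  <!-- plotLevel is assumed to be set high enough that all registered fields,
       including maps such as localToGlobal, are written to the plot files -->
  <Silo name="siloOutput" plotLevel="3"/>
</Outputs>
```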
you get all the fields. Then you can plot the localToGlobal map. In the figure below, we have the problem that you have specified. The left is rank 0, the middle is rank 1, and the right is rank 2. The labels are: …
So the thing that I notice is that on ... which appears to be correct.
Yeah. So this is a bit of an edge use case (having only a single layer of elements between ranks that are not neighbors), but we do currently have an integrated test where this happens. On my branch, I've added a check in … So we can: …
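Not the check from that branch (which is not quoted above), but a minimal sketch of the kind of consistency test being described: verify that each global node index is claimed as locally owned (ghostRank == -1) by exactly one rank. The function and variable names, and the use of raw MPI, are assumptions for illustration.

```cpp
#include <mpi.h>
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

// Sketch: each rank passes in the global indices of the nodes it believes it
// owns (ghostRank == -1); rank 0 reports any index claimed by more than one rank.
void checkNodeOwnership( std::vector< std::int64_t > const & ownedGlobalNodes,
                         MPI_Comm comm )
{
  int rank, size;
  MPI_Comm_rank( comm, &rank );
  MPI_Comm_size( comm, &size );

  // Gather how many nodes each rank claims to own.
  int const localCount = static_cast< int >( ownedGlobalNodes.size() );
  std::vector< int > counts( size ), displs( size, 0 );
  MPI_Allgather( &localCount, 1, MPI_INT, counts.data(), 1, MPI_INT, comm );
  for( int r = 1; r < size; ++r )
  {
    displs[ r ] = displs[ r - 1 ] + counts[ r - 1 ];
  }

  // Gather all claimed global indices on every rank.
  std::vector< std::int64_t > allOwned( displs[ size - 1 ] + counts[ size - 1 ] );
  MPI_Allgatherv( ownedGlobalNodes.data(), localCount, MPI_INT64_T,
                  allOwned.data(), counts.data(), displs.data(), MPI_INT64_T, comm );

  // Count claims per global index and report duplicates.
  if( rank == 0 )
  {
    std::map< std::int64_t, int > claims;
    for( std::int64_t const g : allOwned )
    {
      ++claims[ g ];
    }
    for( auto const & kv : claims )
    {
      if( kv.second > 1 )
      {
        std::printf( "global node %lld is claimed as owned by %d ranks\n",
                     static_cast< long long >( kv.first ), kv.second );
      }
    }
  }
}
```

An all-gather like this is only reasonable as a setup-time or debug-time check, not something to run every step.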
I am inclined to not use a redundant sync. If we want to properly get around this issue, then we need to expand the collection of neighbors for a rank when the situation arises. I am not sure what the best time to do this is. I think it may be enough to do this after the call to …
This will solve the issue for the coarse aggregates that are likely to expose it.
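A rough sketch of what "expanding the collection of neighbors" could mean mechanically, assuming each rank can list the global indices of the nodes on its partition boundary: any rank that shares at least one of those nodes gets added to the neighbor list, even if it is not adjacent in the original element-based sense. This is generic MPI for illustration, not the GEOSX neighbor-communication code, and a global exchange like this would only be acceptable during setup.

```cpp
#include <mpi.h>
#include <cstdint>
#include <set>
#include <vector>

// Sketch: discover every rank that shares at least one boundary node with this
// rank, so it can be added to the neighbor list used for ghosting/sync.
std::vector< int > findNodeSharingNeighbors( std::set< std::int64_t > const & myBoundaryNodes,
                                             MPI_Comm comm )
{
  int rank, size;
  MPI_Comm_rank( comm, &rank );
  MPI_Comm_size( comm, &size );

  std::vector< std::int64_t > const mine( myBoundaryNodes.begin(), myBoundaryNodes.end() );
  int const localCount = static_cast< int >( mine.size() );

  // Gather everyone's boundary-node lists.
  std::vector< int > counts( size ), displs( size, 0 );
  MPI_Allgather( &localCount, 1, MPI_INT, counts.data(), 1, MPI_INT, comm );
  for( int r = 1; r < size; ++r )
  {
    displs[ r ] = displs[ r - 1 ] + counts[ r - 1 ];
  }
  std::vector< std::int64_t > all( displs[ size - 1 ] + counts[ size - 1 ] );
  MPI_Allgatherv( mine.data(), localCount, MPI_INT64_T,
                  all.data(), counts.data(), displs.data(), MPI_INT64_T, comm );

  // Any other rank whose boundary nodes intersect ours becomes a neighbor.
  std::vector< int > neighbors;
  for( int r = 0; r < size; ++r )
  {
    if( r == rank ) continue;
    for( int i = displs[ r ]; i < displs[ r ] + counts[ r ]; ++i )
    {
      if( myBoundaryNodes.count( all[ i ] ) > 0 )
      {
        neighbors.push_back( r );
        break;
      }
    }
  }
  return neighbors;
}
```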
@AntoineMazuyer any comments on this?
@rrsettgast yes indeed, if it becomes possible to add non-adjacent neighbors, then I would not have to impose a very fine well mesh (this is the workaround that I am currently using). That would simplify the well implementation a little bit and would make the well solvers more robust, I think.
In my experience, problems that are not expected to happen will happen ^^. If I understand the issue correctly, the communications seem to work fine but problems arise with the DofManager?
I will see once I finish the transfer from …
The issue is that the ghosting is wrong. In the example, …
It depends on how you are doing the aggregate ghosting. I forgot what you did, but if you have your coarse aggregate with a single layer of elements on a rank, then depending on how you do the ghosting you can get hit with this issue.
Should have been resolved by #776.
Describe the bug
If a mesh with 5 layers (cells) in the z direction is run with -z3, nodal ghost ranks are a little off.

Consider the following mesh: 1x1x5_LaplaceFEM.xml.txt

Here, the bottom 2 elements are owned by rank 0, the middle one by rank 1, and the top 2 by rank 2. This is correct. However, nodes C, I, O and U (global numbers 2, 8, 14 and 20, but I don't know how to display them in VisIt) will have a ghostRank value of -1 on both rank 0 and rank 1. Here is some debug output: …

To Reproduce
Steps to reproduce the behavior:
1. In LaplaceFEM::AssembleSystem, add debug output (roughly along the lines of the sketch below).
2. Run mpirun -n3 ./bin/geosx -i 1x1x5_LaplaceFEM.xml -x1 -y1 -z3
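The debug snippet and its output from the original report are not shown here; the following is only a hypothetical stand-in for what such output code could look like. How the localToGlobal and ghostRank arrays are obtained inside LaplaceFEM::AssembleSystem is not shown and would follow the surrounding GEOSX code.

```cpp
#include <mpi.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical debug helper: print each node's global index and ghostRank as
// seen by this rank, so the per-rank ownership picture can be compared.
void printNodeGhostRanks( std::vector< std::int64_t > const & localToGlobal,
                          std::vector< int > const & ghostRank )
{
  int rank;
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );
  for( std::size_t a = 0; a < localToGlobal.size(); ++a )
  {
    std::printf( "rank %d: local node %zu, global %lld, ghostRank %d\n",
                 rank, a, static_cast< long long >( localToGlobal[ a ] ), ghostRank[ a ] );
  }
}
```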
Expected behavior
In the above, rank 1 thinks it owns 8 nodes, but it should only own 4 (D, J, P, V on the image, or global indices 3, 9, 15, 21), according to assigned element ownership.
Additional context
I should probably add that all communications seem to be set up more or less correctly, aside from these ghost rank values. In particular, data on these 4 nodes on rank 1 gets overwritten by data from rank 0 during sync, as it should. However, I need ghostRank to be correct, since I changed the DofManager algorithm to rely on this value when assigning DoF ownership (so the 50x10x5_LaplaceFEM test with 27 ranks does not work in #516).
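For context on that last point, a minimal sketch of the idea of assigning DoF ownership from ghostRank: nodes with ghostRank < 0 are treated as locally owned and receive contiguous global DoF numbers per rank via an exclusive prefix sum; ghosted nodes get their numbers afterwards by synchronizing from the owning rank. This is an illustration of the concept only, not the DofManager implementation from #516.

```cpp
#include <mpi.h>
#include <cstdint>
#include <vector>

// Sketch: number the DoFs of locally owned nodes (ghostRank < 0) contiguously
// across ranks. Ghost nodes are left at -1 and would be filled in by a sync.
std::vector< std::int64_t > assignNodeDofNumbers( std::vector< int > const & ghostRank,
                                                  MPI_Comm comm )
{
  int rank;
  MPI_Comm_rank( comm, &rank );

  std::int64_t numLocallyOwned = 0;
  for( int const g : ghostRank )
  {
    if( g < 0 ) ++numLocallyOwned;
  }

  // Exclusive prefix sum gives this rank's first global DoF number.
  std::int64_t offset = 0;
  MPI_Exscan( &numLocallyOwned, &offset, 1, MPI_INT64_T, MPI_SUM, comm );
  if( rank == 0 ) offset = 0;  // MPI_Exscan leaves the result undefined on rank 0

  std::vector< std::int64_t > dofNumber( ghostRank.size(), -1 );
  std::int64_t next = offset;
  for( std::size_t a = 0; a < ghostRank.size(); ++a )
  {
    if( ghostRank[ a ] < 0 )
    {
      dofNumber[ a ] = next++;
    }
  }
  return dofNumber;
}
```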