A starting point for a polymorphic solver-based TDycore interface. #211
Conversation
Codecov Report
@@ Coverage Diff @@
## master #211 +/- ##
=======================================
Coverage 49.19% 49.19%
=======================================
Files 4 4
Lines 683 683
=======================================
Hits 336 336
Misses 347 347
Continue to review full report at Codecov.
Thanks for taking this up. One thought on TDycore being used as a library: all the SNESSetX and TSSetX interfaces related to a model actually call corresponding DMSNESSetX and DMTSSetX. The primary thing TDycore could do is to wire up a DM with all the model stuff needed, and drivers could
Interesting. Thanks for pointing this out, Jed. If everything happens through the DM, what you suggest does seem like the way to go. I'll try to figure out a simple interface to do things this way.
We'll be doing something like this for a libCEED+PETSc mechanics solver (tentatively https://gitlab.com/micromorph/ratel), which is currently just a fork of the mini-app that lives in the libCEED repo, but is intended to evolve into a full-featured library+app.
To make sure I understand: can Jacobian and residual/RHS functions be supplied to the solvers through the DM? In other words, the interface I proposed looks like this: Approach 1
So we're both moving toward a workflow that sets up a solver using the dycore, and then uses that solver. It seems to me that you'd still need to set Jacobian/residual functions for the solver after setting the DM. That's fine--it just means we have to make polymorphic SNES/TS Jacobian/residual/RHS functions that are passed to the solver: Approach 2 (Corrected to reflect Matt and Jed's comments)
These two approaches are very similar. The first one is more of a "black box" that sets the DM/function/Jacobian on the solver in one command ("TDycore, just please do whatever you need to the solver"), and the second one reflects how it's done more explicitly in a PETSc app.
You would replace the SNESSetFunction() with something like DMSNESSetFunctionLocal(), and similarly for the Jacobian. I also put in a callback for imposing Dirichlet conditions.
Maybe
Unlike PETSc, I was not envisioning a demo to specify a user-defined residual/jacobian/ifunction/ijacobian etc. Those functions will be internal to the TDycore lib and get automatically set depending on the choice of the mode (=Richards/TH/etc) + Discretization (FE/FV/etc). I'm most likely missing the point in the above discussion of why a demo would be setting user-defined functions. Can someone explain the reasoning behind it?
Hey @bishtgautam, Approach 1 is probably a bit more friendly as a "black box" way to set up a solver. This is what I had originally envisioned, though I probably wasn't very clear about the dycore's role in a demo (both approaches use TDycore to set up a solver and then use the solver itself instead of the dycore to run a simulation). Aside from the need to assign these functions, the two approaches are basically the same. Maybe Jed and Matt have thoughts about this, though.
I would go with 2, but: how do initial conditions and initial guesses for Newton get created? If they require a solve, what does it look like, and what data needs to be read?
Okay, I get it now. We do the calls to set the functions for J and R internally as part of setting up the DM, and then the dycore turns over the DM to the solver, communicating those functions. The two approaches are identical, except in the case of Approach 2, the dycore never has to talk directly with the solver. Neat. I like it. Let's go with Approach 2. In this case, it's simplest if the dycore creates the DM itself, since it's going to be configuring these kinds of things on it. I recommend we get rid of
Good questions about initial conditions and Newton guesses. I would guess the answers depend on the range of complexity involved. If we're doing "spin up", we're going to need solves there.
If we end up solving a different set of equations for initial conditions, maybe it's worth thinking about another DM constructed for that purpose, to be passed to a solver that can be fiddled with, same as we're discussing for solving the transient problem?
If the equations are different, then we want the same function space (but perhaps different boundary conditions). I wish this were explicit by making the "function space" part of DM distinct from the "equation set" part of DM (a dispute I lost to Barry a decade ago). If the initial condition problem is formulated with different essential boundary conditions, then we wouldn't use the same
This operation configures the dycore's DM for its mode/discretization/etc. This is just a sketch so far--the details of the operations needed to properly configure a DM for a solver may evolve as we learn more. I also renamed TDySetupNumericalMethods to TDySetup. This is about as descriptive and is less typing. Hope nobody minds. :-)
@jedbrown I did this separation in "stealth" mode with PetscWeakForm. Currently, I do this kind of affine projection through the back door with DMPlexSetBoundaryValues() and Toby has complained about this, so it would be good to have a proper interface for it.
Thanks, guys. This sounds like a productive direction for model initialization. Maybe @bishtgautam and I (and others interested) can come up with an example that illustrates what we need.
I'm merging this PR so we can continue working in the direction of polymorphism. I opened #213 to track our model initialization discussion. |
This small PR lays part of the foundation for a move to a more solver-based interface. Our objective here, in the spirit of #197, is to move from our current approach of determining the behavior of the dycore (if tests in several functions scattered throughout the library that match the dycore against its type of discretization, etc.) to a new approach based on function pointers. The closest handy analogy for most is C++'s implementation of polymorphism using virtual tables, but PETSc itself uses this approach heavily.
In include/tdycore.h, line 162, you can see that I've written that certain functions using the old approach will likely be replaced by a smaller set of new polymorphic functions. Specifically, we are adding two new functions:

TDySetSNES(TDy dycore, SNES solver) - takes an SNES nonlinear solver object and sets it up to solve the system relevant to the dycore.
TDySetTS(TDy dycore, TS solver) - takes a TS ODE solver object and sets it up to solve the system relevant to the dycore.

These functions are simple wrappers, implemented in tdycore.c. All they do is pass the call along to the appropriate function pointer, or complain with an error if no such function pointer exists. This allows any given implementation of the dycore to simply supply the right function pointers instead of being added to a long chain of if tests in several functions throughout the code base.

Additionally (if people like the direction of this PR), we'll retrofit other functions such as TDyComputeErrorNorms to be polymorphic. Any function that contains if tests to figure out "which algorithm we're using in the dycore" will be replaced with an interface function that calls the appropriate implementation-specific function pointer.

The idea here is that we can use the solver interface to set specific solver parameters and to step forward in time. We also talked about a higher-level TDycore interface that does some of this stuff in solver-neutral language. That's very easy to add on top of this kind of interface.
I know the PETSc folks are very familiar with this approach--I expect them mostly to correct my "spelling" of polymorphic C code to their tastes. For others, I'm happy to chat about function pointers, polymorphism in general, and/or how this will look in specific cases.