A new Example folder
This page describes how to start making a new example in GRChombo. You should have read Structure of the code before trying what follows.
The first thing to do is to make a new Example folder in "Examples" for any case where the compiled code will differ from an existing Example (i.e. not just a change in the parameters, but a change in the actual compiled code). Normally you would start by duplicating the Example folder that is most similar to what you plan to do.
As an example, we describe here the process of copying the KerrBH example to make the ScalarField example. This was quite a major change as it added a whole new matter class - often the changes required will be far fewer.
We start by copying the entire contents of the Examples/KerrBH folder into a new folder called Examples/ScalarField. This folder is where the files specific to your physics problem live. At this point you probably want to "git commit" the new folder, even though you have not changed anything yet. This will allow you to see exactly what changes you make in subsequent commits to the working Example. In this way you can track which changes break it.
We would suggest that all your new files and tools should be created in this new Example folder first. One can subsequently move the new files to the Source libraries if it is felt they would be of use for other examples (we will see in the sections on Additions to Source code what new library files were added for the Scalar Field (SF) example). But at first it is usually easier to have everything in one place.
A new executable is created by changing references to “KerrBH” to “ScalarField” in all the example files. If you use vim as your editor (as God intended), the command to find and replace with a check and confirmation for each replacement is:
:%s/OldWord/NewWord/gc
The updates below should result in an identical executable, but simply one with a new name. It is useful to check that the code still compiles and runs after this step. Here we describe briefly the function of each file in which naming changes were required.
The GNUmakefile tells the compiler where to find the included files and what the name of the executable will start with when the make all command is executed. Thus ebase was changed, and src_dirs was updated to include the new libraries that would be needed for the problem (in this case the folder Matter):
ebase := Main_ScalarField
src_dirs := $(GRCHOMBO_SOURCE)/Matter \
etc
Main_ScalarField.cpp contains the main() function and the function which runs the AMR object (which controls the setup and running of the evolution on the AMR grid). We needed to update the includes to pull in the right Level file (see later):
#include "ScalarFieldLevel.hpp"
and update the template argument of the DefaultLevelFactory and the factory's name:
DefaultLevelFactory<ScalarFieldLevel> scalar_field_level_fact(sim_params);
...
setupAMRObject(amr, scalar_field_level_fact);
ScalarFieldLevel.cpp is the file in which the key problem specific tasks are implemented, and the main one you will need to edit. There is also an .hpp header file where some of the functions and aliases are defined.
At this stage of updating the example, the naming was simply changed to reflect the change from KerrBH to ScalarField in various places, such as changing the class name to:
class ScalarFieldLevel : public GRAMRLevel
...
void ScalarFieldLevel::specificAdvance()
{
etc
and to give the correct friend class
friend class DefaultLevelFactory<ScalarFieldLevel>;
Other more detailed changes are discussed later. Note that one must also change the include guards:
#ifndef SCALARFIELDLEVEL_HPP_
#define SCALARFIELDLEVEL_HPP_
...
#endif /* SCALARFIELDLEVEL_HPP_ */
which prevent the file being included more than once by the compiler.
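Putting these renaming steps together, a minimal sketch of ScalarFieldLevel.hpp looks something like the following; the full set of includes, constructors and member functions follows the KerrBH header you copied, so only a few representative pieces are shown here:
#ifndef SCALARFIELDLEVEL_HPP_
#define SCALARFIELDLEVEL_HPP_

#include "GRAMRLevel.hpp"
// ... other includes as in the copied KerrBH header ...

class ScalarFieldLevel : public GRAMRLevel
{
    // The level factory needs to be a friend so it can construct levels
    friend class DefaultLevelFactory<ScalarFieldLevel>;

  public:
    // Member functions from the copied KerrBH header, renamed throughout, e.g.
    virtual void specificAdvance();
    // ... the remaining declarations are unchanged apart from the renaming ...
};

#endif /* SCALARFIELDLEVEL_HPP_ */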
You should now check that everything still compiles and runs.
To put in new physical routines, one must update the ScalarFieldLevel.cpp file, which tells Chombo which problem specific routines to run at each stage in the evolution. Note that, as described in Structure of the code, the overall program flow is specified in the AMR class, which is part of the Chombo code. At each stage there are hooks for users to insert physics specific routines, and it is these which are specified in the ScalarFieldLevel class.
We also need to update the parameters and evolution variables, and add some auxiliary functions, including initial conditions.
As noted above, this file tells Chombo which problem specific routines to run at each stage in the evolution. Two problem specific functions need to be defined in this class for the code to compile:
- initialData() - defines the initial data for the metric and matter fields
- specificEvalRHS(GRLevelData &a_soln, GRLevelData &a_rhs, const double a_time) - provides the evolution equations (RHS) for the metric and matter variables.
There is also one further function without which the code will compile, but you may get unpredictable results, so you should define:
- computeTaggingCriterion(FArrayBox &tagging_criterion, const FArrayBox &current_state, const FArrayBox &current_state_diagnostics) - specifies the tagging criteria for the AMR regrid.
Most of the other generic level based actions are controlled in GRAMRLevel.cpp/AMRLevel.cpp, but other optional functions which add problem specific actions at particular stages of the AMR update are:
specificUpdateODE(GRLevelData &a_soln, const GRLevelData &a_rhs, Real a_dt)
postTimeStep()
preTagCells()
postInitialize()
specificAdvance()
preCheckpointLevel()
prePlotLevel()
postRestart()
The naming of these should be mostly self explanatory, but you can check back through the code to find exactly how they are called.
We updated the initial conditions to use a newly created initial condition class InitialScalarData.
BoxLoops::loop(InitialScalarData(m_p.initial_params, m_dx), m_state_new, m_state_new, FILL_GHOST_CELLS);
The BoxLoops namespace basically tells the computer to go through all the cells in the grid on this Level and call the compute function of InitialScalarData at each one. The first argument passes the class from which the compute function will run, the second argument is the input data, and the third argument is where the output is written - in this case both the input and the output are m_state_new, which holds the values of all the evolution variables at the current timestep.
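For orientation, this loop sits inside the initialData() function mentioned above; a rough sketch follows (the copied example may also need to set up the metric variables here, e.g. using a Kerr or flat-space initial data class, before or alongside the scalar field):
void ScalarFieldLevel::initialData()
{
    // Sketch only: set the scalar field initial data on this level.
    // Any metric initial data required follows the copied example.
    BoxLoops::loop(InitialScalarData(m_p.initial_params, m_dx), m_state_new,
                   m_state_new, FILL_GHOST_CELLS);
}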
We updated the Constraints to use a new class MatterConstraints, which is templated over the matter type. We tell it that the matter type will be ScalarFieldWithPotential, and pass it an instance of that class which has an appropriate potential set. This will be calculated before each plot, in prePlotLevel().
Potential potential(m_p.potential_params);
ScalarFieldWithPotential scalar_field(potential);
BoxLoops::loop(MatterConstraints<ScalarFieldWithPotential>(scalar_field, m_dx, m_p.G_Newton),
m_state_new, m_state_new, SKIP_GHOST_CELLS);
Note that we have used typedef in the header file ScalarFieldLevel.hpp to define:
// Typedef for scalar field
typedef ScalarField<Potential> ScalarFieldWithPotential;
This avoids having to write ScalarField<Potential> everywhere and having doubly templated objects with lots of angle brackets, e.g. MatterConstraints<ScalarField<Potential>>.
We also update CCZ4RHS in the same way, so we have
MatterCCZ4RHS<ScalarFieldWithPotential> my_ccz4_matter(scalar_field,
m_p.ccz4_params, m_dx, m_p.sigma, m_p.formulation, m_p.G_Newton);
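This compute class is then looped over the grid inside specificEvalRHS(), filling a_rhs from a_soln in the same BoxLoops pattern as above. A hedged sketch is given below; the exact ghost cell flag, and any additional steps such as enforcing algebraic conditions on the variables, follow the copied example:
void ScalarFieldLevel::specificEvalRHS(GRLevelData &a_soln, GRLevelData &a_rhs,
                                       const double a_time)
{
    // Construct the matter class with its potential, then the matter CCZ4
    // compute class, and loop it over the box to fill the RHS
    Potential potential(m_p.potential_params);
    ScalarFieldWithPotential scalar_field(potential);
    MatterCCZ4RHS<ScalarFieldWithPotential> my_ccz4_matter(
        scalar_field, m_p.ccz4_params, m_dx, m_p.sigma, m_p.formulation,
        m_p.G_Newton);
    // SKIP_GHOST_CELLS here is an assumption - check the copied example
    // for the exact ghost cell treatment
    BoxLoops::loop(my_ccz4_matter, a_soln, a_rhs, SKIP_GHOST_CELLS);
}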
Finally, we add in a new tagging function FixedGridsTaggingCriterion which creates a fixed refinement grid structure:
BoxLoops::loop(
FixedGridsTaggingCriterion(m_dx, m_level, 2.0 * m_p.L, m_p.center), current_state, tagging_criterion);
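This loop is one way to fill in the computeTaggingCriterion() function whose signature was given earlier; a sketch:
void ScalarFieldLevel::computeTaggingCriterion(
    FArrayBox &tagging_criterion, const FArrayBox &current_state,
    const FArrayBox &current_state_diagnostics)
{
    // Fixed grid tagging about the centre, creating a fixed hierarchy of
    // refined regions (the diagnostics argument is not needed here)
    BoxLoops::loop(
        FixedGridsTaggingCriterion(m_dx, m_level, 2.0 * m_p.L, m_p.center),
        current_state, tagging_criterion);
}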
Clearly at this point we still need to write the classes Potential, ScalarField, MatterCCZ4RHS and MatterConstraints, but we can already think about how they might inherit from existing classes or use templating to make the code as reusable as possible. This will be discussed more in Additions to Source code.
SimulationParameters.hpp controls the reading in of parameters from the parameter file, which are then accessible as members of m_p in each Level class. Note that it inherits from SimulationParametersBase.hpp, which reads in the majority of the parameters used in GR simulations, so this file is for problem specific parameters only.
Here we first needed to include the header files Potential.hpp and InitialScalarData.hpp so that the correct form for the structs is known.
We added in all the problem specific parameters that we wanted to read in from the params.txt file, for example, the initial conditions for the scalar field. This allows us to make changes to the initial conditions without recompiling the code.
The variables are defined at the bottom:
// Initial data for matter and potential
double G_Newton;
InitialScalarData::params_t initial_params;
Potential::params_t potential_params;
We must also implement the reading in of these params (nearer the top):
pp.load("G_Newton", G_Newton, 1.0);
etc
These required input values must also be added to params.txt, unless defaults are set (but even then it is good practice to add a line in params.txt and comment it out, so the existence of the parameter is evident).
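As an illustration, the extra loads might look like the lines below; the parameter names, the default values and the scalar_mass member of the potential struct are assumptions for this sketch, not necessarily what the actual example uses:
// Inside the function that reads the problem specific parameters (sketch)
pp.load("G_Newton", G_Newton, 1.0);
pp.load("scalar_amplitude", initial_params.amplitude, 0.1);
pp.load("scalar_width", initial_params.width, 1.0);
pp.load("scalar_mass", potential_params.scalar_mass, 0.1);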
In UserVariables.hpp, we updated the enum to include the new field variables phi and Pi (the time derivative of phi) after the CCZ4 vars:
c_phi = NUM_CCZ4_VARS, // matter field added
c_Pi, //(minus) conjugate momentum
and in the namespace we add the strings
"phi",
"Pi"
The diagnostic variables stayed as they were, but we could have added more here as for the evolution variables, had we wanted to add specific outputs which are derived from evolution variables but not evolved, e.g. energy density.
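For orientation, the evolution variable additions end up looking roughly like this; it is a sketch only - the CCZ4 entries themselves, and the exact container holding the variable name strings, are unchanged from the file you copied:
enum
{
    // ... the CCZ4 variables as before, counted by NUM_CCZ4_VARS ...

    c_phi = NUM_CCZ4_VARS, // matter field added
    c_Pi,                  //(minus) conjugate momentum

    NUM_VARS
};

// and "phi", "Pi" are appended to the corresponding list of variable names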
Initial conditions for the scalar field were written in the new class InitialScalarData, to specify the values for phi and Pi across the grid at the start of the evolution.
This class is not templated over anything - it is highly specific to the problem and so unlikely to be reused without modification, and it clearly wouldn't make sense for a different matter type.
This is a good example of how to write a new set of initial conditions, and it should be relatively straightforward to copy. The key steps are as follows (a consolidated sketch of the full compute function is given after the list):
- Work out where you are on the grid:
  Coordinates<data_t> coords(current_cell, m_dx, m_params.center);
  data_t rr = coords.get_radius();
  etc
  Note the use of data_t rather than double as the type of rr - this is a template name which allows vectorisation over the cells: basically data_t can either be a double or a vector of doubles, depending on how we want to split up the grid. Any function that depends on x should be a data_t type, since the grid is vectorised in the x direction. There is a difference for y and z dependent functions:
  data_t x = coords.x;
  double y = coords.y;
  double z = coords.z;
- Calculate the value you want based on the position (note again it is a data_t type):
  data_t phi = m_params.amplitude * (1.0 + 0.01 * rr2 * exp(-pow(rr / m_params.width, 2.0)));
- Save the calculated variables into the correct output state:
  // store the vars
  current_cell.store_vars(phi, c_phi);
  current_cell.store_vars(0.0, c_Pi);
Note that this command will save phi into the n-th entry in the output state m_state_new, where n is the integer value of the enum to which c_phi is set in UserVariables.hpp. The code will not check that this is indeed the variable with the name phi, so you need to make sure this is the case. A danger here is that if you accidentally set your output state to be m_state_diagnostics, the n-th entry will clearly not be phi (as it is not a diagnostic variable). Depending on whether an n-th diagnostic variable exists, the code may crash at runtime with a complaint about trying to access data that does not exist, or, worse, it may not complain at all and just write the value somewhere it should not.
- The header file provides a struct for the input params which will be required for the initial conditions. This means they can be read in at runtime, rather than hard coded (which would mean that the code must be recompiled when they change):
  struct params_t
  {
      double amplitude; //!< Amplitude of bump in initial SF bubble
      std::array<double, CH_SPACEDIM> center; //!< Centre of perturbation in initial SF bubble
      double width;     //!< Width of bump in initial SF bubble
  };
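Putting the steps above together, a consolidated sketch of the compute function is given below. The surrounding class boilerplate (the constructor storing m_dx and m_params, includes, and so on) follows the copied example and is not shown, and the exact signature should be checked against it:
// Sketch of InitialScalarData::compute(), assembled from the steps above
template <class data_t> void compute(Cell<data_t> current_cell) const
{
    // where are we on the grid?
    Coordinates<data_t> coords(current_cell, m_dx, m_params.center);
    data_t rr = coords.get_radius();
    data_t rr2 = rr * rr;

    // calculate the initial value of phi at this position
    data_t phi = m_params.amplitude *
                 (1.0 + 0.01 * rr2 * exp(-pow(rr / m_params.width, 2.0)));

    // store the vars, with Pi (the conjugate momentum) set to zero
    current_cell.store_vars(phi, c_phi);
    current_cell.store_vars(0.0, c_Pi);
}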
**Note that there is nothing that explicitly checks that the input initial conditions satisfy the constraint equations** (although you can of course calculate the violation on the grid and output it in checkpoints). It is therefore up to you to use sensible and consistent initial conditions - i.e. something that satisfies the constraints. Usually if you do not, the code will crash after several timesteps. (See the Initial Condition Solver.)
Now we need to write the classes that we called in ScalarFieldLevel.cpp above: MatterCCZ4RHS, MatterConstraints, ScalarField (and its Potential input). These classes will be more widely used than just in this one example, and so they live in the Source folder. They are discussed in Additions to Source code.
Copyright GRChombo 2018. Contact us for further details.