Parflow/SAMRAI Notes
WARNING! The SAMRAI version of Parflow is still in beta and there may be
issues with it.
The Parflow/SAMRAI version may use significantly less memory for some
problems due to a more flexible computation domain. In Parflow the
computation domain is specified as a single parallelepiped in the TCL
input script. Parflow/SAMRAI also accepts this input but in addition
supports specifying the computation domain as a set of subgrids. You
may have more than one subgrid per processor. This enables the
computation domain to more closely match the active region.
Note that for CLM and WRF runs only a single subgrid is allowed per
processor, since CLM and WRF only decompose problems in X and Y. Also,
the subgrids will need to match the decomposition that WRF is using in
X and Y.
=============================================================================
Configuration
=============================================================================
Use of SAMRAI is optional and is specified in the configure process using the
--with-samrai=<SAMRAI_DIR>
option, where <SAMRAI_DIR> is the installed location of SAMRAI; that is, the
value used for the --prefix=<SAMRAI_DIR> option during the SAMRAI configure.
You should use the same HDF and other options in both the SAMRAI and
Parflow configures to avoid library incompatibility issues. Using the
same compilers is also recommended.
You also need to compile Parflow/SAMRAI with a C++ compiler rather
than a C compiler. This can be done by setting the CC variable that
specifies the compiler for Parflow.
So a minimal configure which includes SAMRAI would be:
CC=g++ ./configure --with-samrai=<SAMRAI_DIR>
If you fail to use a C++ compiler with the SAMRAI option, Parflow will
not compile.
=============================================================================
Running Parflow with SAMRAI
=============================================================================
You can run any existing input script with Parflow/SAMRAI; no changes
are necessary. This should produce the same output as the non-SAMRAI
version. However, this offers no real advantage and is supported only
for backward compatibility. If you want to run with a SAMRAI grid you
need to do two things differently. First, the compute domain
specification is more complicated: each subgrid must be specified.
Second, you must use pfdistondomain instead of pfdist to distribute
input files.
=============================================================================
File distribution
=============================================================================
All of the grid-based input files to Parflow must be specified on the
SAMRAI grid. The "pfdistondomain" command distributes a file onto a
domain, in a similar way to how pfdist worked for the old code.
Indicator fields, etc., will need to be distributed using this utility.
Basically, if you used pfdist on a file before, you must use
pfdistondomain with the SAMRAI version.
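For example, where an old input script distributed an indicator field with
pfdist, the SAMRAI version might look like the sketch below. The file name
is illustrative, and the argument list is an assumption (the command is
shown taking the file name followed by a domain object such as the one
produced by pfcomputedomain); check the pftools documentation and the
samrai.tcl test script for the exact usage.
# Old, non-SAMRAI distribution:
#   pfdist indicator_field.pfb
# SAMRAI version (assumed arguments: file name and domain):
pfdistondomain indicator_field.pfb $domain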
=============================================================================
Input file changes for running with a SAMRAI grid
=============================================================================
The simplest example of specifying a SAMRAI grid is a single subgrid
run on a single processor:
pfset ProcessGrid.NumSubgrids 1
pfset ProcessGrid.0.P 0
pfset ProcessGrid.0.IX 0
pfset ProcessGrid.0.IY 0
pfset ProcessGrid.0.IZ 0
pfset ProcessGrid.0.NX 10
pfset ProcessGrid.0.NY 10
pfset ProcessGrid.0.NZ 8
NumSubgrids is the total number of subgrids in the specification
(across all processors). For each subgrid you must specify the
processor (P), the starting index (IX, IY, IZ), and the number of grid
points (NX, NY, NZ) along each dimension.
To run the previous problem on 2 processors the input
might look like:
pfset ProcessGrid.NumSubgrids 2
pfset ProcessGrid.0.P 0
pfset ProcessGrid.0.IX 0
pfset ProcessGrid.0.IY 0
pfset ProcessGrid.0.IZ 0
pfset ProcessGrid.0.NX 10
pfset ProcessGrid.0.NY 5
pfset ProcessGrid.0.NZ 8
pfset ProcessGrid.1.P 1
pfset ProcessGrid.1.IX 0
pfset ProcessGrid.1.IY 5
pfset ProcessGrid.1.IZ 0
pfset ProcessGrid.1.NX 10
pfset ProcessGrid.1.NY 5
pfset ProcessGrid.1.NZ 8
This specifies a split of the domain along the Y axis at Y=5.
See the test script "samrai.tcl" for examples of different processor
topologies and of using more than one subgrid per processor.
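As a sketch of using more than one subgrid per processor, the 10x10x8
problem above could also be run on a single processor with two subgrids,
both assigned to processor 0 (values are illustrative):
pfset ProcessGrid.NumSubgrids 2
pfset ProcessGrid.0.P 0
pfset ProcessGrid.0.IX 0
pfset ProcessGrid.0.IY 0
pfset ProcessGrid.0.IZ 0
pfset ProcessGrid.0.NX 10
pfset ProcessGrid.0.NY 5
pfset ProcessGrid.0.NZ 8
pfset ProcessGrid.1.P 0
pfset ProcessGrid.1.IX 0
pfset ProcessGrid.1.IY 5
pfset ProcessGrid.1.IZ 0
pfset ProcessGrid.1.NX 10
pfset ProcessGrid.1.NY 5
pfset ProcessGrid.1.NZ 8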
If you are manually building the grid for a CLM run you need to specify
subgrid extents that include overlap for the active region ghost layer.
Basically, you need to make sure that the IZ and NZ values cover the
active domain for each processor plus the active domain in neighboring
processors' ghost layers (IX-2 to NX+4). The overland flow calculation
needs information about the top of the domain to correctly move water
to/from neighboring subgrids, and since communication is done only along
subgrid boundaries, the subgrid extents need to be high enough to
communicate this information between neighbors.
Manually building this grid is obviously less than ideal, so some
automated support is provided to help build a computation grid that
follows the terrain. Emphasis on "some"; we realize this is a
somewhat annoying procedure to have to do and hopefully we can automate
this in the future.
The automated approach first requires running Parflow using the
original single large computation domain approach for a single
time-step (which can be really small). Using the mask file that is
created by this run, one can use the "pfcomputedomain" and
"pfprintdomain" commands in pftools to write out a grid. The generated
grid uses the processor topology you have specified and is built such
that each processor's subgrid covers only the extent of the active
region (which comes from the mask file).
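For the mask-generating run, a minimal sketch of restricting the simulation
to a single short time-step might look like the following; the keys are the
standard Parflow timing keys and the values are purely illustrative, so
adjust them to your problem's time units:
pfset TimingInfo.StartTime 0.0
pfset TimingInfo.StopTime 0.001
pfset TimeStep.Type Constant
pfset TimeStep.Value 0.001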
A sample script, compute_domain.tcl, which performs the pfcomputedomain
and pfprintdomain steps is included in the test directory. The important
parts are shown below.
This assumes the mask file exists and is called "samrai.out.mask.pfb".
First the processor topology is specified, followed by the original
large computation domain. The mask file is then loaded and the top and
bottom of the domain are computed from the mask; these are NX*NY arrays
whose values give the Z index of the top/bottom. pfcomputedomain then
computes the subgrids that cover top to bottom on each processor. This
grid specification is saved to the "samrai_grid.tcl" file. You can use
the TCL "source" command to include this in your Parflow TCL input
script (or cut and paste if you prefer).
Note that if you change the processor topology you need to rerun this
script, since the subgrids are processor dependent, but you do not need
to recompute the mask file.
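# Processor topology (P, Q, R) is read from the script's command-line arguments.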
set P [lindex $argv 0]
set Q [lindex $argv 1]
set R [lindex $argv 2]
pfset Process.Topology.P $P
pfset Process.Topology.Q $Q
pfset Process.Topology.R $R
set NumProcs [expr $P * $Q * $R]
#---------------------------------------------------------
# Computational Grid
#---------------------------------------------------------
pfset ComputationalGrid.Lower.X -10.0
pfset ComputationalGrid.Lower.Y 10.0
pfset ComputationalGrid.Lower.Z 1.0
pfset ComputationalGrid.DX 8.8888888888888893
pfset ComputationalGrid.DY 10.666666666666666
pfset ComputationalGrid.DZ 1.0
pfset ComputationalGrid.NX 10
pfset ComputationalGrid.NY 10
pfset ComputationalGrid.NZ 8
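# Load the mask written by the single-domain run and compute the top and
# bottom of the active region.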
set mask [pfload samrai.out.mask.pfb]
set top [pfcomputetop $mask]
set bottom [pfcomputebottom $mask]
set domain [pfcomputedomain $top $bottom]
set out [pfprintdomain $domain]
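# Write the resulting subgrid specification to samrai_grid.tcl so it can be
# sourced from the Parflow input script.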
set grid_file [open samrai_grid.tcl w]
puts $grid_file $out
close $grid_file
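A sketch of how the pieces fit together (the way you invoke pftools scripts
may differ in your setup, so treat the command line as illustrative):
# Generate the grid file for a 2x2x1 processor topology, e.g.:
#   tclsh compute_domain.tcl 2 2 1
# Then, in the Parflow TCL input script, pull in the generated
# per-processor subgrid specification:
source samrai_grid.tcl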