-
@katiedagon @djk2120 @adrifoster and others working in the PPE space, how useful does this all sound for your work? Especially the addition above, where I go beyond @dlawrenncar's suggestion so that the map is also dimensioned by PFT? I also generalized his request so that a parameter could easily be moved between any dimensionality (including scalar and by-PFT).
-
This would be very useful! I'm not sure about the time scale of usage, though; for my part, I think it might be a future need rather than a short-term one. I'll let others chime in on when they might be interested in using this.
-
OK, looking at how parameters are handled now, I think some redesign would be needed to make this easy and flexible. There's paramUtilMod.F90, which is sometimes used to read in parameters of different dimensions. It recognizes the dimension level and can read in scalar, 1D, or 2D variables, but only FATES uses it for anything beyond scalars, and it's not used consistently in the code. readParams calls all the subroutines that read in parameters for the different physics packages; each package has its own list of variables, with dimensionality explicitly defined as native Fortran types.
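To make that concrete, the current pattern is roughly this (a schematic only; all module, variable, and routine names below are invented for illustration and are not the actual paramUtilMod interfaces):

```fortran
! Schematic of the current pattern: each physics package declares its
! parameters as native Fortran variables with fixed dimensionality and
! reads each one explicitly. All names here are invented for illustration.
module ExamplePhysicsParamsMod
  use shr_kind_mod, only : r8 => shr_kind_r8
  implicit none
  private

  real(r8), public              :: tau_example       ! a scalar parameter
  real(r8), public, allocatable :: vcmax_example(:)  ! a by-PFT parameter

  public :: readExamplePhysicsParams

contains

  subroutine readExamplePhysicsParams(ncid)
    integer, intent(in) :: ncid  ! handle to the open parameter file
    ! Every parameter needs its own explicit read call, so changing a
    ! parameter's dimensionality means editing code like this:
    call read_scalar_param(ncid, 'tau_example',   tau_example)    ! hypothetical
    call read_pft_param   (ncid, 'vcmax_example', vcmax_example)  ! hypothetical
  end subroutine readExamplePhysicsParams

end module ExamplePhysicsParamsMod
```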
-
One way to do this would be to have four different parameter files: one for pure scalars, one for by-PFT parameters, a streams file by gridcell, and a 3D streams file by gridcell and PFT. The user would then move a variable to a different file to change its dimension. I suspect scientists might NOT like this, though, so maybe there's one file and you change the dimension of the variables on that file? Going from 1D to 2D seems like a big shift to me, though. So maybe there have to be two: one for scalar and by-PFT parameters, and another for stream files of either 2D or 3D? Thoughts on this idea?
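If everything lived on one file, the reader could at least discover each variable's rank from the file rather than assuming it. A minimal sketch with the standard netCDF Fortran API (error handling elided; the routine name is made up):

```fortran
! Sketch: discover a parameter's dimensionality from the file itself,
! rather than hardcoding it. Uses the standard netCDF Fortran API.
subroutine get_param_rank(ncid, varname, ndims)
  use netcdf
  implicit none
  integer,          intent(in)  :: ncid
  character(len=*), intent(in)  :: varname
  integer,          intent(out) :: ndims
  integer :: varid, status

  status = nf90_inq_varid(ncid, trim(varname), varid)
  if (status /= nf90_noerr) then
     ndims = -1   ! not on this file; caller can try the next file
     return
  end if
  status = nf90_inquire_variable(ncid, varid, ndims=ndims)
  ! ndims == 0 -> scalar, 1 -> e.g. by-PFT, 2 -> e.g. gridcell x PFT
end subroutine get_param_rank
```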
-
I'm thinking the way to go about this would be to figure out a design to work toward, and then take small refactoring steps toward the full design. The first steps wouldn't add new functionality; they'd only do some refactoring to make future steps easier. Later steps would start to allow flexibility between scalar and by-PFT, then an extension for 2D and 3D maps at the model resolution. An extension for 2D streams would come later, with the possibility of adding 3D streams after that. I'm thinking we'd use OO classes: a base parameter class, a scalar class, a by-PFT class, a 2D fixed-resolution class, a 3D fixed-resolution class, a 2D stream class, and a 3D stream class. There would need to be some infrastructure to decide what dimensionality each parameter actually is. I think it would just read the different files, use the dimension from whichever file the parameter is on, and only fail if it can't find a parameter on any of the files.
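To sketch the shape of that hierarchy (a strawman only; all names are invented):

```fortran
! Strawman for the proposed hierarchy: an abstract base class with a
! deferred read method, extended once per dimensionality.
module ParamBaseMod
  use shr_kind_mod, only : r8 => shr_kind_r8
  implicit none
  private

  type, abstract, public :: param_base_type
     character(len=64) :: name
   contains
     procedure(read_iface), deferred :: read_param
  end type param_base_type

  abstract interface
     subroutine read_iface(this, ncid)
       import :: param_base_type
       class(param_base_type), intent(inout) :: this
       integer, intent(in) :: ncid
     end subroutine read_iface
  end interface

  type, extends(param_base_type), public :: param_scalar_type
     real(r8) :: val
   contains
     procedure :: read_param => read_scalar
  end type param_scalar_type

  type, extends(param_base_type), public :: param_pft_type
     real(r8), allocatable :: val(:)   ! dimensioned by PFT
   contains
     procedure :: read_param => read_pft
  end type param_pft_type

contains

  subroutine read_scalar(this, ncid)
    class(param_scalar_type), intent(inout) :: this
    integer, intent(in) :: ncid
    ! ... read the 0-d variable named this%name here ...
  end subroutine read_scalar

  subroutine read_pft(this, ncid)
    class(param_pft_type), intent(inout) :: this
    integer, intent(in) :: ncid
    ! ... inquire the pft dimension, allocate val, read the 1-d variable ...
  end subroutine read_pft

end module ParamBaseMod
```

The fixed-resolution and stream classes would extend the same base, so calling code could ask a parameter object for values without caring where they came from.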
-
I think the hydrology community would be very interested in this. Not having this capability is actually deterring use by some in the hydrology community, as far as I understand. And we saw the example this week of how it could be useful for the crop model, essentially accounting for different crop varieties.
Dave
…On Thu, Feb 29, 2024 at 10:34 AM Erik Kluzek wrote:
@adrifoster @samsrabin @slevis-lmwg @glemieux and @rgknox do you have thoughts on this, as it's the more SE-oriented issue.
-
@ekluzek I think what you wrote in terms of incremental development makes a lot of sense. One thing I'd like us to shoot for is avoiding having to write much new code whenever a new parameter is added. The more we can make this automatic, the better.
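One way to get that (a hypothetical sketch, building on the strawman base class above) is a single registry, so adding a parameter means adding one entry rather than writing a new read routine:

```fortran
! Sketch: a registry so that adding a parameter means adding one entry,
! not writing a new read call. Everything here is hypothetical.
module ParamRegistryMod
  use ParamBaseMod, only : param_base_type
  implicit none
  private

  type :: param_entry_type
     class(param_base_type), allocatable :: p
  end type param_entry_type

  ! (population of the registry from a table of names omitted)
  type(param_entry_type), allocatable :: registry(:)

  public :: read_all_params

contains

  subroutine read_all_params(ncid)
    integer, intent(in) :: ncid
    integer :: i
    ! One generic loop replaces the per-package read routines:
    do i = 1, size(registry)
       call registry(i)%p%read_param(ncid)
    end do
  end subroutine read_all_params

end module ParamRegistryMod
```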
-
This might be a separate discussion, but it's been nagging at me and seems relevant here. Lots of crop development ideas (e.g., having one patch in a corn/soy rotation) either require (a) rethinking how we handle CFTs/patches or (b) making new CFTs. The latter is more realistic given engineering resource constraints, but it's currently a pain to make new CFTs: each requires a new hardcoded integer identifier. We should make this a lot more flexible. PFTs should be defined (not just parameterized, but defined) in model inputs, not the code itself. This would require changing everything that uses hardcoded PFT identifiers.
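For example (a hypothetical sketch; the function and its arguments are invented), the code could look indices up by name from the list of PFT names read off the parameter file, instead of hardcoding integer constants:

```fortran
! Sketch: look a PFT/CFT index up by name from the names read off the
! parameter file, instead of hardcoding integer identifiers.
function pft_index(name, pftnames) result(idx)
  implicit none
  character(len=*), intent(in) :: name        ! e.g. 'temperate_corn'
  character(len=*), intent(in) :: pftnames(:) ! names read from the file
  integer :: idx
  do idx = 1, size(pftnames)
     if (trim(pftnames(idx)) == trim(name)) return
  end do
  idx = -1  ! not found; caller decides whether that's fatal
end function pft_index
```

Adding a new CFT would then be a parameter-file change (plus whatever new physics it needs) rather than new integer constants scattered through the code.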
-
Hi, I just discovered this thread (thanks Erik). We hydrologists have indeed discussed upgrading parameter handling in CTSM, as calibrating models (aka 'history matching' and 'iterative updating' in the ESM world) is part of our DNA, going back to the 1960s. There is an effective and straightforward example of this functionality in SUMMA (https://summa.readthedocs.io/en/latest/input_output/SUMMA_input/#attribute-and-parameter-files). SUMMA is an NCAR-developed modeling framework and notably was the design template invoked in the proposal to evolve CLM to CTSM, mainly related to ideas about separating the numerical solver from the physics. I use SUMMA for nearly all my agency stakeholder-oriented research and applications, and I'm trying to bring CTSM into that arena.

Basically, all meaningful parameters in SUMMA are hierarchically exposed to different levels of specification. The default (for users with no information) is a global specification, provided in a text-file listing of parameters (applied identically everywhere) and their theoretical ranges. These are always read in (as defaults), but are immediately overwritten by the same parameters if specified in an index/type library (e.g., soil types, or veg types, i.e. PFTs) that has a suite of process parameters attached by index to a type number. The libraries can also be text files (they're not large), but are complex enough to warrant netCDF format. They just have a squirrely structure, because the type libraries can offer multiple options (e.g., IGBP-MODIS among the veg-type classification systems), not all of which include consistent parameters. I think this is about as far as CTSM currently goes in terms of parameter control. That's not uncommon: a lot of global models stop about here in terms of user control. Note that parameters are distinct from geophysical attributes (e.g., elevation, slope, veg type, soil type), which are always distributed (and in netCDF).

Anyway, following this type-based 'read/overwrite', all parameters are again overwritten by a fully distributed parameter file, for those parameters a user seeks to calibrate or feels they have information to support going beyond the type libraries. These local parameters are stored in fully distributed 'trialParameter' files (by cell/grid/polygon). This hierarchy is extremely useful: we decide which parameters to adjust, and at what level of granularity, and then the trialParameter file (or library, or global file) can be updated in PPE or optimization runs. Nearly all parameters have a potentially globally distributed scope, whether or not it's invoked by the source the user specifies. Internal data-array allocation can be done in various ways depending on preference (efficiency vs. code complexity); it mainly takes I/O engineering for a variety of static inputs and types.

I think some of us who work on other Fortran models (SUMMA, Noah-MP) could probably implement it if we had Erik or a CTSM code expert looking over our shoulders for guidance, tips, and QC. Like 'history matching', this capability probably doesn't need a lot of re-conceptualizing and invention, versus getting a jump by leveraging what exists. Hydrologists have good examples and long experience with parameter optimization in models. The functionality seems a small lift for CTSM to become a more usable model for water security studies, and I presume the same is true for all the other applications (carbon, climate, ecology, etc).
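In code terms, that hierarchy might look roughly like this in CTSM (purely illustrative; every routine named here is hypothetical):

```fortran
! Sketch of SUMMA-style hierarchical specification for one parameter:
! global default first, then type-library values, then fully distributed
! trial values. Every routine named here is hypothetical.
subroutine load_parameter(name, param)
  use shr_kind_mod, only : r8 => shr_kind_r8
  implicit none
  character(len=*), intent(in)    :: name
  real(r8),         intent(inout) :: param(:)  ! one value per cell/grid/polygon

  ! 1) Global default: one value, applied identically everywhere.
  param(:) = read_global_default(name)           ! hypothetical

  ! 2) Index/type library: overwrite by soil/veg type where defined.
  call overwrite_from_type_library(name, param)  ! hypothetical

  ! 3) trialParameter file: fully distributed values for calibration.
  call overwrite_from_trial_file(name, param)    ! hypothetical
end subroutine load_parameter
```

Parameters a user never touches just keep their defaults, so the hierarchy costs nothing when it isn't used.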
Guoqiang Tang has just completed a powerful emulator-based parameter-estimation workflow for CTSM hydrology, but its global application and value will be limited in CTSM by not having this kind of flexibility. Hence my (renewed) interest! :) It would be great to have an initial high-level discussion to scope out an effort to do this. We can think about how to map what SUMMA does, for instance, into CTSM.
-
An idea from @dlawrenncar at the LMWG meeting today is to be able to easily try an experiment where a parameter comes in with a different dimensionality than is hardcoded on the parameter file, and specifically to allow a parameter to come in as a map by gridcell. With that, it seems like you might as well allow it by gridcell and PFT as well. This would be experimental, letting you easily try different dimensionalities for Perturbed Parameter Ensemble (PPE) type experiments, or just for work with specific parameters. With experimentation, the default parameter might be tuned to work with a specific dimensionality and kept at that level for future default work.
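A minimal sketch of what that could look like at read time, assuming the dimensionality is discovered from the file (error checking elided; the routine name is invented, and for simplicity this ignores CTSM's parallel, decomposed I/O layer):

```fortran
! Sketch: accept a parameter either as a scalar or as a gridcell map,
! depending on its rank on the file.
subroutine read_scalar_or_map(ncid, varname, param)
  use shr_kind_mod, only : r8 => shr_kind_r8
  use netcdf
  implicit none
  integer,          intent(in)  :: ncid
  character(len=*), intent(in)  :: varname
  real(r8),         intent(out) :: param(:)  ! one value per gridcell
  integer  :: varid, ndims, status
  real(r8) :: scalar_val

  status = nf90_inq_varid(ncid, trim(varname), varid)
  status = nf90_inquire_variable(ncid, varid, ndims=ndims)
  select case (ndims)
  case (0)   ! scalar on the file: broadcast to every gridcell
     status = nf90_get_var(ncid, varid, scalar_val)
     param(:) = scalar_val
  case (1)   ! already a gridcell map: read it directly
     status = nf90_get_var(ncid, varid, param)
  case default
     stop 'read_scalar_or_map: unsupported rank'
  end select
end subroutine read_scalar_or_map
```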