numerical oscillations in near surface winds #342
Comments
I should add that @tto061 and I have been iterating back and forth a bit with @vlarson about possible fixes. In addition to the modifications you've mentioned from Thomas, I've created a separate set of modifications to calculate the land surface fluxes using an implicit method. So far this seems to reduce the magnitude of these oscillations in E3SM, but it still needs a couple of tweaks. I'll put up an E3SM tag with these changes next week, since I'm about to take off for the weekend...
Oh, and I should also say that I think the process ordering is only part of the issue. I think that the way we're calculating/applying the fluxes is not numerically stable at a 30-minute time step, not unless the bottom layer or two is thick enough to resist large swings in the fluxes, or there's enough turbulent diffusion to smear the fluxes upwards among several layers. (That said, the fact that CLUBB applies the fluxes in tphysbc doesn't seem to be helping either.)
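To make the stability point concrete, here is a minimal single-column toy (illustrative parameter values only, not CAM/CLUBB settings) contrasting an explicit, state-lagged bulk drag with a semi-implicit one at a 30-minute step and a thin bottom layer:

```python
# Minimal single-column toy: spin-down of the lowest-level wind under bulk surface
# drag, contrasting an explicit (state-lagged) stress with a semi-implicit one.
# All parameter values are illustrative assumptions, not CAM/CLUBB settings.
dt = 1800.0    # physics time step [s]
dz = 20.0      # lowest-layer thickness [m]; try 100.0 and the overshoot disappears
cd = 2.0e-3    # bulk drag coefficient [-]
u0 = 10.0      # initial lowest-level wind [m s-1]

u_exp, u_imp = [u0], [u0]
for _ in range(6):
    ue, ui = u_exp[-1], u_imp[-1]
    # Explicit: kinematic stress cd*|u|*u evaluated at the old wind, then applied.
    # When dt*cd*|u|/dz > 1 the update overshoots and reverses the wind -> 2*dt mode.
    u_exp.append(ue - dt * cd * abs(ue) * ue / dz)
    # Semi-implicit: |u| is lagged, but the linear factor of u is taken at the new
    # time level, so the drag can only decelerate the wind, never reverse it.
    u_imp.append(ui / (1.0 + dt * cd * abs(ui) / dz))

print("explicit :", [round(u, 2) for u in u_exp])  # e.g. [10.0, -8.0, 3.52, ...]
print("implicit :", [round(u, 2) for u in u_imp])  # monotone decay toward zero
```

The semi-implicit form is only in the spirit of the implicit surface-flux calculation mentioned above; the actual E3SM changes are more involved.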
That would suggest a similar time-step instability would in principle occur in CAM5 as well... Any thoughts on the mapping files in the coupler? I thought that the default was aoflux_grid = 'ocn', meaning the winds are first mapped to the ocean grid and then a wind stress is calculated. What wind stress does the atmosphere see? The one that is mapped back to the atmosphere grid, or is it calculated from the surface winds in the atmosphere? @PeterHjortLauritzen this may impact our wind stress bias over the Southern Ocean.
I've just updated the other issue with a link to my code for implicit land fluxes: E3SM-Project/E3SM#4073 (comment) @adamrher I didn't notice your comment earlier. I haven't thought much about the mapping files to be honest, since I've mostly been interested in this issue over land, where the wind oscillations are much worse (although they occur over the ocean somewhat too). The modifications I've made to address the oscillations are also specific to the land model, but it would be possible to do them for the ocean too (and in fact it would probably be much easier than what I've done for the land model). I think you are correct that the atmosphere-ocean fluxes are calculated on the ocean grid (at least they are in E3SM).
From @JulioTBacmeister:
I did a convergence test for the SPARTICUS case out of the box (so not using the nudging that Julio is using). For 32 levs, it runs 2000 tsteps w/o any noticeable oscillations. 64 levs also ran fine w/o any insanity. For 128 & 256 levs, it blew up fairly quickly. These runs are characterized by wild oscillations in the first 100 time-steps.
To be clear, the test I showed used large-scale dynamical forcing derived from re-analysis using finite differences, as well as nudging. This reanalysis-derived forcing is not yet part of the release version of SCAM. The nudging can actually be turned off and oscillations still occur in 32L when reanalysis forcing is used.

In the out-of-the-box SPARTICUS case it's not clear to me what large-scale dynamical tendencies are used for calculating momentum evolution. I don't really recall at the moment how momentum is prognosed in the out-of-the-box scmforecast.F90, but my fuzzy recollection is that it is a pretty minimal calculation.
Is it possible to apply stress tendencies to the lowest dp in each case? If the dp is the same as the dp of the lowest level in L32, that might be a good test of equivalence. Also @adamrher, what's happening in the first 100 timesteps? Is it really spin-up of the column as you say, or is the meteorology different at this time compared to later?
Would this be like dribbling in an incrementally updated stress each macrophysics substep? If so, hmmm. Maybe? I'm leaning towards Sean's solution of providing the CLUBB tendencies to the land model to estimate an implicit surface stress.
I've made some progress on this issue. I pushed all subroutines between just before CLUBB+MG2 and the end of tphysbc to tphysac. As the plot below shows, the oscillations in SPARTICUS are greatly improved (compare w/ the plot earlier up in this thread), supporting my hypothesis that the 2dt oscillations are due to applying the surface stresses in tphysbc. I'll now move on to 3D sims to understand how moving all these routines to tphysac impacts the climate. @mvertens this is one CAM-only fix we are considering instead of Sean's elaborate fixes involving all the component models.
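For what it's worth, one simple way to quantify how much of a saved UBOT/VBOT series sits in the 2dt mode (a hypothetical diagnostic, not anything in the CAM tooling) is to count sign reversals of successive differences:

```python
# Fraction of interior points at which the tendency reverses sign; a value near 1
# means the series is zig-zagging at the 2*dt period.  Hypothetical helper, not
# part of CAM or its diagnostics packages.
import numpy as np

def two_dt_fraction(u, eps=1e-6):
    du = np.diff(np.asarray(u, dtype=float))
    flips = (du[1:] * du[:-1]) < -eps
    return flips.mean() if flips.size else 0.0

t = np.linspace(0.0, np.pi, 50)
smooth = 8.0 * np.cos(t)                             # slowly varying wind
zigzag = smooth + 0.5 * (-1.0) ** np.arange(t.size)  # same wind plus a 2*dt mode
print(two_dt_fraction(smooth), two_dt_fraction(zigzag))  # ~0.0 vs ~1.0
```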
@adamrher This is interesting! Does this also fix the oscillations for 3D sims, or have you just looked at SPARTICUS?
Hi Sean, the tphysac mods do subdue those pathological evening oscillations over S. America in 3D runs. When I look closely at the tphysac solution, I can still see small-amplitude oscillations. These may not be worth fussing about, but I want to try this fix w/ enhanced BL resolution to be sure. And of course, the impact of passing the coupler a non-CLUBB-adjusted state still needs to be assessed.
Nice work. I didn't get much joy from process ordering changes in E3SM, but maybe I should give it another shot if this is working so well in CAM. If I can make a suggestion: if you have problems with moving most of the physics to tphysac, an alternative would be to move everything to tphysbc, including in particular the gravity wave drag and Beljaars schemes. I am curious whether either of these schemes is contributing to the problem in CAM, or if the problem is purely due to the dynamics running between the surface coupling and CLUBB. (Though thinking about it, I'm guessing that these schemes are not active for the SPARTICUS case anyway? It might help in mountainous regions though.)
Thanks Adam,
Really nice result. A few questions:
What is left in tphysbc?
Any idea yet whether the fears about coupling before saturation are justified?
-Julio
Everything from just before CLUBB/MG2 to the very end of tphysbc is moved to tphysac, except radiation (and export calls to the coupler). The CAM initialization and run loops are built assuming radiation is in tphysbc, so it can't be moved easily. I am meeting w/ Mariana today to figure out a hack to move radiation into tphysac. Then I will run 5-year F2000's with these various mods to see if it makes a difference in the climate.
@quantheory it seems possible (plausible?) that downstream effects (OGW+Beljaars) of sending CLUBB a lagged surface stress could be contributing to, or, I guess, even causing the oscillations. I can experiment w/ your suggestion to move everything into tphysbc to be sure. [edit - I think my plots of (ubot, vbot) every macmic cycle suggest that the oscillations are being generated in CLUBB...]
I've completed a 5-yr run for the case where I slice tphysbc in half just before CLUBB, and push this lower half into tphysac just after the vertical_diffusion call (which dumps the non-water-species surface tracer flux into the lowest model level ... my thinking was that by calling CLUBB after, CLUBB can mix some of these tracers upwards). The only thing kept in tphysbc from that second half is radiation (more on this below) and export-to-coupler calls. 5 year amwg diagnostics vs. cntl and vs. obs [edit - I just realized that the vs. cntl is comparing against a different dycore, so some of the results below are not due to these tphysac changes. Will correct these shortly.] [edit edit - All of the sensitivities below are due to comparing against a different dycore. Please scroll down to find the updated analysis.]
@whannah1 you had suggested that you've done similar process ordering experiments with MMF. Do any of these sensitivities sound familiar? Lastly, I have a case where I am able to preserve the process ordering of radiation by pushing it to just after the bc history write (which is now in tphysac), but it crashed last night, I think due to a random Cheyenne burp. So it's running now and I will have those diagnostics up soon. It will be interesting to see if it has any impact on (1)-(4).
@adamrher So you moved all convective parameterizations? I'm confused that tphysbc only includes radiation, since I would think you would naturally leave the deep convection and microphysics in tphysbc, but I guess you wanted to keep the order of the cloud parameterization calls? The results do seem consistent with what I had found. The most troubling signal I found in my experiments was the moistening of the land (soil moisture had a slow steady upward trend at most soil levels). At first this seemed like a good thing because it was reducing dry biases, but then I realized it was actually bad because I was sending conditions to the coupler with saturated or super-saturated air at the surface, which is obviously not something we observe often. Having the convective adjustment before the coupler seems to be a crucially important aspect of the atmospheric process ordering since the dynamics has a tendency to produce super-saturated conditions. That reduction in marine Sc is surprisingly large. I don't remember seeing anything like that in the MMF.
The splitting point was CLUBB, so everything above it in tphysbc remains. That is, tphysbc contains the energy fixer, dry adjustment, deep convection (ZM), and radiation. The reasoning was that I want to move CLUBB in the process ordering to where a PBL scheme typically goes, at the top of tphysac, while maintaining the ordering for processes that occur after CLUBB. There is no easy way to separate CLUBB from MG microphysics, and so where CLUBB goes, so does MG. You bring up a good point about the coupler seeing, and even passing to the surface, supersaturated conditions. I will keep my eye on this. Depending on how big of a problem this is, I had thought about a potential fix, but it will increase the model cost. Basically I could subcycle the supersaturation adjustment in CLUBB w/ MG, i.e., not advance the CLUBB prognostic equations, but rather do that part in tphysac. That would require calling the PDF closure (which contains the supersaturation adjustment) twice as much, since I'd have to do it in both tphysbc and tphysac. Just a thought.
@adamrher I see now, thanks for the detailed explanation. I assume the microphysics is a big part of dealing with super-saturated conditions, so it's likely pretty similar to the MMF tests in that sense. But then you left some of the convective adjustment in tphysbc, so in another sense the differences might be fundamentally different. Have you thought about the possibility of subcycling the mic-mac loop with dynamics? It seems that we could set the mic-mac sub stepping to match the dynamics time step and have some iteration there. It would obviously increase cost, but it's got to be cheaper than just reducing the entire model time step. The physgrid mapping is a pretty big complication for this idea though.
I mistakenly was comparing the new process ordering runs w/ the fv dycore to a cntl run using the se dycore, which turned out to be the main source of the differences discussed. Here are some updated diagnostics, comparing the new process ordering to a cntl, using the fv dycore: Move CLUBB and everything after, except radiation, to tphysac vs. cntl

One caveat is that the cntl is from about ten tags back (cam6_3_006), whereas the new process ordering is from cam6_3_015. I have a cam6_3_015 cntl in the queue, but after reviewing all the commits from 006 to 015, nothing sticks out as climate changing. The only possibility of a climate change might be the updating of the ctsm externals. That said, the climate is basically the same in the altered process ordering runs vs. cntl. I was able to find two plots that exhibit some sensitivity: (1) an increase in dust optical depth. It's tempting to link (2) to (1), but there is no obvious collocation in space. I'll need to do some plotting of my own so I can control the contour ranges. When comparing the dust of the two different variants of the tphysac changes, the global mean is zero, but there are large-magnitude anomalies in either direction. This indicates to me that the contour interval for the anomalies might be exaggerating the changes seen vs. the cntl ... and also that this is a highly variable variable.

I want to ensure that the issue that Walter discussed is not occurring in these sims. Unfortunately, the cntl land files have been scrubbed, so I'll have to wait until the cam6_3_015 cntl gets through the Cheyenne queue. But in the meantime, I've plotted global mean values of TWS (terrestrial water storage) just to see if calling the radiation before sending fields to the coupler has any impact. It's hard to know whether the differences in the trends are significant. If they are, it sort of makes sense: I would think that radiation before the coupler would tend to cause radiative cooling and exacerbate supersaturation.

And just to be clear, the mechanism that Walter describes, in my own words, would be that if you don't do a supersaturation adjustment before the coupler, the lowest model level may have supersaturated vapor, which can contribute to a LHFLX into the surface (presumably the conditions for this to occur are a moist maritime air mass wafting over dry land). If that supersaturated moisture was instead converted to cloud, then it could not be transferred to the land.
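To spell that mechanism out with numbers, here is a rough bulk-formula back-of-envelope (illustrative values of my own choosing, not CLM's actual flux code) showing how a super-saturated lowest level can flip the sign of the bulk latent heat flux:

```python
# Back-of-envelope bulk latent heat flux: if the lowest model level reaches the
# coupler super-saturated, the flux can point *into* the surface (dew/deposition).
# All constants and states below are illustrative assumptions.
import math

rho = 1.2        # air density [kg m-3]
Lv  = 2.5e6      # latent heat of vaporization [J kg-1]
ce  = 1.5e-3     # bulk exchange coefficient [-]
wind = 5.0       # lowest-level wind speed [m s-1]

def qsat(T, p=1.0e5):
    """Saturation specific humidity [kg/kg] from a Bolton-type formula over water."""
    es = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))  # [Pa]
    return 0.622 * es / (p - 0.378 * es)

T_sfc = 290.0                     # surface (skin) temperature [K]
q_sfc = qsat(T_sfc)               # assume a saturated surface for simplicity
q_air_sat   = qsat(292.0)         # lowest level exactly saturated at 292 K
q_air_super = 1.05 * q_air_sat    # 5% super-saturated (no adjustment before coupling)

for q_air in (0.8 * q_air_sat, q_air_super):
    lhflx = rho * Lv * ce * wind * (q_sfc - q_air)   # positive = upward (evaporation)
    print(f"q_air = {q_air:.4f} kg/kg  ->  LHFLX = {lhflx:7.1f} W m-2")
# Sub-saturated air gives an upward LHFLX; the super-saturated case gives LHFLX < 0,
# i.e. moisture deposited onto the land, consistent with the soil wetting described.
```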
@adamrher very interesting, I guess the previous results were reminiscent of my results by coincidence! I forget if I posted my previous results before, but the plot below shows the difference in surface fluxes for some MMF test runs where I moved the CRM call from BC to AC, so the difference is "MMF-AC minus MMF-BC". The flux changes were quite systematic over land, and not just near the coast where you might expect influence from marine air masses moving over land. These results are from 2-year runs. I would naively expect LHFLX to go down from this mechanism since the low-level moisture is higher (see below), but the land became so much wetter that it somehow caused the LHFLX to generally increase. There was also a corresponding decrease in temperature and an increase in surface pressure over land.
@whannah1 thanks for posting your analysis. This is really helpful. That's quite a remarkable trend in H2OSOI. It's really pumping a ton of moisture into the soil in the first 0.5 years of that run. Thankfully my tphysac runs resemble your left plot more than your right plot. I've been thinking that the ZM scheme may be saving the day here, since ZM is quite active at the lowest model level. I think what I'll do is shove ZM into tphysac to see if I can't recreate your MMF-AC climate. It would be helpful to better understand this potential pitfall in process ordering for coupled models. I've been looking through Aaron Donahue's paper on process ordering, and it seems like he isn't really messing around with tphysac (he lumps tphysac, CPL, and dynamics into "0th", and so I think he is really just altering the ordering within tphysbc, but I could be wrong).
That's a great idea to try a case with ZM in AC. Looking forward to seeing those results! I remember asking Donahue about this, and he definitely restricted that paper to only reordering stuff within tphysbc. He agrees that considering the other physics processes would be interesting, but it was a bit beyond his scope at the time.
@adamrher super interesting stuff. I assume you didn't change where the data was sampled for the history files, right?
I kept the history sampled after CLUBB/MG2, so the sampling has been moved into tphysac for all experiments other than the control.
That could change the picture a lot, except for the land output, which shouldn't be affected.
As mentioned in the comment I just made here, I have finally gotten around to reproducing this in E3SM. Changing the process order is fairly effective at suppressing the oscillations (also, doing both the process order change and using an implicit calculation for surface fluxes pretty much gets rid of them entirely). Haven't looked at climate or the effect on mean winds though; I've only really looked at the numerical stability aspect of this for the "worst" (i.e. the roughest/most oscillatory) grid cells.
This should've been closed w/ the cam_dev PR.
This issue is motivated by an email correspondence and the related E3SM issue E3SM-Project/E3SM#4073. Please see the E3SM issue for the detailed history of the debugging.
My understanding is that the numerical oscillations are a byproduct of applying the surface wind stress to CAM in tphysbc (i.e., in CLUBB), where that stress is derived from the state at the prior time step (a state taken in between tphysbc and tphysac of the previous time step). That is, the surface stress being applied to CAM is derived from winds that are inconsistent with the current winds at the time of the CLUBB call.
@tto061 has come up with a fix that combines (1) exporting u & v for use in the coupler at the top of tphysbc, and (2) massaging the wind stresses cam_in%wsx/y at the clubb_intr.F90 level to be consistent with the current model winds. The only drawback is that (2) means the surface stress seen by the atmosphere is not the same as that seen by the other model components.
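A minimal sketch of one way to read (2), assuming the stress follows a bulk |u|·u drag law and opposes the wind (hypothetical helper name and values; the actual change in clubb_intr.F90 may differ):

```python
# Rescale a lagged surface stress so it is consistent with the current lowest-level
# winds, assuming |tau| scales with |u|**2 and tau is anti-parallel to the wind.
# Sign conventions for cam_in%wsx/wsy in CAM may differ; this is a sketch only.
import math

def rescale_stress(wsx, wsy, u_prev, v_prev, u_now, v_now, eps=1e-12):
    s_prev = max(math.hypot(u_prev, v_prev), eps)
    s_now = math.hypot(u_now, v_now)
    if s_now < eps:
        return 0.0, 0.0
    tau_mag = math.hypot(wsx, wsy) * (s_now / s_prev) ** 2  # |tau| ~ |u|^2
    return -tau_mag * u_now / s_now, -tau_mag * v_now / s_now

# Lagged stress computed from a 10 m/s zonal wind, re-aimed and re-scaled after the
# wind has turned and weakened to (6, 2) m/s:
print(rescale_stress(-0.05, 0.0, u_prev=10.0, v_prev=0.0, u_now=6.0, v_now=2.0))
```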
Tagging our in-house stress experts @JulioTBacmeister and @siouxyalie, and Sean (@quantheory), since he seems to have discovered this issue. The issue seems to arise with high vertical resolution in the PBL, although Julio is seeing similar oscillations in his SCAM tests forced by reanalysis (using standard 32 levs?).