Replies: 5 comments 7 replies
-
I do not think we need to have in memory all depth values for all structures for all scenarios etc.
I would advise considering the intent of the compute and doing it one hydraulic dataset at a time - for one probability hdf file combination for all structures - and then store that result and move on to the next. That way we can reduce the overall memory burden necessary for the compute at any given time.
-
Your advice contradicts the algorithm. Have you had a chance to read it?
-
Do you have thoughts on how to replicate the algorithm in a way that would be mathematically equivalent?
One quick note: a scenario is not a dimension of what I've described.
Thanks!
-
Extrapolation is really where an event-by-event approach wouldn't get us the result we need. I think the only way to get at that would be to identify the delta at the index location, apply it to the .5 event at the structure, and increment stage at the same pace until reaching the .5 event. So you hold the .5 event in memory for several increments and calculate stage off of that. From .5 to .002, use the event stage. And then do the same extrapolation beyond the .002 event. Maybe that gets us something equivalent to what's described in CPD-72?
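As a minimal sketch of that idea - every name here is hypothetical, not the real API, and the same pattern would apply beyond the .002 event:

```csharp
// A minimal sketch of the proposed tail extrapolation, assuming stages are
// ordered from the .9999 to the .5 ordinate. Only the .5 event stage at the
// structure is held in memory; the tail is built from index-location deltas.
public static class TailExtrapolation
{
    public static double[] ExtrapolateLowerTail(
        double[] indexStages,       // index location stages for p = .9999 ... .5
        double structureStageAtP50) // stage at the structure for the .5 event
    {
        int n = indexStages.Length;
        double indexStageAtP50 = indexStages[n - 1];
        var structureStages = new double[n];
        for (int i = 0; i < n; i++)
        {
            // increment stage at the structure at the same pace as at the
            // index location, working outward from the .5 event
            double delta = indexStageAtP50 - indexStages[i];
            structureStages[i] = structureStageAtP50 - delta;
        }
        return structureStages;
    }
}
```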
-
Another piece that I neglected in the discussion is the between-event interpolation. Stage at the structure increases at the same relative rate as at the index location from one frequency to the next, so we'd need to hold two events in memory at a time when operating between .5 and .002.
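A minimal sketch of that interpolation, with hypothetical names - the two bracketing events are all that needs to be in memory:

```csharp
// Interpolate stage at a structure between two events (f1, f2), assuming
// stage at the structure rises at the same relative rate as at the index
// location between those two frequencies.
public static class BetweenEventInterpolation
{
    public static double StructureStage(
        double indexStage,                                // target stage at the index location
        double indexStageF1, double indexStageF2,         // index stages at events f1, f2
        double structureStageF1, double structureStageF2) // structure stages at f1, f2
    {
        // fraction of the way from event f1 to event f2 at the index location
        double fraction = (indexStage - indexStageF1) / (indexStageF2 - indexStageF1);
        // apply the same relative rate of rise at the structure
        return structureStageF1 + fraction * (structureStageF2 - structureStageF1);
    }
}
```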
-
The steps for computing stage-damage are clearly documented in Appendix E of CPD-72. I will not regurgitate them here. Instead, I will try to relate the steps to how we'll carry out the compute. We should aim to be as close to CPD-72 as possible. There are gains for everybody in doing so.
The object we compute stage-damage results from should be something like a StageDamageSet. A stage-damage set represents the stage-damage functions for a given plan-year combination. Hence, there should be a function for every impact area-damage category-asset category combination in a given set.
A StageDamageSet should be constructed using a structure inventory, occupancy types, a hydraulic dataset, and either a flow-frequency function with a rating curve or just a stage-frequency function. The H&H functions give us the range of stages we'll use at the index location and the delta by which we'll extrapolate the stage-frequency at the structure. We'll also need the price index and analysis year.
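As a rough skeleton of that construction - every type and member name here is an assumption for illustration, not the actual HEC-FDA types:

```csharp
using System.Collections.Generic;

// Illustrative placeholders for the real types; names are assumptions.
public class StructureInventory { }
public class OccupancyType { }
public class HydraulicDataset { }
public class PairedData { }

// One StageDamageSet per plan-year combination, holding everything
// the compute needs.
public class StageDamageSet
{
    private readonly StructureInventory _inventory;
    private readonly List<OccupancyType> _occupancyTypes;
    private readonly HydraulicDataset _hydraulicDataset;
    private readonly PairedData _stageFrequency; // or flow-frequency + rating curve
    private readonly double _priceIndex;
    private readonly int _analysisYear;

    public StageDamageSet(StructureInventory inventory,
                          List<OccupancyType> occupancyTypes,
                          HydraulicDataset hydraulicDataset,
                          PairedData stageFrequency,
                          double priceIndex,
                          int analysisYear)
    {
        _inventory = inventory;
        _occupancyTypes = occupancyTypes;
        _hydraulicDataset = hydraulicDataset;
        _stageFrequency = stageFrequency;
        _priceIndex = priceIndex;
        _analysisYear = analysisYear;
    }
}
```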
A StageDamageSet compute should produce a list of uncertain paired data objects, which should probably live in a complex object, something like metrics.StageDamageSetResults, with a name or ID for that particular set. Manual stage-damage functions exist as a similar list, but with no complex object. The uncertain paired data will be stage vs. a distribution of dollars, where the distribution is a normal distribution.

An alternative would be to use the histogram directly instead of creating a normal distribution from the histogram's summary statistics. The only trick there is that I don't think uncertain paired data can hold a histogram now that the histogram is no longer an IDistribution; we could probably make that happen using an overloaded constructor. Using just the histogram seems like a worthy change in the compute, but I am not sure of the consequences for the field's experience in the transition from 1.4.3 to 2.0. We'll compute until convergence, and convergence assumes a normal distribution, so this together with the central limit theorem should suggest a small impact.

We should think about how the return type fits into the ImpactAreaScenarioSimulation, which expects a list of UPD objects. So we'll need a method in metrics.StageDamageSetResults that accepts an impact area ID and returns a list of all UPD for that impact area, for use in the ImpactAreaScenarioSimulation builder.
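A sketch of that accessor's shape - member and method names are assumptions, and UncertainPairedData stands in for the real UPD type:

```csharp
using System.Collections.Generic;
using System.Linq;

// Placeholder for the real uncertain paired data type.
public class UncertainPairedData { }

// Results object with the per-impact-area accessor described above.
public class StageDamageSetResults
{
    public string Name { get; }
    private readonly List<(int ImpactAreaId, UncertainPairedData Function)> _functions;

    public StageDamageSetResults(
        string name, List<(int ImpactAreaId, UncertainPairedData Function)> functions)
    {
        Name = name;
        _functions = functions;
    }

    // Everything the ImpactAreaScenarioSimulation builder needs for one
    // impact area: a plain list of UPD objects.
    public List<UncertainPairedData> ResultsForImpactArea(int impactAreaId)
    {
        return _functions.Where(f => f.ImpactAreaId == impactAreaId)
                         .Select(f => f.Function)
                         .ToList();
    }
}
```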
The first step in the compute should be to compute the range of stages at each structure. The stage-frequency at the structure from the hydraulic dataset will need to be extrapolated to the .0001 and .9999 events by the same delta (stage of .5 - stage of .0001) as the stage-frequency at the index location, and with the same number of coordinates. We'll want to construct this once per compute. It will exist as a list of paired data - I think we'll want structure stage vs. index location stage, using index location stage as something like an index. If we are not provided H&H functions, we can complain loudly and use just the hydraulic dataset, confining the results to prob = (0.5, ..., 0.002).
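A sketch of that once-per-compute setup, with hypothetical names and a stand-in for the real paired data type:

```csharp
using System.Collections.Generic;

// Minimal stand-in; the real paired data type is assumed to pair x/y arrays.
public record StagePairs(double[] IndexStages, double[] StructureStages);

public static class StageRangeSetup
{
    // Build, once per compute, structure stage vs. index location stage for
    // every structure, with index location stage acting as the index.
    // Extrapolation to the .0001 and .9999 ordinates is assumed done already.
    public static Dictionary<int, StagePairs> BuildStageIndex(
        double[] indexStages,                      // shared ordinates
        Dictionary<int, double[]> structureStages) // keyed by structure ID
    {
        var lookup = new Dictionary<int, StagePairs>();
        foreach (var kvp in structureStages)
        {
            lookup[kvp.Key] = new StagePairs(indexStages, kvp.Value);
        }
        return lookup;
    }
}
```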
We should then enter the iterative part of the compute. I think the outer loop should go over index station stage. For each stage, we'll want to do the following many times:
Sample the structure inventory (which involves checking whether the analysis year is before the year in service), sample depths above first floor elevation, use the depths to calculate sampled percent damage, and then use the sampled inventory to calculate damage for each asset category. If a record represents multiple structures, multiply the damage accordingly. Add the damage to its impact area - damage category - asset category histogram of damages for that index stage.
We can do this until convergence, then move on to the next stage. We'll then have a list of UPD (index station stage vs. damage) for each impact area - damage category - asset category combination. We could keep track of the impact area by using the Name property in CurveMetaData. A pseudocode-level sketch of this loop is below.
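Every name in this sketch is an assumption for illustration, not the actual implementation:

```csharp
using System.Collections.Generic;

// Pseudocode-level sketch of the iterative compute described above.
public static class StageDamageCompute
{
    public static void ComputeSet(double[] indexStationStages)
    {
        foreach (double indexStage in indexStationStages)
        {
            // one damage histogram per impact area-damage category-asset
            // category combination, keyed here by a composite string
            var histograms = new Dictionary<string, List<double>>();
            bool converged = false;
            while (!converged)
            {
                // 1. sample the structure inventory (dropping structures whose
                //    year in service is after the analysis year)
                // 2. sample depth above first floor elevation per structure
                // 3. depth -> sampled percent damage -> dollar damage per
                //    asset category, scaled if a record represents
                //    multiple structures
                // 4. add each damage to its histogram for this index stage
                converged = HasConverged(histograms);
            }
            // summarize each histogram into the UPD ordinate for this stage
        }
    }

    private static bool HasConverged(Dictionary<string, List<double>> histograms)
    {
        // convergence test assuming a normal distribution, per the
        // discussion above; placeholder for the real criterion
        return true;
    }
}
```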
This disregards the SID Reference Flood stuff. I am not sure that it's used at all anymore; if it is, I think we should consider it for the next version. This also disregards structure modules, which we plan to handle by allowing the user to have multiple structure inventories.
Thoughts?