Flexible, so that they can be linked to different datasets provided by RDL
Workplan and Timeline
Mat and Jasper to split tasks and coordinate releases on new branch dev_push.
First we optimise and test a flood-only version, which is richer in options than the other hazards.
Once solid and tested, we try to split the components that can be used for multi-hazard from hazard-specific configuration.
Then we produce a launcher that can:
wrap everything in a visual interface and guide the user in a friendly way (limit manual data preparation, avoid script editing)
execute the program in batch (sequential for different parameters)
produce maps and charts once results are calculated
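The batch-execution step above can be sketched as a simple sequential loop over parameter combinations. This is a minimal illustration; `run_scenario` and its parameters are placeholders, not the actual run_analysis.py API.

```python
# Hypothetical sketch of the launcher's batch mode: run each parameter
# combination sequentially. run_scenario is a stand-in for the real
# analysis entry point.
from itertools import product

def run_scenario(country, hazard, return_period):
    # Placeholder for the real analysis call; returns a result label.
    return f"{country}-{hazard}-RP{return_period}"

def run_batch(countries, hazards, return_periods):
    """Execute analyses sequentially for every parameter combination."""
    results = []
    for country, hazard, rp in product(countries, hazards, return_periods):
        results.append(run_scenario(country, hazard, rp))
    return results
```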
New GUI for selecting hazard, and then for running the specific hazard screening. Only flood is included for now, placeholder for cyclones and heat stress.
Move away from the old hazard notebooks (their analysis code is superseded) and focus on the new approach (dynamic GUI): different versions of the same GUI are generated dynamically based on each hazard's hzd_config.py file
For now, the choice leads to hazard-specific static GUI notebooks; let's finalise floods first, then extend to other hazards using the config-file approach.
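The config-driven GUI idea can be sketched as follows. The structure of `FLOOD_CONFIG` below is an assumption about what hzd_config.py might expose (the real file may differ), and the "controls" are plain dicts standing in for actual widgets (e.g. ipywidgets dropdowns).

```python
# Sketch: generate GUI controls dynamically from a per-hazard config.
# FLOOD_CONFIG is an illustrative guess at a hzd_config.py structure.
FLOOD_CONFIG = {
    "hazard": "flood",
    "options": {
        "flood_type": ["fluvial", "pluvial", "coastal"],
        "return_periods": [10, 100, 1000],
        "analysis_type": ["classes", "function"],
    },
}

def build_controls(config):
    """Turn each config option into a dropdown-like control spec.
    A GUI layer would map these specs onto real widgets."""
    return [
        {"name": name, "kind": "dropdown", "choices": choices}
        for name, choices in config["options"].items()
    ]
```

A new hazard would then only need its own config file; the GUI code itself stays hazard-agnostic.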
Preview results as map and EAI charts after analysis is completed
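For the EAI charts, a common convention is to compute Expected Annual Impact by integrating impact over exceedance probability (p = 1/RP) across the analysed return periods. This is a generic sketch of that convention; the tool's exact formula may differ.

```python
def expected_annual_impact(rp_impacts):
    """Approximate EAI by trapezoidal integration of impact over
    exceedance probability (p = 1/RP). One common convention; not
    necessarily the CCDR tools' exact method.
    rp_impacts: dict mapping return period -> impact."""
    # Sort by exceedance probability, largest first (most frequent event).
    pts = sorted(((1.0 / rp, imp) for rp, imp in rp_impacts.items()),
                 reverse=True)
    eai = 0.0
    for (p1, i1), (p2, i2) in zip(pts, pts[1:]):
        eai += 0.5 * (i1 + i2) * (p1 - p2)
    return eai
```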
FUNCTIONALITIES
Country selector: dropdown menu with all country names as an autocomplete field; resolve the argument as ISO_A3 (based e.g. on a csv with global [name, ISO_A3, WB_REGION])
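The name-to-ISO_A3 resolution could look like the sketch below. The csv columns follow the [name, ISO_A3, WB_REGION] layout noted above; the sample rows are illustrative, not the real global file.

```python
import csv
import io

# Minimal sketch of the country-name -> ISO_A3 resolver. The sample
# rows are placeholders for the real global lookup file.
SAMPLE_CSV = """name,ISO_A3,WB_REGION
Kenya,KEN,AFE
Nepal,NPL,SAR
Peru,PER,LCR
"""

def load_country_lookup(text):
    """Build a {country name: ISO_A3} mapping from csv text."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["name"]: row["ISO_A3"] for row in reader}

def resolve_iso_a3(name, lookup):
    """Resolve the country name chosen in the dropdown to its ISO_A3 code."""
    return lookup[name]
```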
Connect GUI.py to run_analysis.py (the run-analysis button simply runs try.py at the moment). Rename try.py to manual_run.py, used just for testing purposes; it might be deleted later.
Ensure parallel processing works as expected, and that all stages of progress from run_analysis are printed in the interface
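The "parallel work plus progress reporting" pattern can be sketched as below. The worker and messages are illustrative stand-ins for the real run_analysis stages; the real code may use processes rather than threads for CPU-bound raster work.

```python
# Sketch: run units of work in parallel and forward progress messages
# to the interface as each one completes. process_tile is a placeholder
# for one real processing stage.
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_tile(tile_id):
    # Placeholder for one unit of hazard/exposure processing.
    return tile_id, tile_id * 2

def run_parallel(tile_ids, progress=print):
    """Process tiles in parallel; call progress() after each finishes."""
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(process_tile, t): t for t in tile_ids}
        done = 0
        for fut in as_completed(futures):
            tile, value = fut.result()
            results[tile] = value
            done += 1
            progress(f"[{done}/{len(futures)}] tile {tile} finished")
    return results
```

In the GUI, `progress` would append to the interface's log widget instead of printing.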
Auto-fetching and processing of ADM, POP, BU.
AGR will need pre-clipped input data on S3; the original source data is too large to fetch and process on the fly.
Link to notebook to pre-process FATHOM v3 country dataset (merge tiles)
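Before merging, the pre-processing notebook needs to know which tiles cover the country. The sketch below assumes a 1-degree "n10e030"-style tile naming scheme; that naming convention is an assumption about the dataset layout, not a confirmed FATHOM v3 spec.

```python
import math

# Sketch: list the tile names whose 1-degree cell intersects a country
# bounding box, prior to merging them. The "n10e030" naming scheme is
# an assumption, not a confirmed dataset spec.
def tile_name(lat, lon):
    ns = "n" if lat >= 0 else "s"
    ew = "e" if lon >= 0 else "w"
    return f"{ns}{abs(lat):02d}{ew}{abs(lon):03d}"

def tiles_for_bbox(min_lon, min_lat, max_lon, max_lat):
    """List tile names for every 1-degree cell intersecting the bbox."""
    names = []
    for lat in range(math.floor(min_lat), math.ceil(max_lat)):
        for lon in range(math.floor(min_lon), math.ceil(max_lon)):
            names.append(tile_name(lat, lon))
    return names
```

The actual mosaicking of the selected tiles would then be done with a raster library (e.g. rasterio's merge), which is out of scope for this sketch.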
Preview impact functions for each exposure category (exp) based on the country's georegion.
Save output as a multi-tab Excel file with an auto-summary, and as a multi-layer GeoPackage.
Option to manually edit the damage functions
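A damage (impact) function with a hook for manual edits can be sketched as linear interpolation over editable curve points. The curve values below are illustrative placeholders, not the real georegion-specific functions used by the tool.

```python
# Sketch: a depth-damage function built from editable points.
# The sample points are placeholders, not real calibrated curves.
def make_damage_function(depths, factors):
    """Return f(depth) -> damage factor via linear interpolation,
    clamped to the first/last factor outside the given range."""
    pairs = sorted(zip(depths, factors))
    def damage(depth):
        if depth <= pairs[0][0]:
            return pairs[0][1]
        if depth >= pairs[-1][0]:
            return pairs[-1][1]
        for (d0, f0), (d1, f1) in zip(pairs, pairs[1:]):
            if d0 <= depth <= d1:
                return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)
    return damage

# A "manual edit" in the GUI would simply rebuild the function from
# the user-adjusted points.
default_curve = make_damage_function([0.0, 0.5, 1.0, 2.0],
                                     [0.0, 0.2, 0.4, 0.7])
```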
STRUCTURE AND REPO
input_utils.py includes libraries to fetch and pre-process input data; these are called only if input_data is not already present in the folder.
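The fetch-only-if-missing behaviour is a simple caching pattern; a minimal sketch is below. The function name and `fetcher` callback are illustrative, not the real input_utils.py API.

```python
from pathlib import Path

# Sketch of the "only fetch if not already present" pattern: the
# helper names here are assumptions, not the real input_utils API.
def get_input(name, folder, fetcher):
    """Return the cached file path, calling fetcher to download the
    data only when the file is missing from the input folder."""
    path = Path(folder) / name
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(fetcher(name))
    return path
```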
GH repo: set branch protection rules for pushes to main (dev_push has none)
dev_push branch: rename files and restructure folders.
Objective
Take the existing CCDR tools and make them flexible, so that they can be linked to different datasets provided by RDL.
Tasks
Option to concatenate (batch) several analyses, e.g. several countries one after the other.

Timeline