CREDIT is a research platform to train and run neural networks that can emulate full NWP models by predicting the next state of the atmosphere given the current state. The platform is still under very active development. If you are interested in using or contributing to CREDIT, please reach out to David John Gagne (dgagne@ucar.edu).
Currently, the framework for running miles-credit in parallel is centered around NSF NCAR's Derecho HPC. Derecho requires building several miles-credit dependencies locally, including PyTorch, to enable a correct MPI configuration. To begin, create a clone of the pre-built miles-credit environment, which contains compatible versions of torch, torchvision, numpy, and others:
module purge
module load ncarenv/23.09 gcc/12.2.0 ncarcompilers cray-mpich/8.1.27 cuda/12.2.1 cudnn/8.8.1.3-12 conda/latest
conda create --name credit-derecho --clone /glade/derecho/scratch/benkirk/derecho-pytorch-mpi/envs/credit-pytorch-v2.3.1-derecho-gcc-12.2.0-cray-mpich-8.1.27
Going forward, take care when installing new packages so that PyTorch and the other locally built miles-credit dependencies are not overridden. Next, grab the latest version of miles-credit from GitHub (assuming no changes to the locally built dependencies):
conda activate credit-derecho
git clone git@github.com:NCAR/miles-credit.git
cd miles-credit
and then install it without dependencies:
pip install --no-deps .
From this point on, when adding new packages, use the no-dependencies option whenever possible so that the locally built PyTorch stack is left untouched.
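For example, to add a hypothetical extra package (some-package is a placeholder name) without pulling in its dependencies:
pip install --no-deps some-package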
For a standard installation (outside of Derecho), clone from the miles-credit GitHub page:
git clone git@github.com:NCAR/miles-credit.git
cd miles-credit
Install dependencies using the environment_gpu.yml file (also compatible with CPU-only machines):
Note: if you are on NCAR HPC, we recommend installing to your home directory. To do this, simply append -p /glade/u/home/$USER/[your_install_dir]/ to the conda/mamba env create command below:
mamba env create -f environment_gpu.yml
conda activate credit
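As a sketch of the home-directory variant described in the note above (the install directory is a placeholder to fill in), the create and activate steps would look like:
mamba env create -f environment_gpu.yml -p /glade/u/home/$USER/[your_install_dir]/
conda activate /glade/u/home/$USER/[your_install_dir]/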
CPU-only install:
mamba env create -f environment_cpu.yml
conda activate credit
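As an optional sanity check after installing, the following one-liner confirms that the GPU build of PyTorch can see a device (run it on a GPU node; it only checks device visibility, not miles-credit itself):
python -c "import torch; print(torch.cuda.is_available())"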
Some metrics use WeatherBench2 for computation. Install with:
git clone git@github.com:google-research/weatherbench2.git
cd weatherbench2
pip install .
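A quick, optional check that WeatherBench2 is importable from the active environment:
python -c "import weatherbench2"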
To train a model, pass a model configuration file to the training script, for example:
python applications/train.py -c config/unet.yml
python applications/train.py -c config/vit.yml
Or use a fancier variation:
python applications/train.py -c config/wxformer_1dg_test.yml
To run as a batch job, adjust the PBS settings in a configuration file for either Casper or Derecho. Then, submit the job via:
python applications/train.py -c config/wxformer_1dg_test.yml -l 1
The launch script may be found in the save location that you set in the configuration file. The automatic launch script generation will take care of MPI calls and other complexities if you are using more than one GPU.
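Once submitted, the job can be monitored with the standard PBS tools on Casper or Derecho, for example:
qstat -u $USER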
The predict field in the config file allows one to specify start and end dates for rolling out a trained model. To generate a forecast:
python applications/rollout_to_netcdf.py -c config/wxformer_1dg_test.yml
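The rollout writes forecasts to netCDF in the save location set in the config file. One way to inspect an output file (the filename below is a placeholder for whatever the rollout actually produced) is:
python -c "import xarray as xr; print(xr.open_dataset('forecast.nc'))"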
This software is based upon work supported by the NSF National Center for Atmospheric Research, a major facility sponsored by the U.S. National Science Foundation under Cooperative Agreement No. 1852977 and managed by the University Corporation for Atmospheric Research. Any opinions, findings and conclusions or recommendations expressed in this material do not necessarily reflect the views of NSF. Additional support for development was provided by The NSF AI Institute for Research on Trustworthy AI for Weather, Climate, and Coastal Oceanography (AI2ES) with grant number RISE-2019758.