Longer time periods for phase 1? #4

Open · DamienIrving opened this issue Jul 12, 2023 · 2 comments

@DamienIrving (Member)

After yesterday's ACS bias correction workshop (where the problems associated with overfitting were discussed), it occurs to me that we restricted ourselves to considering only data from 1980 onwards (and thus training periods of only 20 years) because of the availability of observational data for variables like wind speed and solar radiation.

Now that we've decided to only look at temperature and rainfall in phase 1 of the intercomparison (both of which have a long AGCD timeseries), we could use longer 30-year periods such as 1955-1984 (centred on "1970"), 1985-2014 (centred on "2000") and 2070-2099 (centred on "2085").

The three bias correction tasks would then become:

  1. Historical: Produce bias corrected data for the 1985-2014 period, using 1955-1984 as a training period.
  2. Projection: Produce bias corrected data for the 2070-2099 period, using 1985-2014 as a training period.
  3. Cross validation: Produce bias corrected data for even years from 1955-2014 (i.e. every second year), using odd years from 1955-2014 as training data.

I guess we could go even longer than 30-year periods, but I can't recall many bias correction papers using more than 30 years for training.
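As a rough illustration of what those training/target splits could look like in code (a minimal sketch only, using xarray with synthetic daily data standing in for the real model/AGCD timeseries; the variable name and exact date bounds are just placeholders):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic daily timeseries covering 1955-2014, standing in for a model variable.
time = pd.date_range("1955-01-01", "2014-12-31", freq="D")
da = xr.DataArray(np.random.rand(time.size), coords={"time": time}, dims="time", name="pr")

# Task 1 (historical): train on 1955-1984 ("1970"), bias correct 1985-2014 ("2000").
training = da.sel(time=slice("1955-01-01", "1984-12-31"))
target = da.sel(time=slice("1985-01-01", "2014-12-31"))

# Task 2 (projection) follows the same pattern, with 1985-2014 as the training
# slice and 2070-2099 as the target slice.
```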

@DamienIrving (Member Author)

It turns out the CORDEX “evaluation” experiment (i.e. the ERA5 downscaling) only goes back to 1979, which I’m assuming is why we went with the 40-year period 1980-2019 for training and assessment of the current climate (i.e. 20 years each).

The CORDEX historical experiment goes back to 1960, so we could use 1960-2019 for that (i.e. 30 years each), but then we’d have an inconsistency between the evaluation and historical experiments.

@DamienIrving (Member Author)

So an option could be to ditch the "evaluation" experiment and just look at the historical experiment (maybe for two different GCMs) using 30-year periods.

The four tasks would then be as follows:

  1. Historical: Produce bias corrected data for the 1990-2019 period, using 1960-1989 as a training period.
  2. Projection: Produce bias corrected data for the 2070-2099 period, using 1990-2019 as a training period.
  3. Cross validation: Produce bias corrected data for even years from 1960-2019 (i.e. every second year), using odd years from 1960-2019 as training data (see the sketch after this list).
  4. Benchmark: Produce bias corrected data for the 1990-2019 period, using 1990-2019 as a training period.
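For the cross validation task, here is a minimal sketch of the odd/even year split (again using synthetic data in place of the 1960-2019 record; the boolean selection on year is just one way to do it):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic daily timeseries covering 1960-2019, standing in for a model variable.
time = pd.date_range("1960-01-01", "2019-12-31", freq="D")
da = xr.DataArray(np.random.rand(time.size), coords={"time": time}, dims="time", name="tasmax")

# Task 3 (cross validation): train on odd years, produce/assess bias corrected
# data for even years.
cv_train = da.sel(time=da["time"].dt.year % 2 == 1)
cv_assess = da.sel(time=da["time"].dt.year % 2 == 0)

# Task 4 (benchmark): training and target periods are the same 1990-2019 window.
benchmark = da.sel(time=slice("1990-01-01", "2019-12-31"))
```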
