@@ -30,7 +30,7 @@ consistency tests and they verify whether a forecast is consistent with an obser
 that can be used to compare the performance of two (or more) competing forecasts.
 PyCSEP implements the following evaluation routines for grid-based forecasts. These functions are intended to work with
 :class:`GriddedForecasts<csep.core.forecasts.GriddedForecast>` and :class:`CSEPCatalogs<csep.core.catalogs.CSEPCatalog>`.
-Visit the :ref:`catalogs reference<catalogs-reference>` and the :ref:`forecasts reference<forecasts-reference>` to learn
+Visit the :ref:`catalogs reference<catalogs-reference>` and the :ref:`forecasts reference<forecast-reference>` to learn
 more about how to import your forecasts and catalogs into PyCSEP.
 
 .. note::
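These test functions take a gridded forecast and an observed catalog and return an evaluation result object. The following is a minimal sketch of one such evaluation; the forecast file path and the ComCat query window are illustrative assumptions.

.. code-block:: python

    # Minimal sketch: load a gridded forecast, fetch an observed catalog,
    # and run the Poisson number test. The file path and query bounds
    # below are illustrative assumptions.
    import csep
    from csep.core import poisson_evaluations
    from csep.utils.time_utils import strptime_to_utc_datetime

    # Hypothetical forecast file in pyCSEP's ASCII gridded format.
    forecast = csep.load_gridded_forecast('helmstetter_aftershock.dat')

    # Pull observed events from ComCat for an assumed evaluation window.
    start_time = strptime_to_utc_datetime('2010-01-01 00:00:00.0')
    end_time = strptime_to_utc_datetime('2015-01-01 00:00:00.0')
    catalog = csep.query_comcat(start_time, end_time, min_magnitude=4.95)

    # Restrict the catalog to the forecast region before evaluating
    # (see "Preparing evaluation catalog" below).
    catalog = catalog.filter_spatial(forecast.region)

    # Compare the observed event count against the forecast's expectation.
    result = poisson_evaluations.number_test(forecast, catalog)
    print(result.quantile)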
@@ -105,6 +105,8 @@ Consistency tests
     magnitude_test
     pseudolikelihood_test
     calibration_test
+    resampled_magnitude_test
+    MLL_magnitude_test
 
 Publication reference
 =====================
@@ -114,13 +116,16 @@ Publication reference
 3. Magnitude test (:ref:`Savran et al., 2020<savran-2020>`)
 4. Pseudolikelihood test (:ref:`Savran et al., 2020<savran-2020>`)
 5. Calibration test (:ref:`Savran et al., 2020<savran-2020>`)
+6. Resampled Magnitude Test (Serafini et al., in-prep)
+7. MLL Magnitude Test (Serafini et al., in-prep)
 
 ****************************
 Preparing evaluation catalog
 ****************************
 
 The evaluations in PyCSEP do not implicitly filter the observed catalogs or modify the forecast data when called. For most
 cases, the observation catalog should be filtered according to (see the sketch after this list):
+
  1. Magnitude range of the forecast
  2. Spatial region of the forecast
  3. Start and end-time of the forecast
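A hedged sketch of this filtering, reusing the ``forecast`` and ``catalog`` objects from the sketch above; the ``min_magnitude``, ``start_time``, and ``end_time`` attributes follow the :class:`GriddedForecast` API, but verify them against your installed version.

.. code-block:: python

    # Sketch: restrict the observed catalog to the forecast's magnitude
    # range, spatial region, and time horizon before evaluation.
    from csep.utils.time_utils import datetime_to_utc_epoch

    # 1. Magnitude range of the forecast
    catalog = catalog.filter(f'magnitude >= {forecast.min_magnitude}')

    # 2. Spatial region of the forecast
    catalog = catalog.filter_spatial(forecast.region)

    # 3. Start and end time of the forecast; catalog origin times are
    #    stored as epoch milliseconds.
    start_epoch = datetime_to_utc_epoch(forecast.start_time)
    end_epoch = datetime_to_utc_epoch(forecast.end_time)
    catalog = catalog.filter([f'origin_time >= {start_epoch}',
                              f'origin_time < {end_epoch}'])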