The t-test requires an alpha value to create a confidence interval (e.g., 5%):

pycsep/csep/core/poisson_evaluations.py, lines 14 to 15 in 5f84ea9:

```python
def paired_t_test(forecast, benchmark_forecast, observed_catalog,
                  alpha=0.05, scale=False):
```
The test returns an `EvaluationResult`. However, this alpha value is then forgotten, which causes the `EvaluationResult` plotting to require recalling the original value of alpha with which the t-test was carried out (line 1718 in 5f84ea9):

```python
percentile = plot_args.get('percentile', 95)
```
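
To make the disconnect concrete, here is a tiny self-contained sketch (the `plot_args.get` fallback mirrors the line above; the alpha of 0.01 and the empty dictionary are made up for illustration):

```python
# Hypothetical illustration: the test's alpha and the plot's percentile are
# two independent pieces of state that the user must keep in sync by hand.
alpha = 0.01                                  # alpha passed to paired_t_test(...)
plot_args = {}                                # caller forgets to pass 'percentile'
percentile = plot_args.get('percentile', 95)  # plotting silently falls back to 95%

print(percentile)         # 95  -> plotted interval no longer matches the test
print((1 - alpha) * 100)  # 99.0
```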
I am not sure whether it would be better to add a new attribute `alpha` to the resulting `EvaluationResult`:
pycsep/csep/core/poisson_evaluations.py, lines 46 to 54 in 5f84ea9:

```python
result = EvaluationResult()
result.name = 'Paired T-Test'
result.test_distribution = (out['ig_lower'], out['ig_upper'])
result.observed_statistic = out['information_gain']
result.quantile = (out['t_statistic'], out['t_critical'])
result.sim_name = (forecast.name, benchmark_forecast.name)
result.obs_name = observed_catalog.name
result.status = 'normal'
result.min_mw = numpy.min(forecast.magnitudes)
```
or to redefine the attributes of the t-test. For instance, shouldn't `result.quantile`, rather than `result.test_distribution`, contain the information_gain lower and upper bounds?
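
Here is a minimal sketch of the first option. The `EvaluationResult` class below is a stand-in and `percentile_for_plot` is a hypothetical helper, not pycsep code; only the attribute names mirror the snippet above:

```python
# Minimal sketch: store alpha on the result so plotting can derive the
# percentile instead of defaulting to a hard-coded 95.
class EvaluationResult:
    """Stand-in for pycsep's EvaluationResult, extended with an `alpha` attribute."""
    def __init__(self, test_distribution, observed_statistic, quantile, alpha):
        self.test_distribution = test_distribution    # (ig_lower, ig_upper)
        self.observed_statistic = observed_statistic  # information gain
        self.quantile = quantile                      # (t_statistic, t_critical)
        self.alpha = alpha                            # proposed: remember the test's alpha


def percentile_for_plot(result, plot_args=None):
    """Hypothetical plotting helper: fall back to the stored alpha, not to 95."""
    plot_args = plot_args or {}
    return plot_args.get('percentile', (1.0 - result.alpha) * 100.0)


result = EvaluationResult(test_distribution=(-0.2, 0.4),
                          observed_statistic=0.1,
                          quantile=(1.3, 2.0),
                          alpha=0.01)
print(percentile_for_plot(result))                      # 99.0, consistent with the test
print(percentile_for_plot(result, {'percentile': 95}))  # explicit override still works
```

With `alpha` stored on the result, the plotting default stays consistent with the test instead of silently assuming 95%.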
Also, the W-test confidence interval is calculated inside the plotting functions, instead of the evaluation function itself.
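
The same pattern could apply there: compute the decision at evaluation time and attach it, together with alpha, to the result, so the plot only reads attributes. A rough sketch, assuming a Wilcoxon signed-rank comparison of per-bin scores; the function name, inputs, and returned fields are hypothetical, not pycsep's implementation:

```python
import numpy
import scipy.stats


def w_test_sketch(bin_scores_forecast, bin_scores_benchmark, alpha=0.05):
    """Hypothetical W-test: decide significance here, not in the plotting code."""
    statistic, p_value = scipy.stats.wilcoxon(bin_scores_forecast, bin_scores_benchmark)
    return {
        'statistic': statistic,
        'p_value': p_value,
        'alpha': alpha,                  # stored alongside the outcome
        'significant': p_value < alpha,  # the plot just reads this flag
    }


rng = numpy.random.default_rng(42)
res = w_test_sketch(rng.normal(0.3, 1.0, size=100), rng.normal(0.0, 1.0, size=100))
print(res['significant'], res['alpha'])
```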