
Issue135 forecast uncertainty 2 #693

Open
wants to merge 54 commits into base: master

54 commits
eab2f75
Uncertainty generator function included.
laura-zabala Aug 19, 2022
79cbeb2
Uncertainty emulator function included
Sep 5, 2022
0dc3f89
predict_error method is imported in forcaster.
Oct 11, 2022
f4093d1
Required arguments by predict_error are propagated to get_forecast
Oct 11, 2022
407bc79
uncertainty error added juts to dry bult temperature
Oct 14, 2022
cac66b6
AR speficied in the function name
Oct 18, 2022
e66ea46
Formatting correction in the error_emulator function
Oct 27, 2022
c74022e
Merge remote-tracking branch 'upstream/issue135_forecastUncertainty' …
wfzheng Nov 19, 2023
9d33819
Merge remote-tracking branch 'upstream/master' into issue135_forecast…
wfzheng Nov 19, 2023
816358c
Add forecast uncertainty function
wfzheng Nov 20, 2023
edeaf1c
Add forecast uncertainty function
wfzheng Nov 20, 2023
0267d23
Add the function of selecting weather forecast uncertainty in the sce…
wfzheng Nov 20, 2023
b5db17d
add forecast uncertainty test for both single zone and multi zone env…
wfzheng Nov 28, 2023
2597023
add forecast uncertainty test for both single zone and multi zone env…
wfzheng Nov 28, 2023
b6fb4a7
add forecast uncertainty test for both single zone and multi zone env…
wfzheng Nov 28, 2023
bc6f233
Remove file with special character from staging area
wfzheng Nov 28, 2023
5eebe2b
add forecast uncertainty test for both single zone and multi zone env…
wfzheng Nov 28, 2023
32f727f
Enhanced testcase.py by adding seed parameter to set_scenario functio…
wfzheng Dec 28, 2023
a433f7c
Merge pull request #2 from wfzheng/issue135_forecastUncertainty
laura-zabala Jan 11, 2024
c29d489
Description of the functions to generate errors for the forecast.
laura-zabala Apr 24, 2024
a7560fb
Merge remote-tracking branch 'laura/master' into issue135_forecastUnc…
wfzheng May 23, 2024
d81a697
Merge remote-tracking branch 'laura/issue135_forecastUncertainty' int…
wfzheng May 23, 2024
5c200c8
Add missing parameter descriptions to Forecaster class
wfzheng May 23, 2024
7fb0965
Add missing parameter descriptions to Forecaster class
wfzheng May 23, 2024
3ffc633
Add missing parameter descriptions to Forecaster class
wfzheng May 23, 2024
b1f38f1
Add missing parameter descriptions to Forecaster class
wfzheng May 23, 2024
8c29dbf
Shorten variable names for better readability in get_forecast
wfzheng May 23, 2024
f503671
revise .gitignore
wfzheng May 23, 2024
844f120
Merge commit 'f50367117c27e45833e3609fe0c0d370af2655a0' into issue135…
dhblum Oct 18, 2024
d3400e6
Remove unnecessary testing files
dhblum Oct 18, 2024
da82362
Remove unnecessary testing files [ci skip]
dhblum Oct 18, 2024
f1c9705
Merge branch 'master' into issue135_forecastUncertainty_2
dhblum Oct 29, 2024
498b108
Some cleanup
dhblum Oct 29, 2024
01ab685
Fix forecast parameter spec and unit tests
dhblum Oct 29, 2024
59113fa
Remove uncertainty from get_forecast and load params within forecaster
dhblum Dec 11, 2024
278a706
Add todo references to #719
dhblum Dec 11, 2024
4e4638b
Add forecast uncertainty test to API tests
dhblum Dec 12, 2024
daddd2b
Remove multizone uncertainty test and rename single zone to generalized
dhblum Dec 12, 2024
a0c22ca
Add test_forecast_uncertainty.py to makefile
dhblum Dec 12, 2024
bceadad
Move stats checking to within forecast_uncertainty unit test
dhblum Dec 12, 2024
df82fd8
Revert to 1h step
dhblum Dec 12, 2024
4239405
Add 400 message check to uncertainty paramters
dhblum Dec 12, 2024
eb4494a
Add ref uncertain forecast and modify submit json in unit tests
dhblum Dec 17, 2024
8d329bd
Merge branch 'master' into issue135_forecastUncertainty_2
dhblum Dec 17, 2024
debae93
Fix unit test utilities for Service
dhblum Dec 17, 2024
a7d1288
Fix Worker for loading forecast uncertainty parameters json
dhblum Dec 17, 2024
cf53275
Fix Worker for no forecast uncertainty settings in set_scenario
dhblum Dec 17, 2024
3895032
Correct unit test submit jsons
dhblum Dec 17, 2024
4301179
Fix horizon check for less than 0
dhblum Dec 17, 2024
1c463f8
Revert changes to config.json
dhblum Dec 17, 2024
cebcbe1
Fix unit tests for data, forecast, and kpis
dhblum Dec 18, 2024
3ceb130
Add uncertainty test to test_forecast
dhblum Jan 3, 2025
ace85c3
Update readme, contirubors, and releasenotes
dhblum Jan 3, 2025
3036dda
Fix typo in releasenotes.md
dhblum Jan 3, 2025
Commit 32f727f33c5a868ef40bdd8b6ce06b69ff7902fa
wfzheng committed Dec 28, 2023

Enhanced testcase.py by adding seed parameter to set_scenario function and introduced two new tests: test_forecast_temperature_are_within_range and test_forecast_solar_radiation_are_positive for improved criteria checking.
35 changes: 22 additions & 13 deletions forecast/error_emulator.py
@@ -35,23 +35,32 @@ def mean_filter(data, window_size=3):

     return filtered_data

 def predict_temperature_error_AR1(hp, F0, K0, F, K, mu):
-    '''Generates an error with an AR model in the hp points of the predictions horizon.
+    '''
+    Generates an error with an AR model in the hp points of the predictions horizon.

-    Parameters
-    ----------
-    hp : int, number of points in the prediction horizon
-    F0 : float, mean of the initial error model
-    K0 : float, standard deviation of the initial error model
-    F : float from 0 to 1, autocorrelation factor of the AR error model
-    K : float, standard deviation of the AR error model
-    mu : float, mean value of the distribution function integrated in the AR error model
+    Parameters
+    ----------
+    hp : int
+        Number of points in the prediction horizon.
+    F0 : float
+        Mean of the initial error model.
+    K0 : float
+        Standard deviation of the initial error model.
+    F : float
+        Autocorrelation factor of the AR error model, value should be between 0 and 1.
+    K : float
+        Standard deviation of the AR error model.
+    mu : float
+        Mean value of the distribution function integrated in the AR error model.

-    Returns
-    -------
-    error : 1D array with the error values in the hp points
-
+    Returns
+    -------
+    error : 1D array
+        Array containing the error values in the hp points.
     '''
+    hp=int(hp)
+

     error = np.zeros(hp)
     error[0] = np.random.normal(F0, K0)
     for i_c in range(hp - 1):
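The hunk above cuts off before the AR recursion itself. Based on the docstring and the visible setup, the generator can be sketched as follows; the recursion body is an inference from the documented AR(1) model, not the PR's verbatim code:

```python
import numpy as np

def predict_temperature_error_AR1(hp, F0, K0, F, K, mu):
    # Initial error drawn from N(F0, K0); later points follow an AR(1)
    # process: error[i+1] = F * error[i] + N(mu, K).  The recursion body
    # is inferred from the docstring, not copied from the diff.
    hp = int(hp)
    error = np.zeros(hp)
    error[0] = np.random.normal(F0, K0)
    for i_c in range(hp - 1):
        error[i_c + 1] = F * error[i_c] + np.random.normal(mu, K)
    return error

# With a fixed seed the whole error trajectory is reproducible, which is
# the property the seed parameter added by this commit relies on.
np.random.seed(5)
err = predict_temperature_error_AR1(hp=49, F0=0.0, K0=0.5, F=0.9, K=0.3, mu=0.0)
```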
33 changes: 19 additions & 14 deletions forecast/forecaster.py
@@ -8,7 +8,7 @@
 of the test case to provide deterministic forecast.

 '''
-from .error_emulator import predict_temperature_error_AR1, predict_solar_error_AR1,mean_filter
+from .error_emulator import predict_temperature_error_AR1, predict_solar_error_AR1, mean_filter
 import numpy as np

@@ -34,7 +34,7 @@ def __init__(self, testcase):
         self.case = testcase

     def get_forecast(self, point_names, horizon=24 * 3600, interval=3600,
-                     weather_temperature_dry_bulb=None, weather_solar_global_horizontal=None,
+                     weather_temperature_dry_bulb=None, weather_solar_global_horizontal=None, seed=None,
                      category=None, plot=False):

         if weather_temperature_dry_bulb is None:
@@ -52,10 +52,13 @@ def get_forecast(self, point_names, horizon=24 * 3600, interval=3600,
                 interval=interval,
                 category=category,
                 plot=plot)
+
         if 'TDryBul' in point_names and any(weather_temperature_dry_bulb.values()):
+            if seed is not None:
+                np.random.seed(seed)
             # error in the forecast
             error_forecast_temp = predict_temperature_error_AR1(
-                hp=horizon / interval + 1,
+                hp=int(horizon / interval + 1),
                 F0=weather_temperature_dry_bulb["F0"],
                 K0=weather_temperature_dry_bulb["K0"],
                 F=weather_temperature_dry_bulb["F"],
@@ -65,17 +68,20 @@ def get_forecast(self, point_names, horizon=24 * 3600, interval=3600,

             # forecast error just added to dry bulb temperature
             forecast['TDryBul'] = forecast['TDryBul'] - error_forecast_temp
-            forecast['TDryBul']=forecast['TDryBul'].tolist()
+            forecast['TDryBul'] = forecast['TDryBul'].tolist()
         if 'HGloHor' in point_names and any(weather_solar_global_horizontal.values()):

             original_HGloHor = np.array(forecast['HGloHor']).copy()
             lower_bound = 0.2 * original_HGloHor
             upper_bound = 2 * original_HGloHor
             indices = np.where(original_HGloHor > 50)[0]
-            for _ in range(200):
-                # while True:
-
-
+            for i in range(200):
+                if seed is not None:
+                    np.random.seed(seed+i*i)
                 error_forecast_solar = predict_solar_error_AR1(
-                    int(horizon / interval) + 1,
+                    int(horizon / interval + 1),
                     weather_solar_global_horizontal["ag0"],
                     weather_solar_global_horizontal["bg0"],
                     weather_solar_global_horizontal["phi"],
@@ -85,15 +91,14 @@ def get_forecast(self, point_names, horizon=24 * 3600, interval=3600,

                 forecast['HGloHor'] = original_HGloHor - error_forecast_solar

-                #Check if any point in forecast['HGloHor'] is out of the specified range
+                # Check if any point in forecast['HGloHor'] is out of the specified range
                 condition = np.any((forecast['HGloHor'][indices] > 2 * original_HGloHor[indices]) |
-                    (forecast['HGloHor'][indices] < 0.2 * original_HGloHor[indices]))
+                                   (forecast['HGloHor'][indices] < 0.2 * original_HGloHor[indices]))
-                # forecast['HGloHor']=gaussian_filter_ignoring_nans(forecast['HGloHor'])
-                forecast['HGloHor'] = mean_filter(forecast['HGloHor'])
-                forecast['HGloHor'] = np.clip(forecast['HGloHor'], lower_bound, upper_bound)
-                forecast['HGloHor'] = forecast['HGloHor'].tolist()
                 if not condition:
                     break

+            forecast['HGloHor'] = mean_filter(forecast['HGloHor'])
+            forecast['HGloHor'] = np.clip(forecast['HGloHor'], lower_bound, upper_bound)
+            forecast['HGloHor']=forecast['HGloHor'].tolist()

         return forecast
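The seeding pattern introduced in this file (seed NumPy's global RNG once before the temperature error, and re-seed with `seed + i*i` on each solar retry) is what makes the stochastic forecast reproducible. A minimal sketch of the idea, using a stand-in `sample_error` rather than the real forecaster:

```python
import numpy as np

def sample_error(seed=None, n=5, attempt=0):
    # Stand-in for the forecaster's error draw: seed the global RNG
    # before sampling, varying the seed per retry as in the solar loop
    # (seed + attempt * attempt).
    if seed is not None:
        np.random.seed(seed + attempt * attempt)
    return np.random.normal(0.0, 1.0, n)

a = sample_error(seed=5)             # first attempt, reproducible
b = sample_error(seed=5)             # same seed gives an identical draw
c = sample_error(seed=5, attempt=1)  # retry re-seeds with 5 + 1*1 = 6
```

Re-seeding per retry keeps every iteration of the bounded-resampling loop deterministic while still producing a fresh error realization each attempt.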
1 change: 1 addition & 0 deletions restapi.py
@@ -84,6 +84,7 @@ def handle_validation_error(self, error, bundle_errors):
 parser_scenario.add_argument('time_period', type=str)
 parser_scenario.add_argument('temperature_uncertainty', type=str)
 parser_scenario.add_argument('solar_uncertainty', type=str)
+parser_scenario.add_argument('seed', type=int)
 # ``forecast`` interface
 parser_forecast_points = reqparse.RequestParser(argument_class=CustomArgument)
 parser_forecast_points.add_argument('point_names', type=list, action='append', required=True)
25 changes: 22 additions & 3 deletions testcase.py
@@ -91,6 +91,7 @@ def __init__(self, fmupath='models/wrapped.fmu'):
         # Set default scenario
         self.config_json['scenario']['solar_uncertainty']=None #todo
         self.config_json['scenario']['temperature_uncertainty']=None #todo
+        self.config_json['scenario']['seed'] = None #todo
         self.set_scenario(self.config_json['scenario'])
         self.uncertainty_params = self.load_uncertainty_params()
@@ -978,13 +979,17 @@ def get_forecast(self, point_names, horizon, interval, temperature_uncertainty=N
             solar_params.update(self.uncertainty_params['solar'][solar_uncertainty])

         try:
+            if self.scenario['seed'] is not None :
+                applied_seed=int(self.scenario['seed']+self.start_time)
+            else:
+                applied_seed=None
             payload = self.forecaster.get_forecast(
                 point_names,
                 horizon=horizon,
                 interval=interval,
                 weather_temperature_dry_bulb=temperature_params,
-                weather_solar_global_horizontal=solar_params
+                weather_solar_global_horizontal=solar_params,
+                seed=applied_seed
             )

         except:
@@ -1032,7 +1037,8 @@ def set_scenario(self, scenario):
             'electricity_price': None,
             'time_period': None,
             'temperature_uncertainty': None,
-            'solar_uncertainty': None
+            'solar_uncertainty': None,
+            'seed':None,
         }


@@ -1112,6 +1118,19 @@ def set_scenario(self, scenario):
             else:
                 self.scenario['solar_uncertainty'] = None

+            if scenario['seed']:
+                if isinstance(scenario['seed'], int) and scenario['seed'] >= 0:
+                    self.scenario['seed'] = scenario['seed']
+                    payload['seed'] = self.scenario['seed']
+                else:
+                    status = 400
+                    message = "Scenario parameter seed is {}, " \
+                              "but should be a non-negative integer.".format(scenario['seed'])
+                    logging.error(message)
+                    return status, message, payload
+            else:
+                self.scenario['seed'] = None
+
         except:
             status = 400
             message = "Invalid values for the scenario parameters: {}".format(traceback.format_exc())
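The `set_scenario` change above validates `seed` before accepting it. The check can be summarized as below; `validate_seed` is a hypothetical helper for illustration, since the PR inlines this logic:

```python
def validate_seed(seed):
    # Hypothetical helper mirroring the inlined check in set_scenario:
    # accept a non-negative integer, treat a falsy value as unset, and
    # signal HTTP 400 for anything else.
    if not seed:  # matches the PR's `if scenario['seed']:` falsy check
        return 200, None
    if isinstance(seed, int) and seed >= 0:
        return 200, seed
    return 400, "Scenario parameter seed is {}, but should be a non-negative integer.".format(seed)
```

Note one consequence of the falsy check: a seed of 0, although a valid non-negative integer, is treated as unset and the scenario seed is cleared to None.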
17 changes: 9 additions & 8 deletions testcases/bestest_air/models/config.json
@@ -1,10 +1,11 @@
 {
-"name" : "bestest_air",
-"area" : 48.0,
-"start_time" : 0.0,
-"warmup_period" : 0.0,
-"step" : 3600.0,
-"scenario" : {"electricity_price":"constant",
-              "time_period":null,
-              "weather_forecast_uncertainty": null}
+  "name": "bestest_air",
+  "area": 48.0,
+  "start_time": 0.0,
+  "warmup_period": 0.0,
+  "step": 3600.0,
+  "scenario": {
+    "electricity_price": "constant",
+    "time_period": null
+  }
 }
33 changes: 30 additions & 3 deletions testing/test_forecast_uncertainty_MultiZone.py
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-This module runs tests for testcase3. To run these tests, testcase3 must already be deployed.
+This module runs tests for testcase3. To run these tests, testcase3 must already be deployed. Ensure 'testcase3'
+is deployed by running `TESTCASE=testcase3 docker-compose up` in the terminal at the root directory of the software.
 It includes tests to check forecast intervals, horizon, and uncertainty levels.

 """
@@ -57,7 +58,8 @@ def test_interval_horizon(self):
         uncertain_level = 'low'
         requests.put('{0}/scenario'.format(self.url), json={
             'electricity_price': 'constant',
-            'temperature_uncertainty': uncertain_level
+            'temperature_uncertainty': uncertain_level,
+            'seed': 5
         })
         forecasts = requests.put('{0}/forecast'.format(self.url),
                                  json={'point_names': ['TDryBul'],
@@ -84,6 +86,30 @@ def test_high_uncertainty(self):
         """Test forecasts under high uncertainty."""
         self.check_uncertainty(uncertain_level='high')

+    def test_forecast_temperature_are_within_range(self):
+        requests.put('{0}/scenario'.format(self.url), json={
+            'electricity_price': 'constant',
+            'temperature_uncertainty': 'high',
+            'seed': 5
+        })
+        u = {
+            self.input_names[0]: float(1),
+            self.input_names[1]: float(self.inputs_metadata[self.input_names[1]]['Minimum'])
+        }
+
+        for _ in range(1000):
+            forecasts = requests.put('{0}/forecast'.format(self.url),
+                                     json={'point_names': ['TDryBul'],
+                                           'interval': 3600,
+                                           'horizon': 48 * 3600},
+                                     ).json()['payload']['TDryBul']
+            for forecast in forecasts:
+                self.assertGreaterEqual(forecast, 173.15, f"Forecast temperature {forecast} is below -100°C")
+                self.assertLessEqual(forecast, 373.15, f"Forecast temperature {forecast} is above 100°C")
+            requests.post('{0}/advance'.format(self.url), json=u).json()
+
     def check_uncertainty(self,uncertain_level):
         """Check the forecast uncertainty parameters against references.

@@ -106,7 +132,8 @@ def check_uncertainty(self,uncertain_level):

         requests.put('{0}/scenario'.format(self.url), json={
             'electricity_price': 'constant',
-            'temperature_uncertainty': uncertain_level
+            'temperature_uncertainty': uncertain_level,
+            'seed': 5
         })

         # Collect forecasts and calculate errors
56 changes: 52 additions & 4 deletions testing/test_forecast_uncertainty_SingleZone.py
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-This module runs tests for testcase2. To run these tests, testcase2 must already be deployed.
+This module runs tests for testcase2. To run these tests, testcase2 must already be deployed. Ensure 'testcase2'
+is deployed by running `TESTCASE=testcase2 docker-compose up` in the terminal at the root directory of the software.
 It includes tests to check forecast intervals, horizon, and uncertainty levels.

 """
@@ -60,7 +61,8 @@ def test_interval_horizon(self):
         requests.put('{0}/scenario'.format(self.url), json={
             'electricity_price': 'constant',
             'solar_uncertainty': uncertain_level,
-            'temperature_uncertainty': uncertain_level
+            'temperature_uncertainty': uncertain_level,
+            'seed':5
         })
         forecasts = requests.put('{0}/forecast'.format(self.url),
                                  json={'point_names': ['TDryBul', 'HGloHor'],
@@ -78,7 +80,7 @@ def test_interval_horizon(self):
     def test_low_uncertainty(self):
         """Test forecasts under low uncertainty."""
         self.check_uncertainty(uncertain_level='low')
-
+    #
     def test_medium_uncertainty(self):
         """Test forecasts under medium uncertainty."""
         self.check_uncertainty(uncertain_level='medium')
@@ -87,6 +89,51 @@ def test_high_uncertainty(self):
         """Test forecasts under high uncertainty."""
         self.check_uncertainty(uncertain_level='high')

+    def test_forecast_solar_radiation_are_positive(self):
+        requests.put('{0}/scenario'.format(self.url), json={
+            'electricity_price': 'constant',
+            'solar_uncertainty': 'high',
+            'seed': 5
+        })
+        u = {
+            self.input_names[0]: float(1),
+            self.input_names[1]: float(self.inputs_metadata[self.input_names[1]]['Minimum'])
+        }
+
+        for _ in range(1000):
+            forecasts = requests.put('{0}/forecast'.format(self.url),
+                                     json={'point_names': ['HGloHor'],
+                                           'interval': 3600,
+                                           'horizon': 48 * 3600},
+                                     ).json()['payload']['HGloHor']
+            for forecast in forecasts:
+                self.assertGreaterEqual(forecast, 0, f"Forecast value {forecast} is not greater than 0")
+            requests.post('{0}/advance'.format(self.url), json=u).json()
+
+    def test_forecast_temperature_are_within_range(self):
+        requests.put('{0}/scenario'.format(self.url), json={
+            'electricity_price': 'constant',
+            'temperature_uncertainty': 'high',
+            'seed': 5
+        })
+        u = {
+            self.input_names[0]: float(1),
+            self.input_names[1]: float(self.inputs_metadata[self.input_names[1]]['Minimum'])
+        }
+
+        for _ in range(1000):
+            forecasts = requests.put('{0}/forecast'.format(self.url),
+                                     json={'point_names': ['TDryBul'],
+                                           'interval': 3600,
+                                           'horizon': 48 * 3600},
+                                     ).json()['payload']['TDryBul']
+            for forecast in forecasts:
+                self.assertGreaterEqual(forecast, 173.15, f"Forecast temperature {forecast} is below -100°C")
+                self.assertLessEqual(forecast, 373.15, f"Forecast temperature {forecast} is above 100°C")
+            requests.post('{0}/advance'.format(self.url), json=u).json()
+
     def check_uncertainty(self,uncertain_level):
         """Check the forecast uncertainty parameters against references.

@@ -111,7 +158,8 @@ def check_uncertainty(self,uncertain_level):
         requests.put('{0}/scenario'.format(self.url), json={
             'electricity_price': 'constant',
             'solar_uncertainty': uncertain_level,
-            'temperature_uncertainty': uncertain_level,
+            'temperature_uncertainty': uncertain_level,
+            'seed':5
         })

         # Collect forecasts and calculate errors
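The positivity and range tests above hold because `get_forecast` smooths the perturbed solar forecast and clips it into `[0.2 * original, 2 * original]`. A sketch of that bounding step, with a plausible moving-average `mean_filter` standing in for the real helper in forecast/error_emulator.py, which may differ:

```python
import numpy as np

def mean_filter(data, window_size=3):
    # Plausible moving-average smoother; stands in for the helper in
    # forecast/error_emulator.py (assumed implementation).
    kernel = np.ones(window_size) / window_size
    return np.convolve(data, kernel, mode='same')

# Hypothetical clear-sky solar values in W/m2 and a noisy perturbation.
original = np.array([0.0, 100.0, 400.0, 800.0, 600.0, 50.0])
np.random.seed(5)
perturbed = original - np.random.normal(0.0, 200.0, size=original.shape)

# Clip elementwise into [0.2 * original, 2 * original]; since the lower
# bound is never negative, the bounded forecast cannot go negative either.
bounded = np.clip(mean_filter(perturbed), 0.2 * original, 2 * original)
```

Because `np.clip` accepts array-valued bounds, each forecast point is constrained relative to its own deterministic value, which is exactly the invariant the unit tests check through the REST API.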