
Issue/adjuster #115

Merged — 9 commits merged into main on Nov 13, 2024

Conversation

peterdudfield
Contributor

@peterdudfield peterdudfield commented Nov 12, 2024

Pull Request

Description

  • add an adjuster model that runs after the main ML model
  • the main SQL query is in adjuster.py; this could be moved to the datamodel, but for now it is probably OK here
  • added tests
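The adjuster idea described above (compute the mean error, ME, per forecast horizon from recent forecasts vs. actual generation, then subtract it from the new forecast) can be sketched roughly as follows. This is an illustrative sketch only: the column and function names are assumptions, not the actual `adjuster.py` API.

```python
import pandas as pd


def compute_me_per_horizon(history: pd.DataFrame) -> pd.Series:
    """Mean error (forecast - actual) in kW, grouped by horizon_minutes.

    `history` is assumed to have columns: horizon_minutes,
    forecast_power_kw, generation_kw (illustrative names).
    """
    errors = history["forecast_power_kw"] - history["generation_kw"]
    return errors.groupby(history["horizon_minutes"]).mean()


def adjust_forecast(forecast: pd.DataFrame, me_kw: pd.Series) -> pd.DataFrame:
    """Subtract the per-horizon ME from the raw forecast.

    Horizons with no ME history fall back to 0 (no adjustment).
    """
    adjusted = forecast.copy()
    adjusted["me_kw"] = adjusted["horizon_minutes"].map(me_kw).fillna(0)
    adjusted["forecast_power_kw"] = adjusted["forecast_power_kw"] - adjusted["me_kw"]
    return adjusted
```

A systematic over-forecast at a given horizon then shows up as a positive ME and gets subtracted out of the next forecast at that horizon.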
[Screenshot: adjuster results, taken 2024-11-12 16:33]

Wind results currently look slightly weird, though:

[Screenshot: wind results, taken 2024-11-12 16:36]

#116

How Has This Been Tested?

  • CI tests
  • ran the app locally, and the results look reasonable
  • added unit tests

Checklist:

  • My code follows OCF's coding style guidelines
  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • I have checked my code and corrected any misspellings

@peterdudfield peterdudfield marked this pull request as ready for review November 13, 2024 07:24
@peterdudfield peterdudfield requested a review from Sukh-P November 13, 2024 07:48
for i in range(len(start_times)):
    forecast_value = ForecastValueSQL(
        horizon_minutes=i * 15,
        forecast_power_kw=random.random() * 10000,
Member:
Would it be easier to test / would the tests be stronger if these weren't random values? Some of the != asserts in the tests below could then be stronger == asserts.
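Two illustrative ways to act on this suggestion (sketches only, not the PR's actual test code): seed the RNG for reproducibility, or swap the random values for a deterministic ramp so exact adjusted outputs can be asserted with ==.

```python
import random

# Option A: seed the RNG so the "random" values are reproducible across runs.
random.seed(42)

# Option B (stronger): a deterministic ramp instead of random values,
# so tests can assert exact expected outputs.
def deterministic_power_kw(i: int) -> float:
    """Illustrative stand-in for `random.random() * 10000`."""
    return 1000.0 * i  # 0 kW, 1000 kW, 2000 kW, ...

powers = [deterministic_power_kw(i) for i in range(4)]  # [0.0, 1000.0, 2000.0, 3000.0]
```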

Contributor Author:
You're right, they would be better. I'm trying to work out whether to do it in this PR or file it as a good first issue. The generation values are currently random too, so they would need to change as well.

forecast_values_df_adjust["me_kw"].fillna(0, inplace=True)

# adjust forecast_power_kw by ME values
forecast_values_df_adjust["forecast_power_kw"] = (
    forecast_values_df_adjust["forecast_power_kw"] - forecast_values_df_adjust["me_kw"]
)
Member:
This is currently applied to all forecast horizons. Is that okay, or do we want to add some logic to restrict it to just day-ahead horizons?

Contributor Author:
Yeah, currently it's not restricted. I was thinking we could deploy it and observe the behaviour.
You would hope our ME on the shorter forecast horizons is smaller thanks to live data, satellite, etc. (which I briefly saw when looking at the ME), so this extra bit of logic might not be necessary.
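If restricting the correction to day-ahead horizons ever does prove necessary, one minimal way to do it (the cutoff value and column names here are assumptions for illustration) is to zero out the ME below a horizon threshold before applying it:

```python
import pandas as pd

# Assumed cutoff; in practice this would be tuned from the observed ME per horizon.
DAY_AHEAD_MINUTES = 24 * 60


def restrict_me_to_day_ahead(df: pd.DataFrame) -> pd.DataFrame:
    """Zero out me_kw for horizons shorter than the day-ahead cutoff,
    so only day-ahead forecasts get adjusted."""
    df = df.copy()
    df.loc[df["horizon_minutes"] < DAY_AHEAD_MINUTES, "me_kw"] = 0.0
    return df
```

Short horizons then pass through unadjusted, while day-ahead horizons still get the ME correction.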

@peterdudfield peterdudfield merged commit e8993bd into main Nov 13, 2024
3 checks passed
@peterdudfield peterdudfield deleted the issue/adjuster branch November 13, 2024 09:05