
optimization.py: Fix "epsilons" keyword argument in _optimize() #150

Merged 1 commit into quaquel:master on Oct 25, 2022

Conversation

EwoutH
Collaborator

@EwoutH EwoutH commented Jun 17, 2022

The keyword argument epsilon isn't used anywhere else in the EMAworkbench; it should be epsilons (with an s).

So currently the _optimize() function (and all functions that call it) doesn't throw an error when you define a list of epsilons of the wrong length, because it looks for epsilon instead of epsilons. This PR fixes that.

Fixing this should help catch such errors a lot sooner, but some additional docs describing the behavior of different epsilon values, why and how certain values can be chosen, and how exactly to define them would clear it up even more. I think that can be included in #147.

Maybe we can also include tests for this function: one with a list of epsilon values of the correct length and one with a list of the wrong length.

  • Fix keyword spelling
  • Add tests
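The two tests suggested above could be sketched as follows, assuming a hypothetical `check_epsilons` helper that performs the length validation (names are illustrative, not the workbench API):

```python
import unittest


def check_epsilons(epsilons, n_objectives):
    """Hypothetical helper: validate that one epsilon is given per objective."""
    if len(epsilons) != n_objectives:
        raise ValueError("wrong number of epsilons")
    return epsilons


class TestEpsilons(unittest.TestCase):
    def test_correct_length(self):
        # Right number of epsilons: no error, values passed through.
        self.assertEqual(check_epsilons([0.1, 0.1], 2), [0.1, 0.1])

    def test_wrong_length(self):
        # Too few epsilons: should raise instead of failing silently.
        with self.assertRaises(ValueError):
            check_epsilons([0.1], 2)
```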

On a more personal note, this behavior threw me way off and unfortunately invalidates a lot of our results, because we thought we were defining the epsilons correctly. We also should have provided epsilon values for all outcomes that weren't used as KPIs, instead of defining epsilon values only for our KPIs.

The keyword argument "epsilon" isn't used anywhere else in the EMAworkbench, it should be epsilons (with an s).

So currently it doesn't throw an error when you define a list of epsilons of the wrong length, because it searches for epsilon instead of epsilons. This commit fixes this.
@coveralls

Coverage Status

Coverage remained the same at 80.04% when pulling e35fffb on EwoutH:patch-12 into 1b5175d on quaquel:master.

@quaquel
Owner

quaquel commented Jun 20, 2022

It would also be good to add the updated tests to this PR.

In my view, the check on the correct number of epsilons actually belongs in platypus-opt rather than in the workbench. But in the meantime, it is good to have it here. Note that you only need a number of epsilon values equal to the number of objectives (i.e., the non-INFO type outcomes defined on a model).

I am unsure how much documentation I want to do in the workbench for functionality from other libraries (e.g., SALib, platypus_opt).

@EwoutH
Collaborator Author

EwoutH commented Jul 2, 2022

Shall we merge this, split out the testing to a different issue, and open a broader discussion about the scope of the EMAworkbench somewhere/-when else?

@quaquel
Owner

quaquel commented Jul 5, 2022

I'd prefer to add the additional tests here first, after which this can be merged.

The broader discussion on scope can move somewhere else.

@EwoutH EwoutH added this to the 2.3.0 milestone Aug 31, 2022
@quaquel
Owner

quaquel commented Sep 24, 2022

How do we want to handle this? We can merge this into master and create a separate issue for adding more unit tests for optimization.py. Alternatively, I can close this and create a new pull request with both the unit tests and the fix.

@EwoutH
Collaborator Author

EwoutH commented Sep 25, 2022

I would like to take a crack at writing the unit tests myself. Testing is an area in which I would like to gain some more experience.

@EwoutH EwoutH self-assigned this Sep 25, 2022
@EwoutH EwoutH merged commit 54ba5e6 into quaquel:master Oct 25, 2022