
Pass Inputs dictionary #359

Merged: 31 commits from 358-passing-inputs into develop on Jul 4, 2024

Conversation

NicolaCourtier (Member)

Description

Aligning the type of Inputs passed between the problem, observer and model.

Issue reference

Fixes #358.
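For context, a minimal sketch of the dictionary-style Inputs this change standardises on (the parameter names below are illustrative assumptions, not taken from this PR):

# Hypothetical sketch: inputs keyed by parameter name rather than passed as
# an ordered list of values.
inputs = {
    "Negative electrode active material volume fraction": 0.58,
    "Positive electrode active material volume fraction": 0.44,
}
# The same dictionary can then be forwarded unchanged from the cost to the
# problem, observer and model, with no reliance on parameter ordering.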

Review

Before you mark your PR as ready for review, please ensure that you've considered the following:

  • Updated the CHANGELOG.md in reverse chronological order (newest at the top) with a concise description of the changes, including the PR number.
  • Noted any breaking changes, including details on how it might impact existing functionality.

Type of change

  • New Feature: A non-breaking change that adds new functionality.
  • Optimization: A code change that improves performance.
  • Examples: A change to existing or additional examples.
  • Bug Fix: A non-breaking change that addresses an issue.
  • Documentation: Updates to documentation or new documentation for new features.
  • Refactoring: Non-functional changes that improve the codebase.
  • Style: Non-functional changes related to code style (formatting, naming, etc).
  • Testing: Additional tests to improve coverage or confirm functionality.
  • Other: (Insert description of change)

Key checklist:

  • No style issues: $ pre-commit run (or $ nox -s pre-commit) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
  • All unit tests pass: $ nox -s tests
  • The documentation builds: $ nox -s doctest

You can run integration tests, unit tests, and doctests together at once, using $ nox -s quick.

Further checks:

  • Code is well-commented, especially in complex or unclear areas.
  • Added tests that prove my fix is effective or that my feature works.
  • Checked that coverage is maintained or improved, adding tests where necessary.

Thank you for contributing to our project! Your efforts help us to deliver great software.


codecov bot commented Jun 12, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.60%. Comparing base (41acda3) to head (a9f73df).
Report is 755 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #359      +/-   ##
===========================================
+ Coverage    97.49%   97.60%   +0.10%     
===========================================
  Files           42       42              
  Lines         2471     2459      -12     
===========================================
- Hits          2409     2400       -9     
+ Misses          62       59       -3     


NicolaCourtier marked this pull request as ready for review on June 13, 2024 at 11:09.
NicolaCourtier (Member Author)

The diff of this PR is 100% covered and I've added a number of tests, but the project coverage is still down by 0.11%. I think it might be worth continuing with the review despite this.

NicolaCourtier (Member Author) commented Jun 13, 2024

Note that there will be conflicts with #352, as mentioned here: #352 (comment)

These conflicts have now been resolved; see the update here: #352 (comment)

NicolaCourtier (Member Author) commented Jun 13, 2024

Thanks for the helpful chat @BradyPlanden! I've added a parameters.verify function, so I think this is now ready for review. I can redirect this into #338/#352 if that would help (considering the 'temporary fixes' required for GLL in this branch).

Comment on lines +261 to +265
for param in parameters:
    if param not in self.param.values():
        self.add(param)
    else:
        print(f"Discarding duplicate {param.name}.")
brosaplanella (Contributor)

Context: I am coming from the MultiFitting PR. If I understand this correctly, when a user provides the same parameter for two problems in the MultiFitting, only the first instance will make it through while the others will be discarded. This is fine if the user gives us the exact same two parameters (with bounds and initial values), but otherwise I think it would cause issues. I can think of two ways around this:

  1. Throw an error if a parameter appears twice with different bounds/initial values.
  2. Combine the two parameters somehow (take the intersection between domains of validity?) and throw a warning to let the user know.

NicolaCourtier (Member Author)

Thanks for the feedback @brosaplanella! At the moment, it is option 1, because adding a parameter with the same name will raise an error (see parameter.py line 223) even if the initial values and bounds are identical. In future, we could upgrade to option 2, but 1 seems the more controlled approach for now.
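For illustration, a minimal sketch of the option-1 behaviour described above (hypothetical code, not the actual implementation in parameter.py):

# Hypothetical sketch of option 1: reject duplicate parameter names outright,
# even when the bounds and initial values happen to match.
class Parameters:
    def __init__(self):
        self.param = {}  # maps parameter name -> Parameter object

    def add(self, parameter):
        if parameter.name in self.param:
            raise ValueError(f"There is already a parameter with the name {parameter.name}")
        self.param[parameter.name] = parameter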

BradyPlanden (Member) left a comment

Thanks for the additions @NicolaCourtier. A few comments to look at, but I think it's heading in a good direction.

@@ -322,13 +332,6 @@ class RandomClass:
        with pytest.raises(ValueError):
            pybop.Optimisation(cost=cost, optimiser=RandomClass)

    @pytest.mark.unit
    def test_prior_sampling(self, cost):
BradyPlanden (Member)

This is a pretty pedantic test, but do we replicate this elsewhere?

NicolaCourtier (Member Author)

The prior sampling is carried out by the rvs function of a Parameter, so I believe this is tested in test_priors.py.
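For context, a minimal sketch of the sampling under discussion, assuming the pybop.Parameter and pybop.Gaussian interfaces (the parameter name and prior values are illustrative):

import pybop

# A parameter with a Gaussian prior; rvs draws random samples from that
# prior, which is the behaviour exercised in test_priors.py.
parameter = pybop.Parameter(
    "Negative electrode active material volume fraction",
    prior=pybop.Gaussian(0.6, 0.02),
)
samples = parameter.rvs(5)  # five draws from the prior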

Resolved review threads on:

  • tests/unit/test_observers.py
  • tests/unit/test_models.py (outdated)
  • tests/unit/test_problem.py (outdated, two threads)
  • pybop/models/base_model.py (outdated, two threads)
  • pybop/costs/fitting_costs.py (outdated)
  • pybop/costs/_likelihoods.py (outdated, two threads)
NicolaCourtier (Member Author)

Hi @BradyPlanden, thanks for the review. The two failing integration tests also fail when run with matching parameter values on develop. Should I investigate on this branch or shall we leave this for #338?

NicolaCourtier (Member Author)

Noting that I applied the suggestions by @BradyPlanden to revert the tests to passing inputs as lists rather than dictionaries. We will continue to support lists of parameter values, although it is safer to use dictionaries. The allowable Inputs types are defined in Parameters.verify.
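A minimal sketch of the normalisation described above (hypothetical helper; the real logic lives in Parameters.verify):

# Hypothetical sketch: accept either an Inputs dictionary or an ordered list
# of values, and normalise both to the dictionary form used internally.
def verify_inputs(parameter_names, inputs):
    if inputs is None or isinstance(inputs, dict):
        return inputs
    if isinstance(inputs, (list, tuple)):
        # Pair the values with the parameter names in declaration order.
        return dict(zip(parameter_names, inputs))
    raise TypeError(f"Inputs must be a dict or a list of values, not {type(inputs)}")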

BradyPlanden (Member) left a comment

LGTM, thanks for these additions @NicolaCourtier!

NicolaCourtier merged commit 4c4a31e into develop on Jul 4, 2024 (27 of 29 checks passed).
NicolaCourtier deleted the 358-passing-inputs branch on July 4, 2024 at 14:19.
Linked issue that may be closed by merging this pull request: Align the type of inputs passed from cost to problem and model (#358)

3 participants