
Fix callback compatibility for trainable_model #878

Merged

Conversation

OkuyanBoga
Collaborator

Summary

Fixed a compatibility issue, described in #877, between the different callback functions used in trainable_model-based algorithms.

Fixes #877

@OkuyanBoga OkuyanBoga added the Changelog: Bugfix label Dec 10, 2024
@woodsp-ibm
Member

I guess I am trying to understand the relationship of this callback to that of the optimizers. Yes, it looks to have the same signature as a SciPy optimizer callback, but otherwise this is just a callback. True, if you are using SPSA and have a callback for that, since it has a different signature you could not use it for the callback here in this context. The test which is failing uses SPSA, which presumably worked before, so I am having a hard time figuring things out.

@OkuyanBoga
Collaborator Author

> I guess I am trying to understand the relationship of this callback to that of the optimizers. Yes, it looks to have the same signature as a SciPy optimizer callback, but otherwise this is just a callback. True, if you are using SPSA and have a callback for that, since it has a different signature you could not use it for the callback here in this context. The test which is failing uses SPSA, which presumably worked before, so I am having a hard time figuring things out.

Maybe changing optimizers such as SPSA and ADAM might be a better option.

@woodsp-ibm
Member

> Maybe changing optimizers such as SPSA and ADAM might be a better option.

I guess I am still confused to some degree. The trainable model has its own callback that it defines, right? It may have the same signature as the SciPy optimizer one, but that is the only correlation as far as I know. The test was working with SPSA, which itself has a different callback. The issue really did not provide enough information for me to understand, beyond the notion that something mismatched - no code sample to reproduce or anything.

@OkuyanBoga
Collaborator Author

Hi Steve,

I was not clear enough in the issue; maybe this will be useful.

SPSA has its own callback function, since it is not based on SciPyOptimizer, and the callback is called inside SPSA's own loop:

if fx + self.allowed_increase <= fx_next:  # accept only if loss improved
    if self.callback is not None:
        self.callback(
            self._nfev,  # number of function evals
            x_next,  # next parameters
            fx_next,  # loss at next parameters
            np.linalg.norm(update),  # size of the update step
            False,
        )  # not accepted
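
A user-supplied SPSA callback therefore has to match this five-argument signature. As a minimal sketch (the function name and body here are hypothetical):

    def spsa_callback(nfev, parameters, value, stepsize, accepted):
        # nfev: number of objective evaluations so far
        # parameters: the candidate parameter vector
        # value: loss at those parameters
        # stepsize: norm of the update step
        # accepted: whether SPSA accepted the step
        print(f"eval {nfev}: loss={value:.4f}, accepted={accepted}")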

But trainable_model makes the call itself, and I think this is only compatible with SciPyOptimizer:

def objective(objective_weights):
    objective_value = function.objective(objective_weights)
    self._callback(objective_weights, objective_value)
    return objective_value

and my suggested change is:

  def objective(objective_weights):
      objective_value = function.objective(objective_weights)
      if isinstance(self._optimizer, SciPyOptimizer):
          self._callback(objective_weights, objective_value)
      return objective_value

This way we can allow custom callback functions and clear up a potential ambiguity about when the function is called (inside or outside of the optimisation loop, in the case of custom callback functions).
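
To make the two conventions concrete, here is a rough sketch (the classifier lines are hypothetical and elided; the import path assumes the optimizers vendored into qiskit-machine-learning):

    from qiskit_machine_learning.optimizers import COBYLA, SPSA

    def model_callback(weights, objective_value):
        # trainable_model convention: two arguments, called once per objective evaluation
        print(f"loss: {objective_value:.4f}")

    # With a SciPyOptimizer subclass such as COBYLA, the model-level callback fires:
    #     NeuralNetworkClassifier(..., optimizer=COBYLA(), callback=model_callback)
    # With SPSA, the guarded objective skips it, and a five-argument callback
    # should be passed to the optimizer itself instead:
    #     NeuralNetworkClassifier(..., optimizer=SPSA(callback=spsa_callback))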

Please note that SPSA is used in tutorials with QKT, not with trainable models:

"spsa_opt = SPSA(maxiter=10, callback=cb_qkt.callback, learning_rate=0.05, perturbation=0.05)\n",

I think trainable models must be adaptive when a SciPyOptimizer is not used.

@OkuyanBoga OkuyanBoga added the priority: high and stable backport potential labels Dec 12, 2024
@edoaltamura edoaltamura marked this pull request as draft December 16, 2024 15:44
@edoaltamura edoaltamura marked this pull request as ready for review December 16, 2024 16:03
@coveralls

Pull Request Test Coverage Report for Build 12356721976

Details

  • 3 of 3 (100.0%) changed or added relevant lines in 1 file are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage increased (+0.002%) to 90.635%

Files with Coverage Reduction | New Missed Lines | %
qiskit_machine_learning/algorithms/trainable_model.py | 1 | 89.09%

Totals
Change from base Build 12356603561: +0.002%
Covered Lines: 4365
Relevant Lines: 4816

💛 - Coveralls

@edoaltamura edoaltamura merged commit 02ee6a7 into qiskit-community:main Dec 16, 2024
16 checks passed
@edoaltamura edoaltamura deleted the fix-callback_trainable_model branch December 16, 2024 19:28
mergify bot pushed a commit that referenced this pull request Dec 16, 2024
* Fix callback compatibility for trainable_model

* Update releasenotes/notes/fix-callback-trainable_model-ca52060260a3e466.yaml

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update tests with callbacks

* Disable pylint unused arguments in tests

---------

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>
(cherry picked from commit 02ee6a7)
edoaltamura pushed a commit that referenced this pull request Dec 16, 2024 (same commit message as above; cherry picked from commit 02ee6a7)

Co-authored-by: M. Emre Sahin <40424147+OkuyanBoga@users.noreply.github.com>
Labels

Changelog: Bugfix (include in the Fixed section of the changelog)
priority: high
stable backport potential (the bug might be minimal and/or important enough to be ported to stable)
Development

Successfully merging this pull request may close these issues.

Incompatibility between different callback functions and trainable_model
4 participants