Implement selection of optimizer in the QGAN algorithm #1253
Conversation
Raise error if the number of training data items is smaller than the batch size, e.g. due to truncation to bounds (sketched below).
Add possibility to choose an optimizer for the quantum generator in the qGAN algorithm
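A minimal sketch of the check described in the first commit above; the variable names and the error message are assumptions for illustration, not the merged code:

```python
import numpy as np
from qiskit.aqua import AquaError

data = np.random.lognormal(mean=1.0, sigma=1.0, size=50)  # toy training data
batch_size = 100

# Sketch: reject a batch size larger than the (possibly bounds-truncated)
# training data set, as described in the commit above.
if len(data) < batch_size:
    raise AquaError('The batch size must not be larger than the number '
                    'of training data items.')
```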
Since this adds a feature that end users would be interested in, could you please add a reno release note. If you look in CONTRIBUTING.md, it has information on what is needed. Thanks.
@@ -62,6 +64,7 @@ def __init__(self,
        or a QuantumCircuit implementing the generator.
    init_params: 1D numpy array or list, Initialization for
        the generator's parameters.
    optimizer: optimizer to be used for the training of the generator
Are there any specific aspects or configuration of the optimizer that we could add here to the docstring to inform the user? For instance, I see that the default optimizer has just one iteration.
Also, in train() I see this code that I think will fail if another optimizer is supplied:
self._optimizer._maxiter = 1
self._optimizer._t = 0
Could we have a test case with a different optimizer to ensure things work.
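Such a test might look like the following sketch; the QuantumGenerator argument names are assumptions based on the diff above, not the merged test:

```python
import unittest
import numpy as np
from qiskit.aqua.components.optimizers import ADAM, COBYLA
from qiskit.aqua.components.neural_networks import QuantumGenerator

class TestGeneratorOptimizer(unittest.TestCase):
    """Sketch: construct the generator with several optimizers, each
    configured for a single iteration per generator update."""

    def test_custom_optimizers(self):
        bounds = np.array([0.0, 3.0])
        for optimizer in (ADAM(maxiter=1), COBYLA(maxiter=1)):
            generator = QuantumGenerator(bounds, num_qubits=[2],
                                         optimizer=optimizer)
            self.assertIsNotNone(generator)

if __name__ == '__main__':
    unittest.main()
```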
Can I point out these lines from train() again that I mentioned above.
These are really updating private variables known to be there for ADAM. Worst case, _t may be some internal variable for another optimizer and this clobbers it; best case, the above simply creates unused variables on the optimizer instance and its effect is benign. Maybe the above code should only be executed when the optimizer is an instance of ADAM. This presumably forces the iterations to be 1; as I mentioned, do we need to inform users in the docstring that this is how any other optimizer should be configured for best operation, since it cannot be forced the way it is done here when it's ADAM?
# Force single optimization iteration
self._optimizer._maxiter = 1
self._optimizer._t = 0

if self._optimizer._maxiter != 1:
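One way to realize the isinstance suggestion above is sketched here; the helper name is hypothetical, and _maxiter and _t are ADAM's private attributes from the quoted code:

```python
from qiskit.aqua.components.optimizers import ADAM

def _reset_adam_state(optimizer):
    """Hypothetical helper: only touch the private attributes when the
    optimizer is actually an ADAM instance, leaving others untouched."""
    if isinstance(optimizer, ADAM):
        optimizer._maxiter = 1  # force a single optimization iteration
        optimizer._t = 0        # reset ADAM's internal time step
```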
Currently there is no standard interface to query the iteration setting. Some optimizers take max-fun as well as max-iter type settings, and for the former, without deeper knowledge of what is happening internally, deriving iterations from function evaluations is problematic.
I would suggest this type of text could simply go in the docstring for the optimizer param, as I am not sure there is any real way to check this across the optimizers. So if a user supplies their own optimizer blindly without reading the text, then yes, things might behave unexpectedly - but hopefully they would consult the documentation.
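Such docstring text for the optimizer parameter might read as follows; this is a sketch, not the wording that was merged:

```python
def __init__(self, bounds, num_qubits, optimizer=None):
    """
    Args:
        optimizer: Optimizer to be used for the training of the
            generator. It should be configured to run a single
            optimization iteration per call (e.g. maxiter=1), since
            qGAN training alternates generator and discriminator
            updates in each epoch; unlike for the default ADAM, this
            cannot be enforced for arbitrary optimizers.
    """
```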
Thinking about it further: since maxiter is placed in the options dictionary, that could be checked. If the key exists, its value could be checked; if the key did not exist or its value was not 1, an appropriate warning could be given. The options dictionary is not exposed via a public getter interface, but it could be (it can be accessed via its private instance variable, though if this is something that is wanted, adding a public getter would be better). Making such a getter would be a simple change to the base Optimizer class, if you feel this level of checking is needed and just documenting it will not suffice.
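That check might be sketched as follows; the helper name is hypothetical, and the options dictionary is read via the private _options attribute pending the public getter discussed above:

```python
import warnings

def _warn_if_not_single_iteration(optimizer):
    """Hypothetical helper: warn when the supplied optimizer is not
    configured for one iteration per generator update."""
    maxiter = getattr(optimizer, '_options', {}).get('maxiter')
    if maxiter != 1:  # covers both a missing key (None) and a value != 1
        warnings.warn('Set the optimizer maxiter to 1 so that generator '
                      'and discriminator updates alternate each epoch.')
```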
Implement selection of optimizer in the QGAN algorithm (qiskit-community/qiskit-aqua#1253)

* Update qgan.py: raise an error if the number of training data items is smaller than the batch size, e.g. due to truncation to bounds.
* Update qgan.py
* Update qgan.py
* Add possibility to choose an optimizer for the quantum generator in the qGAN algorithm
* Update quantum_generator.py
* Add optimizer instances to test_qgan.py
* Update test_qgan.py
* Remove forced maxiter and t=0 for optimizer
* Update quantum_generator.py
* Fix lint
* Fix linting and brackets
* Fix optimizer argument
* Change maxiter check, fix tests to set maxiter to 1
* Fix lint

Co-authored-by: Steve Wood <40241007+woodsp-ibm@users.noreply.github.com>
Co-authored-by: woodsp <woodsp@us.ibm.com>
Co-authored-by: Manoel Marques <Manoel.Marques@ibm.com>
Summary
This PR addresses Issue #1248.
Details and comments
The change makes it possible to set a custom optimizer for the quantum generator when running the qGAN algorithm.
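For illustration, usage might look like the following sketch. How the optimizer is wired through QGAN, e.g. via a generator_optimizer argument on set_generator, is an assumption based on this PR's commits, not confirmed by the diff shown above:

```python
import numpy as np
from qiskit.aqua.algorithms import QGAN
from qiskit.aqua.components.optimizers import COBYLA

# Hypothetical usage sketch: train a qGAN whose quantum generator uses
# a custom optimizer limited to one iteration per generator update.
data = np.random.lognormal(mean=1.0, sigma=1.0, size=1000)
qgan = QGAN(data, bounds=[0.0, 3.0], num_qubits=[2],
            batch_size=100, num_epochs=5)
qgan.set_generator(generator_optimizer=COBYLA(maxiter=1))
```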