
Add an adaptive optimizer #3192

Merged · 42 commits · Nov 1, 2022
Commits (42)
bb593ac
add append gate function
soranjh Oct 20, 2022
0c1bf07
add AdaptiveOptimizer class
soranjh Oct 20, 2022
97155d4
add AdaptiveOptimizer class
soranjh Oct 20, 2022
5864abe
add test
soranjh Oct 20, 2022
c3aa6ba
add step function
soranjh Oct 20, 2022
b85cbe2
Merge branch 'master' into adaptive_optimizer
soranjh Oct 20, 2022
f3ab3a8
add docstring
soranjh Oct 21, 2022
409069f
add gradient return
soranjh Oct 21, 2022
ce30763
modify docs
soranjh Oct 21, 2022
a000d30
Merge branch 'master' into adaptive_optimizer
soranjh Oct 21, 2022
cfc4bd4
fix pylint issues
soranjh Oct 21, 2022
3108c8f
fix build issue
soranjh Oct 21, 2022
6134490
add circuit test
soranjh Oct 21, 2022
5f797b6
fix codefactor issue and modify names
soranjh Oct 21, 2022
ae6995a
correct example output
soranjh Oct 21, 2022
81435c3
Merge branch 'master' into adaptive_optimizer
soranjh Oct 25, 2022
4c7c2fb
Merge branch 'master' into adaptive_optimizer
Jaybsoni Oct 25, 2022
7b9f94d
apply suggestions from code review
soranjh Oct 26, 2022
f359a8b
modify docstring
soranjh Oct 26, 2022
fd9e4fa
Merge branch 'adaptive_optimizer' of https://github.com/PennyLaneAI/p…
soranjh Oct 26, 2022
9c89e8e
add code review comments
soranjh Oct 26, 2022
18e1346
Merge branch 'master' into adaptive_optimizer
soranjh Oct 26, 2022
1a2fb61
correct test args
soranjh Oct 26, 2022
0ff3fbc
add more tests
soranjh Oct 26, 2022
ac12936
add qubit rotation test
soranjh Oct 26, 2022
e06227a
correct typo
soranjh Oct 26, 2022
8b897d6
fix pylint issue
soranjh Oct 26, 2022
8dac0dc
fix type
soranjh Oct 26, 2022
aa5d530
modify test
soranjh Oct 27, 2022
2e72cd9
add drain test
soranjh Oct 27, 2022
4d1fba0
Merge branch 'master' into adaptive_optimizer
soranjh Oct 27, 2022
8c3cf27
modify docstring example
soranjh Oct 27, 2022
28825f0
Merge branch 'master' into adaptive_optimizer
soranjh Oct 28, 2022
ce38c1f
Merge branch 'master' into adaptive_optimizer
soranjh Oct 28, 2022
67c26e7
add code review comments
soranjh Oct 28, 2022
239af57
add text to tests
soranjh Oct 31, 2022
41e4716
Merge branch 'master' into adaptive_optimizer
soranjh Oct 31, 2022
038a51f
update changelog
soranjh Oct 31, 2022
3641dfe
modify docs
soranjh Oct 31, 2022
2ebd384
Merge branch 'master' into adaptive_optimizer
soranjh Oct 31, 2022
8e68c50
modify docs
soranjh Oct 31, 2022
a7a92af
modify changelog code
soranjh Nov 1, 2022
1 change: 1 addition & 0 deletions doc/introduction/interfaces.rst
@@ -139,6 +139,7 @@ Some of these are specific to quantum optimization, such as the :class:`~.QNGOpt

~pennylane.AdagradOptimizer
~pennylane.AdamOptimizer
~pennylane.AdaptiveOptimizer
~pennylane.GradientDescentOptimizer
~pennylane.LieAlgebraOptimizer
~pennylane.MomentumOptimizer
84 changes: 84 additions & 0 deletions doc/releases/changelog-dev.md
@@ -141,6 +141,90 @@ Users can specify the control wires as well as the values to control the operati
13: ──RY(1.23)─┤
```

* An optimizer is added for building and optimizing quantum circuits adaptively.
[(#3192)](https://github.com/PennyLaneAI/pennylane/pull/3192)

  The new optimizer, ``AdaptiveOptimizer``, takes an initial circuit and a collection of operators
  as input and adds a selected gate to the circuit at each optimization step. The process of
  growing the circuit can be repeated until the circuit gradients converge to zero within a given
  threshold. The adaptive optimizer can be used to implement algorithms such as ``ADAPT-VQE``, as
  shown in the following example.

First, the molecule is defined and the Hamiltonian is computed:

  ```python
  import pennylane as qml
  from pennylane import numpy as np

  symbols = ["H", "H", "H"]
  geometry = np.array([[0.01076341, 0.04449877, 0.0],
                       [0.98729513, 1.63059094, 0.0],
                       [1.87262415, -0.00815842, 0.0]], requires_grad=False)
  H, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry, charge=1)
  ```

  The collection of gates used to grow the circuit contains all single and double excitations:

```python
n_electrons = 2
singles, doubles = qml.qchem.excitations(n_electrons, qubits)
singles_excitations = [qml.SingleExcitation(0.0, x) for x in singles]
doubles_excitations = [qml.DoubleExcitation(0.0, x) for x in doubles]
operator_pool = doubles_excitations + singles_excitations
```

An initial circuit that prepares a Hartree-Fock state and returns the expectation value of the
Hamiltonian is defined:

  ```python
  hf_state = qml.qchem.hf_state(n_electrons, qubits)
  dev = qml.device("default.qubit", wires=qubits)

  @qml.qnode(dev)
  def circuit():
      qml.BasisState(hf_state, wires=range(qubits))
      return qml.expval(H)
  ```

Finally, the optimizer is instantiated and then the circuit is created and optimized adaptively:

  ```python
  opt = qml.optimize.AdaptiveOptimizer()
  for i in range(len(operator_pool)):
      circuit, energy, gradient = opt.step_and_cost(circuit, operator_pool, drain_pool=True)
      print('Energy:', energy)
      print(qml.draw(circuit)())
      print('Largest Gradient:', gradient)
      print()
      if gradient < 1e-3:
          break
  ```

```pycon
Energy: -1.246549938420637
0: ─╭BasisState(M0)─╭G²(0.20)─┤ ╭<𝓗>
1: ─├BasisState(M0)─├G²(0.20)─┤ ├<𝓗>
2: ─├BasisState(M0)─│─────────┤ ├<𝓗>
3: ─├BasisState(M0)─│─────────┤ ├<𝓗>
4: ─├BasisState(M0)─├G²(0.20)─┤ ├<𝓗>
5: ─╰BasisState(M0)─╰G²(0.20)─┤ ╰<𝓗>
Largest Gradient: 0.14399872776755085

Energy: -1.2613740231529604
0: ─╭BasisState(M0)─╭G²(0.20)─╭G²(0.19)─┤ ╭<𝓗>
1: ─├BasisState(M0)─├G²(0.20)─├G²(0.19)─┤ ├<𝓗>
2: ─├BasisState(M0)─│─────────├G²(0.19)─┤ ├<𝓗>
3: ─├BasisState(M0)─│─────────╰G²(0.19)─┤ ├<𝓗>
4: ─├BasisState(M0)─├G²(0.20)───────────┤ ├<𝓗>
5: ─╰BasisState(M0)─╰G²(0.20)───────────┤ ╰<𝓗>
Largest Gradient: 0.1349349562423238

Energy: -1.2743971719780331
0: ─╭BasisState(M0)─╭G²(0.20)─╭G²(0.19)──────────┤ ╭<𝓗>
1: ─├BasisState(M0)─├G²(0.20)─├G²(0.19)─╭G(0.00)─┤ ├<𝓗>
2: ─├BasisState(M0)─│─────────├G²(0.19)─│────────┤ ├<𝓗>
3: ─├BasisState(M0)─│─────────╰G²(0.19)─╰G(0.00)─┤ ├<𝓗>
4: ─├BasisState(M0)─├G²(0.20)────────────────────┤ ├<𝓗>
5: ─╰BasisState(M0)─╰G²(0.20)────────────────────┤ ╰<𝓗>
Largest Gradient: 0.00040841755397108586
```
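  The gate-selection logic in this loop can be sketched framework-free. In the toy model below (all names and the quadratic cost are illustrative, not part of PennyLane), each candidate "gate" is a coordinate of the cost, the gradient of each unused candidate evaluated at parameter zero decides which one is added next, the winner's parameter is refined by gradient descent, and the loop stops when the largest remaining gradient falls below a threshold:

  ```python
  # Toy sketch of the adaptive loop: candidates are coordinates of a
  # quadratic cost sum_j (x_j - targets[j])**2; unused coordinates sit at 0.
  targets = [0.7, -0.05, 1.3, 0.0004]

  def grad_at_zero(j):
      # gradient of the cost w.r.t. candidate j, inserted with parameter 0
      return -2.0 * targets[j]

  active, params = [], []
  pool = list(range(len(targets)))            # the "operator pool"
  stepsize, param_steps, tol = 0.4, 10, 1e-3

  while pool:
      grads = {j: abs(grad_at_zero(j)) for j in pool}
      best = max(grads, key=grads.get)        # candidate with largest gradient
      if grads[best] < tol:                   # convergence criterion
          break
      pool.remove(best)                       # drain_pool=True behaviour
      active.append(best)
      theta = 0.0
      for _ in range(param_steps):            # inner gradient-descent refinement
          theta -= stepsize * 2.0 * (theta - targets[best])
      params.append(theta)
  ```

  With these numbers the loop picks the candidates in order of decreasing gradient magnitude and stops with the smallest-magnitude candidate still in the pool, mirroring the three-step trace shown above.
  
  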

<h3>Improvements</h3>

* Added the `samples_computational_basis` attribute to the `MeasurementProcess` and `QuantumScript` classes to track
2 changes: 2 additions & 0 deletions pennylane/optimize/__init__.py
@@ -19,6 +19,7 @@
# listed in alphabetical order to avoid circular imports
from .adagrad import AdagradOptimizer
from .adam import AdamOptimizer
from .adaptive import AdaptiveOptimizer
from .gradient_descent import GradientDescentOptimizer
from .lie_algebra import LieAlgebraOptimizer
from .momentum import MomentumOptimizer
Expand All @@ -35,6 +36,7 @@
__all__ = [
"AdagradOptimizer",
"AdamOptimizer",
"AdaptiveOptimizer",
"GradientDescentOptimizer",
"LieAlgebraOptimizer",
"MomentumOptimizer",
220 changes: 220 additions & 0 deletions pennylane/optimize/adaptive.py
@@ -0,0 +1,220 @@
# Copyright 2018-2022 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Adaptive optimizer"""
# pylint: disable= no-value-for-parameter, protected-access
import pennylane as qml

from pennylane import numpy as np


@qml.qfunc_transform
def append_gate(tape, params, gates):
    """Append parameterized gates to an existing tape.

    Args:
        tape (QuantumTape): quantum tape to transform by adding gates
        params (array[float]): parameters of the gates to be added
        gates (list[Operator]): list of the gates to be added
    """
    for o in tape.operations:
        qml.apply(o)

    for i, g in enumerate(gates):
        g.data[0] = params[i]
        qml.apply(g)

    for m in tape.measurements:
        qml.apply(m)
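
As a rough, framework-free analogue of what `append_gate` does (the tuple representation below is hypothetical, not PennyLane's API): replay the recorded operations, append each candidate gate with its first parameter overwritten, then replay the measurements:

```python
# Hypothetical analogue of append_gate: operations are (name, parameter, wires)
# tuples rather than PennyLane operators.
def append_gate_plain(operations, measurements, params, gates):
    new_ops = list(operations)                 # replay the existing tape
    for theta, (name, _old, wires) in zip(params, gates):
        new_ops.append((name, theta, wires))   # overwrite the gate parameter
    return new_ops + list(measurements)        # replay the measurements last

ops = [("BasisState", None, (0, 1, 2, 3))]
meas = [("expval(H)", None, (0, 1, 2, 3))]
grown = append_gate_plain(ops, meas, [0.2], [("DoubleExcitation", 0.0, (0, 1, 2, 3))])
```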


class AdaptiveOptimizer:
    r"""Optimizer for building fully trained quantum circuits by adding gates adaptively.

    Quantum circuits can be built by adding gates
    `adaptively <https://www.nature.com/articles/s41467-019-10988-2>`_. The adaptive optimizer
    implements an algorithm that grows and optimizes an input quantum circuit by selecting and
    adding gates from a user-defined collection of operators. The algorithm starts by adding all
    the gates to the circuit and computing the circuit gradients with respect to the gate
    parameters. The algorithm then retains the gate with the largest gradient and optimizes its
    parameter. The process of growing the circuit can be repeated until the computed gradients
    converge to zero within a given threshold. The optimizer returns the fully trained and
    adaptively built circuit. The adaptive optimizer can be used to implement
    algorithms such as `ADAPT-VQE <https://www.nature.com/articles/s41467-019-10988-2>`_.

    Args:
        param_steps (int): number of steps for optimizing the parameter of a selected gate
        stepsize (float): step size for optimizing the parameter of a selected gate

    **Example**

    This example shows an implementation of the
    `ADAPT-VQE <https://www.nature.com/articles/s41467-019-10988-2>`_ algorithm for building an
    adaptive circuit for the :math:`\text{H}_3^+` cation.

    >>> import pennylane as qml
    >>> from pennylane import numpy as np

    The molecule is defined and the Hamiltonian is computed with:

    >>> symbols = ["H", "H", "H"]
    >>> geometry = np.array([[0.01076341, 0.04449877, 0.0],
    ...                      [0.98729513, 1.63059094, 0.0],
    ...                      [1.87262415, -0.00815842, 0.0]], requires_grad=False)
    >>> H, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry, charge=1)

    The collection of gates to grow the circuit adaptively contains all single and double
    excitations:

    >>> n_electrons = 2
    >>> singles, doubles = qml.qchem.excitations(n_electrons, qubits)
    >>> singles_excitations = [qml.SingleExcitation(0.0, x) for x in singles]
    >>> doubles_excitations = [qml.DoubleExcitation(0.0, x) for x in doubles]
    >>> operator_pool = doubles_excitations + singles_excitations

    An initial circuit preparing the Hartree-Fock state and returning the expectation value of
    the Hamiltonian is defined:

    >>> hf_state = qml.qchem.hf_state(n_electrons, qubits)
    >>> dev = qml.device("default.qubit", wires=qubits)
    >>> @qml.qnode(dev)
    ... def circuit():
    ...     qml.BasisState(hf_state, wires=range(qubits))
    ...     return qml.expval(H)

    The optimizer is instantiated and then the circuit is created and optimized adaptively:

    >>> opt = AdaptiveOptimizer()
    >>> for i in range(len(operator_pool)):
    ...     circuit, energy, gradient = opt.step_and_cost(circuit, operator_pool, drain_pool=True)
    ...     print('Energy:', energy)
    ...     print(qml.draw(circuit)())
    ...     print('Largest Gradient:', gradient)
    ...     print()
    ...     if gradient < 1e-3:
    ...         break

    .. code-block:: pycon

        Energy: -1.246549938420637
        0: ─╭BasisState(M0)─╭G²(0.20)─┤ ╭<𝓗>
        1: ─├BasisState(M0)─├G²(0.20)─┤ ├<𝓗>
        2: ─├BasisState(M0)─│─────────┤ ├<𝓗>
        3: ─├BasisState(M0)─│─────────┤ ├<𝓗>
        4: ─├BasisState(M0)─├G²(0.20)─┤ ├<𝓗>
        5: ─╰BasisState(M0)─╰G²(0.20)─┤ ╰<𝓗>
        Largest Gradient: 0.14399872776755085

        Energy: -1.2613740231529604
        0: ─╭BasisState(M0)─╭G²(0.20)─╭G²(0.19)─┤ ╭<𝓗>
        1: ─├BasisState(M0)─├G²(0.20)─├G²(0.19)─┤ ├<𝓗>
        2: ─├BasisState(M0)─│─────────├G²(0.19)─┤ ├<𝓗>
        3: ─├BasisState(M0)─│─────────╰G²(0.19)─┤ ├<𝓗>
        4: ─├BasisState(M0)─├G²(0.20)───────────┤ ├<𝓗>
        5: ─╰BasisState(M0)─╰G²(0.20)───────────┤ ╰<𝓗>
        Largest Gradient: 0.1349349562423238

        Energy: -1.2743971719780331
        0: ─╭BasisState(M0)─╭G²(0.20)─╭G²(0.19)──────────┤ ╭<𝓗>
        1: ─├BasisState(M0)─├G²(0.20)─├G²(0.19)─╭G(0.00)─┤ ├<𝓗>
        2: ─├BasisState(M0)─│─────────├G²(0.19)─│────────┤ ├<𝓗>
        3: ─├BasisState(M0)─│─────────╰G²(0.19)─╰G(0.00)─┤ ├<𝓗>
        4: ─├BasisState(M0)─├G²(0.20)────────────────────┤ ├<𝓗>
        5: ─╰BasisState(M0)─╰G²(0.20)────────────────────┤ ╰<𝓗>
        Largest Gradient: 0.00040841755397108586
    """

    def __init__(self, param_steps=10, stepsize=0.5):
        self.param_steps = param_steps
        self.stepsize = stepsize

    @staticmethod
    def _circuit(params, gates, initial_circuit):
        """Append parameterized gates to an existing circuit.

        Args:
            params (array[float]): parameters of the gates to be added
            gates (list[Operator]): list of the gates to be added
            initial_circuit (function): user-defined circuit that returns an expectation value

        Returns:
            function: user-defined circuit with appended gates
        """
        final_circuit = append_gate(params, gates)(initial_circuit)

        return final_circuit()

    def step(self, circuit, operator_pool, params_zero=True):
        r"""Update the circuit with one step of the optimizer.

        Args:
            circuit (.QNode): user-defined circuit returning an expectation value
            operator_pool (list[Operator]): list of the gates to be used for adaptive optimization
            params_zero (bool): flag to initialize circuit parameters at zero

        Returns:
            .QNode: the optimized circuit
        """
        return self.step_and_cost(circuit, operator_pool, params_zero=params_zero)[0]

    def step_and_cost(self, circuit, operator_pool, drain_pool=False, params_zero=True):
        r"""Update the circuit with one step of the optimizer and return the objective
        function value prior to the step along with the largest gradient.

        Args:
            circuit (.QNode): user-defined circuit returning an expectation value
            operator_pool (list[Operator]): list of the gates to be used for adaptive optimization
            drain_pool (bool): flag to remove selected gates from the operator pool
            params_zero (bool): flag to initialize circuit parameters at zero

        Returns:
            tuple[.QNode, float, float]: the optimized circuit, the objective function output
            prior to the step, and the largest gradient
        """
        cost = circuit()
        device = circuit.device

        if drain_pool:
            repeated_gates = [
                gate
                for gate in operator_pool
                for operation in circuit.tape.operations
                if qml.equal(gate, operation, rtol=float("inf"))
            ]
            for gate in repeated_gates:
                operator_pool.remove(gate)

        params = np.array([gate.parameters[0] for gate in operator_pool], requires_grad=True)
        qnode = qml.QNode(self._circuit, device)
        grads = qml.grad(qnode)(params, gates=operator_pool, initial_circuit=circuit.func)

        selected_gates = [operator_pool[np.argmax(abs(grads))]]
        optimizer = qml.GradientDescentOptimizer(stepsize=self.stepsize)

        if params_zero:
            params = np.zeros(len(selected_gates))
        else:
            params = np.array(
                [gate.parameters[0]._value for gate in selected_gates], requires_grad=True
            )

        for _ in range(self.param_steps):
            params, _ = optimizer.step_and_cost(
                qnode, params, gates=selected_gates, initial_circuit=circuit.func
            )

        circuit = append_gate(params, selected_gates)(circuit.func)

        qnode = qml.QNode(circuit, device)

        return qnode, cost, max(abs(qml.math.toarray(grads)))
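
The `drain_pool` filtering can be sketched with plain tuples (gate = `(name, parameter, wires)`, all names illustrative): a candidate is dropped when a gate of the same type already acts on the same wires in the circuit, regardless of parameter value, which is the effect of `rtol=float("inf")` in `qml.equal`:

```python
# Schematic of the drain_pool step with (name, parameter, wires) tuples.
# A pool candidate is removed when the circuit already contains a gate of
# the same type on the same wires, ignoring the parameter value.
def drain(pool, circuit_ops):
    used = {(name, wires) for name, _theta, wires in circuit_ops}
    return [g for g in pool if (g[0], g[2]) not in used]

pool = [("G2", 0.0, (0, 1, 4, 5)), ("G", 0.0, (1, 3))]
ops = [("BasisState", None, (0, 1, 2, 3, 4, 5)), ("G2", 0.2, (0, 1, 4, 5))]
remaining = drain(pool, ops)   # the already-applied G2 candidate is removed
```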