* Add abelian property to opvecs and AbelianGrouper to operator init.

* Break up PauliCoB big clifford synthesis function into smaller ones.

* Add AbelianGrouper test.

* Add better input checking in primitives and remove unnecessary print.

* Fix coeffs bugs in pauli_cob.py.

* Reorganize pauli_cob. All tests pass, with grouping on and off.

* Change expectation_value backends to work through setters.

* Reorganize local_simulator_sampler.py a bit to use it in a test.

* Grouping Paulis works!! All tests pass.

* Add "compute TPB pauli" function to pauli_cob.

* Add WIP attempt at evolution over Abelian paulis.

* Fix trotter bug.

* Fix some other Trotter bugs.

* Add parameters to OpPaulis and test. Parameterized evolution passes!!!

* Add parameter binding for Op coefficients.

* Add parameter binding, and binding tests. Tests pass.

* Add division to Operators to make normalization convenient.

* Finish merging MinEigenSolver PR. All tests pass.

* Update QAOA, all tests pass!!

* Update some QAOA imports and typehints.

* Add QDrift trotterization method. All tests pass.

* Start migrating QPE, tests fail.

* fix spell

* fix almost all style errors

* fix copyright

* fix import cycles, changed to relative imports when possible

* relative imports

* Add bind_params to state_fn_circuit.py and op_circuit.py, and tests. Add list unrolling for param binding, and tests. Tests pass.

* Add param list handling for all Op types.

* Make OpVec printing nicer.

* Add op_converter to imports for better backwards compatibility.

* Add AerPauliExpectation tests. Tests pass. Issue with Aer though.

* Fix a few AerPauliExpectation bugs

* Start building toward parameterized Qobj.

* fix some lint errors

* fix some lint errors

* fix style

* fix copyright

* set style to defaults, fix aqua unit test loading

* change loading tests qpe/iqpe

* Fix OpPrimitive lint errors.

* Fix operator_base.py lint errors.

* Fix state_fn.py lint errors.

* Fix OpVec lint errors.

* Fix state_fn_circuit.py lint errors.

* Fix state_fn_dict.py lint errors.

* Fix state_fn_operator.py lint errors.

* Fix state_fn_vector.py lint errors. Tests pass.

* Fix QDrift test to deal with first Op in trotterization list being OpCircuit.

* Fix op_circuit.py lint errors.

* Fix op_pauli.py lint errors.

* Fix op_composition.py lint errors.

* Fix op_kron.py lint errors.

* Fix abelian_grouper.py lint errors. Tests pass.

* Fix pauli_cob.py lint errors. Tests pass.

* Fix Expectation lint errors. Tests pass.

* Fix circuit sampler lint errors. Tests pass.

* Fix other expectation lint errors. Tests pass.

* Fix trotterization lint errors. Tests pass.

* Add MatrixEvolution shell, fix evolution lint errors.

* Fix bug in evolution tests after fixing lint errors.

* Fix cyclic import for lint.

* fix pylint cyclic import error

* Make tests pass. Add aux_ops back to VQE, and make VQE and QAOA take old or new ops.

* fix spell and lint

* Fix swapping issue in evolution.

* Fix composition in OpEvolution.

* Fix add OpSum and kron OpKron in OpEvolution.

* Add to_opflow to legacy base_operator

* Clean OpCircuit and StateFnCircuit __str__

* Fix qaoa mixer.

* fix spell, style

* Ok now really all tests pass.

* add to_matrix_op() methods to ops.

* Start migrating NumpyEigensolver for Opflow

* Start removing `back` from op_primitive eval functions. All tests pass.

* Update eval logic (to be able to remove `back`) for operator_combos.

* Add to_matrix_op for OpMatrix and StateFnVector, and change some `if back`s to `if back is not None`

* Finish decoupling `back` args from evals. All tests pass.

* Remove `back` from eval logic.

* Remove `back` from eval. All tests pass.

* Change matrix_expectation.py to rely on to_matrix_op.

* Migrate numpy_eigen_solver.py and numpy_minimum_eigen_solver.

* Remove ToMatrixOp converter.

* set VQE _auto_conversion to False for now

* Add sampling and tests. Fix a rounding error in a test. Fix a not none error in numpy_eigen_solver.py.

* Add array methods to OpVec. Fix typo in OpPauli. Allow reverse_endianness in to_opflow for WeightedPauli.

* Make NumpyEigensolver return a StateFn in result.eigenstate.

* Fix flaky optimization tests. Fix OpVec so iterator interface works.

* Fix StateFnVector sampling. Fix sparse NumpyEigensolution. Fix some aux_op stuff. Fix some other things here and there. Please no more breakage.

* Change some sparsity stuff.

* fix spelling

* Typehints.

* More typehints

* fix copyright

* fix spelling

* More typehints, make globals immutable.

* fix style

* Rearrange tests, add CZ to globals.

* Refactor some names.

* Rename OpEvolution to EvolutionOp. Tests pass.

* Rename primitive ops. All tests pass.

* Finish renamings.

* Test IBMQ Pauli expectation. All tests pass.

* Update spelling.

* Update Pauli to num_qubits.

* Update some naming.

* Add diag support to fix knapsack issue.

* fix unit test

* fix unit test

* fix travis

* Address half of Steve's review comments.

* Fix some exponentiation things.

* Fix some exponentiation things.

* Add trotterization_factory. All tests pass.

* Add evolution_factory. All tests pass.

* Add circuit_sampler_factory. All tests pass.

* Rename get_primitives to primitive_strings. Address some of Julien's requested changes.

* Only allow sample_circuits to accept circuit_ops. Tests pass.

* Address more review comments

* fix spell, style

* Add matrix_op exp_i() into HamiltonianGate. Tests fail due to CircuitOp decompose() during composition. If commented out (line 158) tests pass.

* Change CircuitOp and StateFnCircuit to rely on QuantumCircuit instead of Instruction. All tests pass. Add to_circuit_op to relevant ops. Solves all the decompose issues.

* Add matrix_evolution and update QAOA to test matrix_op for cost operator. Add logic to update UCCSD to operator flow. Tests pass.

* Delete PauliToInstruction, as it's obsolete.

* Add to_pauli_op and tests. Tests pass.

* Fix composed_op.py eval bug

* Add sig digit rounding. VQE tests fail.

* better precision for sig digit rounding. Tests pass.

* Fix pep8, add comment

* Add to_circuit_op to statefns, making DictToCircuit mostly obsolete. Tests pass.

* fix cyclic imports

* fix numpy boolean to string coercion

* Update repr and a docstring.

* Make ExpectationValues into converters. Tests pass.

* Fix bug from merge.

* Fix bugs, make Minus just a CircuitStateFn and not a ComposedOp.

* Uncomment HamiltonianGate

* Update lots of docstrings part I. Tests pass.

* fix docstring

* More docstrings. Change class.rst so docs are generated for some python operator overloads.

* Add navigation structure for docs to init files

* More docs.

* fix docstrings

* 1) Change local_simulator_sampler.py to circuit_sampler.py
2) Set up circuit_samplers directory to be removed.
3) Add IBMQ VQE test.
4) Change AerPauliExpectation and CircuitSampler to handle expval_measurement/snapshots correctly.

Tests pass.

* 1) Delete circuit_samplers.
2) Allow CircuitSampler to attach_results.

* Update Operator init

* Change Operator directory names. Tests pass.

* fix spell, docs

* Turn Expectations purely into converters (see the usage sketch after this list). Tests pass.

* fix docs

* skip IBMQ test

* Add Converters docs. Tests pass.

* fix spell

* Add Evolutions docs. Tests pass.

* Add Expectation docs. Tests pass.

* fix spell

* Add StateFn docs. Tests pass.

* Fix typo.

* Add ListOp init docs.

* Fix some ordering

* Little docs edits.

* fix spell

* Little docs edits.

* 1) Add to_legacy_op to OperatorBase to allow non-migrated algos to accept new Operators.
2) Allow QPE and iQPE to accept new Operators, migrate tests. Tests pass.

* Fix typehints for minimum_eigen_solvers

* Make sure expectations can handle mixed observables.

* fix spell

* Address some more of Steve's comments. Tests pass.

* Address some more of Steve's comments. Fix a bunch of parameter issues, and make sure mixed Pauli evolution works.

* Address some more of Steve's comments. Tests pass.

* Address some comments, fix a QAOA bug.

* Try collapsing ListOp to_matrix a bit.

* Address more comments, fix some bugs.

* Address more comments, fix some bugs.

* Update ListOp docs.

* Update ListOp docs.

* Update ListOp docs.

* fix docstring

* Update minimum_eigen_solvers setter typehints.

* Add Changelog and tests for DictToCircuitSum.

* Update VQE's construct_circuit and some changelog elements.

* fix spell

* Allow MinEigenOptimizer to accept StateFn result in result.eigenstate.

* fix style

* Update changelog with more detail. Update VQE to call super.

* Typo

Co-authored-by: Manoel Marques <manoel@us.ibm.com>
Co-authored-by: woodsp <woodsp@us.ibm.com>
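
Several of the bullets above (the AbelianGrouper, the Expectation converters, the CircuitSampler) describe pieces of the new operator flow. The sketch below shows roughly how they fit together, assuming the post-rename names exported by qiskit.aqua.operators; constructor signatures shifted during this PR, so treat it as an illustrative usage pattern rather than the committed API.

```python
from qiskit import Aer, QuantumCircuit
from qiskit.aqua.operators import I, X, Z, StateFn, CircuitStateFn
from qiskit.aqua.operators import PauliExpectation, CircuitSampler, AbelianGrouper

# A small two-qubit observable built from the Pauli globals.
hamiltonian = 0.5 * (Z ^ Z) + 0.25 * (X ^ I) + 0.25 * (I ^ X)

# AbelianGrouper collects the terms into commuting (tensor-product-basis) groups.
grouped = AbelianGrouper().convert(hamiltonian)
print(grouped)

# A state prepared by a circuit, wrapped as a CircuitStateFn.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
psi = CircuitStateFn(circuit)

# Expectation values are expressed as a measurement StateFn composed with the state,
# converted by PauliExpectation into Pauli-basis measurements, and then evaluated by
# sampling the resulting circuits with a CircuitSampler.
measurable = StateFn(hamiltonian, is_measurement=True) @ psi
converted = PauliExpectation().convert(measurable)
sampled = CircuitSampler(Aer.get_backend('qasm_simulator')).convert(converted)
print(sampled.eval().real)
```
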
3 people authored Apr 27, 2020
1 parent 4278d25 commit 64caf42
Showing 8 changed files with 125 additions and 118 deletions.
qiskit/aqua/algorithms/classifiers/vqc.py (8 changes: 4 additions & 4 deletions)
@@ -368,8 +368,8 @@ def _cost_function_wrapper(self, theta):
  if self._callback is not None:
  self._callback(
  self._eval_count,
- theta[i * self._var_form.num_parameters:(i + 1) *
- self._var_form.num_parameters],
+ theta[i * self._var_form.num_parameters:(i + 1)
+ * self._var_form.num_parameters],
  curr_cost,
  self._batch_index
  )
@@ -671,8 +671,8 @@ def cost_estimate(probs, gt_labels, shots=None): # pylint: disable=unused-argument
  def cross_entropy(predictions, targets, epsilon=1e-12):
  predictions = np.clip(predictions, epsilon, 1. - epsilon)
  N = predictions.shape[0]
- tmp = np.sum(targets*np.log(predictions), axis=1)
- ce = -np.sum(tmp)/N
+ tmp = np.sum(targets * np.log(predictions), axis=1)
+ ce = -np.sum(tmp) / N
  return ce

  x = cross_entropy(probs, mylabels)
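
For readers skimming the diff, the cross_entropy helper above is just the mean categorical cross-entropy over the batch. A standalone restatement with a made-up two-sample batch:

```python
import numpy as np

def cross_entropy(predictions, targets, epsilon=1e-12):
    # Mean cross-entropy over the batch:
    # -(1/N) * sum_i sum_c targets[i, c] * log(predictions[i, c])
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    n_samples = predictions.shape[0]
    return -np.sum(targets * np.log(predictions)) / n_samples

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])          # predicted class probabilities
labels = np.array([[1, 0],
                   [0, 1]])             # one-hot targets
print(cross_entropy(probs, labels))     # -(log 0.9 + log 0.8) / 2 ≈ 0.164
```
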
qiskit/aqua/algorithms/distribution_learners/qgan.py (8 changes: 4 additions & 4 deletions)
@@ -116,7 +116,7 @@ def __init__(self, data: np.ndarray, bounds: Optional[np.ndarray] = None,
  # pylint: disable=unsubscriptable-object
  if np.ndim(data) > 1:
  if self._num_qubits is None:
- self._num_qubits = np.ones[len(data[0])]*3
+ self._num_qubits = np.ones[len(data[0])] * 3
  else:
  if self._num_qubits is None:
  self._num_qubits = np.array([3])
@@ -274,15 +274,15 @@ def train(self):
  for e in range(self._num_epochs):
  aqua_globals.random.shuffle(self._data)
  index = 0
- while (index+self._batch_size) <= len(self._data):
- real_batch = self._data[index: index+self._batch_size]
+ while (index + self._batch_size) <= len(self._data):
+ real_batch = self._data[index: index + self._batch_size]
  index += self._batch_size
  generated_batch, generated_prob = self._generator.get_output(self._quantum_instance,
  shots=self._batch_size)

  # 1. Train Discriminator
  ret_d = self._discriminator.train([real_batch, generated_batch],
- [np.ones(len(real_batch))/len(real_batch),
+ [np.ones(len(real_batch)) / len(real_batch),
  generated_prob])
  d_loss_min = ret_d['loss']
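
The train() hunk above only adds whitespace around operators; isolated, the minibatch slicing it formats works like this (dummy data and a hypothetical batch size; note that a trailing partial batch is silently dropped):

```python
import numpy as np

data = np.arange(10)        # stand-in for self._data
batch_size = 4              # stand-in for self._batch_size

index = 0
while (index + batch_size) <= len(data):
    real_batch = data[index: index + batch_size]
    index += batch_size
    print(real_batch)       # [0 1 2 3], then [4 5 6 7]; samples 8 and 9 never form a batch
```
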
qiskit/aqua/components/neural_networks/numpy_discriminator.py (32 changes: 18 additions & 14 deletions)
@@ -27,6 +27,7 @@

  logger = logging.getLogger(__name__)

+
  # pylint: disable=invalid-name


@@ -167,12 +168,13 @@ def single_layer_backward_propagation(da_curr,
  m = y.shape[1]
  y = y.reshape(np.shape(x))
  if weights is not None:
- da_prev = - np.multiply(weights,
- np.divide(y, np.maximum(np.ones(np.shape(x))*1e-4, x)) -
- np.divide(1 - y, np.maximum(np.ones(np.shape(x))*1e-4, 1 - x)))
+ da_prev = - np.multiply(
+ weights,
+ np.divide(y, np.maximum(np.ones(np.shape(x)) * 1e-4, x))
+ - np.divide(1 - y, np.maximum(np.ones(np.shape(x)) * 1e-4, 1 - x)))
  else:
- da_prev = - (np.divide(y, np.maximum(np.ones(np.shape(x))*1e-4, x)) -
- np.divide(1 - y, np.maximum(np.ones(np.shape(x))*1e-4, 1 - x))) / m
+ da_prev = - (np.divide(y, np.maximum(np.ones(np.shape(x)) * 1e-4, x))
+ - np.divide(1 - y, np.maximum(np.ones(np.shape(x)) * 1e-4, 1 - x))) / m

  pointer = 0

@@ -306,17 +308,18 @@ def loss(self, x, y, weights=None):
  if weights is not None:
  # Use weights as scaling factors for the samples and compute the sum
  return (-1) * np.dot(np.multiply(y,
- np.log(np.maximum(np.ones(np.shape(x)) * 1e-4, x))) +
- np.multiply(np.ones(np.shape(y))-y,
- np.log(np.maximum(np.ones(np.shape(x))*1e-4,
- np.ones(np.shape(x))-x))), weights)
+ np.log(np.maximum(np.ones(np.shape(x)) * 1e-4, x)))
+ + np.multiply(np.ones(np.shape(y)) - y,
+ np.log(np.maximum(np.ones(np.shape(x)) * 1e-4,
+ np.ones(np.shape(x)) - x))),
+ weights)
  else:
  # Compute the mean
  return (-1) * np.mean(np.multiply(y,
- np.log(np.maximum(np.ones(np.shape(x)) * 1e-4, x))) +
- np.multiply(np.ones(np.shape(y))-y,
- np.log(np.maximum(np.ones(np.shape(x))*1e-4,
- np.ones(np.shape(x))-x))))
+ np.log(np.maximum(np.ones(np.shape(x)) * 1e-4, x)))
+ + np.multiply(np.ones(np.shape(y)) - y,
+ np.log(np.maximum(np.ones(np.shape(x)) * 1e-4,
+ np.ones(np.shape(x)) - x))))

  def _get_objective_function(self, data, weights):
  """
@@ -342,7 +345,7 @@ def objective_function(params):
  prediction_fake = self.get_label(generated_batch)
  loss_fake = self.loss(prediction_fake,
  np.zeros(np.shape(prediction_fake)), generated_prob)
- return 0.5*(loss_real[0]+loss_fake[0])
+ return 0.5 * (loss_real[0] + loss_fake[0])

  return objective_function

@@ -371,6 +374,7 @@ def gradient_function(params):
  grad_generated = self._discriminator.backward(prediction_generated, np.zeros(
  np.shape(prediction_generated)), generated_prob)
  return np.add(grad_real, grad_generated)
+
  return gradient_function

  def train(self, data, weights, penalty=False, quantum_instance=None, shots=None):
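
The loss() being reformatted above is a clipped binary cross-entropy, which is easier to see without the nested np.multiply/np.maximum calls. A compact equivalent sketch (the function name and sample arrays here are made up; the 1e-4 floor matches the code above):

```python
import numpy as np

def clipped_bce(x, y, weights=None, floor=1e-4):
    # -[y * log(max(floor, x)) + (1 - y) * log(max(floor, 1 - x))],
    # summed against per-sample weights if given, otherwise averaged.
    term = (y * np.log(np.maximum(floor, x))
            + (1 - y) * np.log(np.maximum(floor, 1 - x)))
    if weights is not None:
        return -np.dot(term, weights)
    return -np.mean(term)

x = np.array([0.9, 0.2, 0.6])   # discriminator outputs
y = np.array([1.0, 0.0, 1.0])   # labels (1 = real, 0 = generated)
print(clipped_bce(x, y))
print(clipped_bce(x, y, weights=np.array([0.5, 0.25, 0.25])))
```
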
qiskit/aqua/components/neural_networks/pytorch_discriminator.py (file-name header not captured in this view)
@@ -170,7 +170,7 @@ def gradient_penalty(self, x, lambda_=5., k=0.01, c=1.):
  x = Variable(x)
  # pylint: disable=no-member
  delta_ = torch.rand(x.size()) * c
- z = Variable(x+delta_, requires_grad=True)
+ z = Variable(x + delta_, requires_grad=True)
  o_l = self.get_label(z)
  # pylint: disable=no-member
  d_g = torch.autograd.grad(o_l, z, grad_outputs=torch.ones(o_l.size()),
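
The torch.autograd.grad call at the end of this hunk computes the gradient of the discriminator output with respect to the perturbed input. A self-contained sketch of just that step (the small net is a made-up stand-in for self.get_label, and requires_grad_ replaces the Variable wrapper; how d_g feeds into the penalty term lies outside the captured hunk):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.LeakyReLU(),
                          torch.nn.Linear(8, 1), torch.nn.Sigmoid())

c = 1.
x = torch.rand(16, 1)                       # a batch of inputs
delta_ = torch.rand(x.size()) * c           # random perturbation, as in the hunk
z = (x + delta_).requires_grad_(True)
o_l = net(z)                                # discriminator label for the perturbed input
d_g = torch.autograd.grad(o_l, z, grad_outputs=torch.ones(o_l.size()))[0]
print(d_g.shape)                            # gradient of the output w.r.t. z, shape (16, 1)
```
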
qiskit/aqua/components/neural_networks/quantum_generator.py (4 changes: 2 additions & 2 deletions)
@@ -301,9 +301,9 @@ def loss(self, x, weights): # pylint: disable=arguments-differ
  """
  try:
  # pylint: disable=no-member
- loss = (-1)*np.dot(np.log(x).transpose(), weights)
+ loss = (-1) * np.dot(np.log(x).transpose(), weights)
  except Exception: # pylint: disable=broad-except
- loss = (-1)*np.dot(np.log(x), weights)
+ loss = (-1) * np.dot(np.log(x), weights)
  return loss.flatten()

  def _get_objective_function(self, quantum_instance, discriminator):
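
The reformatted loss() above is simply a weighted sum of negative logs, loss = -sum_j weights[j] * log(x[j]). A tiny numeric check with made-up arrays:

```python
import numpy as np

x = np.array([0.5, 0.3, 0.2])           # probabilities passed in as x
weights = np.array([0.2, 0.5, 0.3])     # per-entry weights

loss = (-1) * np.dot(np.log(x).transpose(), weights)
print(loss)                             # ≈ 0.2*0.693 + 0.5*1.204 + 0.3*1.609 ≈ 1.223
```
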
qiskit/aqua/utils/dataset_helper.py (7 changes: 4 additions & 3 deletions)
@@ -2,7 +2,7 @@

  # This code is part of Qiskit.
  #
- # (C) Copyright IBM 2018, 2019.
+ # (C) Copyright IBM 2018, 2020.
  #
  # This code is licensed under the Apache License, Version 2.0. You may
  # obtain a copy of this license in the LICENSE.txt file in the root directory
@@ -193,8 +193,9 @@ def discretize_and_truncate(data, bounds, num_qubits, return_data_grid_elements=
  # prepare element grid for dim j
  elements_current_dim = np.linspace(bounds[j, 0], bounds[j, 1], (2 ** prec))
  # find index for data sample in grid
- index_grid = np.searchsorted(elements_current_dim,
- data_row-(elements_current_dim[1]-elements_current_dim[0])*0.5)
+ index_grid = np.searchsorted(
+ elements_current_dim,
+ data_row - (elements_current_dim[1] - elements_current_dim[0]) * 0.5)
  for k, index in enumerate(index_grid):
  data[k, j] = elements_current_dim[index]
  if j == 0:
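
The restructured np.searchsorted call above implements nearest-grid-point snapping: shifting each value down by half a grid spacing before the search selects the closest grid element (ties round down). A small sketch with made-up values, assuming the data already lies within the bounds:

```python
import numpy as np

prec = 3                                              # qubits for this dimension
grid = np.linspace(0.0, 1.0, 2 ** prec)               # elements_current_dim
data_row = np.array([0.03, 0.52, 0.98])

half_step = (grid[1] - grid[0]) * 0.5
index_grid = np.searchsorted(grid, data_row - half_step)
print(grid[index_grid])                               # each sample snapped to its nearest grid point
```
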
(Diffs for the remaining changed files did not load in this capture.)
