Remove old return type, mode kwarg (#4503)
* remove old return type

* Changelog

* Remove numpy unpinning

* Update

* Update

* More tests

* Apply suggestions from code review

Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>

* Update doc/development/deprecations.rst

Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>

* Remove unused function

* Remove a lot of imports

* Linting

* More update

* Comments

* Remove qscript shape

* Remove unused import

* Remove compute vjp

* Remove test

* Remove compute_vjp in torch

* Remove from doc compute vjp

* Remove pylint

* Hessian coverage

* Unused import

---------

Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>
rmoyard and mudit2812 authored Aug 24, 2023
1 parent 67b92d2 commit 6b71579
Showing 117 changed files with 547 additions and 42,517 deletions.
31 changes: 5 additions & 26 deletions .github/workflows/interface-unit-tests.yml
@@ -46,7 +46,7 @@ jobs:
       pytorch_version: ${{ inputs.pytorch_version }}
       install_pennylane_lightning_master: true
       pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
-      pytest_markers: torch and not qcut and not legacy and not finite-diff and not param-shift
+      pytest_markers: torch and not qcut and not finite-diff and not param-shift
 
 
   autograd-tests:
@@ -67,7 +67,7 @@ jobs:
       install_pytorch: false
       install_pennylane_lightning_master: true
       pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
-      pytest_markers: autograd and not qcut and not legacy and not finite-diff and not param-shift
+      pytest_markers: autograd and not qcut and not finite-diff and not param-shift
 
 
   tf-tests:
@@ -92,7 +92,7 @@ jobs:
       install_pytorch: false
       install_pennylane_lightning_master: true
       pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
-      pytest_markers: tf and not qcut and not legacy and not finite-diff and not param-shift
+      pytest_markers: tf and not qcut and not finite-diff and not param-shift
       pytest_additional_args: --splits 2 --group ${{ matrix.group }}
       additional_pip_packages: pytest-split
 
@@ -118,7 +118,7 @@ jobs:
       install_pytorch: false
       install_pennylane_lightning_master: true
       pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
-      pytest_markers: jax and not qcut and not legacy and not finite-diff and not param-shift
+      pytest_markers: jax and not qcut and not finite-diff and not param-shift
       pytest_additional_args: --splits 2 --group ${{ matrix.group }}
       additional_pip_packages: pytest-split
 
@@ -144,7 +144,7 @@ jobs:
       install_pytorch: false
       install_pennylane_lightning_master: true
       pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
-      pytest_markers: core and not qcut and not legacy and not finite-diff and not param-shift
+      pytest_markers: core and not qcut and not finite-diff and not param-shift
       pytest_additional_args: --splits 2 --group ${{ matrix.group }}
       additional_pip_packages: pytest-split
 
@@ -219,26 +219,6 @@ jobs:
       pytest_markers: qchem
       additional_pip_packages: openfermionpyscf basis-set-exchange
 
-
-  legacy-tests:
-    uses: ./.github/workflows/unit-test.yml
-    with:
-      job_name: legacy-tests
-      branch: ${{ inputs.branch }}
-      coverage_artifact_name: legacy-coverage
-      python_version: 3.9
-      install_jax: true
-      jax_version: ${{ inputs.jax_version }}
-      install_tensorflow: true
-      tensorflow_version: ${{ inputs.tensorflow_version }}
-      keras_version: ${{ inputs.tensorflow_version }}
-      install_pytorch: true
-      pytorch_version: ${{ inputs.pytorch_version }}
-      install_pennylane_lightning_master: false
-      pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
-      pytest_markers: legacy
-
-
   gradients-tests:
     strategy:
       matrix:
@@ -325,7 +305,6 @@ jobs:
       - zx-tests
       - qcut-tests
       - qchem-tests
-      - legacy-tests
       - gradients-tests
       - data-tests
       - device-tests
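For context, the pytest_markers values above are ordinary pytest ``-m`` selection expressions; ``legacy`` drops out of every job because the tests carrying that marker (old return system) are deleted by this PR, together with the dedicated legacy-tests job. A minimal sketch of the kind of marked test these expressions select — the test name and body here are illustrative, not taken from the PR:

    import pennylane as qml
    import pytest

    @pytest.mark.torch  # selected in CI via: pytest -m "torch and not qcut and ..."
    def test_torch_expval():
        import torch  # assumes the torch interface dependencies are installed

        dev = qml.device("default.qubit", wires=1)

        @qml.qnode(dev, interface="torch")
        def circuit(x):
            qml.RX(x, wires=0)
            return qml.expval(qml.PauliZ(0))

        # RX(0) leaves |0> unchanged, so <Z> = 1
        assert abs(float(circuit(torch.tensor(0.0))) - 1.0) < 1e-6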
23 changes: 11 additions & 12 deletions doc/development/deprecations.rst
@@ -12,18 +12,6 @@ Pending deprecations
   - Deprecated in v0.32
   - Removed in v0.33
 
-* ``qml.enable_return`` and ``qml.disable_return`` are deprecated. Please avoid calling
-  ``disable_return``, as the old return system is deprecated along with these switch functions.
-
-  - Deprecated in v0.32
-  - Will be removed in v0.33
-
-* The ``mode`` keyword argument in ``QNode`` is deprecated, as it was only used in the old return
-  system (which is also deprecated). Please use ``grad_on_execution`` instead.
-
-  - Deprecated in v0.32
-  - Will be removed in v0.33
-
 * The ``observables`` argument in ``QubitDevice.statistics`` is deprecated. Please use ``circuit``
   instead. Using a list of observables in ``QubitDevice.statistics`` is deprecated. Please use a
   ``QuantumTape`` instead.
@@ -147,6 +135,17 @@
 Completed deprecation cycles
 ----------------------------
 
+* ``qml.enable_return`` and ``qml.disable_return`` have been removed. The old return types are no longer available.
+
+  - Deprecated in v0.32
+  - Removed in v0.33
+
+* The ``mode`` keyword argument in ``QNode`` has been removed, as it was only used in the old return
+  system (which has also been removed). Please use ``grad_on_execution`` instead.
+
+  - Deprecated in v0.32
+  - Removed in v0.33
+
 * ``qml.math.purity``, ``qml.math.vn_entropy``, ``qml.math.mutual_info``, ``qml.math.fidelity``,
   ``qml.math.relative_entropy``, and ``qml.math.max_entropy`` no longer support state vectors as
   input. Please call ``qml.math.dm_from_state_vector`` on the input before passing to any of these functions.
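A migration sketch for the two completed cycles recorded above (assumes PennyLane v0.33+; the mapping of ``mode="forward"`` onto ``grad_on_execution=True`` follows the deprecation note):

    import pennylane as qml

    dev = qml.device("default.qubit", wires=1)

    # Before v0.33 (now a TypeError): @qml.qnode(dev, mode="forward")
    @qml.qnode(dev, diff_method="adjoint", grad_on_execution=True)
    def circuit(x):
        qml.RX(x, wires=0)
        return qml.expval(qml.PauliZ(0))

    # qml.enable_return() / qml.disable_return() no longer exist;
    # the new return system is always active.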
19 changes: 12 additions & 7 deletions doc/introduction/returns.rst
@@ -115,12 +115,21 @@ select the option below that describes your situation.
       x = np.array(0.5, requires_grad=True)
       qml.jacobian(circuit)(x)
 
-  Follow the instructions :ref:`here <return-autograd-tf-gotcha>` to fix this issue, which
-  arises because NumPy and TensorFlow do not support differentiating tuples.
-  Alternatively, consider porting your code to use the :ref:`JAX <jax_interf>` or
+  Use stacking to fix this issue (see below), which arises because NumPy and TensorFlow do not support
+  differentiating tuples. Alternatively, consider porting your code to use the :ref:`JAX <jax_interf>` or
   :ref:`Torch <torch_interf>` interface, which could unlock additional features and performance
   benefits!
 
+  .. code-block:: python
+
+      # TensorFlow: stack the tuple of results before differentiating
+      # (a, b assumed to be tf.Variables watched by the tape)
+      with tf.GradientTape() as tape:
+          res = tf.stack(circuit(a, b))
+      tape.jacobian(res, [a, b])
+
+      # Autograd: wrap the circuit so it returns a single stacked array
+      def cost(x):
+          return qml.math.stack(circuit(x))
+
+      x = np.array(0.5, requires_grad=True)
+      qml.jacobian(cost)(x)
+
 * You are returning differently-shaped quantities together, such as
   :func:`expval() <pennylane.expval>` and :func:`probs() <pennylane.probs>`. For example, the
   following code is compatible with version 0.29 of PennyLane but will raise an error in version
@@ -183,7 +192,3 @@ select the option below that describes your situation.
 - If you suspect that your issue is due to a bug in PennyLane itself, please open a
   `bug report <https://github.com/PennyLaneAI/pennylane/issues/new?labels=bug+%3Abug%3A&template=bug_report.yml&title=[BUG]>`_
   on the PennyLane GitHub page.
-
-- As a *last resort*, you can place :func:`qml.disable_return() <.disable_return>` at the top of
-  your code. This will revert PennyLane's behaviour to the QNode return type in version 0.29.
-  However, be aware that this capability will be removed in a future version of PennyLane!
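To make concrete what the updated page describes: under the now-mandatory new return system, a QNode with several measurements returns a tuple, which is exactly why the NumPy/TensorFlow Jacobian needs the stacking shown above. A sketch (assumes ``default.qubit``; ``qml.math.hstack`` is used because ``expval`` and ``probs`` have different shapes):

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def circuit(x):
        qml.RX(x, wires=0)
        return qml.expval(qml.PauliZ(0)), qml.probs(wires=0)

    x = np.array(0.5, requires_grad=True)
    expval, probs = circuit(x)  # a tuple of independently shaped arrays

    def cost(x):
        # flatten the tuple into one array so autograd can differentiate it
        return qml.math.hstack(circuit(x))

    qml.jacobian(cost)(x)  # derivative of [expval, p0, p1] w.r.t. x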
10 changes: 9 additions & 1 deletion doc/releases/changelog-dev.md
@@ -8,6 +8,12 @@
 
 <h3>Breaking changes 💔</h3>
 
+* The old return type and associated functions ``qml.enable_return`` and ``qml.disable_return`` are removed.
+  [#4503](https://github.com/PennyLaneAI/pennylane/pull/4503)
+
+* The ``mode`` keyword argument in ``QNode`` is removed. Please use ``grad_on_execution`` instead.
+  [#4503](https://github.com/PennyLaneAI/pennylane/pull/4503)
+
 <h3>Deprecations 👋</h3>
 
 <h3>Documentation 📝</h3>
@@ -16,4 +22,6 @@
 
 <h3>Contributors ✍️</h3>
 
-This release contains contributions from (in alphabetical order):
+This release contains contributions from (in alphabetical order):
+
+Romain Moyard
1 change: 0 additions & 1 deletion pennylane/__init__.py
@@ -83,7 +83,6 @@
 from pennylane.templates.subroutines import *
 from pennylane import qaoa
 from pennylane.qnode import QNode, qnode
-from pennylane.return_types import enable_return, disable_return, active_return
 from pennylane.transforms import (
     adjoint_metric_tensor,
     batch_params,
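With this import line gone, the switch functions vanish from the top-level namespace; downstream code that probed for them can check defensively. A sketch of the post-removal behaviour on v0.33+:

    import pennylane as qml

    # All three switch functions were removed in v0.33:
    assert not hasattr(qml, "enable_return")
    assert not hasattr(qml, "disable_return")
    assert not hasattr(qml, "active_return")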
16 changes: 2 additions & 14 deletions pennylane/_grad.py
@@ -22,8 +22,6 @@
 from autograd.extend import vspace
 from autograd.wrap_util import unary_to_nary
 
-from pennylane.return_types import active_return
-
 make_vjp = unary_to_nary(_make_vjp)
 
 
@@ -329,18 +327,8 @@ def _jacobian_function(*args, **kwargs):
             "If this is unintended, please add trainable parameters via the "
             "'requires_grad' attribute or 'argnum' keyword."
         )
-        try:
-            jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
-        except TypeError as e:
-            if active_return():
-                raise ValueError(
-                    "PennyLane has a new return shape specification that"
-                    " may not work well with autograd and more than one measurement. That may"
-                    " be the source of the error. \n\n"
-                    "See the documentation here for more information:\n"
-                    "https://docs.pennylane.ai/en/stable/introduction/returns.html"
-                ) from e
-            raise e
+
+        jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
 
         return jac[0] if unpack else jac
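With the ``TypeError`` guard gone, ``_jacobian_function`` now unconditionally builds one Jacobian per trainable argument and, per the ``return jac[0] if unpack else jac`` line above, returns a bare Jacobian for a single requested argument and a tuple otherwise. A usage sketch:

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def circuit(a, b):
        qml.RX(a, wires=0)
        qml.RY(b, wires=0)
        return qml.expval(qml.PauliZ(0))

    a = np.array(0.3, requires_grad=True)
    b = np.array(0.7, requires_grad=True)

    jac = qml.jacobian(circuit)(a, b)           # tuple: (d<Z>/da, d<Z>/db)
    da = qml.jacobian(circuit, argnum=0)(a, b)  # single array, unpacked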
