
[Performance] Optimize operator - operator multiplication #158

Closed
jpmoutinho opened this issue Apr 23, 2024 · 1 comment
Assignees: EthanObadia
Labels: noise (noisy simulation), performance (Performance improvements)
@jpmoutinho (Collaborator):

Currently, operators of different sizes in the `apply_operator` product are handled by padding the smaller one with identities. Another option is to manipulate the qubit indices directly, without padding, which is likely to be more efficient.
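The two approaches can be illustrated with a small numpy sketch (illustrative only, not pyqtorch's actual implementation): applying a single-qubit gate to one qubit of a 3-qubit state, either by building the full padded matrix with Kronecker products or by contracting only the target index.

```python
import numpy as np

# Illustrative sketch (not pyqtorch code): apply a 1-qubit gate X to
# qubit 1 of a 3-qubit state, in two equivalent ways.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)
state = np.random.default_rng(0).standard_normal(8)

# (a) Padding: build the full 8x8 operator kron(I, X, I) explicitly.
padded = np.kron(np.kron(I, X), I)
out_padded = padded @ state

# (b) Index manipulation: view the state as one axis per qubit and
# contract the gate with the target axis only; the 8x8 matrix is
# never materialized.
psi = state.reshape(2, 2, 2)
contracted = np.tensordot(X, psi, axes=([1], [1]))  # axes: (out, q0, q2)
out_indexed = np.moveaxis(contracted, 0, 1).reshape(-1)

assert np.allclose(out_padded, out_indexed)
```

For an operator on k qubits out of n total, approach (a) builds and multiplies a 2^n x 2^n matrix, while (b) only ever touches the 2^k x 2^k gate and the 2^n state, which is where the expected speedup comes from.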

@jpmoutinho jpmoutinho added the performance Performance improvements label Apr 23, 2024
@EthanObadia EthanObadia self-assigned this Apr 25, 2024
@EthanObadia EthanObadia added the noise noisy simulation label May 22, 2024
jpmoutinho added a commit that referenced this issue Jul 26, 2024
…e, add more tests (#220)

This also takes care of the work previously tracked in
#201

- [x] `tensor` on all primitive gates fully working and tested for
arbitrary support, including expansion to an arbitrary full support.
- [x] `tensor` on all composite operations also fully working
and tested.
- [x] `tensor` on projectors fully working and tested.
- [x] `tensor` on `HamiltonianEvolution` fully working and tested.
- [x] Tests based on `tensor` (i.e. using `_calc_mat_vec_wavefunction`)
all reviewed, refactored, and centralized in `test_tensor.py`.

## Changes to `Primitive`, `Sequence`, `Add`, `Scale`, `Observable`:
- [KINDA BREAKING] The `qubit_support` property is now always ordered,
which was not the case for `Primitive`. The `tensor` method will follow
this order.
- The `Scale` can now be applied to any instance of `Primitive`,
`Sequence` or `Add`.
- [BREAKING] The `Observable` is now a simple extension of `Add` with an
extra `expectation` method.
- [BREAKING] The `DiagonalObservable` has been removed, as its current
logic is much slower than the normal `Observable` logic of applying all
terms individually. There are gains to be had with diagonal gates, but
they will require a full redesign in a new MR.

## Changes to the `tensor` method:
- [BREAKING] There is no longer a `n_qubits` argument.
- Instead, there is an optional `full_support` argument.

The design of `tensor` is that `op.tensor()` returns the matrix whose
size and row/column ordering exactly match `op.qubit_support`. E.g.:

```
op = CNOT(0, 1)
op.qubit_support
op.tensor()[..., 0].real
---
(0, 1)
tensor([[1., 0., 0., 0.],
        [0., 1., 0., 0.],
        [0., 0., 0., 1.],
        [0., 0., 1., 0.]], dtype=torch.float64)
```
```
op = CNOT(1, 0)
op.qubit_support
op.tensor()[..., 0].real
---
(0, 1)
tensor([[1., 0., 0., 0.],
        [0., 0., 0., 1.],
        [0., 0., 1., 0.],
        [0., 1., 0., 0.]], dtype=torch.float64)
```
```
op = CNOT(23, 12)
op.qubit_support
op.tensor()[..., 0].real
---
(12, 23)
tensor([[1., 0., 0., 0.],
        [0., 0., 0., 1.],
        [0., 0., 1., 0.],
        [0., 1., 0., 0.]], dtype=torch.float64)
```
Expanding an operator by introducing explicit identity matrices can be
done with the `full_support` argument. The order in which the
`full_support` tuple is given does not matter: it is simply sorted, the
qubits in it that are not part of the operator's support are identified,
and identities are added for those. The logic for this is in
`utils.expand_operator` (which would eventually replace
`promote_operator`).
```
op = CNOT(23, 12)
op.qubit_support
op.tensor(full_support = (2, 12, 23))[..., 0].real
---
(12, 23)
tensor([[1., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 1., 0., 0., 0., 0.],
        [0., 0., 1., 0., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 1.],
        [0., 0., 0., 0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0.]], dtype=torch.float64)
```
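For reference, the identity-expansion step can be sketched in plain numpy. The name and signature below are illustrative only; they are not the actual `utils.expand_operator` API.

```python
import numpy as np

# Illustrative numpy sketch of identity expansion; the real logic lives
# in pyqtorch's `utils.expand_operator` and may differ in details.
def expand_with_identities(mat, support, full_support):
    """Embed `mat`, acting on the qubits in `support`, into the sorted
    `full_support` by tensoring identities onto the missing qubits and
    permuting the qubit axes into sorted order."""
    full = tuple(sorted(full_support))
    missing = [q for q in full if q not in support]
    # One 2x2 identity per qubit not in the operator's support.
    for _ in missing:
        mat = np.kron(mat, np.eye(2))
    # Qubit order of `mat` right now: original support, then missing qubits.
    current = tuple(support) + tuple(missing)
    n = len(full)
    perm = [current.index(q) for q in full]
    t = mat.reshape((2,) * (2 * n))
    t = np.transpose(t, perm + [p + n for p in perm])
    return t.reshape(2**n, 2**n)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
# X on qubit 1, expanded to qubits (0, 1): the identity goes on qubit 0.
expanded = expand_with_identities(X, (1,), (0, 1))
assert np.allclose(expanded, np.kron(np.eye(2), X))
```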
The logic for composite operations is exactly the same: it calls
`tensor()` on each individual operation but expands its qubit support
to match the qubit support of the composite operation itself. E.g.:
```
op = Sequence([X(0), CNOT(2, 5)])
op.qubit_support # (0, 2, 5)
op.tensor()[..., 0].real
---
tensor([[0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 1.],
        [0., 0., 0., 0., 0., 0., 1., 0.],
        [1., 0., 0., 0., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 1., 0., 0., 0., 0.],
        [0., 0., 1., 0., 0., 0., 0., 0.]], dtype=torch.float64)
```
The operation above calls `X(0).tensor(full_support = (0, 2, 5))` and
`CNOT(2, 5).tensor(full_support = (0, 2, 5))` and then combines
them through matrix multiplication. In `Add`, the individual tensors are
added together, and in `Scale`, the tensor is multiplied by the scale
value. An even larger support can be passed to the operation above,
e.g. with `op.tensor(full_support = (0, 2, 5, 43))`, which would
then be propagated down the tree of operation tensors.
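How the composite classes could combine the already-expanded child tensors can be sketched as follows (illustrative numpy, not pyqtorch code; the left-multiplication order for sequences is an assumption here, valid when later operations act after earlier ones):

```python
import numpy as np

# Illustrative sketch (not pyqtorch code) of combining child tensors
# that have already been expanded to a common full support.
def sequence_tensor(mats):
    # Later operations act after earlier ones, so they multiply on the
    # left (this ordering convention is assumed here).
    out = np.eye(mats[0].shape[0])
    for m in mats:
        out = m @ out
    return out

def add_tensor(mats):
    # An `Add`-style node sums the individual tensors.
    return sum(mats)

def scale_tensor(scale, mat):
    # A `Scale`-style node multiplies the tensor by the scale value.
    return scale * mat

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
seq = sequence_tensor([X, Z])  # equals Z @ X under the assumed convention
```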

## What is left to do (in other MRs):
- Better polish the `Primitive` and `Parametric` classes:
#240
- More efficient operator-operator products:
#221 and
#158, (need to update the
issue) which would be important to make some of the `tensor` logic and
noisy simulations more efficient by not using explicit identities.
- Introduce specific logic for diagonal operators:
#211

## What this MR closes:
- Completes some of the things in
#225; some of the points
there are now tracked elsewhere. Let's keep it open for now though.
- Closes #210
- Closes #192
@jpmoutinho (Collaborator, Author):

Closing as already tracked in #221
