In this case in particular, since the gradient will be computed independently of the interface, and the result will just be consumed downstream, maybe you don't need to specify the interface at all until you actually use it.
I.e. just specify it in the expectation() call :)
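A minimal sketch of what that could look like, assuming a hypothetical expectation() helper that only receives the backend/interface at call time (the helper's name and signature are illustrative, not the actual qiboml API; the qibo calls inside it are standard ones):

```python
import qibo
from qibo import Circuit, gates, hamiltonians

# Build the circuit and the observable without committing to an interface.
circuit = Circuit(1)
circuit.add(gates.RY(0, theta=0.3))
observable = hamiltonians.Z(1)


def expectation(observable, circuit, backend="numpy"):
    """Hypothetical helper: the backend/interface is chosen only here,
    at the point where the expectation value is actually computed."""
    qibo.set_backend(backend)
    result = circuit()
    return observable.expectation(result.state())


value = expectation(observable, circuit)
print(value)
```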
(In principle, you don't even need a backend until you execute, but that would require passing one in every circuit call - unless you cache a backend in the Circuit object itself)
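As a sketch of the alternative mentioned in the parenthesis, here is a hypothetical wrapper that caches a backend on the circuit so it does not have to be passed in every call. construct_backend and execute_circuit are existing qibo utilities; the BoundCircuit wrapper itself is only illustrative of the design option:

```python
from dataclasses import dataclass

from qibo import Circuit, gates
from qibo.backends import construct_backend


@dataclass
class BoundCircuit:
    """Hypothetical wrapper: cache a backend on the circuit so callers do not
    need to pass one in every execution call."""

    circuit: Circuit
    backend_name: str = "numpy"

    def __post_init__(self):
        # construct_backend is an existing qibo utility; the wrapper itself
        # is only a sketch of the alternative discussed above.
        self.backend = construct_backend(self.backend_name)

    def __call__(self, initial_state=None):
        # Execute on the cached backend rather than the globally set one.
        return self.backend.execute_circuit(self.circuit, initial_state=initial_state)


circuit = Circuit(1)
circuit.add(gates.H(0))
bound = BoundCircuit(circuit)
print(bound().state())
```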
This is probably true. I'll try to drop the interface definition and test all the interfaces by implementing algorithms directly.
Step back: the Hamiltonian object right now is backend (interface 🙈) dependent. E.g. ham.matrix is a torch.Tensor if the Hamiltonian is initialized after setting qibo.set_backend("pytorch"). This is required if you want to run symbolical_with_torch; same if you want to run symbolical_with_jax, where the interface has to be JAX friendly.
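For example, the type of ham.matrix follows whatever backend is active at construction time (this assumes a pytorch-capable backend is installed and registered under the name used in the comment; exact backend names may differ between qibo versions):

```python
import qibo
from qibo import hamiltonians

# The matrix representation follows the backend active when the object is built.
qibo.set_backend("numpy")
ham_np = hamiltonians.TFIM(nqubits=2, h=1.0)
print(type(ham_np.matrix))  # numpy.ndarray

# Assumes the pytorch backend is available under this name.
qibo.set_backend("pytorch")
ham_torch = hamiltonians.TFIM(nqubits=2, h=1.0)
print(type(ham_torch.matrix))  # torch.Tensor
```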
As the title says, if needed.