
Type conversion interface #4

Open
Roger-luo opened this issue Jan 5, 2019 · 10 comments

@Roger-luo
Member

This package should provide a gpu function that does the following, similar to FluxML/Flux.jl#513:

```julia
some_circuit |> gpu(Float32)
some_circuit |> cpu(Float64)
```
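
A minimal sketch of how such curried converters could be built on top of Adapt.jl (the gpu/cpu names are the proposed interface; the Adapt-based implementation and the CUDA.jl dependency are assumptions for illustration, not an existing API):

```julia
using Adapt, CUDA  # assumed dependencies for this sketch

# `gpu(T)` / `cpu(T)` return closures, so they compose with `|>` as proposed.
# `adapt` recurses into structs that define `Adapt.adapt_structure`.
gpu(::Type{T}) where {T} = x -> adapt(CuArray{T}, x)
cpu(::Type{T}) where {T} = x -> adapt(Array{T}, x)
```
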
@GiggleLiu
Member

The current interface is `some_circuit |> cu`; why don't we keep using cu instead of gpu?

@MikeInnes

I used to use cu in Flux but ended up going against that; adapt means something different and it doesn't really work to apply it to models.

It might be nice to re-use Flux's treelike functionality in Yao though. I'd be happy to split it out into its own package if that's useful to you.

@GiggleLiu
Member

GiggleLiu commented Jan 30, 2019

@MikeInnes Unfortunately, we cannot pull Flux into our project just to use the gpu interface. Is there an example that would help me and Roger understand why using cu can be a problem?

Also, what is the treelike functionality in Flux? It sounds very interesting.

@Roger-luo
Member Author

@MikeInnes

I personally prefer cuda over gpu, though; PyTorch uses this name as well.

Regarding treelike: we already have one (though it's a bit ugly at the moment); it provides traversal, printing, etc. I think we should bring that abstract tree package back to life, so we can merge these efforts there.

We will consider integrating with Flux's AD or Zygote in another package that provides AD for quantum circuits. Yao itself will be something like DE.jl, a meta-package.

@MikeInnes

Sure, I'm not expecting you to take a Flux dependency. Splitting out the treelike stuff would be the right move.

The reason for gpu as opposed to cu in Flux is that it's hardware-agnostic: in principle, gpu could convert to CLArrays if you have that package loaded. To be honest, I can't remember what the specific problem with using adapt directly was. If you don't hit any conflicts with it, it might be fine.
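
A rough illustration of that hardware-agnostic idea: gpu forwards to whichever backend array type a loaded GPU package has registered (the registry mechanism below is purely hypothetical, not Flux's implementation):

```julia
using Adapt  # assumed dependency for this sketch

# Hypothetical backend registry: a GPU package would set this when loaded,
# e.g. to CuArray (CUDA) or CLArray (OpenCL), making `gpu` hardware-agnostic.
const GPU_BACKEND = Ref{Any}(nothing)

# `gpu` is a no-op until some backend registers itself.
gpu(x) = GPU_BACKEND[] === nothing ? x : adapt(GPU_BACKEND[], x)
```
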

@Roger-luo
Member Author

Roger-luo commented Jan 31, 2019

@MikeInnes What about AbstractTrees.jl? I copied part of it while implementing our printing. Is there any reason Flux is not using it?

Maybe we should just have some functions instead of providing any types for them.

@MikeInnes

For Flux's purposes, at least, we don't really need AbstractTrees' functionality. We really just need mapchildren, and the rest follows from there.
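
A minimal sketch of the mapchildren idea with toy layer types (illustrative only; not Flux's actual definitions, which used the @treelike machinery at the time):

```julia
# Toy model types standing in for Flux layers.
struct Dense{W,B}
    W::W
    b::B
end

struct Chain{T<:Tuple}
    layers::T
end

# mapchildren rebuilds a container with `f` applied to each child;
# anything without a method is treated as a leaf and returned as-is.
mapchildren(f, x) = x
mapchildren(f, d::Dense) = Dense(f(d.W), f(d.b))
mapchildren(f, c::Chain) = Chain(map(f, c.layers))

# "The rest follows": e.g. an eltype converter built purely on mapchildren.
f32(x::AbstractArray{<:AbstractFloat}) = Float32.(x)
f32(x) = mapchildren(f32, x)
```
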

@Roger-luo
Member Author

Roger-luo commented Jan 31, 2019

@MikeInnes I see. I guess you only need that for the conversion between CPU and GPU in Flux.

We do need some traversal API to dispatch and collect parameters, however; unlike normal machine-learning models, we need to dispatch parameters across the whole tree. The block tree in Yao is actually a kind of AST in some sense, so it requires more functionality from a tree-like data structure.
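
A rough sketch of that kind of whole-tree parameter traversal (the block types and function names below are illustrative stand-ins, not Yao's actual API):

```julia
# Hypothetical block tree: composites hold sub-blocks, primitives hold parameters.
abstract type AbstractBlock end

struct RotX <: AbstractBlock
    theta::Float64
end

struct ChainBlock <: AbstractBlock
    blocks::Vector{AbstractBlock}
end

subblocks(::AbstractBlock) = ()          # primitives have no children
subblocks(c::ChainBlock) = c.blocks

# Collect every parameter by walking the whole tree, not just the top level.
collect_params!(out, b::RotX) = (push!(out, b.theta); out)
collect_params!(out, b::AbstractBlock) =
    (foreach(s -> collect_params!(out, s), subblocks(b)); out)
collect_params(b::AbstractBlock) = collect_params!(Float64[], b)

# Dispatching new parameters would follow the same traversal.
```
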

@GiggleLiu
Member

Maybe we should avoid using gpu, since we have a vague plan to combine Yao.jl and Flux.jl in the future, in another project about differentiable quantum circuits; it would be annoying to have naming conflicts then. Also, CuYao.jl supports CuArrays only, so we don't have the problem of different types of GPU arrays. So far, I haven't seen any problem with using cu in my current research project, but thanks to @MikeInnes for reminding us about this potential issue.

> The block tree in Yao is actually a kind of AST in some sense, so it requires more functionality from a tree-like data structure.

In fact, I am pretty happy with the current recursive approach to visiting all nodes 😂. What can we expect from AbstractTrees.jl? Or what kind of functionality do you want to support? @Roger-luo

@Roger-luo
Member Author

Roger-luo commented Jan 31, 2019

@MikeInnes I'd love to have cuda for converting to CuArray, opencl/cl for converting to CLArray, and maybe also tpu for converting to XRTArray. They would just be simple bindings over a mapchildren in Flux, and this is more explicit than gpu. (Maybe I should post this in Flux.)

@GiggleLiu No, I mean it requires more functionality of a tree-like data structure than Flux's treelike provides.

For conversion between CPU and GPU in Yao, I think we can just use our own tree operators for now.
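
A sketch of those explicit converters as thin bindings over a single mapchildren-style traversal (cuda/cl/tpu are the proposed names; CuArray, CLArray, and XRTArray come from their respective GPU/TPU packages, which are assumed loaded):

```julia
using Adapt  # assumed dependency for this sketch

mapchildren(f, x) = x   # leaf fallback; model/block types extend this, as above

# One generic recursion over the tree: arrays are transformed, containers recurse.
treemap(f, x::AbstractArray) = f(x)
treemap(f, x) = mapchildren(c -> treemap(f, c), x)

# Each device converter is then a one-line binding:
cuda(x) = treemap(a -> adapt(CuArray, a), x)
cl(x)   = treemap(a -> adapt(CLArray, a), x)
tpu(x)  = treemap(a -> adapt(XRTArray, a), x)
```
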
