
[RFC][VM] Heterogeneous execution in Relay VM #4178

Closed · 4 tasks

wweic opened this issue Oct 22, 2019 · 5 comments

wweic (Contributor) commented Oct 22, 2019

Heterogeneous execution in Relay VM

Goal

The Relay graph runtime supports executing different parts of a graph on different devices, known as heterogeneous execution. We'd like to port this feature to the Relay VM.

Non-goals

The device annotation pass assumes all computation happens inside a single function, so it cannot compute device assignments across multiple Relay functions. This could be a problem when, for example, the main function allocates a GPU tensor but calls out to a tensor array concatenate operation defined in another Relay function; the program might crash or copy to CPU memory (I haven't experimented yet). The proper fix is to implement an interprocedural analysis for the device annotation pass.
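To make the limitation concrete, here is a hedged Python sketch of the cross-function case (using the relay.annotation.on_device API of that era; the shapes and the helper function are made up for illustration):

```python
import tvm
from tvm import relay

# Hypothetical standalone Relay function performing a concatenate.
t = relay.var("t", shape=(2, 2))
concat_fn = relay.Function([t], relay.concatenate([t, t], axis=0))

# main annotates its tensor for GPU, then calls the helper function.
x = relay.var("x", shape=(2, 2))
gpu_x = relay.annotation.on_device(x, tvm.gpu(0))
main = relay.Function([x], relay.Call(concat_fn, [gpu_x]))

# Analyzing main alone, the pass cannot propagate the GPU assignment into
# the body of concat_fn; that would require an interprocedural analysis.
```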

Current Design in Relay Graph Runtime

Compilation

Reference: #2361

Summary: If users want to specify the device an operator runs on, they can wrap an expression with the annotation operator on_device(expr, dev_id). During relay.build, the RunDeviceAnnotationPass step replaces each on_device node with a device_copy node. The GraphPlanMemory step then computes the device assignment (device_type, see the next section) of each memory block. This is possible because the graph runtime only supports static graphs, so all the information can be captured statically. Finally, during native code generation, each device_copy node is mapped to a special packed function named __copy.
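As a rough illustration of this flow (the exact relay.build signature and target-dict keys varied across versions, so treat this as a sketch rather than the canonical API):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4, 4))
y = relay.var("y", shape=(4, 4))

# Ask for the addition to run on GPU; unannotated ops use the fallback device.
add_gpu = relay.annotation.on_device(relay.add(x, y), tvm.gpu(0))
func = relay.Function([x, y], relay.multiply(add_gpu, y))

# relay.build runs RunDeviceAnnotationPass (on_device -> device_copy) and
# GraphPlanMemory (a device_type per static memory block) along the way.
graph_json, lib, params = relay.build(func, target={"cpu": "llvm", "gpu": "cuda"})
```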

Runtime

Reference: #1695

Summary: In the graph JSON file, a new field named device_type specifies which device each static memory node should be scheduled to, and the runtime allocates the memory on that device accordingly. When the graph runtime sees the special operator __copy, it calls TVMArrayCopyFromTo to move memory across devices correctly.
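The same primitive backs NDArray.copyto, so the effect of __copy can be sketched as follows (a minimal example, not the runtime's actual code path):

```python
import numpy as np
import tvm

src = tvm.nd.array(np.ones((2, 2), dtype="float32"), tvm.gpu(0))  # lives on GPU
dst = src.copyto(tvm.cpu(0))  # cross-device copy via TVMArrayCopyFromTo
```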

Proposal for Relay VM

Compilation

References:

  1. Add an AllocStorage opcode which allocates physical memory ([Relay][Memory][VM] #3560).

We should be able to reuse the entire workflow up to RunDeviceAnnotationPass. The VM compiler, which translates Relay expressions into VM opcodes, needs to map each device_copy node to a new opcode, DeviceCopy(src_register, dst_register). The tensor object in each register carries its device context, so the VM knows how to copy the data. We also need to change AllocTensor (later AllocStorage) to attach a device context to the instruction, so the VM knows where to allocate the memory; right now it always uses the default context.
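A hedged sketch of how these two instructions could be encoded; the field names are assumptions for illustration, not the actual VM bytecode layout:

```python
from dataclasses import dataclass

@dataclass
class DeviceCopy:
    src_register: int  # register holding the tensor to copy
    dst_register: int  # register that receives the copy

@dataclass
class AllocTensor:
    dst_register: int
    shape: tuple
    dtype: str
    device_type: int   # new: target device kind (e.g. kDLCPU, kDLGPU)
    device_id: int     # new: which device of that kind
```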

VM Runtime

The VM runtime needs to implement the change to AllocTensor and the new DeviceCopy opcode.
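A minimal Python sketch of the dispatch these changes imply, reusing the hypothetical instruction classes above; read_register/write_register are assumed helpers, not real TVM VM APIs (the real runtime is C++):

```python
import tvm

def execute(vm, instr):
    if isinstance(instr, AllocTensor):
        # Allocate on the device recorded in the instruction instead of
        # the default context.
        ctx = tvm.context(instr.device_type, instr.device_id)
        vm.write_register(instr.dst_register,
                          tvm.nd.empty(instr.shape, instr.dtype, ctx))
    elif isinstance(instr, DeviceCopy):
        # Each register's tensor carries its own context, so the copy
        # direction is implied by the two operands.
        src = vm.read_register(instr.src_register)
        dst = vm.read_register(instr.dst_register)
        src.copyto(dst)  # TVMArrayCopyFromTo under the hood
```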

Tasks

  • Add opcode DeviceCopy.
  • Add device context to AllocTensor/AllocStorage.
  • Change VMCompiler to attach device context to AllocTensor/AllocStorage.
  • Change VMCompiler to emit DeviceCopy opcode.

cc @icemelon9 @zhiics @zxy844288792 @jroesch @tqchen @yzhliu

jroesch (Member) commented Oct 24, 2019

I think, looking at my recent PR, we probably need to track the device context when we allocate storage. The storage's context will prevent merging pieces of storage that live on different devices.

wweic (Contributor, Author) commented Oct 24, 2019

@jroesch thanks. I have added references to the PR in the RFC.

zxy844288792 (Contributor) commented
I'm interested in this. @wweic, I'll reach out to you for advice.

tqchen (Member) commented Oct 8, 2020

@zhiics @wweic would be great to get a status update and see if this PR can be updated or closed

zhiics (Member) commented Oct 8, 2020

Ah, thanks for the reminder. This was closed by #6337.

zhiics closed this as completed Oct 8, 2020