
[RFC] Enhance TensorFlow Frontend Control Flow Support #4969

Closed
kevinthesun opened this issue Feb 28, 2020 · 8 comments
kevinthesun commented Feb 28, 2020

Summary

In today’s TVM TensorFlow frontend, there is only limited support for control flow, which makes it difficult to cover TensorFlow object detection models. In this RFC, we discuss how to improve the current TVM TF frontend to fully support TensorFlow control flow.

Solution

Visit TensorFlow Graph Def

Currently TVM parses the TensorFlow graph def node by node in topological order. However, a topological order may no longer hold once control flow is introduced into the graph def. We do the following to alter the visiting order:

  1. Visit all nodes in the original graph def order and fetch all control flow nodes.
  2. For all control flow nodes, modify the order so that for each while_loop, all Exit nodes sit at the end.
  3. Visit all control flow nodes in the modified order and generate the corresponding Relay functions. For each control flow node, we backtrack all its ancestor nodes up to the input nodes.
  4. Visit and parse the remaining unvisited nodes.
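The steps above can be sketched in plain Python (a toy illustration, not the actual TVM implementation; `Node`, `reorder_control_flow`, and `visit_order` are made-up names, and Exit nodes are moved to the end globally here for simplicity rather than per while_loop block):

```python
# Toy sketch of the modified visiting order. The real frontend walks a
# TensorFlow GraphDef; here a node is just a (name, op) pair.
CONTROL_FLOW_OPS = {"Merge", "Switch", "Enter", "Exit", "NextIteration", "LoopCond"}

class Node:
    def __init__(self, name, op):
        self.name = name
        self.op = op

def reorder_control_flow(nodes):
    """Steps 1-2: fetch control flow nodes in original order, then move
    all Exit nodes to the end so a loop body is visited before its Exit."""
    cf = [n for n in nodes if n.op in CONTROL_FLOW_OPS]
    return [n for n in cf if n.op != "Exit"] + [n for n in cf if n.op == "Exit"]

def visit_order(nodes):
    """Steps 3-4: visit reordered control flow nodes first (each of which
    would backtrack its ancestors), then the remaining unvisited nodes."""
    cf = reorder_control_flow(nodes)
    seen = {n.name for n in cf}
    return cf + [n for n in nodes if n.name not in seen]
```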

Parse Control Flow nodes

In TVM, a while loop is represented as a recursive function. To convert a TensorFlow while_loop block into a Relay function, the following assumptions must hold:

  1. All Exit nodes under the same while_loop must have the same node name prefix as this while loop block, which is obtained with node.name.rsplit('/', 1)[0]. In TensorFlow 1.x, the user can specify the name of a while_loop, but the Exit node prefix is automatically generated from ancestor nodes’ names.
  2. Merge, Switch, NextIteration and Exit nodes with the same postfix number belong to the same iteration body. For example, Merge_1, Switch_1, NextIteration_1 and Exit_1 should refer to the same iteration body. Since these four nodes are generated when the user creates a while_loop, the postfix number can’t be set arbitrarily.
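These two assumptions can be illustrated with small helpers (hypothetical names; only the `node.name.rsplit('/', 1)[0]` expression comes from the text above):

```python
import re

def loop_prefix(exit_node_name):
    """Assumption 1: all Exit nodes of a while_loop share the block's
    name prefix, obtained by dropping the last path component."""
    return exit_node_name.rsplit('/', 1)[0]

def iteration_index(node_name):
    """Assumption 2: Merge_1/Switch_1/NextIteration_1/Exit_1 refer to
    the same iteration body; a node without a numeric postfix is body 0."""
    m = re.search(r'_(\d+)$', node_name)
    return int(m.group(1)) if m else 0
```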

We convert a while loop block to a Relay function with the following method:

  1. When we get a Merge node inside a while_loop block: if it is a direct child of the while_loop block, we create a Loop wrapper and start adding all sub-nodes under this loop block. Otherwise, we create a complete branch op with the previously stored Switch condition.
  2. When we get an Enter node: we simply convert its input node to a Relay node.
  3. When we get a LoopCond node: we first convert its input node to a Relay node, then set the converted node as our Loop wrapper’s condition.
  4. When we get a Switch node: we first convert both its input and its condition to Relay nodes. If it is a direct child of a while_loop block, we add the converted input to the loop variable list of the corresponding Loop wrapper. Otherwise, we create a Branch wrapper, set its condition to the converted condition node, and add it to a global branch dictionary.
  5. When we get an Exit node: we generate the Relay function for the corresponding while loop body.
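The end result of this method, a while loop encoded as a recursive function, can be sketched in plain Python (the real converter emits Relay IR; `make_loop` is an illustrative name, not part of TVM):

```python
def make_loop(cond, body):
    """Encode `while cond(*state): state = body(*state)` as a recursive
    function, mirroring how the frontend emits a recursive Relay function
    whose non-recursive branch corresponds to the Exit nodes."""
    def loop(*state):
        if cond(*state):           # LoopCond: keep iterating
            nxt = body(*state)     # NextIteration: feed new loop variables
            if not isinstance(nxt, tuple):
                nxt = (nxt,)
            return loop(*nxt)
        return state if len(state) > 1 else state[0]  # Exit
    return loop

# Equivalent of tf.while_loop(lambda i: i < 10, lambda i: i + 1, [0])
count_to_ten = make_loop(lambda i: i < 10, lambda i: i + 1)
```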

Tested models

With the above changes (together with the pending PR for dynamic NMS), we can convert and compile TensorFlow object detection models. The following models from the TensorFlow object detection model zoo have been tested:
ssd_mobilenet_v1_coco
ssd_mobilenet_v2_coco
ssd_resnet_50_fpn_coco
mask_rcnn_inception_v2_coco
mask_rcnn_resnet50_atrous_coco
faster_rcnn_resnet50_coco

TODO

Investigate TensorFlow 2.0 control flow and what changes are required.

@tqchen @jroesch @srkreddy1238 @zhiics @yongwww @wweic

zhiics commented Feb 28, 2020

Just a reminder: to support these models we need some patches for tensor array as well. mask_rcnn seems to require some more debugging.

zhiics commented Feb 28, 2020

Ah, one more reminder: for all these models, we hit an OOM problem when pretty printing after the ANF pass. It is very likely due to more and more intermediate results being saved by recursively visiting the AST.

masahi commented Feb 28, 2020

Wow, supporting Mask R-CNN is a great achievement! I was also trying to support Mask R-CNN in torch, but I was stuck at dynamic NMS. https://discuss.tvm.ai/t/supporting-non-maximum-suppresion-op-with-dynamic-output-shape/5818/

Is the "pending dynamic NMS PR" going to be useful for the PyTorch use case?

zhiics commented Feb 28, 2020

Yes, @yongwww had one months back

#4312

masahi commented Feb 28, 2020

Great, I will take a look. Hopefully the PR will become active again soon.

kevinthesun (Contributor Author) commented:
One blocking issue for dynamic NMS is that it needs dynamic strided_slice. However, the current fusion pass can't handle dynamic shapes well yet. As a result, we can't fuse dynamic strided_slice and have to temporarily mark strided_slice as Opaque. However, this also prevents normal static strided_slice ops from being fused and causes performance issues. Maybe we need to handle fusion for dynamic shapes directly. @masahi do you have any ideas on how to improve this?

masahi commented Feb 29, 2020

I'm not familiar with dynamic shape support in Relay, so I cannot comment on how it relates to fusion. But it sounds like an interesting problem.

kevinthesun (Contributor Author) commented:
This feature is now supported in TVM.
