Add LOGISTIC operator to relay tflite frontend #3313

Merged (1 commit, Jun 11, 2019)

Conversation

apivovarov (Contributor) commented:

The LOGISTIC operator is used in the TFLite SSD ResNet50 and MobileNet models.
Model graph: ssd_resnet_50_fpn_coco_nopp.tflite.pdf

### ssd_resnet_50_fpn_coco summary:
# op_id: op_name - count
 0: ADD - 58
 2: CONCATENATION - 2
 3: CONV_2D - 110
14: LOGISTIC - 1
17: MAX_POOL_2D - 4
18: MUL - 42
22: RESHAPE - 14
34: PAD - 1
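For reference, a per-operator count like the summary above can be produced directly from the flatbuffer model with the `tflite` Python bindings. The snippet below is a minimal sketch under that assumption; the model filename is taken from the attachment above and the attribute-based name lookup relies on how the generated `BuiltinOperator` class exposes opcode constants.

```python
# Sketch: count builtin operators in a .tflite model.
# Assumes the generated `tflite` flatbuffer bindings are installed.
from collections import Counter

from tflite.Model import Model
from tflite.BuiltinOperator import BuiltinOperator

with open("ssd_resnet_50_fpn_coco_nopp.tflite", "rb") as f:
    buf = f.read()

model = Model.GetRootAsModel(buf, 0)
subgraph = model.Subgraphs(0)

# Map builtin opcode values back to their names (e.g. 14 -> LOGISTIC).
code_to_name = {v: k for k, v in vars(BuiltinOperator).items()
                if isinstance(v, int)}

counts = Counter()
for i in range(subgraph.OperatorsLength()):
    op = subgraph.Operators(i)
    code = model.OperatorCodes(op.OpcodeIndex()).BuiltinCode()
    counts[code] += 1

for code, n in sorted(counts.items()):
    print("{:2d}: {} - {}".format(code, code_to_name.get(code, "?"), n))
```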

This PR adds LOGISTIC operator support to the Relay TFLite frontend.
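For context, converting this operator essentially amounts to mapping the single input tensor onto Relay's sigmoid. The sketch below is not the exact diff under review; it assumes the frontend's existing `OperatorConverter` helpers (`get_input_tensors`, `get_expr`) and the `_op` alias for Relay ops used elsewhere in `python/tvm/relay/frontend/tflite.py`.

```python
# Minimal sketch, not the exact diff: a LOGISTIC converter method on the
# frontend's OperatorConverter class, assuming its existing helpers
# (get_input_tensors, get_expr) and the `_op` alias for Relay ops
# (from tvm.relay import op as _op) used elsewhere in tflite.py.
def convert_logistic(self, op):
    """Convert TFLite LOGISTIC (elementwise sigmoid) to Relay."""
    try:
        from tflite.Operator import Operator
    except ImportError:
        raise ImportError("The tflite package must be installed")

    assert isinstance(op, Operator)
    input_tensors = self.get_input_tensors(op)
    assert len(input_tensors) == 1, "LOGISTIC expects exactly one input tensor"

    in_expr = self.get_expr(input_tensors[0].tensor_idx)
    return _op.sigmoid(in_expr)
```

For the dispatcher to pick it up, a converter like this is also registered in the frontend's convert map, roughly as `'LOGISTIC': self.convert_logistic`.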


assert isinstance(op, Operator)
input_tensors = self.get_input_tensors(op)
print("LOGISTIC")
Member (commenting on the diff above):
Remove it.

apivovarov (Contributor, Author), Jun 7, 2019:

removed

assert isinstance(op, Operator)
input_tensors = self.get_input_tensors(op)
print("LOGISTIC")
for i in input_tensors:
Member (commenting on the diff above):
Remove it

apivovarov (Contributor, Author):

removed

@tqchen added the "status: need review" and "status: need update" labels on Jun 7, 2019
@yongwww (Member) left a review comment:
LGTM

@yongwww (Member) commented on Jun 7, 2019:

pls fix ci

@apivovarov (Contributor, Author) commented on Jun 10, 2019:

The test which I added ran successfully:
Running relay TFLite frontend test.. ok
http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/PR-3313/3/pipeline/239
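For reference, a frontend test for this operator typically builds a tiny TensorFlow graph containing a sigmoid, converts it to TFLite, and compares TVM's output against the TFLite interpreter. The sketch below is an illustration, not the exact test added in this PR; it assumes the `compare_tflite_with_tvm` helper from `tests/python/frontend/tflite/test_forward.py`.

```python
# Sketch of a LOGISTIC frontend test, assuming the compare_tflite_with_tvm
# helper defined in tests/python/frontend/tflite/test_forward.py.
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import array_ops, math_ops

def _test_logistic(data):
    """One iteration of LOGISTIC."""
    with tf.Graph().as_default():
        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
        out = math_ops.sigmoid(in_data)
        # Convert the graph to TFLite and compare against TVM's result.
        compare_tflite_with_tvm(data, 'Placeholder:0', [in_data], [out])

def test_forward_logistic():
    """LOGISTIC"""
    _test_logistic(np.arange(6.0, dtype=np.float32).reshape((1, 6)))
```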
I think tests/scripts/task_python_frontend.sh "Frontend GPU" ran for 1 hour and 2 seconds and was killed automatically (the duration limit is 1 hour). The last output from task_python_frontend.sh is:

test_forward.test_forward_resnet50 ... Sending interrupt signal to process
sh: line 1: 17972 Terminated

@apivovarov (Contributor, Author) commented on Jun 10, 2019:

Opened issue "Too small time limit for frontend: GPU integration test" #3334

@kevinthesun merged commit 7c1c97d into apache:master on Jun 11, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request Jun 26, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Jun 27, 2019