
[Relay][Frontend] Keras Support #2336

Merged: 4 commits into apache:master on Jan 5, 2019
Conversation

@Huyuwei (Contributor) commented Dec 26, 2018

Keras -> Relay frontend migrated from NNVM, with one main change: SymbolTable is replaced by ExprTable.

@kazum @thefiddler, welcome to review.
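The main change mentioned above, replacing NNVM's SymbolTable with an ExprTable, can be sketched as follows. This is an illustrative minimal version, not the frontend's actual code: the converter keeps a table mapping each Keras layer's output name to the already-converted expression, so downstream layers can look up their inputs.

```python
class ExprTable:
    """Minimal sketch of an expression table, keyed by (name, output_index).

    Illustrative only: the real frontend stores relay expressions; here any
    Python object stands in for a converted expression.
    """

    def __init__(self):
        self.exprs = {}

    def set_expr(self, name, expr, index=0):
        # Record the converted expression for one output of a layer.
        self.exprs[(name, index)] = expr

    def get_expr(self, name, index=0):
        # Look up the input expression needed by a downstream layer.
        return self.exprs[(name, index)]
```

Each converted layer registers its outputs with `set_expr`; successor layers fetch their inputs with `get_expr`, mirroring how the NNVM frontend used a SymbolTable of nnvm Symbols.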

python/tvm/relay/frontend/common.py (outdated; resolved)
python/tvm/relay/frontend/keras.py (outdated; resolved)
test_forward_multi_outputs()
test_forward_reuse_layers()
test_forward_rnn()

Member:

Can we remove this line?

python/tvm/relay/frontend/keras.py (outdated; resolved)
python/tvm/relay/frontend/keras.py (outdated; resolved)
python/tvm/relay/frontend/keras.py (outdated; resolved)
python/tvm/relay/frontend/keras.py (outdated; resolved)
Jenkinsfile (outdated)
@@ -236,6 +236,17 @@ stage('Integration Test') {
}
}
},
'frontend: GPU': {
Contributor (Author):

@tqchen please check the Jenkinsfile change to add relay frontend tests. Do we have to merge it so that it can take effect?

Should we put all relay frontend tests in tests/python/frontend? I see mxnet tests in both tests/python/frontend and tests/python/relay/frontend

if pad_t == pad_b and pad_l == pad_r:
    params['padding'] = (pad_t, pad_l)
else:
    inexpr = _op.nn.pad(data=inexpr, pad_width=(
        (0, 0), (0, 0), (pad_t, pad_b), (pad_l, pad_r)))
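The hunk above chooses between carrying symmetric padding in the convolution's 2D padding attribute and inserting an explicit pad operator for the asymmetric case. A pure-Python sketch of that decision (the helper name `plan_conv_padding` is hypothetical, and NCHW layout is assumed):

```python
def plan_conv_padding(pad_t, pad_l, pad_b, pad_r):
    """Decide how computed padding maps onto a 2D convolution op.

    Returns ('attr', (pad_h, pad_w)) when the padding is symmetric and can
    be carried in the conv op's 2D padding param, or ('explicit', pad_width)
    when a separate pad operator must be inserted first. pad_width is one
    (before, after) pair per NCHW axis. Hypothetical helper, for illustration.
    """
    if pad_t == pad_b and pad_l == pad_r:
        return ('attr', (pad_t, pad_l))
    return ('explicit', ((0, 0), (0, 0), (pad_t, pad_b), (pad_l, pad_r)))
```

For symmetric padding no extra operator is needed; only asymmetric padding pays the cost of an inserted pad, which is what the 4D-padding proposal below aims to avoid.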
Member:

Can we extend convolution's padding param to 4D [top, left, bottom, right] so that we can avoid inserting pad? We have done it for pooling; see line 315 of this file.

Contributor (Author):

Sure, this is doable. It would be great if someone is interested in contributing this.

Member:

I am worried about the compatibility too. NNVM and our Relay frontend use 2D padding.

Member:

Moreover, the x86 backend assumes padding is 2D too, i.e. only pad_h and pad_w. So this is great, but it seems there are many places where we should be careful.
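The compatibility concern here is that backends which assume 2D (pad_h, pad_w) padding would need any 4D [top, left, bottom, right] padding either proved symmetric or lowered through an explicit pad operator first. A small sketch of that symmetry check (the function name `to_2d_padding` is hypothetical, not TVM code):

```python
def to_2d_padding(padding):
    """Collapse 4D [top, left, bottom, right] padding to (pad_h, pad_w).

    Raises ValueError when the padding is asymmetric, i.e. when it cannot
    be represented in the 2D form that such backends assume.
    Illustrative only.
    """
    top, left, bottom, right = padding
    if top != bottom or left != right:
        raise ValueError("asymmetric padding cannot be expressed as 2D")
    return (top, left)
```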

@tqchen (Member) commented Jan 3, 2019

@jroesch please take another look; @kazum please take some time to look at the PR. I plan to bring this in by the end of the week if there are no further comments.

@kazum (Contributor) left a comment:

Looks good to me, great work!

@tqchen tqchen merged commit a7d39d7 into apache:master Jan 5, 2019
@tqchen (Member) commented Jan 5, 2019

Thanks to @Huyuwei @kazum @jroesch @FrozenGene, this is merged. @FrozenGene, let us discuss the additional padding option support in a separate thread.

@tqchen (Member) commented Jan 5, 2019

@Huyuwei there is a test case error in master, which could be caused by our recent change of the const convention: http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/master/577/pipeline (#2349)
Can you make a quick fix?

@Huyuwei (Contributor, Author) commented Jan 6, 2019

@tqchen fixed by #2376

zhiics pushed a commit to zhiics/tvm that referenced this pull request Jan 7, 2019
FrozenGene pushed a commit to FrozenGene/tvm that referenced this pull request Jan 10, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Feb 20, 2019
5 participants