This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[1.x] Backport of LSTM and GRU fix (#17898) and RNN op (#17632) #18317

Merged
merged 2 commits into apache:v1.x from the 1.x_rnn branch on Jun 3, 2020

Conversation

bgawrych
Contributor

@bgawrych bgawrych commented May 14, 2020

Description

  • Fix for "LSTM and GRU layers without DNNL enabled give wrong gradients" #17898
  • [Large Tensor] Fixed RNN op #17632 (see the sketch below)
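As context for the second item: a minimal sketch of the kind of large-tensor check the RNN op fix enables is below, assuming an MXNet 1.x build with large-tensor (int64) support. The shapes, the single-layer LSTM, and the dtype assertion used to force evaluation of the lazily computed output are illustrative only, not the exact nightly test added by #17632.

```python
import mxnet as mx
import numpy as np

# Illustrative shape only: the input holds more than 2**31 elements, the regime
# where 32-bit index arithmetic in the fused RNN kernels could overflow before
# the relevant function arguments were changed to index_t.
seq_len, batch, input_size, hidden = 2**29, 4, 2, 2

data = mx.nd.random.uniform(shape=(seq_len, batch, input_size))
rnn = mx.gluon.rnn.LSTM(hidden, num_layers=1)
rnn.initialize()

out = rnn(data)

# The dtype assertion forces evaluation of the output NDArray, so a failure in
# the kernel surfaces here instead of being hidden by lazy evaluation.
assert out.dtype == np.float32
assert out.shape == (seq_len, batch, hidden)
```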

Checklist

Essentials

  • Changes are complete (i.e. I finished coding on this PR)
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Comments

@mxnet-bot

Hey @bgawrych, thanks for submitting the PR.
All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [windows-cpu, miscellaneous, centos-gpu, unix-gpu, website, sanity, unix-cpu, centos-cpu, clang, edge, windows-gpu]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.

@bgawrych bgawrych changed the title from "[1.x] Backport of fix LSTM and GRU layers gradient calculations" to "[1.x] Backport of LSTM and GRU fix (#17898) and RNN op (#17632)" on May 18, 2020
@bgawrych
Contributor Author

@mxnet-bot run ci [edge, unix-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [unix-gpu, edge]

@bgawrych
Contributor Author

@mxnet-bot run ci [edge]

@mxnet-bot

Jenkins CI successfully triggered : [edge]

@bgawrych
Contributor Author

@mxnet-bot run ci [edge]

@mxnet-bot

Jenkins CI successfully triggered : [edge]

@bgawrych
Contributor Author

@mxnet-bot run ci [edge]

@mxnet-bot

Jenkins CI successfully triggered : [edge]

@bgawrych
Contributor Author

@mxnet-bot run ci [centos-cpu, centos-gpu, unix-cpu, unix-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [centos-cpu, centos-gpu, unix-gpu, unix-cpu]

@bgawrych bgawrych force-pushed the 1.x_rnn branch 2 times, most recently from 82b9578 to 4ad92b1 on May 28, 2020 07:50
@bgawrych
Contributor Author

@mxnet-bot run ci [centos-cpu, centos-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [centos-cpu, centos-gpu]

@bgawrych
Contributor Author

bgawrych commented Jun 1, 2020

@mxnet-bot run ci [centos-cpu, centos-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [centos-cpu, centos-gpu]

@ciyongch
Contributor

ciyongch commented Jun 1, 2020

Hi @bgawrych, I recently found that PR #18437 deleted the ln -s /usr/bin/ninja-build /usr/bin/ninja line from the CI script file.
The new code base will not run into this failure, so please try rebasing your code. Thanks.
I've created another PR to fix the issue for the v1.7.x branch.

connorgoggins and others added 2 commits June 1, 2020 10:25

[v1.x] [Large Tensor] Backport of Fixed RNN op (apache#17632)
* Changed relevant function args to index_t

* Added nightly test for RNN

* Added fix for LSTM, GRU, RNN-ReLU, RNN-tanh

* Using const instead of literals

* Added nightly test for RNN ReLU & tanh, LSTM, GRU

* Type assertion to force evaluation of output NDArray

* Incorporated latest round of comments
[v1.x] Backport of Fix LSTM and GRU layers gradient calculations (apache#18203)

* Fix input gradient calculation for bidirectional LSTM

For a bidirectional LSTM with number of layers > 2, the input gradient calculation was incorrect.
The wrong calculations were caused by the y derivative (dy) tensor being overwritten by the
calculated x derivative (dx) tensor before the right2left layer could use dy for its own
gradient calculations.
The proposed fix uses additional space to avoid the overwrite.

* Fix gradient calculation for GRU

For a GRU with number of layers > 2, the i2h_weight gradient for the middle layers
(all except the first and last) was incorrect.
The wrong calculations were caused by assigning the output pointer to the
input instead of calculating a new input pointer.
(A hedged gradient-check sketch covering both fixes follows this commit list.)

* Enable tests for GRU and LSTM gradients

* Fix comments

* Change loop iteration deduction

* Add more test cases for fused rnn layers
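To make the two gradient fixes above concrete, here is a minimal, hedged sketch of how the affected gradients can be obtained on a CPU build without DNNL. The shapes are illustrative, and the comparison against a reference implementation that the enabled tests perform is omitted for brevity.

```python
import mxnet as mx
from mxnet import autograd

# Illustrative shapes; the affected configurations have num_layers > 2
# (and bidirectional=True in the LSTM case).
seq_len, batch, input_size, hidden = 5, 2, 4, 3

lstm = mx.gluon.rnn.LSTM(hidden, num_layers=3, bidirectional=True)
gru = mx.gluon.rnn.GRU(hidden, num_layers=3)
lstm.initialize()
gru.initialize()

x = mx.nd.random.uniform(shape=(seq_len, batch, input_size))
x.attach_grad()

# Input gradient of a 3-layer bidirectional LSTM: before the fix, dx came out
# wrong because dy was overwritten by dx before the right2left pass used it.
with autograd.record():
    y = lstm(x)
y.backward()
dx_lstm = x.grad.copy()

# i2h_weight gradient of the middle GRU layer: before the fix it was wrong
# because the output pointer was reused as the layer input.
with autograd.record():
    y = gru(x)
y.backward()
middle_i2h_grads = {name: p.grad() for name, p in gru.collect_params().items()
                    if 'l1_i2h_weight' in name}
```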
@bgawrych
Contributor Author

bgawrych commented Jun 1, 2020

@mxnet-bot run ci [all]

@mxnet-bot

Jenkins CI successfully triggered : [centos-gpu, clang, miscellaneous, sanity, unix-cpu, unix-gpu, windows-cpu, centos-cpu, edge, website, windows-gpu]

@bgawrych
Contributor Author

bgawrych commented Jun 1, 2020

Hi @bgawrych, I recently found that PR #18437 deleted the ln -s /usr/bin/ninja-build /usr/bin/ninja line from the CI script file.
The new code base will not run into this failure, so please try rebasing your code. Thanks.
I've created another PR to fix the issue for the v1.7.x branch.

@ciyongch Done, but there is only one check now. I tried to retrigger all jobs, but it had no effect.

@ciyongch
Contributor

ciyongch commented Jun 1, 2020

@bgawrych, it's good, all the tests passed now :)

@bgawrych
Contributor Author

bgawrych commented Jun 2, 2020

@mxnet-bot run ci [unix-gpu]

@mxnet-bot

Jenkins CI successfully triggered : [unix-gpu]

@bgawrych
Contributor Author

bgawrych commented Jun 3, 2020

I think it's ready too.
cc @ciyongch @pengzhao-intel @TaoLv

Contributor

@pengzhao-intel pengzhao-intel left a comment

LGTM

@pengzhao-intel pengzhao-intel merged commit 8986e3f into apache:v1.x Jun 3, 2020
ChaiBapchya pushed a commit to ChaiBapchya/mxnet that referenced this pull request Aug 15, 2020