
Add init member to ReduceNode #6138

Merged
merged 1 commit into from
Aug 27, 2020

Conversation

quic-sanirudh
Contributor

  • This patch adds a new member to ReduceNode called init, which allows
    initialization with a custom ProducerLoad or a Float/Int immediate.
  • This allows the output Tensor of a reduction to be initialized with
    another Tensor instead of the identity_element defined in the
    CommReducer.
  • One example use case for this node is to initialize the output of a
    convolution reduction with the bias values, thereby saving the
    bias-add computation.
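The intended semantics can be sketched in plain Python. This is only an illustration of the concept; `reduce_with_init` is a made-up helper, not TVM API:

```python
def reduce_with_init(values, combine, init):
    """Fold `values` with `combine`, starting from `init` instead of
    the reducer's identity element (0 for sum, 1 for product, ...)."""
    acc = init
    for v in values:
        acc = combine(acc, v)
    return acc

# A single output element of a convolution, reduced over its inputs and
# initialized with the bias value: the separate bias-add pass is folded
# into the reduction itself.
weights_times_inputs = [2, 3, 5]
bias = 10
out = reduce_with_init(weights_times_inputs, lambda a, b: a + b, init=bias)
# out == sum(weights_times_inputs) + bias, computed in one pass
```

With init equal to the reducer's identity element (0 for a sum), this degenerates to an ordinary reduction.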

Thanks for contributing to TVM! Please refer to the guidelines at https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers by @-mentioning them in the pull request thread.

@quic-sanirudh
Contributor Author

quic-sanirudh commented Jul 26, 2020

This is my first time contributing to TVM and I have tried to follow all the instructions for contributors from the documentation, but please let me know if I should change anything.

I ran clang-format on all the files, but it looks like I confused clang-format with the C++ linting tools. I'll try to fix them ASAP.

Thanks,
Anirudh

@tqchen
Member

tqchen commented Aug 9, 2020

Thanks @quic-sanirudh for proposing the new change. My only concern with the custom initialization value is that it might break follow-up primitives; e.g., rfactor and cross-thread allreduce require the init value to be the identity element. As a result, we might want to pause a bit.

The initial value can still be added by introducing an additional stage (with a small overhead).

There is an early plan to introduce scheduling for TIR, which might make it possible to include such a custom initialization stage; we could then support this feature.

@tqchen tqchen self-assigned this Aug 9, 2020
@quic-sanirudh
Contributor Author

quic-sanirudh commented Aug 10, 2020

Hi @tqchen
Thanks for the comments. I was aware of the issue with the rfactor primitive, so I added a check that fails if init is used with rfactor. We could do something similar for crossthread_allreduce as well. My thought process was that since we expect the user to understand what custom initialization means, they would generally be cautious about using it when running parallel reductions.
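The hazard being discussed here can be modeled in plain Python: when rfactor or a cross-thread allreduce splits a reduction into partial accumulators and each partial naively starts from the init value, a non-identity init is folded in once per partial instead of once overall. The function below is an illustrative model, not TVM code:

```python
def split_reduce(values, init, num_parts):
    """Split a sum reduction into `num_parts` partial reductions, each
    naively seeded with `init`, then combine the partials."""
    chunk = (len(values) + num_parts - 1) // num_parts
    partials = []
    for p in range(num_parts):
        acc = init  # every partial starts from init -- the hazard
        for v in values[p * chunk:(p + 1) * chunk]:
            acc += v
        partials.append(acc)
    return sum(partials)

values = [1, 2, 3, 4]
# With the identity element (0 for sum), splitting is safe:
assert split_reduce(values, init=0, num_parts=2) == sum(values)
# With a custom init, each partial contributes it once, so the result
# is off by (num_parts - 1) * init:
assert split_reduce(values, init=10, num_parts=2) == sum(values) + 2 * 10
```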

Also, I'm not aware of the TIR-level scheduling that's planned; is there an RFC or PR where this is being discussed that I can read?

Thanks

@tqchen
Member

tqchen commented Aug 10, 2020

TIR-level scheduling is still at an early stage, so there is no RFC yet; I will keep you updated once the RFC is out.

@tqchen
Member

tqchen commented Aug 10, 2020

Thanks @quic-sanirudh, what you said about rfactor makes sense. We can still support rfactor by checking the factor indices and only assigning the init value when the factor index equals the initial one; however, we may not be able to express the computation as a related primitive. Given that rfactor is not usually used together with init, this might be fine.

It would also be great to add a few compiled test cases.
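The guarded scheme described in this comment (seed only the partial whose factor index equals the initial one with init, and use the identity element everywhere else) can be sketched as follows. Again, this is an illustrative model, not TVM's actual rfactor implementation:

```python
def split_reduce_guarded(values, init, num_parts):
    """Split sum reduction where only the first partial (factor
    index 0) is seeded with `init`; the rest start from the
    identity element, so `init` contributes exactly once."""
    chunk = (len(values) + num_parts - 1) // num_parts
    total = 0
    for p in range(num_parts):
        acc = init if p == 0 else 0  # init only at factor index 0
        for v in values[p * chunk:(p + 1) * chunk]:
            acc += v
        total += acc
    return total

values = [1, 2, 3, 4]
# The custom init now contributes exactly once, matching an
# unsplit reduction seeded with init:
assert split_reduce_guarded(values, init=10, num_parts=2) == sum(values) + 10
```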

@tqchen tqchen added the status: need update need update based on feedbacks label Aug 10, 2020
@quic-sanirudh
Contributor Author

quic-sanirudh commented Aug 11, 2020

Thanks @tqchen for the suggestion. I'll work on adding the rfactor support and update the PR once it's done.

Also, could you explain what you meant by adding "compiled" test cases? I'm a little confused by that. Did you mean C++ tests?

@tqchen
Member

tqchen commented Aug 11, 2020

Oh, I meant test cases that use this feature to compile a reduction with an init value, like those in tests/python/integration.

@quic-sanirudh
Contributor Author

Oh, I meant test cases that use this feature to compile a reduction with an init value, like those in tests/python/integration.

Ah okay, thanks for the clarification. I'll add those too.

@quic-sanirudh
Contributor Author

I added support for initializing with rfactor, but it doesn't work with crossthread_allreduce. I also added a few unit and compiled tests.

@tqchen tqchen merged commit c6dd26b into apache:master Aug 27, 2020
@tqchen
Member

tqchen commented Aug 27, 2020

Thanks @quic-sanirudh! This is now merged.

@tqchen tqchen added status: accepted and removed status: need update need update based on feedbacks labels Aug 27, 2020
@quic-sanirudh quic-sanirudh deleted the ReduceNodeWithInit branch August 27, 2020 15:02
kevinthesun pushed a commit to kevinthesun/tvm that referenced this pull request Sep 17, 2020
kevinthesun pushed a commit to kevinthesun/tvm that referenced this pull request Sep 18, 2020
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Sep 18, 2020