[Bug Fix] Fix GroupNorm Implementation #18199
Conversation
Hey @hgt312 , Thanks for submitting the PR
CI supported jobs: [clang, website, windows-cpu, windows-gpu, unix-gpu, sanity, centos-cpu, unix-cpu, edge, miscellaneous, centos-gpu]
@sxjscience pls also take a look.
LGTM, FYI @Jerryzcn @zhreshold
```diff
                                          allow_deferred_init=True)
             self.beta = self.params.get('beta', grad_req='write' if center else 'null',
-                                        shape=(num_groups,), init=beta_initializer,
+                                        shape=(in_channels,), init=beta_initializer,
                                         allow_deferred_init=True)

     def hybrid_forward(self, F, data, gamma, beta):
         norm_data = F.GroupNorm(data, gamma=gamma, beta=beta, num_groups=self._num_groups, eps=self._epsilon)
```
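For context: GroupNorm normalizes over groups of channels, but the learned affine parameters `gamma` and `beta` are applied per channel, so both must have shape `(in_channels,)` rather than `(num_groups,)`. A minimal sketch of the fixed behavior (assumes MXNet >= 1.6, where `gluon.nn.GroupNorm` exists; the concrete shapes below are illustrative):

```python
import mxnet as mx
from mxnet.gluon import nn

# After this PR, in_channels is a constructor argument (per the
# "add in_channels" commit) and determines the affine parameter shapes.
net = nn.GroupNorm(num_groups=2, in_channels=4)
net.initialize()

x = mx.nd.random.uniform(shape=(1, 4, 8, 8))  # (N, C, H, W) with C = 4
y = net(x)

# gamma/beta are per-channel, so their shape follows in_channels (4,),
# not num_groups (2,) -- the bug this PR fixes.
print(net.gamma.shape)  # (4,)
print(net.beta.shape)   # (4,)
```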
I just realized one quick issue. Should we consider moving GroupNorm to npx? Currently, the layer won't be usable in the new numpy interface. @zhreshold
I've merged this PR because it's a bugfix. If we need GroupNorm in the new numpy interface, it can be migrated in a follow-up.
We should have a list of ops that need to be migrated.
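To make the concern above concrete, here is a hedged sketch. The comment only states that the layer is unusable under the numpy interface; the exact failure mode shown in the comments below is an assumption:

```python
# Hedged sketch, assuming MXNet 1.6+: under the new numpy interface the
# layer's hybrid_forward still calls the legacy F.GroupNorm operator,
# which does not accept mx.np ndarrays, so the forward call is expected
# to fail until the op is migrated to npx.
import mxnet as mx
from mxnet import npx
from mxnet.gluon import nn

npx.set_np()  # switch Gluon to the new numpy interface

net = nn.GroupNorm(num_groups=2, in_channels=4)
net.initialize()

x = mx.np.random.uniform(size=(1, 4, 8, 8))
y = net(x)  # expected to raise: legacy op applied to an mx.np ndarray
```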
Commits:
* init
* add in_channels
See #17139.