[Feature Request] Support for group convolution #149
Comments
Thanks for bringing this up! The reason we do not support grouped convolution is that it does not offer much speedup for sparse workloads: sparse convolution is memory-bound rather than compute-bound. That said, I think supporting it is still meaningful, and we would probably need to do more optimization for it.
I'm also interested in support for grouped convolution. In lightweight network designs such as MobileNet, depthwise separable convolutions (i.e., setting the number of groups equal to the number of input channels) reduce the parameter count: for a KxK kernel and C channels (assuming the same number of input and output channels), a standard convolution has K·K·C·C weights, while the depthwise + pointwise pair has only K·K·C + C·C.
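A quick dense-PyTorch sketch of that parameter comparison (the channel count and kernel size here are arbitrary example values, and this uses the dense `nn.Conv2d` API, not torchsparse):

```python
import torch
import torch.nn as nn

C, K = 64, 3  # example channel count and kernel size

# Standard convolution: K*K*C*C weights
standard = nn.Conv2d(C, C, kernel_size=K, padding=K // 2, bias=False)

# Depthwise separable: depthwise conv (groups=C, K*K*C weights) + pointwise 1x1 conv (C*C weights)
depthwise = nn.Conv2d(C, C, kernel_size=K, padding=K // 2, groups=C, bias=False)
pointwise = nn.Conv2d(C, C, kernel_size=1, bias=False)

n_standard = sum(p.numel() for p in standard.parameters())   # 3*3*64*64 = 36864
n_separable = (sum(p.numel() for p in depthwise.parameters())
               + sum(p.numel() for p in pointwise.parameters()))  # 3*3*64 + 64*64 = 4672

x = torch.randn(1, C, 32, 32)
assert standard(x).shape == pointwise(depthwise(x)).shape
print(n_standard, n_separable)
```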
The memory-bound bottleneck (compute relative to IO) might also be overcome by increasing the convolution kernel size, so that computation dominates again; RepLKNet made an attempt in this direction (arXiv: https://arxiv.org/abs/2203.06717).
Thanks for providing the model-size perspective! We will take that into consideration.
Is there any update on grouped sparse convolution? I am trying to build some Capsule layers using 3D depthwise convolution.
Any updates on this?
Grouped convolution is supported well in pytorch's convolution layers / ops. If possible, it would be great to add that ability to torchsparse.
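For reference, a minimal sketch of how grouping is exposed in the dense PyTorch API via the `groups` argument (the sizes are made-up example values; a sparse counterpart in torchsparse is only hypothetical at this point and is not shown):

```python
import torch
import torch.nn as nn

# Dense 3D convolution with 4 channel groups: in/out channels are split into
# 4 independent groups, so the weight tensor has 1/4 the usual parameters.
grouped = nn.Conv3d(in_channels=32, out_channels=64, kernel_size=3, padding=1, groups=4)

out = grouped(torch.randn(1, 32, 8, 8, 8))
print(out.shape)  # torch.Size([1, 64, 8, 8, 8])
```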