[Feature Request] Groups added to Conv2d #100
Comments
@jagrit06 In the conv2d op call, there is an assert that checks whether dilation is set to 1, but from a quick look through the implementation it seems dilation is supported. I removed this check, experimented with it a little while comparing to PyTorch, and am seeing the same results for different dilation values. Is there a specific case where dilation isn't supported?
Well, at the moment convolutions with dilation revert to a fairly naive and slow kernel on the GPU (though I think we do fine on the CPU). That restriction was added basically to sidestep that performance hit.
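As a rough sketch of what such a naive dilated 2D convolution does, here is a pure-NumPy illustration (the `conv2d_naive` helper is hypothetical, not MLX's actual kernel). It also checks the equivalence between dilation and a kernel with zeros inserted between taps:

```python
import numpy as np

def conv2d_naive(x, w, dilation=1):
    """Naive valid-mode 2D cross-correlation with dilation.
    x: (H, W) input, w: (kh, kw) kernel. Hypothetical helper
    for illustration only, not MLX's implementation."""
    kh, kw = w.shape
    # Effective kernel size grows with dilation: d*(k-1)+1
    eff_h = dilation * (kh - 1) + 1
    eff_w = dilation * (kw - 1) + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input with a stride of `dilation` inside the window
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = (patch * w).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))

# dilation=2 with a 3x3 kernel behaves like a 5x5 kernel
# with zeros inserted between the original taps ("a trous")
w_exp = np.zeros((5, 5))
w_exp[::2, ::2] = w
assert np.allclose(conv2d_naive(x, w, dilation=2),
                   conv2d_naive(x, w_exp, dilation=1))
```

The nested loops are exactly why a naive GPU port of this is slow: each output element re-reads a strided input window, which defeats coalesced memory access.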
Should the assert be removed, with a separate issue created to improve convolution speed on the GPU when dilation is not 1? That would create more visibility for the core issue and open the possibility of a contributor being able to help out.
Agreed with @dc-dc-dc, this would be really helpful for implementations that require dilation.
FWIW this PR now also depends on a
Added dilation in #766
Is anyone working on groups for convolution, e.g. @Stealeristaken? I am closing #637 as it's inactive, but it might be useful as a starting point.
Been a bit busy these days. I tried adding groups to convs, but it didn't work very well. So, I closed that pull request. I'll try again in the next few days. |
I'm eager to get this working as well. I can pick this up if no one is working on it. |
I don't think anyone is actively working on it; it would be great to have you work on it @Rifur13!
feel free to take. gl! |
What is the purpose of this issue?
According to the documentation, these are the parameters available for Conv2d. However, the dilation and groups parameters, as defined in the torch implementation of Conv2d, would be desirable for implementing well-known architectures, e.g. ASPP. The following is the description of these two parameters according to torch:
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, at groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
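The groups=2 equivalence described above can be verified with a naive NumPy sketch (the `conv2d_groups` helper is a hypothetical illustration, not MLX's or torch's implementation):

```python
import numpy as np

def conv2d_groups(x, w, groups=1):
    """Naive valid-mode multi-channel 2D conv with groups.
    x: (C_in, H, W); w: (C_out, C_in // groups, kh, kw).
    Hypothetical helper for illustration only."""
    c_in, H, W = x.shape
    c_out, c_in_g, kh, kw = w.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_in_g == c_in // groups
    out = np.zeros((c_out, H - kh + 1, W - kw + 1))
    c_out_g = c_out // groups
    for g in range(groups):
        # Each group only sees its own slice of input channels
        xg = x[g * c_in_g:(g + 1) * c_in_g]
        wg = w[g * c_out_g:(g + 1) * c_out_g]
        for o in range(c_out_g):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[g * c_out_g + o, i, j] = (
                        xg[:, i:i + kh, j:j + kw] * wg[o]).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6, 6))
w = rng.standard_normal((4, 2, 3, 3))  # groups=2: each filter sees 2 of 4 input channels

y = conv2d_groups(x, w, groups=2)

# groups=2 equals two independent convs on channel halves, concatenated
y0 = conv2d_groups(x[:2], w[:2], groups=1)
y1 = conv2d_groups(x[2:], w[2:], groups=1)
assert np.allclose(y, np.concatenate([y0, y1], axis=0))
```

Note the weight tensor shrinks to `C_in // groups` input channels per filter, which is where grouped convolution saves both parameters and compute.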
Thank you for your attention!