Allow variable (computed) weights in convolution #853
Comments
Have you found a solution?
Also struggling with this. Converting StyleGAN2 and its variants from PyTorch to Core ML has hit roadblock after roadblock, and this is one of the issues we're seeing as well.
@praeclarum, have you solved it?
I'm also getting this error with the latest coremltools version 5.1 when trying to convert the generator network from stylegan2-ada-pytorch.
I think I've narrowed down the problem. The error "Input weight must be const" only occurs for conv_transpose.
Hello, any news on this feature request? @kuprel, did you manage to find a way to "rewrite" conv_transpose so that it can be used in Core ML?
This worked for me. It still uses conv_transpose, but with constant weight data:

```python
import torch
from torch.nn import functional


def conv_transpose_stride2(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # Zero-pad by 1, then insert zeros between pixels with a constant-weight
    # depthwise transposed conv (kernel 1, stride 2).
    dilate = torch.nn.ConvTranspose2d(in_channels=128, out_channels=128,
                                      kernel_size=1, stride=2, groups=128, bias=False)
    dilate.weight.data = torch.ones([128, 1, 1, 1])
    pad = torch.nn.ZeroPad2d([1, 1, 1, 1])
    # A plain conv2d with the channel-swapped, spatially flipped kernel then
    # reproduces conv_transpose2d(x, w, stride=2).
    return functional.conv2d(dilate(pad(x)), w.transpose(0, 1).flip(2, 3))


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn([1, 128, 256, 256])
    w = torch.randn([128, 64, 3, 3])
    y = functional.conv_transpose2d(x, w, stride=2)
    y_ = conv_transpose_stride2(x, w)
    size = torch.tensor(y.shape).prod()
    with torch.no_grad():
        # The mean squared difference should be ~0 if the two paths agree.
        print((y - y_).square().mean().numpy(),
              y_.square().mean().numpy(), y.square().mean().numpy())
```
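[Editor's note: below is a hedged sketch of how one might trace and convert this workaround with coremltools. The module wrapper, input names, and shapes are illustrative assumptions, not part of the comment above; note that both x and the computed weight w must be passed to torch.jit.trace, and whether ct.convert succeeds still depends on the converter accepting a non-constant weight for conv2d.]

```python
import torch
import coremltools as ct


class UpsampleConv(torch.nn.Module):
    # Thin wrapper (hypothetical name) so both the activation x and the
    # computed weight w are traced as model inputs.
    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        return conv_transpose_stride2(x, w)


x = torch.randn([1, 128, 256, 256])
w = torch.randn([128, 64, 3, 3])
traced = torch.jit.trace(UpsampleConv().eval(), (x, w))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType("x", x.shape), ct.TensorType("w", w.shape)],
)
```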
Awesome! Thanks!
Hi @kuprel, I tried your implementation, but I get an error.

```python
# After your code.
import coremltools as ct

traced_model = torch.jit.trace(conv_transpose_stride2, x)
```

This gives the following error:

Did you get this error? Did you solve it? Best,
Can someone share a minimal example (i.e. a toy network which fails conversion because of variable weight convolution)?
Following is a code snippet to reproduce the error, @TobyRoseman. Now come on, Apple (@TobyRoseman), please help me with this issue #1723 (comment), please! I want StyleGAN2 on my iPhone!! We developers can't wait any longer. This is extremely important; DL research is moving too fast and Core ML is lagging behind (another case is the FFT ops).

```python
import torch
from torch import nn
from torch.nn import functional as F


class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x, w):
        return F.conv_transpose2d(x, w, padding=0, stride=2, groups=1)


x = torch.randn(1, 128, 128, 128)
w = torch.randn([128, 64, 3, 3])
model = Model().eval()
traced_model = torch.jit.trace(model, (x, w))

import coremltools as ct

inputs = [ct.TensorType('x', x.shape), ct.TensorType('w', w.shape)]
mlmodel = ct.convert(traced_model, inputs=inputs)
```

Then BOOM! You get an error:
Is there any update regarding this? I am facing the same problem and am not able to get rid of this error:
Could you suggest something, please, @TobyRoseman? Here is my implementation:
I am trying to migrate this model: https://github.com/nipponjo/deepfillv2-pytorch
cc: @RahulBhalley @kuprel. Thanks in advance.
The issue here is that the limitation is in the Core ML framework itself, not in coremltools. Please submit this Core ML Framework issue using the Feedback Assistant. Once you have done that, please share the ID value you get. The ID value should start with "FB" followed by seven digits.
I don't understand. Why can't the Core ML Tools team ask the Core ML team to fix this issue? No diss, but I think the current team probably doesn't take this work seriously (this issue is 3 years old). It's really affecting us! Please understand that marketing and sales are already hard enough for us indie developers. At least make the development side a breeze for us.
Any updates, or is this not getting fixed? @TobyRoseman
@RahulBhalley - did you submit this issue via the Feedback Assistant as I suggested? If so, do you have a Feedback ID?
Any updates?
Looks like this is still an issue with macOS 15 and the tip of the coremltools main branch.
Description
I would like to be able to use Conv2d and Conv2dTranspose with variable weights. Currently, I get the "Input weight must be const" error when trying to convert StyleGAN 2.
Use cases
This is needed in modern GANs, where class labels and other embeddings are used to change the statistics of the convolution weights. In StyleGAN 2, this is used to implement "Weight Demodulation"; see "Analyzing and Improving the Image Quality of StyleGAN".
They removed instance normalization in favor of this technique. (Instance normalization was causing quality issues.)
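[Editor's note: as a concrete illustration of why these weights cannot be constants, here is a minimal sketch of the weight-demodulation idea. Names and shapes are illustrative assumptions, not the official StyleGAN2 code: the weight handed to conv2d is recomputed from a per-sample style vector on every forward pass.]

```python
import torch
import torch.nn.functional as F


def modulated_conv2d(x: torch.Tensor, weight: torch.Tensor,
                     style: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Sketch only; assumed shapes: x (N, C_in, H, W), weight (C_out, C_in, k, k),
    # style (N, C_in) giving a per-input-channel scale for each sample.
    n, c_in, h, width = x.shape
    c_out, _, k, _ = weight.shape
    # Modulate: scale the shared weight per sample by the style vector.
    w = weight.unsqueeze(0) * style.reshape(n, 1, c_in, 1, 1)
    # Demodulate: rescale so each output feature map has roughly unit variance,
    # replacing instance normalization.
    demod = torch.rsqrt(w.pow(2).sum(dim=[2, 3, 4], keepdim=True) + eps)
    w = w * demod
    # Grouped convolution gives each sample its own weights; the weight tensor
    # is therefore computed at runtime rather than being a constant.
    x = x.reshape(1, n * c_in, h, width)
    w = w.reshape(n * c_out, c_in, k, k)
    out = F.conv2d(x, w, padding=k // 2, groups=n)
    return out.reshape(n, c_out, *out.shape[2:])
```

Any converter that requires convolution weights to be constants cannot express this pattern, which is why StyleGAN2-style models hit the error above.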