
Can't reshape tensor to rank 6 #1723

Closed · Amshaker opened this issue Jan 7, 2023 · 6 comments

Labels: question (Response providing clarification needed. Will not be assigned to a release.)

Amshaker commented Jan 7, 2023

Hi,

coremltools can't reshape a tensor from rank 4 to rank 6. For example, it can't reshape [1, 96, 28, 28] to [1, 96, 14, 2, 14, 2] using torch.reshape. The conversion finishes correctly, but I can't measure the performance of this model from Xcode because of the following warning:

"RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: {
NSLocalizedDescription = "Error in declaring network
.";"

Do you have any solution to this problem?
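
A minimal repro of what I'm seeing (a sketch; the module name is just for illustration, the shapes are the ones above):

import torch
import coremltools as ct

class Rank6Reshape(torch.nn.Module):
    def forward(self, x):
        # split each 28-pixel spatial axis into 14 blocks of 2 -> rank-6 result
        return torch.reshape(x, (1, 96, 14, 2, 14, 2))

example = torch.rand(1, 96, 28, 28)
traced = torch.jit.trace(Rank6Reshape().eval(), example)

# the conversion itself succeeds; the failure only shows up when the
# compiled model is loaded for prediction
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])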

The version of coremltools is 5.2.0.

Thank you.

Amshaker added the question label on Jan 7, 2023
TobyRoseman (Collaborator) commented

Core ML does not support tensors with a rank greater than five.

Since this is an issue with the Core ML framework, not the coremltools Python package, I'm going to close this issue.

RahulBhalley commented Jan 13, 2023

This feature is needed: upfirdn2d_native() in StyleGAN2 requires reshaping a tensor to rank 6. The PyTorch implementation follows.

@TobyRoseman, can you suggest an alternative approach to this reshape that avoids the rank error in Core ML?

import torch
import torch.nn.functional as F

def upfirdn2d_native(
    input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
):
    _, channel, in_h, in_w = input.shape
    input = input.reshape(-1, in_h, in_w, 1)

    _, in_h, in_w, minor = input.shape
    kernel_h, kernel_w = kernel.shape

    out = input.view(-1, in_h, 1, in_w, 1, minor) # ** This might be hindering CoreML conversion. **
    out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
    out = out.view(-1, in_h * up_y, in_w * up_x, minor)

    out = F.pad( # ** Problem is up to this operation. **
        out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
    )
    out = out[ # ** From here onwards, everything seems okay atm. **
        :,
        max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
        max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
        :,
    ]

    out = out.permute(0, 3, 1, 2)
    out = out.reshape(
        [-1, 1, 
        in_h * up_y + pad_y0 + pad_y1, 
        in_w * up_x + pad_x0 + pad_x1]
    )
    print(out.size())
    w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
    out = F.conv2d(out, w)
    out = out.reshape(
        -1,
        minor,
        in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
        in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
    )
    print(out.shape)
    out = out.permute(0, 2, 3, 1)
    out = out[:, ::down_y, ::down_x, :]

    out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
    out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1

    return out.view(-1, channel, out_h, out_w)
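
For what it's worth, a sketch of one rank-4 alternative to the zero-insertion step: a grouped 1x1 transposed convolution with stride (up_y, up_x). This assumes an NCHW layout (the function above keeps the minor axis last, so a permute would be needed), and it has not been verified against the full StyleGAN2 pipeline:

import torch
import torch.nn.functional as F

def zero_stuff_nchw(x, up_x, up_y):
    # Place each input pixel at the top-left corner of an up_y x up_x
    # block of zeros; no tensor here exceeds rank 4.
    c = x.shape[1]
    w_id = x.new_ones(c, 1, 1, 1)  # 1x1 identity kernel, one per channel
    out = F.conv_transpose2d(x, w_id, stride=(up_y, up_x), groups=c)
    # conv_transpose2d stops at the last input pixel, i.e. (h - 1) * up_y + 1
    # rows, so pad trailing zeros to reach (h * up_y, w * up_x).
    return F.pad(out, [0, up_x - 1, 0, up_y - 1])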

RahulBhalley commented
We should reopen this issue until we have a solution because this is definitely a required feature.

TobyRoseman (Collaborator) commented

You should be able to get around the rank 5 limitation by reshaping your tensors.

Having a coremltools GitHub issue for this does not make sense. This issue cannot be fixed in the coremltools repository; it will require changes to the Core ML framework. Please use Feedback Assistant to submit feature requests for the Core ML framework.

RahulBhalley commented

> You should be able to get around the rank 5 limitation by reshaping your tensors.

I don't understand how. Can you suggest some code? I've shared the function above.

RahulBhalley commented

> This will require changes to the Core ML framework. Please use Feedback Assistant to submit feature requests for the Core ML framework.

No one ever responds there. I think the coremltools team can ask the Core ML team to implement this feature, and they would give it high priority as well. You guys have more power than a random developer suggesting a feature.
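
For later readers, one concrete reading of the reshape suggestion above, sketched here and not verified end to end: the rank-6 view in upfirdn2d_native exists only to zero-stuff both spatial axes at once, and the same insertion can be done one axis at a time so that no intermediate tensor exceeds rank 5.

import torch
import torch.nn.functional as F

def zero_insert_rank5(input, up_x, up_y):
    # Equivalent to the rank-6 view / pad / view triple in upfirdn2d_native,
    # but never goes above rank 5. Input layout is (N, H, W, minor).
    _, in_h, in_w, minor = input.shape

    # Zero-stuff along y: grow the singleton axis after H to up_y.
    out = input.view(-1, in_h, 1, in_w, minor)
    out = F.pad(out, [0, 0, 0, 0, 0, up_y - 1])
    out = out.reshape(-1, in_h * up_y, in_w, minor)

    # Zero-stuff along x: grow the singleton axis after W to up_x.
    out = out.view(-1, in_h * up_y, in_w, 1, minor)
    out = F.pad(out, [0, 0, 0, up_x - 1])
    return out.reshape(-1, in_h * up_y, in_w * up_x, minor)

Inside upfirdn2d_native, this would replace the out = input.view(-1, in_h, 1, in_w, 1, minor) line and the F.pad / view pair that follows it; the rest of the function already stays at rank 4 or below.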
