Can't reshape tensor to rank 6 #1723
Core ML does not support tensors with a rank greater than five. Since this is an issue with the Core ML framework, not the coremltools Python package, I'm going to close this issue.
This feature is needed. @TobyRoseman, can you suggest an alternative approach to this reshape that bypasses the rank error in Core ML?

import torch
import torch.nn.functional as F

def upfirdn2d_native(
    input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
):
    _, channel, in_h, in_w = input.shape
    input = input.reshape(-1, in_h, in_w, 1)
    _, in_h, in_w, minor = input.shape
    kernel_h, kernel_w = kernel.shape
    out = input.view(-1, in_h, 1, in_w, 1, minor)  # ** This rank-6 view might be hindering Core ML conversion. **
    out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
    out = out.view(-1, in_h * up_y, in_w * up_x, minor)
    out = F.pad(  # ** The problem is up to this operation. **
        out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
    )
    out = out[  # ** From here onwards, everything seems okay for now. **
        :,
        max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
        max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
        :,
    ]
    out = out.permute(0, 3, 1, 2)
    out = out.reshape(
        [-1, 1,
         in_h * up_y + pad_y0 + pad_y1,
         in_w * up_x + pad_x0 + pad_x1]
    )
    print(out.size())
    w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
    out = F.conv2d(out, w)
    out = out.reshape(
        -1,
        minor,
        in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
        in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
    )
    print(out.shape)
    out = out.permute(0, 2, 3, 1)
    out = out[:, ::down_y, ::down_x, :]
    out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
    out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
    return out.view(-1, channel, out_h, out_w)
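One possible way to avoid the rank-6 intermediate in this function is to do the zero-insertion upsampling with a strided transposed convolution, which never leaves rank 4. This is only a sketch, untested against the Core ML converter; upsample_zero_insert is a hypothetical helper, and it works in NCHW layout rather than the NHWC layout used inside the function above.

import torch
import torch.nn.functional as F

def upsample_zero_insert(x, up_y, up_x):
    # Hypothetical rank-4 replacement for the view/pad/view above: insert
    # (up_y - 1) zero rows and (up_x - 1) zero columns after each sample.
    n, c, h, w = x.shape
    # A per-channel 1x1 identity kernel; the stride does the zero insertion.
    weight = torch.ones(c, 1, 1, 1, dtype=x.dtype, device=x.device)
    out = F.conv_transpose2d(x, weight, stride=(up_y, up_x), groups=c)
    # conv_transpose2d stops at the last input sample, so pad the trailing
    # zeros to reach the full (h * up_y, w * up_x) shape of the original.
    return F.pad(out, [0, up_x - 1, 0, up_y - 1])

Whether the converter handles a grouped conv_transpose2d better than the rank-6 view is something you would have to verify on your model.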
We should reopen this issue until we have a solution because this is definitely a required feature.
You should be able to get around the rank-5 limitation by reshaping your tensors. Having a coremltools GitHub issue for this does not make sense. This issue cannot be fixed in the coremltools repository; it will require changes to the Core ML framework. Please use Feedback Assistant to submit feature requests for the Core ML framework.
I don't understand how. Can you suggest some code? I've shared the function above.
No one ever responds there. I think the coremltools team can ask the Core ML team to implement this feature, and they would give it high priority as well. You have more influence than a random developer suggesting a feature.
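For what it's worth, here is one hedged sketch of the kind of reshape workaround suggested above: a space-to-depth operation that is usually written as a rank-6 reshape plus permute, decomposed into two steps that never exceed rank 5. The helper name is illustrative, not part of coremltools; the assert checks it against PyTorch's built-in pixel_unshuffle, which computes the same thing.

import torch
import torch.nn.functional as F

def space_to_depth_rank5(x, r):
    # Split height and width one axis at a time, so every intermediate
    # tensor has rank 5 at most, instead of the usual rank-6 route.
    n, c, h, w = x.shape
    # Step 1: split H into (H/r, r) and fold the factor into channels.
    x = x.reshape(n, c, h // r, r, w)
    x = x.permute(0, 1, 3, 2, 4).reshape(n, c * r, h // r, w)
    # Step 2: split W into (W/r, r) and fold the factor into channels.
    x = x.reshape(n, c * r, h // r, w // r, r)
    x = x.permute(0, 1, 4, 2, 3).reshape(n, c * r * r, h // r, w // r)
    return x

x = torch.randn(1, 96, 28, 28)
assert torch.equal(space_to_depth_rank5(x, 2), F.pixel_unshuffle(x, 2))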
Hi,
coremltools can't reshape a tensor from rank 4 to rank 6. For example, it can't reshape [1, 96, 28, 28] to [1, 96, 14, 2, 14, 2] using torch.reshape. The conversion finishes correctly, but I can't measure the performance of this model from Xcode because of the following warning:
"RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: {
NSLocalizedDescription = "Error in declaring network.";"
Do you have any solution to this problem?
The version of coremltools is 5.2.0.
Thank you.
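For the specific shapes above, the leading dimension is 1, so one possible workaround (a sketch, untested against the Core ML converter) is to drop the batch axis, do the same split at rank 5, and restore the axis at the end. The permute/reshape after the rank-5 view is only an illustration of what typically follows such a split, not taken from the model in question.

import torch

x = torch.randn(1, 96, 28, 28)
# The rank-6 target [1, 96, 14, 2, 14, 2] exceeds the Core ML limit, but
# with a batch dimension of 1 the same data fits in a rank-5 view.
y = x.reshape(96, 14, 2, 14, 2)
# Whatever permute/reshape followed the rank-6 view can often be applied
# here, with the batch axis restored at the end, e.g.:
z = y.permute(0, 2, 4, 1, 3).reshape(1, 96 * 2 * 2, 14, 14)
assert z.shape == (1, 384, 14, 14)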