The 3D convolution operator requires 5D tensors: https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html

If we cannot figure out a workaround (e.g. using a combination of `ggml_view` + `ggml_reshape` + `ggml_conv`), we might have to increase `GGML_MAX_DIMS`. Not sure how difficult this change would be. At the moment it is not a priority (currently it is only needed by the sd.cpp project: leejet/stable-diffusion.cpp#491 (comment)), but it's something we should keep in mind.
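For reference, a true 3D convolution decomposes into a sum of 2D convolutions over the kernel depth dimension, which is roughly what the `ggml_view` + `ggml_conv` workaround would look like. Below is a minimal sketch under assumed conventions: batch size 1, depth stride 1, no depth padding, F32 tensors, and the 5D kernel passed as `KD` separate 4D slices (since a 5D tensor cannot be represented). The helper name and shape layout are assumptions for illustration, not existing ggml API:

```c
#include "ggml.h"

// Sketch: emulate a 3D convolution with GGML_MAX_DIMS == 4 by summing
// 2D convolutions over the kernel depth dimension.
// Assumed shapes (ggml order, ne[0] fastest):
//   input     : [W, H, IC, D]      -- one depth slice per ne[3] index
//   kernel[kd]: [KW, KH, IC, OC]   -- kernel slice at depth offset kd
static struct ggml_tensor * conv_3d_sketch(
        struct ggml_context * ctx,
        struct ggml_tensor  * input,    // [W, H, IC, D]
        struct ggml_tensor  * kernel[], // KD tensors, each [KW, KH, IC, OC]
        int                   KD) {
    const int64_t W  = input->ne[0];
    const int64_t H  = input->ne[1];
    const int64_t IC = input->ne[2];
    const int64_t D  = input->ne[3];
    const int64_t OD = D - KD + 1;      // "valid" convolution in depth

    struct ggml_tensor * out = NULL;    // becomes [OW, OH, OC, OD]

    for (int64_t od = 0; od < OD; ++od) {
        struct ggml_tensor * acc = NULL;
        for (int kd = 0; kd < KD; ++kd) {
            // view one depth slice of the input: [W, H, IC, 1]
            struct ggml_tensor * slice = ggml_view_4d(ctx, input,
                W, H, IC, 1,
                input->nb[1], input->nb[2], input->nb[3],
                (od + kd)*input->nb[3]);
            // 2D conv of this slice with the matching kernel slice
            // -> [OW, OH, OC, 1]
            struct ggml_tensor * c = ggml_conv_2d(ctx, kernel[kd], slice,
                1, 1, 0, 0, 1, 1);
            acc = acc ? ggml_add(ctx, acc, c) : c;
        }
        // stack the output depth slices along dim 3
        out = out ? ggml_concat(ctx, out, acc, 3) : acc;
    }
    return out;                         // [OW, OH, OC, OD]
}
```

This stays within 4 dimensions, but at the cost of building on the order of `OD * KD` graph nodes, which is presumably why raising `GGML_MAX_DIMS` looks attractive for large inputs.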
If we do this, it would maybe make sense to add something like `ggml_backend_op_max_supported_dim` to prevent misuse of high-dimensional tensors without having to touch every single op in a single PR.
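A rough sketch of what such a helper could look like. Everything here is hypothetical: neither the function, its signature, nor the per-op limits exist in ggml; they are assumptions for illustration only:

```c
#include "ggml.h"
#include "ggml-backend.h"

// Hypothetical helper: report the highest tensor dimensionality a given
// op supports, so graph-building code can check up front instead of
// every op asserting internally.
static int ggml_backend_op_max_supported_dim(ggml_backend_t backend, enum ggml_op op) {
    (void) backend; // a real implementation would dispatch per backend
    switch (op) {
        case GGML_OP_IM2COL:
            return 4;   // illustrative: kernel written for <= 4 dims
        default:
            return GGML_MAX_DIMS;
    }
}

// Example usage while constructing a graph (also hypothetical):
//   if (ggml_n_dims(t) > ggml_backend_op_max_supported_dim(backend, op)) {
//       // fall back or abort before building an unsupported node
//   }
```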