
ggml : increase GGML_MAX_DIMS #1042

Open
ggerganov opened this issue Dec 8, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@ggerganov
Owner

The 3D convolution operator requires 5D tensors:

https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html

If we cannot figure out a workaround (e.g. using a combination of ggml_view + ggml_reshape + ggml_conv), we might have to increase GGML_MAX_DIMS. I'm not sure how difficult that change would be. At the moment it is not a priority (it is currently needed by the sd.cpp project, see leejet/stable-diffusion.cpp#491 (comment)), but it's something we should keep in mind.
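
For context, GGML_MAX_DIMS is currently 4, so a Conv3d activation of logical shape (N, C, D, H, W) does not map directly onto a single ggml tensor. Below is a minimal sketch of the folding idea mentioned above, assuming the usual ggml dimension order [W, H, D, C·N]; the helper name is made up for illustration and this is not a decided approach:

```c
// Sketch only: store a logical 5D activation [W, H, D, C, N] as a 4D ggml
// tensor by folding the channel and batch dimensions into ne[3] = C*N.
// The helper name is hypothetical; callers must remember C and N themselves
// to recover per-sample slices later (e.g. via ggml_view_3d / ggml_reshape).
#include "ggml.h"

static struct ggml_tensor * new_tensor_5d_folded(struct ggml_context * ctx,
                                                 enum ggml_type type,
                                                 int64_t ne0, int64_t ne1,
                                                 int64_t ne2, int64_t ne3,
                                                 int64_t ne4) {
    // GGML_MAX_DIMS == 4, so the last two logical dimensions share one slot.
    return ggml_new_tensor_4d(ctx, type, ne0, ne1, ne2, ne3 * ne4);
}
```

Whether a full 3D convolution can then be expressed on such folded tensors with ggml_view + ggml_reshape + ggml_conv_2d is exactly the open question raised above.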

@ggerganov ggerganov added the enhancement New feature or request label Dec 8, 2024
@ggerganov ggerganov moved this to Todo in ggml : roadmap Dec 8, 2024
@JohannesGaessler
Collaborator

If we do this, it would maybe make sense to add something like ggml_backend_op_max_supported_dim to prevent misuse of high-dimensional tensors without having to touch every single op in a single PR.
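
A rough sketch of what such a check could look like; the function name comes from the suggestion above, while the signature, the conservative default of 4 dimensions, and the surrounding helper are assumptions rather than existing API:

```c
// Hypothetical sketch of the proposed query; not part of the current ggml API.
#include "ggml.h"
#include "ggml-backend.h"

// Backends that have been audited for >4D tensors would report a higher
// limit per op; everything else defaults to the current 4-dimension cap.
static int ggml_backend_op_max_supported_dim(ggml_backend_t backend,
                                             enum ggml_op op) {
    (void) backend;
    (void) op;
    return 4; // conservative default, matching today's GGML_MAX_DIMS
}

// Graph validation could then reject misuse up front instead of every op
// having to handle high-dimensional inputs itself.
static bool op_dims_ok(ggml_backend_t backend, const struct ggml_tensor * t) {
    return ggml_n_dims(t) <= ggml_backend_op_max_supported_dim(backend, t->op);
}
```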

@JohannesGaessler
Collaborator

Although, on second thought, you could also just bake this logic directly into ggml_backend_op_supports_op.
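
That alternative might look roughly like the following, assuming a backend's supports_op callback receives the tensor describing the op; the callback name and the 4-dimension limit here are illustrative:

```c
// Sketch of the alternative: fold the dimension limit into a backend's
// existing supports_op check instead of adding a new entry point.
#include "ggml.h"
#include "ggml-backend.h"

static bool example_backend_supports_op(ggml_backend_dev_t dev,
                                        const struct ggml_tensor * op) {
    (void) dev;
    // Reject ops whose tensors use more dimensions than the kernels handle.
    if (ggml_n_dims(op) > 4) {
        return false;
    }
    // ... existing per-op checks would follow here ...
    return true;
}
```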
