Doing math with Tensor can be a lot more ergonomic #130
Need to do more investigation to make sure the temporary upgrade is actually safe.
I'm a bit hesitant about this. On one hand, it reads much better, but on the other hand the ownership semantics for GGML stuff are already quite tricky. So making things more implicit can obscure important details about who owns the tensor data. When you call an operation on a `Context`, the resulting tensor is allocated in, and owned by, that context. If we were to model this properly, a tensor operation would need to intertwine the lifetimes of the operand tensors and the context that owns the result. Changes to improve the soundness of the API are welcome, but so far making fully sound and idiomatic bindings to GGML hasn't been a priority.

This is not just a theoretical concern. Right now, we already have a situation where the context that stores the model's weights and the temporary context used for an inference step are different. It's not a problem because the temporary context is temporary, and the weights for the model have to be alive for as long as the temporary context is.

Anyway, what I meant to say about all this is that the problem of having tensors use their internal context pointer to do anything other than validation is that the order of operations changes the owner context: writing `a * b` would allocate the result in `a`'s context, while `b * a` would allocate it in `b`'s context.
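Just to make that concrete, modelling it fully might look roughly like this; a sketch with hypothetical types, not the actual llama-rs API:

```rust
struct Context { /* owns the ggml allocation arena */ }

struct Tensor<'ctx> {
    ctx: &'ctx Context,
    // plus the raw ggml_tensor pointer, etc.
}

impl Context {
    // The result borrows from `self`: it cannot outlive the context that
    // allocated it, and both operands are tied to a compatible lifetime.
    fn op_mul<'ctx>(&'ctx self, _a: &Tensor<'ctx>, _b: &Tensor<'ctx>) -> Tensor<'ctx> {
        Tensor { ctx: self }
    }
}
```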
That's fair (and I'm not sure if you noticed, but I actually closed this myself already). Maybe a better approach would be to have a dedicated type that the math operations are implemented on. Anyway, I need to do more research on this, but I think there definitely should be a way to make building the graph much more ergonomic. It also could be done in a way that's not a breaking change, by just adding a type that those operations could be performed on.
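For instance (purely a sketch, with stub types standing in for the real ones), a thin wrapper could opt tensors into operator syntax without touching the existing `Context`/`Tensor` API:

```rust
use std::ops::Mul;

// Stubs standing in for the real llama-rs types:
struct Context;
struct Tensor;

impl Context {
    fn op_mul(&self, _a: &Tensor, _b: &Tensor) -> Tensor {
        Tensor
    }
}

/// Pairs a tensor with the context that should own any results, so the
/// owner is explicit rather than depending on operand order.
struct Math<'a> {
    ctx: &'a Context,
    tensor: &'a Tensor,
}

impl<'a> Mul for Math<'a> {
    type Output = Tensor;
    fn mul(self, rhs: Math<'a>) -> Tensor {
        self.ctx.op_mul(self.tensor, rhs.tensor)
    }
}
```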
Agreed! Changes in this direction would be more than welcome :) Building a Rust API that ends up "compiling down to" GGML operations in a safe way sounds like a good abstraction. Good news is, it will be easy to verify that the changes introduce no regressions, since we have the existing code to validate against.
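One shape that could take (again just a sketch; none of these types exist today): build a context-free expression tree with operator syntax, then lower the whole tree onto a single, explicitly chosen context:

```rust
use std::ops::{Add, Mul};

// Index of an already-allocated tensor in some context.
type TensorId = usize;

enum Expr {
    Leaf(TensorId),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

impl Add for Expr {
    type Output = Expr;
    fn add(self, rhs: Expr) -> Expr {
        Expr::Add(Box::new(self), Box::new(rhs))
    }
}

impl Mul for Expr {
    type Output = Expr;
    fn mul(self, rhs: Expr) -> Expr {
        Expr::Mul(Box::new(self), Box::new(rhs))
    }
}

// Lowering would walk the tree and emit the corresponding ggml ops into
// one context, so ownership of every intermediate tensor is unambiguous.
```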
It seems pretty straightforward to implement `std::ops::{Add, Div, Mul}`, etc. It's a little awkward with the vanilla version of GGML though, since it doesn't even support subtraction or division, so it kind of needs my map ops stuff: KerfuffleV2@7e2d3d0
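A minimal sketch of what those impls might look like, assuming each `Tensor` can recover its owning `Context` (stub types below, not the exact llama-rs signatures):

```rust
use std::ops::Mul;

// Stubs for illustration:
struct Context;
struct Tensor;

impl Context {
    fn op_mul(&self, _a: &Tensor, _b: &Tensor) -> Tensor {
        Tensor
    }
}

impl Tensor {
    // Assumed helper: upgrade the tensor's embedded context reference,
    // panicking if the context has already been dropped.
    fn context(&self) -> Context {
        Context
    }
}

impl Mul for &Tensor {
    type Output = Tensor;
    fn mul(self, rhs: &Tensor) -> Tensor {
        // The result is allocated in (and owned by) self's context, which
        // is exactly the order-dependence concern raised above.
        self.context().op_mul(self, rhs)
    }
}
```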
I repurposed `^` (XOR) for `mat_mul`, but it could be something else like `&` or `|`. I'm not really sure what operator makes the most sense here. Just for example, code can go from:
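Something like this, as a representative sketch (assuming the usual `ctx0.op_*` method names):

```rust
// Every step routes through the context explicitly:
let current = ctx0.op_add(&ctx0.op_mul(&a, &b), &c);
```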
to:
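The operator version the `std::ops` impls would enable (same sketch, assuming `Add`/`Mul` are implemented for the relevant reference combinations):

```rust
// The same expression, built with operator syntax:
let current = &a * &b + &c;
```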
It's actually really convenient that every `Tensor` has an embedded `Context`, otherwise this wouldn't be possible. As far as I know, the temporary upgrade of the tensor's weak `Context` reference should be safe (obviously it'll die horribly if you try to perform operations with a dead `Context`.)
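Roughly how that temporary upgrade presumably works, assuming the tensor holds a `Weak` pointer to its owning context (a sketch of the mechanism, not the exact source):

```rust
use std::rc::{Rc, Weak};

struct RawContext; // stand-in for the underlying ggml context

struct Tensor {
    ctx: Weak<RawContext>,
}

impl Tensor {
    // Temporarily upgrade the weak pointer for the duration of an op;
    // this is where things "die horribly" if the Context is already dead.
    fn with_alive_ctx<T>(&self, f: impl FnOnce(&Rc<RawContext>) -> T) -> T {
        let ctx = self.ctx.upgrade().expect("Context is dead");
        f(&ctx)
    }
}
```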
Note: This isn't quite as straightforward as it may appear for the operations that aren't currently supported, because my `OP_MAP_UNARY`/`OP_MAP_BINARY` ops currently only support `f32` values. So you couldn't use those to divide/subtract quantized values currently.
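To show why the `f32` limitation bites: the map-op callback only ever sees plain `f32` slices, so there's no path for quantized data without dequantizing first. A sketch of the callback shape (illustrative, not the exact signature):

```rust
// The binary map callback works on rows of f32 values, so any quantized
// tensor would have to be dequantized before it could go through here.
fn div_rows(dst: &mut [f32], a: &[f32], b: &[f32]) {
    for ((d, x), y) in dst.iter_mut().zip(a).zip(b) {
        *d = x / y;
    }
}
```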