Even less safe way to access PyLong digits for blazing speed #3365
iliya-malecki started this conversation in Ideas
-
I was exploring packing Python's digits into normal u32 digits the other day (see this function), and then I noticed that pyo3 already has support for BigInts with all possible safety features, so I was happy to have something good to compare my code against. However, I immediately noticed that the intermediate buffer not only gets zeroed out right before being written to, but also gets discarded right afterwards. I understand it is absolutely unrealistic for an int to be of such mind-boggling size that copying it twice becomes a performance concern, but I am still curious about this approach from an unreasonably idealistic perspective. Are there any guarantees we get by going through the FFI rather than just looking at the bytes directly (which would avoid the unnecessary allocation and reduce the wall time by about 70% for a halfway-reasonable int like 10**300)?
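For illustration, here is a rough sketch of what "looking at the bytes directly" could mean. It hard-codes assumptions that are not guaranteed anywhere: CPython older than 3.12, the default 30-bit digit build, and a non-limited-API layout. `RawPyLong` and `raw_digits` are hand-written stand-ins that mirror CPython's `struct _longobject`; they are not taken from pyo3.

```rust
use pyo3::ffi;

// Hand-written mirror of CPython's `struct _longobject` (pre-3.12, 30-bit digits).
// Not an official pyo3 or CPython API.
#[repr(C)]
struct RawPyLong {
    ob_refcnt: isize,                 // PyObject_VAR_HEAD, simplified
    ob_type: *mut ffi::PyTypeObject,
    ob_size: isize,                   // sign = sign of the int, |ob_size| = digit count
    ob_digit: [u32; 1],               // 30 significant bits per digit, least significant first
}

/// Copy the raw digits of a Python int without any intermediate byte buffer.
/// Purely illustrative: it reads past the declared 1-element array, just like
/// CPython's own C code, and breaks on CPython 3.12+, 15-bit digit builds,
/// and alternative interpreters such as PyPy.
unsafe fn raw_digits(obj: *mut ffi::PyObject) -> Vec<u32> {
    let raw = obj.cast::<RawPyLong>();
    let n = (*raw).ob_size.unsigned_abs();
    let first = (*raw).ob_digit.as_ptr();
    (0..n).map(|i| *first.add(i)).collect()
}
```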
-
I think this is mainly about staying independent of as many implementation details of CPython as reasonably possible, e.g. working with different versions of CPython and with its limited API, but also with alternative implementations like PyPy. (We could probably skip the zero initialization, but that makes the code significantly more complex, as we have to make sure not to pass uninitialized data where it isn't expected.)
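To make the parenthetical concrete, here is a hedged sketch of the two buffer strategies, with `write_bytes` standing in for whatever FFI export actually fills the buffer (a placeholder, not a real pyo3 or CPython call): zero-filling up front is trivially sound, while skipping the fill forces us to prove that every byte really was written before it is read.

```rust
/// Simple variant: initialize everything, then let the writer overwrite it.
fn with_zeroed_buffer(n: usize, write_bytes: impl FnOnce(&mut [u8])) -> Vec<u8> {
    let mut buf = vec![0u8; n]; // the zero fill the discussion is about
    write_bytes(&mut buf);
    buf
}

/// "Faster" variant: no zero fill, but soundness now depends on the promise
/// that `write_bytes` initializes all `n` bytes before `set_len` exposes them.
unsafe fn with_uninit_buffer(n: usize, write_bytes: impl FnOnce(*mut u8, usize)) -> Vec<u8> {
    let mut buf: Vec<u8> = Vec::with_capacity(n);
    write_bytes(buf.as_mut_ptr(), n);
    buf.set_len(n);
    buf
}
```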