A task one sometimes has to deal with is to take a number and look at the bytes it is made of. In other words, a number can be seen as an array of bytes. Specifically, there is a one-to-one embedding of numbers into byte strings, together with its inverse taking byte strings back to numbers. It would be convenient if such a pair of functions could be found in this package.
There are two ways to encode a number into an array: least significant byte first or most significant byte first. We can pick whichever is faster. There is also the question of whether the encoding of 0 should be the empty byte string or a single zero byte. Neither choice is more mathematically principled than the other, so we can pick whichever is handier for our algorithm.
I think there is a linear-time algorithm for either byte order, but we can talk about implementation details once the design is agreed upon. A reference sketch of the semantics I have in mind follows.
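To make the proposal concrete, here is a minimal sketch of one possible pair, assuming least significant byte first and the empty encoding of 0 (both choices are negotiable, as noted above). The names `toBytesLE` and `fromBytesLE` are placeholders, not a proposed API, and repeated shifting of a `Natural` is quadratic, so this pins down the semantics rather than the linear-time algorithm:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word8)
import Numeric.Natural (Natural)

-- Least significant byte first; 0 encodes to the empty string.
toBytesLE :: Natural -> [Word8]
toBytesLE 0 = []
toBytesLE n = fromIntegral (n .&. 0xff) : toBytesLE (n `shiftR` 8)

-- Inverse of 'toBytesLE', by Horner's rule.
fromBytesLE :: [Word8] -> Natural
fromBytesLE = foldr (\b acc -> fromIntegral b + acc * 256) 0
```

With these choices `fromBytesLE . toBytesLE` is the identity, and `toBytesLE` never produces a trailing zero byte, which gives exactly the one-to-one correspondence described above.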
ghc-bignum provides some primitives along these lines. See naturalToAddr and naturalFromAddr (and their ByteArray# analogues) in GHC.Num.Natural.
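As a rough illustration of how those primitives could back a high-level API, here is a sketch using the `Addr#` variants (the wrapper names are made up; I have not checked what `naturalSizeInBase#` reports for 0, so 0 is special-cased to match the empty encoding discussed above):

```haskell
{-# LANGUAGE MagicHash #-}

import qualified Data.ByteString as BS
import Foreign.Marshal.Alloc (allocaBytes)
import GHC.Exts (Ptr (..), Word (..))
import GHC.Num.Natural (Natural, naturalFromAddr, naturalSizeInBase#, naturalToAddr)

-- Serialise a Natural, most significant byte first.
naturalToBytesBE :: Natural -> IO BS.ByteString
naturalToBytesBE 0 = pure BS.empty   -- match the empty encoding of 0
naturalToBytesBE n = do
  let len = fromIntegral (W# (naturalSizeInBase# 256## n))
  allocaBytes len $ \p@(Ptr addr) -> do
    _written <- naturalToAddr n addr 1#   -- 1# selects most significant byte first
    BS.packCStringLen (p, len)

-- Read a Natural back, most significant byte first.
naturalFromBytesBE :: BS.ByteString -> IO Natural
naturalFromBytesBE bs =
  BS.useAsCStringLen bs $ \(Ptr addr, len) ->
    case fromIntegral len of
      W# sz -> naturalFromAddr sz addr 1#
```

The `ByteArray#` analogues would avoid the intermediate copy, at the cost of more unboxed plumbing.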
integer-simple is not so kind, though its representation does allow repeated shifts and integerToWord calls to extract the bytes in linear time. But it's probably not worth fiddling with the older bignum libraries at all.
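For what it's worth, the word-at-a-time extraction described there can be sketched portably; `fromIntegral` bottoms out in `integerToWord`, and each step peels one machine word off the low end (the helper name is hypothetical, and whether this is actually linear depends on how cheap `shiftR` is for the representation):

```haskell
import Data.Bits (finiteBitSize, shiftR)

-- Peel machine words off the low end of a non-negative Integer,
-- least significant word first; 'fromIntegral' truncates to a Word.
wordsLE :: Integer -> [Word]
wordsLE 0 = []
wordsLE n = fromIntegral n : wordsLE (n `shiftR` finiteBitSize (0 :: Word))
```

Each extracted `Word` can then be split into bytes with fixed-width shifts and masks.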