Add i128 and u128 types #521
We had quadruple precision floating point types at one point (@thestinger added them), but they were later removed from the language, citing lack of support across compilers. Is there good compiler support for 128-bit integers?
That was misinformation. It worked well and provided a useful feature. If there was a valid reason to remove it, it wasn't stated in any notes that were made public.
That was misinformation. LLVM has fully working support for 128-bit integers and quadruple precision floating point. GCC and Clang both expose a 128-bit integer type. It's no harder to implement 128-bit integers in terms of 64-bit ones than it is to implement 64-bit integers in terms of 32-bit ones. Rust already exposes numeric types with efficient software implementations (for example, 64-bit integers on 32-bit targets).
I am not saying whether it was true or not, I am saying what was said when they were removed. Maybe a full RFC for implementing 128-bit numbers is a good idea at this point.
👍 for this.
It would be great to have 128-bit integer types -- for one, this would make […]
An aside: rust-lang/rust#24612 (new float to decimal conversion code) has a custom bignum module, and it has a note that its generic […]
+1. The Duration type could be simplified with this: instead of using a u64 for seconds with a separate u32 for nanoseconds, it could just be a single u128/i128 count of nanoseconds.
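As a rough sketch of that suggestion (a hypothetical `NanoDuration` type, not the actual `std::time::Duration` API), a single signed nanosecond count could look like this:

```rust
// Hypothetical alternative to a (secs: u64, nanos: u32) pair:
// one signed 128-bit nanosecond count.
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct NanoDuration(i128);

impl NanoDuration {
    fn from_secs(secs: i64) -> Self {
        NanoDuration(secs as i128 * 1_000_000_000)
    }
    fn as_secs(&self) -> i64 {
        (self.0 / 1_000_000_000) as i64
    }
    fn subsec_nanos(&self) -> i32 {
        (self.0 % 1_000_000_000) as i32
    }
}

fn main() {
    let d = NanoDuration::from_secs(90);
    assert_eq!(d.as_secs(), 90);
    assert_eq!(d.subsec_nanos(), 0);
}
```

A single integer makes comparison and arithmetic one operation instead of a carry between the seconds and nanoseconds fields.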
I really want this.
System libm must support a variety of non-trivial mathematical operations (sine, cosine, etc.) on f128 on all platforms we support in order for the type to be useful. (EDIT: unless f128 is emulated as double-double, which is then not IEEE-conformant.) This does not apply to i/u128 and, as long as LLVM and Rust support i/u128, nothing else needs to support them. This also means that i/u128 are relatively easier to add compared to f128.
I've actually come to the opinion that, since LLVM supports arbitrarily-sized fixed-width integers (e.g., `i24`), Rust could simply expose arbitrary-width integer types rather than only adding i128/u128.
That would actually provide a fairly easy and efficient way of parsing and writing bit-packed binary structures (like TCP packets): you could write a packed struct full of your oddly sized integers and transmute a buffer pointer to your struct pointer.
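For what it's worth, here is a byte-granular sketch of that idea in today's Rust (the `HypotheticalHeader` type and its field layout are invented for illustration); sub-byte fields would still need the arbitrary-width integers discussed here:

```rust
// Parse a packed header straight out of a byte buffer by casting the
// buffer pointer to a pointer to a #[repr(C, packed)] struct.
#[repr(C, packed)]
#[derive(Copy, Clone)]
struct HypotheticalHeader {
    src_port: u16,
    dst_port: u16,
    length: u16,
    checksum: u16,
}

fn parse(buf: &[u8]) -> Option<HypotheticalHeader> {
    if buf.len() < std::mem::size_of::<HypotheticalHeader>() {
        return None;
    }
    // read_unaligned makes no alignment assumptions about `buf`.
    Some(unsafe { (buf.as_ptr() as *const HypotheticalHeader).read_unaligned() })
}

fn main() {
    // Fields arrive big-endian on the wire, hence the from_be conversions.
    let buf = [0x00, 0x50, 0x1F, 0x90, 0x00, 0x08, 0x00, 0x00];
    let h = parse(&buf).unwrap();
    assert_eq!(u16::from_be(h.src_port), 80);
    assert_eq!(u16::from_be(h.dst_port), 8080);
}
```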
@rrichardson, it's a perfect way of writing tightly-packed structures, yes. It also allows the programmer to more finely tune the acceptable range of a particular data point if they so wish. For example, if you want to be able to store exactly one hexdigit, a `u4` would be exactly what you need. I've been hoping for this feature, but again, I seriously doubt I'll ever see this in Rust.
@rrichardson Currently packed structs will only pack to the granularity of 1 byte AFAICT, so adding non-multiple-of-eight sized integers wouldn't help much without more extensive changes.
@Diggsey, just because they only pack to the nearest octet doesn't mean that arbitrarily-sized fixed-width integers wouldn't make things simpler. Think of them as being a more well-supported version of C's bitfields.
@HalosGhost Packing means how fields are laid out relative to each other. Even when you enable packing, fields are still aligned to 1 byte, which makes arbitrarily sized integers useless there. C++ bitfields allow you to have bit-level packing because the compiler effectively merges adjacent bitfields together into a single field as a pre-processing step, before the struct layout code sees them. This is also why bitfields are so poorly supported: only the struct layout step is standardised on each platform, and that step can't deal with less-than-1-byte alignments.

It's not impossible to get bit-level packing, but you have to go quite a bit further than just adding the types to the language: you have to either do what C++ does, by having a strategy for merging sub-byte fields into whole-byte fields, or extend the struct layout code to support sub-byte fields directly, making sure that it never affects the layout for structs without those fields. You'd also have to decide what to do for […]
I must be misunderstanding you.

```c
struct ex {
    uint8_t a: 4;
    uint8_t b: 4;
};
```

If you were to take the size of that struct, it would be a single byte. My point is that it'd be handier (and cleaner imho) to be able to just say:

```
struct ex { // forgive me if this is the wrong syntax
    a: u4, b: u4;
};
```
In your example, 'a' and 'b' are merged into a single `uint8_t` field before the struct layout is computed.
I suppose. Sounds like that should be addressed as well. Even without bit-level packing in structs, the additional boundaries that arbitrary-width integers offer are enough for me to want the feature.
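To make the merging step described above concrete, here is roughly what has to be written by hand in current Rust; a small sketch with two hypothetical 4-bit fields stored in a single `u8`:

```rust
// Two logical 4-bit fields packed into one byte, i.e. the merging a
// compiler would otherwise do before struct layout.
#[derive(Copy, Clone, Default)]
struct Ex(u8);

impl Ex {
    fn a(&self) -> u8 { self.0 & 0x0F }  // low nibble
    fn b(&self) -> u8 { self.0 >> 4 }    // high nibble
    fn set_a(&mut self, v: u8) { self.0 = (self.0 & 0xF0) | (v & 0x0F); }
    fn set_b(&mut self, v: u8) { self.0 = (self.0 & 0x0F) | ((v & 0x0F) << 4); }
}

fn main() {
    let mut e = Ex::default();
    e.set_a(0xA);
    e.set_b(0x5);
    assert_eq!((e.a(), e.b()), (0xA, 0x5));
    assert_eq!(std::mem::size_of::<Ex>(), 1); // still one byte
}
```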
Or maybe the answer is not arbitrary-width ints, but bit-level pattern matching.
@rrichardson: that could be done with a macro or syntax extension. I've been wanting that for a while now :)
Add a +1 request for u128 ints :)
Another +1 for {u,i}128. I could really use it for an implementation of murmur3. Is there a downside to adding it?
+1. This would often be helpful for implementing various cryptographic algorithms without needing to jump through hoops to stay within the bounds of `u64`.
This cannot be implemented efficiently without compiler support for various reasons. It is clear that certain people have no interest in this feature and that this issue will be ignored by them unless someone turns it into an RFC.
Does LLVM have native support for 128-bit integers even on 32-bit architectures like x86 or ARM?
LLVM exposes arbitrarily-sized fixed-width integers.
@cesarb That's irrelevant since it can always be emulated before the code is translated to LLVM IR.
@HalosGhost I have found that this claim is only true for machine-friendly types. In our experience, anything larger than 128 bits on x86_64 is extremely poorly supported. Additionally, when dealing with types smaller than 128 bits, one is only safe to use (again, on x86_64) 8-, 16-, 32-, 64- and 128-bit integers. Other bit-widths can work, but one has to be careful not to form vectors out of them. We have also had issues with storing non-standard bit-width integers in memory, but haven't tried to pinpoint the problem exactly. This seems to be a very poorly tested area of LLVM. Tread carefully.
Nope.
@mjbshaw A fallback is possible (like `rem` for `u64`).
@mjbshaw LLVM supports it, clang does not.
Would it be possible to implement a 64×64→128-bit widening multiply? I need it for […]. EDIT: It would actually be great if the corresponding intrinsics were exposed as well.
i128/u128 is not needed and IMO not very useful for that kind of thing. I much prefer the intrinsics that Microsoft made: https://msdn.microsoft.com/en-us/library/3dayytw9(v=vs.100).aspx, which operate exclusively on 64-bit types.
@briansmith The API only uses 64-bit types, but the implementation (which is not shown there) probably won't. EDIT: the way the 128-bit version is typically implemented is:

```c
// 64-bit version: full 64x64 product, low half returned, high half stored.
unsigned __int64 _umul128(unsigned __int64 Multiplier, unsigned __int64 Multiplicand,
                          unsigned __int64 *HighProduct) {
    unsigned __int128 __Result = (unsigned __int128) Multiplier * Multiplicand;
    *HighProduct = (unsigned __int64) (__Result >> 64);
    return (unsigned __int64) __Result;
}

// 32-bit version: the same pattern one size down.
unsigned __int32 _umul64(unsigned __int32 Multiplier, unsigned __int32 Multiplicand,
                         unsigned __int32 *HighProduct) {
    unsigned __int64 __Result = (unsigned __int64) Multiplier * Multiplicand;
    *HighProduct = (unsigned __int32) (__Result >> 32);
    return (unsigned __int32) __Result;
}
```

On platforms without 128-bit integer support you can do something like this by hand, but backends do not recognize it due to its complexity, and won't generate the 128-bit (4 instructions) or `mulx` versions shown below.
@gnzlbg It's an intrinsic, so it isn't implemented as a function like that. The compiler treats intrinsics specially and compiles them down to a certain set of instructions. I really wish Rust exposed these intrinsics somehow; they're incredibly useful. Instead all we have are intrinsics for doing multiply with overflow, which is far less useful since you don't get the high part, just a bool indicating whether you needed a high part.
@retep998 I failed to mention that I am implementing the intrinsic in Rust. LLVM doesn't offer such an intrinsic because it can recognize the pattern itself.
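For reference, here is a portable sketch of that widening multiply in plain Rust using only `u64` arithmetic; `umul128` is a hypothetical helper following the same 32-bit-halves scheme as the `mult64to128` C code in the next comment, and the cross-check in `main` uses the `u128` type that today's Rust does provide:

```rust
// 64x64 -> 128-bit unsigned multiply using only u64 operations:
// split each operand into 32-bit halves and do schoolbook multiplication.
fn umul128(a: u64, b: u64) -> (u64, u64) {
    let (a_lo, a_hi) = (a & 0xFFFF_FFFF, a >> 32);
    let (b_lo, b_hi) = (b & 0xFFFF_FFFF, b >> 32);

    let ll = a_lo * b_lo;
    let lh = a_lo * b_hi;
    let hl = a_hi * b_lo;
    let hh = a_hi * b_hi;

    // Middle column plus the carry out of the low 32x32 product.
    let mid = (ll >> 32) + (lh & 0xFFFF_FFFF) + (hl & 0xFFFF_FFFF);

    let lo = (ll & 0xFFFF_FFFF) | (mid << 32);
    let hi = hh + (lh >> 32) + (hl >> 32) + (mid >> 32);
    (hi, lo)
}

fn main() {
    let (a, b) = (0xDEAD_BEEF_CAFE_BABE_u64, 0x1234_5678_9ABC_DEF0_u64);
    let (hi, lo) = umul128(a, b);
    let r = a as u128 * b as u128;
    assert_eq!((hi, lo), ((r >> 64) as u64, r as u64));
}
```

As the next comment shows, backends generally do not turn this long-hand form back into a single wide multiply, which is exactly why language-level 128-bit integers (or an intrinsic) are attractive.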
For example, this is what GCC generates for both versions on a platform that does support 128-bit integers (but not `mulx`). For

```c
// Long-hand low 64 bits of a 64x64 product, using 32-bit halves.
void mult64to128(uint64_t op1, uint64_t op2, uint64_t *lo)
{
    uint64_t u1 = (op1 & 0xffffffff);
    uint64_t v1 = (op2 & 0xffffffff);
    uint64_t t = (u1 * v1);
    uint64_t w3 = (t & 0xffffffff);
    uint64_t k = (t >> 32);

    op1 >>= 32;
    t = (op1 * v1) + k;
    k = (t & 0xffffffff);

    op2 >>= 32;
    t = (u1 * op2) + k;

    *lo = (t << 32) + w3;
}
```

it generates:

```asm
mult64to128(unsigned long, unsigned long, unsigned long*):
        movl    %edi, %eax
        movl    %esi, %r8d
        shrq    $32, %rdi
        movq    %rax, %rcx
        shrq    $32, %rsi
        imulq   %r8, %rcx
        imulq   %r8, %rdi
        imulq   %rax, %rsi
        movq    %rcx, %r9
        movl    %ecx, %ecx
        shrq    $32, %r9
        addl    %r9d, %edi
        leaq    (%rsi,%rdi), %rax
        salq    $32, %rax
        addq    %rcx, %rax
        movq    %rax, (%rdx)
        ret
```

while for the 128-bit version

```c
// Same operation written with a 128-bit intermediate.
uint64_t umulx(uint64_t x, uint64_t y, uint64_t* p) {
    unsigned __int128 r = (unsigned __int128)x * y;
    *p = (uint64_t)(r >> 64);
    return (uint64_t) r;
}
```

it generates

```asm
umulx(unsigned long, unsigned long, unsigned long*):
        movq    %rdi, %rax
        movq    %rdx, %rcx
        mulq    %rsi
        movq    %rdx, (%rcx)
        ret
```

On platforms with `mulx` it generates

```asm
umulx(unsigned long, unsigned long, unsigned long*):
        movq    %rdx, %rcx
        movq    %rdi, %rdx
        mulx    %rsi, %rax, %rdx
        movq    %rdx, (%rcx)
        ret
```

for the 128-bit version and still "crap" for the version that does not use 128-bit integers.

EDIT: clang with the LLVM backend generates identical code for the 128-bit version, and slightly different code for the 64-bit version.
Although... is there an LLVM intrinsic for `_umul128`? It can simply be written as:

```c
static __inline__ unsigned __int64 __DEFAULT_FN_ATTRS
_umul128(unsigned __int64 _Multiplier, unsigned __int64 _Multiplicand,
         unsigned __int64 *_HighProduct) {
  unsigned __int128 _FullProduct =
      (unsigned __int128)_Multiplier * (unsigned __int128)_Multiplicand;
  *_HighProduct = _FullProduct >> 64;
  return _FullProduct;
}
```
TL;DR: to be able to use an efficient widening multiply, do we need a dedicated intrinsic?

No, there is no need for one, since LLVM recognizes the pattern automatically. It might also be worth mentioning that LLVM intrinsics actually get removed over time as the optimizer learns to recognize new patterns. A pretty good example is the TBM instruction set: it used to have one intrinsic per instruction, but most of those were dropped once the optimizer learned to match the corresponding patterns.

And LLVM is not unique in this. GCC (as shown above) does it the exact same way. I wouldn't be surprised if the MSVC intrinsic was written in plain C++ as well.

The main difference between the 128-bit version and the 64-bit version is that the 64-bit version is very complex. There are probably multiple ways to write it and achieve the same thing, but writing the code that "canonicalizes" it into something that the LLVM/GCC/... optimizers can recognize is non-trivial. Doing so for the 128-bit implementation is much easier.
LLVM's "intrinsic" (note the quotes) for widening multiply is

```llvm
%x = zext i64 %a to i128
%y = zext i64 %b to i128
%r = mul nuw i128 %x, %y
ret i128 %r
```

and ends up becoming

```asm
movq %rsi, %rax
mulq %rdi
retq
```

It is somewhat important that it is easy for LLVM to prove both arguments have been zero extended in order to generate such code, because otherwise it very quickly may become

```asm
imulq %rdx, %rsi
movq %rdx, %rax
mulq %rdi
addq %rsi, %rdx
imulq %rdi, %rcx
addq %rcx, %rdx
retq
```

instead, which you, obviously, do not want. Implementing a Rust intrinsic which emits the first sequence of LLVM instructions exactly guarantees the emission of the 3-instruction case, whereas using arbitrary 128-bit sized integers may make LLVM fall back to the other case even in obvious cases for no apparent reason.
@nagisa Thanks! I will give that approach a try; it should work on all architectures. The only "non-issue" is that we cannot offer widening multiply as an intrinsic unless we actually have 128-bit integers, but that is irrelevant for my use case.

That's a huge "may" though, since it would be a regression and clang relies on this... Following the same reasoning we should be generating assembly directly, since even if we generate the correct IR the backend could still "fall back to the other case for no apparent reason". So, while you are technically correct (which is the best kind of correct), I don't think we should give this argument that much weight.
Closing, we now have accepted an RFC for this feature: rust-lang/rust#35118 |
This produces efficient code that cannot be written in Rust without inline assembly.