Description
An idea that has been kicking around for years, but never written down:
The current definition of `int` (and correspondingly `uint`) is that it is either 32 or 64 bits. This causes a variety of problems that are small but annoying and add up:
- overflow when constants like `math.MaxUint64` are automatically promoted to `int` type
- the maximum size of a byte slice is only half the address space on a 32-bit machine
- `int` values can overflow silently, yet no one depends on this working (those who want overflow use sized types)
- great care must be taken with conversions between potentially large values, as information can be lost silently
- many more
I propose that for Go 2 we make a profound change to the language and have `int` and `uint` be arbitrary precision. It can be done efficiently (many other languages have done so), and with the new compiler it should be possible to avoid the overhead completely in many cases. (The usual solution is to represent an integer as a word with one bit reserved: if the bit is clear, the word points to a `big.Int` or equivalent, while if it is set the bit is simply cleared or shifted out to recover the small-integer value.)
The advantages are many:
- `int` (and `uint`, but I'll stop mentioning it now) become very powerful types
- overflow becomes impossible, simplifying and securing many calculations
- the default type for `len` etc. can now capture any size without overflow
- we could permit any integer type to be converted to `int` without ceremony, simplifying some arithmetical calculations
Most important, I think it makes Go a lot more interesting. No language in its domain has this feature, and the advantages of security and simplicity it would bring are significant.