Some upper/lower bounds of Number representation in JavaScript #5

Open
alvarto opened this issue Aug 31, 2015 · 0 comments

alvarto commented Aug 31, 2015

http://alvarto.github.io/VisualNumeric64/

This post grew out of exploring the precision questions on http://javascript-puzzlers.herokuapp.com/, and tries to connect some knowledge about Number and numeric representation along a single number line. Corrections and additions are welcome.

Starting from a puzzle

What is the result of this expression? (or multiple ones)

var end = Math.pow(2, 53);
var START = end - 100;
var count = 0;
for (var i = START; i <= end; i++) {
    count++;
}
console.log(count);
A:0 B:100 C:101 D:other

Answer: D: other
it goes into an infinite loop, 2^53 is the highest possible number in javascript, and 2^53+1 gives 2^53, so i can never become larger than that.

The explanation given there is rather confusing, so let's dig into the memory layout to see where Number's representable limits actually come from.
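The key fact behind the loop can be checked directly in any JS console (a minimal sketch):

```javascript
// Why the loop never terminates: once i reaches 2^53, i + 1 rounds
// back to 2^53, so i++ becomes a no-op and i <= end stays true forever.
var limit = Math.pow(2, 53);
console.log(limit + 1 === limit); // true: the increment is lost
console.log(limit + 2 === limit); // false: 2^53 + 2 is still representable
```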

The number line

(figure: the number line of representable values)

Notes

The memory layout of a Number

Following the international standard IEEE 754, I drew a diagram to help:

(figure: the 64-bit layout: sign, exponent, fraction)

Note: the bits here are laid out left to right, which is the opposite of the order on Wikipedia and similar references. Those references order the fields for comparison (sign, exponent, significand), while I ordered them for reading (bit 0 to bit 63, left to right).

How does the exponent field in the middle represent both positive and negative exponents? Unlike the usual "sign bit + magnitude" scheme, the exponent is stored with a bias:

IEEE 754: exponent bias
The exponent bias is the fixed value added to a float's actual exponent to obtain the value encoded in the exponent field; the IEEE 754 standard fixes it at 2^(e-1) - 1, where e is the bit length of the exponent field.
Take single-precision floats as an example: the exponent field is 8 bits, so the bias is 2^(8-1) - 1 = 128 - 1 = 127, and the actual exponent ranges from -127 to 128. For instance, an actual exponent of 17₁₀ is encoded as 144₁₀ in the exponent field, since 144₁₀ = 17₁₀ + 127₁₀.
Encoding the actual exponent plus a fixed bias means every exponent fits in an unsigned e-bit integer, which makes comparing the exponents of two floats easier.

Therefore, the exponent in JavaScript starts at 1 - 2^(11-1), i.e. at -1023, and the actual exponent covers the range [-1023, 1024].

| Actual exponent | Stored exponent |
| --- | --- |
| -1022 | 1 |
| 0 | 1023 |
| 1023 | 2046 |

Number reserves the stored exponent values 0 and 2047 for some special values. The full table:

| X (fraction) | Y (stored exponent) | Value represented |
| --- | --- | --- |
| = 0 | = 0 | ±0 |
| ≠ 0 | = 2047 | NaN |
| = 0 | = 2047 | ±Infinity |
| ≠ 0 | = 0 | Denormalized: f(0.x, 1, z) |
| any | ∈ (0, 2047) | Normalized: f(1.x, y, z) |

f(i, j, k) = (-1)^k · 2^(j-1023) · i
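The fields x, y, z can be read straight off a Number's 64 bits with typed arrays (a sketch; `decode` is a name I made up, and DataView may not exist in older engines):

```javascript
// Extract sign (z), stored exponent (y) and fraction (x) from a double's bits.
function decode(num) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, num);
  var hi = view.getUint32(0);           // top 32 bits: sign + 11 exp bits + 20 fraction bits
  return {
    sign: hi >>> 31,                    // z
    exponent: (hi >>> 20) & 0x7ff,      // y: stored exponent, 0..2047
    fractionIsZero: (hi & 0xfffff) === 0 && view.getUint32(4) === 0 // x == 0 ?
  };
}
console.log(decode(1));                // sign 0, exponent 1023, fraction zero (normalized)
console.log(decode(Infinity));         // exponent 2047, fraction zero
console.log(decode(Number.MIN_VALUE)); // exponent 0, fraction nonzero (denormalized)
```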

The largest integer representable exactly to the ones place

The largest value the 52 fraction bits can represent is the following (52 bits plus the implicit leading 1):

parseInt("11111111111111111111111111111111111111111111111111111",2)
-> 9007199254740991 // i.e. 2^53 - 1

And the next value is:

parseInt("100000000000000000000000000000000000000000000000000000",2)
-> 9007199254740992 // i.e. 2^53

From the memory layout, a quick diagram makes it clear:

(figure: the fraction bits run out at 2^53 and the lowest bit is dropped)

Starting at 2^53, the lowest bit is dropped, so 2^53 + 1 == 2^53: one out of every two values becomes inexact. After another N values, three out of every four values are inexact; after another M values, 2^K - 1 out of every 2^K values are inexact; and so on... (Exercise: what is this N?)
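The boundary and the doubling spacing can be checked directly (Number.MAX_SAFE_INTEGER is ES2015, so it may be missing in older engines):

```javascript
// Where exact integer representation ends, and how the gap between
// representable values doubles past it.
var limit = Math.pow(2, 53);
console.log(Number.MAX_SAFE_INTEGER === limit - 1); // true: 2^53 - 1 is the last safe integer
console.log(limit - 1 + 1 === limit);               // true: still exact at the boundary
console.log(limit + 1 === limit);                   // true: 2^53 + 1 collapses to 2^53
console.log(limit * 2 + 2 === limit * 2);           // true: at 2^54 the spacing grows to 4
```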

The largest representable positive number

(figure: MAX_VALUE layout: fraction all ones, stored exponent 2046)

Verification:

Number.MAX_VALUE.toString(2)
-> "1111111111111111111111111111111111111111111111111111100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"

var a = Number.MAX_VALUE.toString(2).split("");
var b = [
  a.filter(function (i) { return i == 0; }).length,
  a.filter(function (i) { return i == 1; }).length
];
b
-> [971, 53]

Number.MAX_VALUE === (Math.pow(2,53)-1)*Math.pow(2,971)
-> true

QED

The smallest representable positive number

Remember the earlier table:

| X (fraction) | Y (stored exponent) | Value represented |
| --- | --- | --- |
| ≠ 0 | = 0 | Denormalized: f(0.x, 1, z) |
| any | ∈ (0, 2047) | Normalized: f(1.x, y, z) |

f(i, j, k) = (-1)^k · 2^(j-1023) · i

A denormalized value is represented like this:

(figure: denormalized layout: stored exponent 0, leading 0 instead of the implicit 1)

Memory layout of the smallest positive number

(figure: MIN_VALUE layout: stored exponent 0, fraction 0...01)

Verification:

Number.MIN_VALUE.toString(2)
-> "0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001"

var a = Number.MIN_VALUE.toString(2).split("");
a.filter(function (i) { return i == 0; }).length - 1
-> 1073

Number.MIN_VALUE === Math.pow(2,-1074)
-> true

References

Besides the IEEE 754 page on Wikipedia, this article explains things very clearly: "How numbers are encoded in JavaScript"
