Suggestion: int type #195
Comments
Great start; thanks for the examples. I have some follow-up questions.

Compat of literals vs divide expressions:

```ts
i = 1.1;   // error
i = 3 / 4; // valid, type casted
```

By what rule is the first line an error, but the second line OK? I have to assume that the type of the division expression is being special-cased somehow.

Indexing:

```ts
var indexable: { [i: int]: bool } = {};
indexable[n] = 3; // error
indexable[i] = 3; // valid
```

I don't understand either of these. First, ...

Treatment of optional ...
|
OK, I guess my examples were not really clear. What I meant is:

```ts
var i: int;
var n: number;
i = n;   // error
i = 1.1; // error
```

However, the division of two `int-compatible` operands is valid:

```ts
var i: int;
var i1: int;
i = i / i1; // valid and type casted because `i` and `i1` are `int-compatible`
i = 3 / 4;  // valid and type casted because `3` and `4` are `int-compatible`
```

I made an error in my earlier example; a correct example would be:

```ts
indexable[n] = true; // error
indexable[i] = true; // valid
```

I just wanted to point out that the same rules apply to variable assignment and indexing (and they should also apply to parameters). I'll update the example right now.
Not really; in the same way that there are 4 cases where the compiler must emit the cast, I would tend to think that with such a rule:

```ts
function logInt(i: int) {
    console.log(i);
}
var t: any = 3;
logInt(t);
```

would emit:

```js
function logInt(i) {
    console.log(i);
}
var t = 3;
logInt(t | 0); // t is any, so type casted
```

Still, I think that emitting that cast at the beginning of the function could allow JIT compilers to perform some optimization. But that's perhaps another topic.
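For what it's worth, a sketch of what coercing at the beginning of the function could look like; this is asm.js-style parameter coercion, not anything the compiler does today:

```js
function logInt(i) {
    i = i | 0; // coerce once on entry; a JIT can then treat `i` as int32
    console.log(i);
}
var t = 3;
logInt(t); // no cast needed at the call site
```
|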
Diving into the contextual typing, it sounds like you have a few new rules. Is that right? Maybe elaborate some examples here. Going back to this example:

```ts
var i: int;
i = 3 / 4; // valid and type casted because `3` and `4` are `int-compatible`
```

What's the emit here?
|
For me it seems logical that when you assign something to an `int`,

```ts
var i: int;
i = 3 / 4; // valid and type casted because `3` and `4` are `int-compatible`
```

would be translated to:

```js
var i;
i = (3 / 4) | 0;
```

However, that division rule in my proposal is just a tool to avoid the boilerplate of manually casting; if that implicit cast is not desired, the compiler could just report an error.
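For reference, here is what `| 0` actually does at runtime, since it is the proposed coercion (plain JavaScript facts, runnable in any engine):

```js
console.log((3 / 4) | 0);    // 0: truncates toward zero
console.log((-3 / 4) | 0);   // 0  (Math.floor(-0.75) would give -1)
console.log(2147483647 | 0); // 2147483647: fits in int32
console.log(2147483648 | 0); // -2147483648: wraps around at 32 bits
console.log(NaN | 0);        // 0: NaN and Infinity coerce to 0
```
|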
OK, trying to sum up the discussion, I obtain the following rules:

```ts
var i: int;
function addInt(a: int, b: int) {
    return a + b;
}
var a: any = 3.5;
i = a;
```

emit:

```js
var i = i | 0;
function addInt(a, b) {
    a = a | 0;
    b = b | 0;
    return a + b;
}
var a = 3.5;
i = a | 0;
```

```ts
var n = 1;      // n is number
var i: int = 1; // i is int
i = n;          // error
```

```ts
var objWithNumber = { i: 1 };          // objWithNumber type is { i: number; }
var objWithInt: { i: int } = { i: 1 }; // objWithInt type is { i: int; }
objWithInt = objWithNumber;            // error
```

```ts
function getValue() { return 1; }    // getValue type is () => number
function getInt(): int { return 1; } // getInt type is () => int
var n1 = getValue(); // n1 is number
var i1 = getInt();   // i1 is int
i1 = n1;             // error
```

```ts
var n: number;
var i: int;
var indexable: { [index: int]: any };
indexable[n];   // error
indexable[i];   // valid
indexable[1];   // valid: `1` is here contextually typed to `int`
indexable[1.1]; // error
```
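None of this exists in the compiler, but a rough approximation of the assignment rules above is expressible in today's TypeScript with a branded type; `int`, `__int__`, and `toInt` here are hypothetical names, and the `| 0` truncation stands in for the proposed cast:

```ts
// Brand `number` with a phantom marker so plain numbers don't assign to it.
type int = number & { __int__: void };

// Explicit conversion helper that also performs the `| 0` truncation.
function toInt(n: number): int {
    return (n | 0) as int;
}

var n: number = 1;
var i: int = toInt(1);
i = n;        // error: type 'number' is not assignable to type 'int'
i = toInt(n); // OK: explicit conversion
```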
|
I have an alternative proposal, which also introduces a double type. This one is more in line with asm.js.

Integer & double proposal

This proposal introduces 2 new primitive types: integer and double.

Naming

I chose these names ...

ASM.js

This proposal was designed with asm.js in mind.

Values

When you declare a variable with the type double or integer, it will automatically be 0. Any number literal that contains no decimal point and does not have a negative exponent after the ...

Casts

You can cast between the types ...

Optional arguments

An optional argument typed as ...

Generated JavaScript

Most expressions will be wrapped with ...

Function arguments

Arguments will be reassigned according to the asm.js spec.

TypeScript: ...
JavaScript: ...

Adding 0 explicitly as the default value should generate the same code as adding no default value.

Function return

The expression of the return statement should be wrapped.

JavaScript: ...

Assigning to a variable

When declaring a variable (integer or double) without assigning a value, it gets the value 0 (since an integer or double cannot be undefined).

TypeScript: ...
JavaScript: ...

Typecasting

A cast to double is wrapped with ...

TypeScript: ...
JavaScript: ...

A cast from anything to number, from integer to integer, or from double to double will not emit extra JavaScript.

Operators

Unary: ...
Binary: ...

Generated errors

An error is thrown when: ...
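The emit examples are elided from this copy, but they presumably followed the standard asm.js coercion idioms, which are simple to state (this is a sketch of those idioms, not the proposal's original code):

```js
function kernel(i, d) {
    i = i | 0;          // "i is an integer": asm.js parameter coercion
    d = +d;             // "d is a double"
    var j = 0;
    j = (i + 1) | 0;    // integer arithmetic re-wraps after each operation
    d = d + (+(j | 0)); // integer -> double conversion uses unary +
    return +d;          // double return coercion
}
console.log(kernel(2, 0.5)); // 3.5
```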
|
How does this existing TypeScript code behave? Is it now an error when integer is inferred for x?

```ts
var x = 1;
x = 1.5;
```

It's very strange to add runtime semantics to certain cast operations.
|
Indeed, there needs to be a rule that a non-typed variable declaration won't get the type integer or double, but always number. If you want a variable to be an integer or double, you'll need to specify that explicitly.

I chose runtime semantics for cast operations for various reasons. For performance: JS engines then know better how much space they need to allocate for a number, and they know which overload of the + operator is used. Integer calculations are usually faster than floating point ones. There also needs to be a way to convert the different number types into each other, and if you already generate JavaScript for cast operations, why not use a cast to convert a number type? It also doesn't introduce new syntax. An alternative would be to write ~~ or + to convert numbers, but in my opinion the casts look better:

```ts
var d: double = 3.5;
var int: integer = <integer> d;
d = <double> int;
// Or
var d: double = 3.5;
var int: integer = ~~d; // Because |0 isn't supported on doubles according to the asm.js spec.
d = +int;
```
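For reference, the `~~` and unary `+` idioms mentioned here behave like this in plain JavaScript:

```js
console.log(~~3.9);  // 3: double bitwise NOT truncates toward zero
console.log(~~-3.9); // -3
console.log(+true);  // 1: unary plus is the generic to-number coercion
console.log(~~1e10); // 1410065408: wraps, since ~~ produces an int32
```
|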
How do you propose to deal with constants in an expression; for instance, what does ... evaluate to? It feels to me like you need to buy into the asm.js type system explicitly, so either do something like ...
|
Compare it to this OOP example:

```ts
interface Base { // number
    base: string;
}
interface Foo extends Base { // double
    foo: string;
}
interface Bar extends Base { // integer
    bar: string;
}
function add(first: Base, second: Base): Foo;
function add(first: Bar, second: Bar): Bar;
// ... implementation of add ...
var base: Base, bar: Bar;
base = add(base, bar); // first signature, returns Foo, which extends Base
```

When you add `undefined` or `null` values in JavaScript today:

```js
undefined + undefined; // NaN
null + null;           // 0
null + undefined;      // NaN
undefined + 2;         // NaN
null + 2;              // 2
```
|
This is awesome. PLEASE make ints and doubles! |
If union types are adopted, a cool way to make this suggestion simpler to implement from the type system point of view would be to infer a union like `number | int` for integer literals. That way

```ts
var x = 1;
x = 1.5;
```

would still be valid, and the same would be true for:

```ts
var t: { [i: int]: boolean } = {};
t[1];
```

without contextual typing.
|
I honestly believe that for both ints and doubles, null and undefined should become (at least) NaN, if not retaining their original value. |
One major benefit of an int type is the ability to optimize arithmetic operations (particularly bitwise ops) to do multiple calculations without converting back and forth to floating point between each one; see the sketch below. To do this, I'm pretty sure int has to be non-nullable.
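A sketch of the kind of optimization described, in asm.js style; with a static int type, a compiler could keep the whole chain in int32 end to end (the emit is hypothetical, but `Math.imul` is real 32-bit integer multiplication):

```js
var a = 2, b = 3, c = 4, d = 5;
// Without type info, every intermediate result is a float64:
var r1 = a * b + c * d;
// With ints known statically, the chain can stay in int32 throughout:
var r2 = (Math.imul(a, b) + Math.imul(c, d)) | 0;
console.log(r1, r2); // 26 26
```
|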
Another argument for integer and float types is writing definition files. When an API specifically calls for an int or a float, it is dishonest to claim that the type is number. |
And there are certain operations that only return 32-bit integers, for example the bitwise operators `|`, `&`, `^`, `~`, `<<`, and `>>` (and `>>>`, which produces unsigned 32-bit integers), as well as `Math.imul`.

Others only produce 32-bit integers under certain circumstances: ...

And many of the core language APIs accept only integers (and coerce the ones that aren't), array indices being the classic example.

As for numbers within the range 0 ≤ |x| < 2^53 ("safe" integers, probed in the sketch after this list), add the following.

Always produce safe integers: `Date.now()`, `Array.prototype.indexOf`, and `.length`, for example.

Require safe integer arguments: ...
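The "safe integer" boundary referenced above is queryable at runtime with standard APIs:

```js
console.log(Number.MAX_SAFE_INTEGER);           // 9007199254740991 (2^53 - 1)
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false
console.log(2 ** 53 + 1 === 2 ** 53);           // true: precision is already lost
```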
|
I've written a proposal in #4639. The big difference is that the emit is not based on type info (which means you can use it with ...).
|
I'd vote for |
Do we have a function ... ? Also, I think the point is:

```ts
var x = 5 / 4;      // number 1.25
var y: int = 5 / 4; // int 1
var z = 5;          // int 5
var w = x / 4;      // number 1.25
```
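One wrinkle with an implicit int division: `| 0` truncates toward zero while `Math.floor` rounds toward negative infinity, so the choice of emit is observable for negative operands (plain JavaScript):

```js
console.log((5 / 4) | 0);        // 1
console.log(Math.floor(5 / 4));  // 1
console.log((-5 / 4) | 0);       // -1: truncation toward zero
console.log(Math.floor(-5 / 4)); // -2: rounding toward -Infinity
```
|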
@Thaina No, but the closest you could get to that would be either ...
|
If you are interested in integers, follow the BigInt proposal which is already Stage 3 and could solve most of your integer use cases. |
@styfle That'll require a separate type from this, because BigInts are arbitrary-precision, and most DOM APIs would reject them until the WebIDL spec gets updated to accept BigInts where integers are expected. In addition, such integers can't be used with numbers, as the implicit ToNumber coercion would throw for them. (This is similar to how ToString throws for symbols.)
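The mixing restriction is easy to demonstrate with standard BigInt semantics:

```js
const big = 1n;
// big + 1;                   // TypeError: cannot mix BigInt and other types
console.log(big + 1n);        // 2n
console.log(Number(big) + 1); // 2: an explicit conversion is required
```
|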
@isiahmeadows if you take a look at the writeup I did in this issue: #15096 ... the BigInt proposal is useful for both fixed-width/wrapping and arbitrary-precision types. For example, you could map fixed-width types onto the `BigInt.asIntN` / `BigInt.asUintN` constructors, e.g. a 64-bit signed integer onto `BigInt.asIntN(64, x)` and a 64-bit unsigned one onto `BigInt.asUintN(64, x)`.

My understanding is that these particular constructors are supposed to hint to the VM to use the correspondingly sized CPU-architecture-native integer types, but even if they don't, they should be semantically equivalent to the correspondingly sized wrapping integer types.

Also, the V8 team just announced Intent to Ship for TC39 BigInts 🎉 https://groups.google.com/forum/#!msg/v8-dev/x571Gr0khNo/y8Jk0_vSBAAJ
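For reference, the wrapping behavior of those constructors (both are real functions from the BigInt proposal):

```js
console.log(BigInt.asIntN(8, 127n));       // 127n
console.log(BigInt.asIntN(8, 128n));       // -128n: wraps like a signed 8-bit int
console.log(BigInt.asUintN(8, 256n));      // 0n: wraps like an unsigned 8-bit int
console.log(BigInt.asIntN(64, 2n ** 63n)); // -9223372036854775808n
```
|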
@tarcieri I'm familiar with that proposal. (I've also got a low need for BigInt, but that's a different deal.) I'm still interested in a glorified ... As for those constructors, I could see frequent use of ...
|
I think there's a pretty natural mapping of those constructors to sized integer types, e.g. ...

... with BigInt would compile down to: ...

I don't think it makes any sense to build any sort of integer type on number. JavaScript finally has native integers, and ones which will raise runtime exceptions if you attempt to perform arithmetic on a mixture of BigInts and numbers. A great way to avoid those runtime exceptions is static type checking, so I think having TypeScript assert that would be greatly helpful.
@tarcieri I like the new types, but I disagree with the sugar. The value assigned to ...
|
@errorx666 TypeScript could use the same literal syntax as JavaScript, but that's unnecessary when the compiler already has the type information. This is no different from almost every other statically typed language on earth, where you are free to write something to the effect of ...
|
@tarcieri I've been cautious about suggesting any of that, since that kind of thing has been repeatedly shot down by the TS team. (They seem to prefer sugar to be syntactic, not type-based.)
|
@tarcieri Suppose you have the type definition for ... Should your compiled emit fundamentally change, without error or warning? Suppose further that the change introduced some sort of bug. How much of a nightmare would it be to ultimately trace that bug down to a change in a type definition?
|
@errorx666 That scenario can already happen with the following code:

status.d.ts

```ts
declare module server {
    const enum Status {
        None,
        Pending,
        Approved,
    }
}
```

main.ts

```ts
console.log(server.Status.Approved);
```

main.js

```js
console.log(2);
```
|
@styfle Fair point, but at least both possible emits evaluate to 2, or, more importantly, to the same type (Number).
|
This may be too much syntax sugar to swallow, but the benefits outweigh the drawbacks, IMO. There is an opportunity here for TypeScript to model sized, typed integers in a way JavaScript VMs can understand, and also statically assert that programs are free of integer/number type confusion.
So, again, for context: we're discussing integer literals. Every statically typed language I can think of, even where they do support type suffixes/tags, will interpret untagged literals according to the type they're being bound to. So to answer your question: yes, for untagged integer literals, there shouldn't be an error or warning even though the type changed. If you're worried about type confusion there, yes, the tagged syntax should be supported too, and that should fail if a type changes from a BigInt to a number.
|
Looks like #15096 is on the TypeScript 3.0 roadmap: https://github.com/Microsoft/TypeScript/wiki/Roadmap#30-july-2018
|
We're still holding the line on type-directed emit. BigInt seems like a "close enough" fit for these use cases and doesn't require us inventing new expression-level syntax. |
Agreed, adding other syntax or types for "double based integers" would only be confusing. |
Make it explicit:

```ts
let i: int = 7;
i = 1.1;               // error: is a number
i = 3 / 4;             // error: produces a number
i = Math.floor(3 / 4); // valid, might require a typedef update
i = (3 / 4) | 0;       // valid
```

and don't do the automatic conversion from number to int. And the same for floats.
|
Just |
I want to know when we can use the int declaration symbol |
Introducing an `int` type could allow some errors to be caught at compile time (like trying to index an array with a float) and perhaps improve the performance of the outputted JavaScript. To obtain a true `int`, TypeScript could systematically emit a cast with `|0` when a variable/parameter is an integer:

... would emit ...

There will perhaps be a problem with generics and methods, but I guess the compiler could in this case type cast when passing the parameter:

... emit: ...

Also, perhaps the compiler should always infer `number` if there is no explicit type annotation.
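The original snippets are elided from this copy, but based on the rules summarized earlier in the thread, the intended emit was presumably along these lines (a sketch, not the original code):

```js
// Hypothetical emit for int-typed variables and parameters:
var i = i | 0;          // an int declaration starts at 0
function addInt(a, b) {
    a = a | 0;          // int parameters are coerced on entry
    b = b | 0;
    return (a + b) | 0; // the int return value is coerced as well
}
i = addInt(1, 2) | 0;   // assignments to an int are coerced
console.log(i);         // 3
```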