
Improved checking of destructuring with literal initializers #4598

Merged
merged 17 commits into master on Sep 11, 2015

Conversation

ahejlsberg
Member

Fixes #2469.

This PR makes the checking of destructuring patterns with object literal or array literal initializers more flexible.

When an object literal is contextually typed by the implied type of an object binding pattern:

  • Properties with default values in the object binding pattern become optional in the object literal.
  • Properties in the object binding pattern that have no match in the object literal are required to have a default value in the object binding pattern and are automatically added to the object literal type.
  • Properties in the object literal that have no match in the object binding pattern are an error.

When an array literal is contextually typed by the implied type of an array binding pattern:

  • Elements in the array binding pattern that have no match in the array literal are required to have a default value in the array binding pattern and are automatically added to the array literal type.

Some examples:

// Type of f1 is (arg?: { x?: number, y?: number }) => void
function f1({ x = 0, y = 0 } = {}) { }
f1();
f1({});
f1({ x: 1 });
f1({ y: 1 });
f1({ x: 1, y: 1 });

// Type of f2 is (arg?: { x: number, y?: number }) => void
function f2({ x, y = 0 } = { x: 0 }) { }
f2();
f2({});        // Error, x not optional
f2({ x: 1 });
f2({ y: 1 });  // Error, x not optional
f2({ x: 1, y: 1 });

// Type of f3 is (arg?: [number, number]) => void
function f3([x = 0, y = 0] = []) { }
f3();
f3([]);   // Error, [] not assignable to [number, number]
f3([,]);  // Error, [undefined] not assignable to [number, number]
f3([,,]);
f3([1, 2]);
f3([1, 2, 3]);

In the f3 example, note that the initializer for the parameter binding pattern is allowed to omit elements (and, in particular, is allowed to be an empty array literal). This is a common pattern that we want to support. However, we still require tuples to specify all elements in other cases, as is demonstrated by the calls to f3.

UPDATE 1: A question remains about how far we go in allowing tuple literals to omit elements. Possible rules are:

  • Allow omitted elements when contextually typed by a binding pattern that specifies default values for the elements, e.g. function foo([x = 0, y = 0] = []). Since that pattern is common, this is the minimum bar.
  • Allow omitted elements when contextually typed by a binding pattern, regardless of whether the binding elements specify default values, e.g. function foo([x, y] = []). We allow this currently, but we could say this is an error. That, however, would perhaps be inconsistent with function foo([x, y, ...z] = []), which we allow because spread and rest elements cause us to infer arrays instead of tuples.
  • Allow omitted elements everywhere, e.g. even for var x: [number, number] allow the assignment x = []. This would effectively amount to saying that tuple elements are always optional, i.e. we don't guarantee they're present, but we do guarantee they have the right type when they are present.
  • Go all out and support both optional and required elements in tuples. This would be a lot more work and IMO not worth the effort.

UPDATE 2: The PR implements option 1 above.
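The runtime semantics that option 1 builds on can be sketched as follows (the function name and values are illustrative, not from the PR): omitting the argument triggers the empty-array default, and each element then falls back to its own default.

```typescript
// Runtime behavior behind the option 1 idiom: omitting the argument
// triggers the empty-array default, then each element's own default.
function distance([x = 0, y = 0] = []): number {
  return Math.sqrt(x * x + y * y);
}

console.log(distance());        // 0 (both defaults apply)
console.log(distance([3, 4]));  // 5 (no defaults needed)
```

Note that under option 1 the empty array literal is permitted only in the initializer position; a call such as distance([]) is still checked against the implied tuple type.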

@JsonFreeman
Contributor

the extra elements are automatically added to the resulting tuple type

What if the extra binding elements are not initialized? Shouldn't that be an error?

function bar([x, y = 0] = []) { } // Should error
function bar2([x, y = 0] = [0]) { }
bar2([]); // should error


// (arg: [number, number]) => void
function g2([x = 0, y = 0]) { }
g2([1, 1]);
Contributor


Also

g2([1]);
g2([]);

Member Author


Those would be errors.

Contributor


Right, ideally they wouldn't be errors, but it would require optional tuple elements or some other mechanism to allow them.

@JsonFreeman
Contributor

Can you add the case from #2469 as a test?

@JsonFreeman
Contributor

I think it may be challenging to get the array portion correct because there is no concept of optional tuple elements. It seems like that would be required to make array patterns work in a way that parallels how object patterns work in this change.

@ahejlsberg
Member Author

@JsonFreeman Thanks for the comments. You're right, some of this is complicated by the fact that tuple types don't support optional elements, but I don't think it is too big of an issue. Certainly I wouldn't advocate adding optional tuple elements as a full blown feature.

On multiple occasions I've seen the pattern of supplying an empty array literal as the default value for an array binding pattern. For example:

function f1([x, y, text] = []) { }               // f1(arg?: [any, any, any])
function f2([x = 0, y = 0, text] = []) { }       // f2(arg?: [number, number, any])
function f3([x, y, text = ""] = []) { }          // f3(arg?: [any, any, string])
function f4([x = 0, y = 0, text = ""] = []) { }  // f4(arg?: [number, number, string])

The empty array literal serves to indicate that the argument may be omitted and, when it is, the destructuring parameters get their default values, or undefined if they have none. I expect this to be a fairly common pattern in ES6 and I think we handle it the right way now.
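A runnable sketch of that fallback behavior (the function name is illustrative):

```typescript
// When the argument is omitted entirely, the empty-array default kicks in
// and each binding element gets its own default value.
function draw([x = 0, y = 0, text = ""] = []): string {
  return `${text}@(${x},${y})`;
}

console.log(draw());                 // "@(0,0)"
console.log(draw([1, 2, "label"]));  // "label@(1,2)"
```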

We currently only allow the same pattern with an empty object literal if the corresponding object binding pattern supplies a default value for each variable. However, I'm thinking of a slight tweak: if a binding variable has neither a default value in the object binding pattern nor a property specified in the object literal initializer, we add an optional property of type any to the resulting type:

function f1({ a, b, c } = {}) { }             // f1(arg?: { a?: any, b?: any, c?: any })
function f2({ a, b = "", c } = { a: 0 }) { }  // f2(arg?: { a: number, b?: string, c?: any })

I think this would complete the picture nicely.

@ahejlsberg
Member Author

I'm thinking it would also make sense to report an error if an object literal initializer for a destructuring pattern includes a property that isn't listed in the destructuring pattern. For example:

var { x, y } = { x: 0, y: 0, z: 0 };  // Should be an error

We permit this today, but I can't think of any good reason why we should.

@JsonFreeman
Contributor

Thanks @ahejlsberg. I agree with your notion that excess properties in the initializer that are not mentioned in the object binding pattern should be errors. It's an indication that the user probably made a spelling error. Interestingly, I think this could be achieved by requiring that the type of the initializer be assignable to the implied type of the binding pattern (the initializer is a fresh object literal, so it would fail assignability if it had extra properties).

By the same token, you could make it an error if an array literal that initializes an array binding pattern is longer than the binding pattern. The extra elements would never be used.

Regarding your analysis at #4598 (comment), what you are saying makes sense, but I have a few concerns:

  1. You mention that function f1([x, y, text] = []) { } would not be an error because the elements would be copied to the initializer. The rationale is that the elements will just be undefined at runtime. However, it seems then that it should not be an error to call it like this, f1([]), for the same reason. Or even f1([0]). It seems like your change would not allow this. That would essentially mean that the initializer of a parameter is privileged over arguments passed in that position. It is subject to more lenient assignability rules, or at least, it is typed such that the same expression would be assignable as an initializer, but not assignable as an argument.
  2. About copying over uninitialized properties and giving them type any in the initializer: It is true that the case function f1({ a, b, c } = {}) { } is innocuous because all the binding elements will be undefined, but I believe at runtime it becomes an error if you destructure further, like function f1({ a: [d], b: { c } } = {}) { }. It seems like the most correct behavior is not to copy the property over with type any, but to allow destructuring just one level if the property is missing. The same danger exists for array binding patterns, consider the second element in function f1([x, [y, z]] = []) { }.
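Point 2 matches ES6 runtime semantics: a flat binding for a missing property simply comes out undefined, but a nested pattern attempts to destructure undefined and throws. A minimal sketch (names illustrative):

```typescript
// Flat bindings tolerate a missing property: they become undefined.
function flat({ a, b }: { a?: number; b?: number } = {}) {
  return [a, b];
}
console.log(flat()); // [ undefined, undefined ] -- no runtime error

// A nested pattern destructures the missing property's value (undefined),
// which throws at runtime.
function nested({ a: [d] }: any = {}) {
  return d;
}
try {
  nested();
} catch (e) {
  console.log("nested destructuring of a missing property throws");
}
```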

@ahejlsberg
Member Author

You mention that function f1([x, y, text] = []) { } would not be an error because the elements would be copied to the initializer. The rationale is that the elements will just be undefined at runtime. However, it seems then that it should not be an error to call it like this, f1([]), for the same reason. Or even f1([0]). It seems like your change would not allow this. That would essentially mean that the initializer of a parameter is privileged over arguments passed in that position. It is subject to more lenient assignability rules, or at least, it is typed such that the same expression would be assignable as an initializer, but not assignable as an argument.

Correct, we're more permissive in a destructuring of an array literal vs. a regular assignment of an array literal. Given that we want to support the foo([x, y, z] = []) idiom and that we don't have support for optional tuple elements, I think this is a reasonable compromise.

About copying over uninitialized properties and giving them type any in the initializer: It is true that the case function f1({ a, b, c } = {}) { } is innocuous because all the binding elements will be undefined, but I believe at runtime it becomes an error if you destructure further, like function f1({ a: [d], b: { c } } = {}) { }. It seems like the most correct behavior is not to copy the property over with type any, but to allow destructuring just one level if the property is missing. The same danger exists for array binding patterns, consider the second element in function f1([x, [y, z]] = []) { }.

That seems reasonable. I think it boils down to only allowing a property with no default value in the contextual type to be omitted if it has type any (because if it is a nested destructuring it will have an object type or a tuple type).

@JsonFreeman
Contributor

I think it boils down to only allowing a property with no default value in the contextual type to be omitted if it has type any

When would you do that? When you compute the contextual (implied) type? I thought it would be easier to do it when you process the destructuring and finally assign types to the binding elements. If you wait till then, you can explicitly only allow one level of destructuring for elements that are missing. The other reason to wait till the destructuring itself is that we've always said contextual typing should not cause errors. My reading of your suggestion is that it may cause an error during contextual typing itself.

@JsonFreeman
Contributor

Additionally, if you allow a one level destructuring for missing elements, then you don't really have to copy any elements over to the initializer for the array binding pattern case. The type of the initializer can be exactly as it was before, and the treatment of the destructuring operation would be able to handle the missing elements.

@JsonFreeman
Contributor

In terms of being more lenient for destructuring an array versus just assigning an array, I think it sounds odd to me, but you could make a case for it. I think the key is that I have to think of the initializer as determining the type of the parameter, while an argument would not. In this way, the initializer has more authority because when it's processed, the type of the parameter position hasn't been finalized yet.

@ahejlsberg
Member Author

Latest set of changes add a check for excess properties in an object literal being destructured. For example:

var { x, y } = { x: 0, y: 0, z: 0 };  // Error, excess property z

Destructuring of an array literal allows a particular element to be omitted provided it has a default value or is not itself a destructuring. For example:

var [x, y] = [];            // Ok
var [x, [a, b]] = [];       // Error, no value for [a, b]
var [x, [a, b] = []] = [];  // Ok

Destructuring variable declarations, destructuring parameter declarations, and destructuring assignments now all behave the same (there were some differences previously).

I ended up deciding against the following change:

We currently only allow the same pattern with an empty object literal if the corresponding object binding pattern supplies a default value for each variable. However, I'm thinking of a slight tweak where if a binding variable has neither a default value in the object binding pattern nor a property specified in the object literal initializer, we add an optional property of type any to the resulting type.

So, the following is still an error:

var { x, y } = {};  // Error, x and y have no default values and must be specified

@JsonFreeman
Contributor

@ahejlsberg Great, thanks! I haven't looked at the commits yet, but two questions:

  1. For the excess property check, do you do something similar for arrays? The equivalent would be that if the initializer is a longer array than the binding pattern, it is an error. I think trailing array elements would be just as useless as excess properties.
  2. Why is var { x, y } = {} an error, but var [x, y] = [] is not? They seem very similar to me. At runtime, both just result in undefined getting assigned to x and y if the default initializer takes effect.

@ahejlsberg
Member Author

@JsonFreeman Again, thanks for the careful review.

For the excess property check, do you do something similar for arrays? The equivalent would be that if the initializer is a longer array than the binding pattern, it is an error. I think trailing array elements would be just as useless as excess properties.

No; since we allow the array literal to have fewer elements than the binding pattern, it seems consistent to also allow it to have more.

Why is var { x, y } = {} an error, but var [x, y] = [] is not? They seem very similar to me. At runtime, both just result in undefined getting assigned to x and y if the default initializer takes effect.

For object literals the intuition is that elements can't be omitted unless they have a default value (i.e. unless they're optional). This is consistent with our assignment compatibility rules and I think it harmonizes well with our excess property check (i.e. you can't have excess properties on either side unless there's a default involved). If anything we should more strictly check that array binding patterns and array literals have matching lengths, but I actually like the looser rules for arrays. After all, the minute you have rest or spread elements involved, we can't accurately check lengths. And, whereas we may think of something as a tuple, the user might actually think of it as an array.

@JsonFreeman
Contributor

No; since we allow the array literal to have fewer elements than the binding pattern, it seems consistent to also allow it to have more.

Fewer and more mean different things though. Fewer elements than the binding pattern means that some elements will become undefined or error (depending on whether the element is destructured further). More means that some values will be unused, not assigned anywhere. To me, consistency between those concepts isn't super meaningful.
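Concretely, with illustrative names (the as any[] cast just sidesteps the compile-time length check so the runtime behavior can be shown):

```typescript
// Fewer initializer elements than bindings: trailing bindings are undefined.
const [p, q] = [1] as any[];   // q === undefined at runtime
// More initializer elements than bindings: trailing values are simply unused.
const [r] = [1, 2, 3];         // r === 1; the values 2 and 3 are never bound
console.log(p, q, r);
```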

This is consistent with our assignment compatibility rules and I think it harmonizes well with our excess property check.

That is fair, although I tend to think of destructuring as basically a declarative syntax for property accesses. So it seems more similar to a property access with subsequent assignments, rather than one big assignment. But I realize it's sort of halfway in between.

but I actually like the looser rules for arrays. After all, the minute you have rest or spread elements involved, we can't accurately check lengths. And, whereas we may think of something as a tuple, the user might actually think of it as an array.

I realize there are certain cases that may be more common for arrays, but what principle dictates that the rules for arrays should be looser? I understand looseness with rest elements because we don't know what indices we are dealing with, but I think array destructuring is better treated tuple-wise unless there is some reason not to. I think the line between tuples and arrays is rather fuzzy, and it makes sense to think about what principles govern whether something is treated as an array versus a tuple.

A few more thoughts I've had: I think there are a lot of concepts we are discussing here that are kind of independent, and may be good topics for the design meeting as separate topics. I can think of at least three:

  1. Excess elements in the initializer that are not present in the destructuring. Should they be errors for object destructuring? What about arrays?
  2. Is it okay to permit destructuring of elements that are absent in the initializer? Both for arrays and objects, both for literals and non-literals. Does this mean that a property access a.x should be allowed if a has no member x?
  3. Should inner initializers make certain properties optional, even if they were present in the parent initializer? Basically issue #2469 ("An initialized binding element should be optional even if its parent is initialized too").

Again, I think it is useful to discuss them separately at the design meeting. In particular, what is the intended behavior, and what are the principles driving those decisions? By principles, I mean for example that object destructuring and array destructuring should be consistent except where there is a fundamental difference that needs to be accounted for. That may not be one of the principles you used, but it is one that I was assuming, and may not be as important as I am making it. Once those principles are nailed down, it will be easier to review the language design mechanisms and the implementation strategies involved. I think it is easier than thinking about them all at the same time.

One more point. Now that we have flags to track whether certain types came from literals, is it really necessary/better to use contextual typing as a device to make certain values assignable? Would it make more sense to just build those rules into assignability? For example, the excess property check on fresh object literals (non-destructuring) is done in assignability with no aid from contextual typing. It's not clear to me what determines whether contextual typing is the chosen vehicle.

@JsonFreeman
Contributor

To summarize what we have now, it seems that for object destructuring, the rules are just as before except for two changes:

  1. Excess properties in an initializer are not allowed. This is similar to a regular assignment of an object literal.
  2. Nested initializers in destructuring patterns can cause them to be optional, even if there is a parent initializer.

For array destructuring:

  1. The array initializer is still allowed to be longer than the binding pattern, but if it is an array literal, it is also allowed to be shorter.
  2. The types involved in the destructuring are formed by combining the elements of the parent initializer with the nested initializers in the binding pattern (by copying the inner initializers to the parent initializer) if it is an array literal.
  3. Further destructuring of elements that have no initializer is not allowed.

@ahejlsberg
Member Author

I realize there are certain cases that may be more common for arrays, but what principle dictates that the rules for arrays should be looser?

It's the fact that the same syntax is used for both arrays and tuples, and sometimes what we deem a tuple was actually intended by the programmer to be an array. For example:

var [x, y] = [];        // We treat this as a tuple
var [x, y, ...z] = [];  // We treat this as an array

If we did strict length checking, the first line above would become an error but the second line would still be fine. Now, you could argue the second line should also be an error, but then we'd have to start reasoning about hybrid tuple/array like types. And it's just not clear we're solving anyone's problem--after all, the above is perfectly valid JavaScript.
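The rest element on the second line is what forces an array type: at runtime it collects all remaining values, so no fixed length can be inferred. For example:

```typescript
// A rest element gathers every remaining value into a plain array,
// so the pattern can match a right-hand side of any length.
const [first, second, ...rest] = [1, 2, 3, 4];
console.log(first, second, rest); // 1 2 [ 3, 4 ]
```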

We may discuss your issues in a design meeting. Meanwhile, here's my take on them:

Excess elements in the initializer that are not present in the destructuring. Should they be errors for object destructuring? What about arrays?

They should be errors for object destructuring, consistent with our excess property error checks. They should perhaps be errors for arrays, depending on whether we think it is fine to allow fewer elements, but not more.

Is it okay to permit destructuring of elements that are absent in the initializer? Both for arrays and objects, both for literals and non-literals. Does this mean that a property access a.x should be allowed if a has no member x?

Elements that are absent in the initializer can be destructured only if they specify a default value. This is true only for array and object literal initializers, not for non-literals (if a property is missing in the static type of a non-literal, the property may actually be present at run-time, but we have no idea what its type will be). And no, a.x should not be allowed if a has no member x.

Should inner initializers make certain properties optional, even if they were present in the parent initializer? Basically issue #2469.

Yes. I agree with your reasoning in #2469 and that's how it is implemented now.
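A sketch of the #2469 pattern as now accepted (names illustrative): the inner defaults make the property omissible even though the whole parameter also has a default.

```typescript
// Inner defaults make the enclosing property optional, even though the
// whole parameter has its own default initializer too (issue #2469).
function g({ pos: { x = 0, y = 0 } = {} } = {}) {
  return x + y;
}
console.log(g());                        // 0 -- everything defaulted
console.log(g({ pos: { x: 2, y: 3 } })); // 5
```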

One more point. Now that we have flags to track whether certain types came from literals, is it really necessary/better to use contextual typing as a device to make certain values assignable? Would it make more sense to just build those rules into assignability?

We could build them into assignability, but we'd still have to construct the combined contextual/literal type as I explain here. By "adapting" the literal type according to the contextual type, we basically solve both problems in one place.

@JsonFreeman
Contributor

@ahejlsberg Thanks for the responses.

I think you identified the key question about arrays versus tuples - how do we really know what the user intended? Our original stance was that the presence of a tuple type literal, or an array destructuring (without a rest) meant a tuple type, and I feel like now there is some uncertainty about whether an array destructuring (without a rest) really indicates that the type should be treated as a tuple. So it may be a good opportunity to discuss whether that was a good assumption, and whether / where it still holds.

I think all of your points make sense, and it was helpful to see the thought process. I think it is a reasonable place to land, and am curious what others think.

@ahejlsberg
Member Author

@mhegazy @danquirk @RyanCavanaugh @DanielRosenwasser @vladima @JsonFreeman I have updated the initial description of this feature with the currently implemented behavior. There is one final question I'd like some feedback on. Let me know what you think.

@JsonFreeman
Contributor

@ahejlsberg Regarding your question, there is one more option, also possibly not worth the effort:

  • Allow omitting only elements with default initializers, like the first bullet, but implement a rest capability for tuples as well. This would mean that something could be a tuple up to a certain index, and after that index it acts like an array. That would eliminate the inconsistency you were concerned about in [x, y, ...z] = [].

Going back to the options you listed, I just want to clarify two things:

  1. When you say an omitted element in the initializer, are you only including elements that are not destructured further? In other words, we always want to disallow [[a, b]] = [], correct?
  2. The first three bullets only deal with the case where the initializer is an array literal, correct?

I like option 1, which only allows omitting an element when it has a default value in the binding pattern. The odd thing about option 2 is that there are array literals that are valid as initializers, but not valid as arguments. And option 3 is not bad, though depending on the answer to my question 2 above, it may have the non-transitivity property that we've seen with optionals:

var a: [number, string] = [0, ""];
var b: [number] = a;
var c: [number, number] = b;

I think options 1 and 3 are best. One attractive thing about 3 in terms of implementation is that all the information is present in the contextual type, and you don't have to pack along a pattern as an auxiliary store of information.

@ahejlsberg
Member Author

When you say an omitted element in the initializer, are you only including elements that are not destructured further? In other words, we always want to disallow [[a, b]] = []

Correct.

The first three bullets only deal with the case where the initializer is an array literal, correct?

Also correct. It generally isn't safe to allow a shorter tuple type to be assigned to a longer tuple type (because, as your example illustrates, we don't know for sure that the missing elements are really missing at run-time). However, when the shorter tuple is an array literal we do know for sure there are no extra elements and the assignment is actually safe.

I agree with you that option 1 and 3 are the most attractive. I'm actually leaning in favor of 3 as it is more consistent across the board.

@ahejlsberg
Member Author

I researched implementing option 3. It turns out to have wider ranging effects on overloading because tuple literals with fewer elements now become assignable to tuple types with more elements. It's not impossible to go that route, but I think it is beyond the scope of this PR and something we could do later if we think it is necessary.

My latest commits implement option 1 and also improve error reporting. Specifically, for code like the following:

var [x, y] = [];
var { x, y } = {};

we now report the error message "Initializer provides no value for this binding element and the binding element has no default value" on x and y in both examples.

I think this takes care of all the feedback, so check out the latest commits and give me a thumbs up if you're good with this.

function f1() {
var { x, y } = {};
~
!!! error TS2525: Initializer provides no value for this binding element and the binding element has no default value
Member


should have a period at the end of this error

@danquirk
Member

👍

ahejlsberg added a commit that referenced this pull request Sep 11, 2015
Improved checking of destructuring with literal initializers
@ahejlsberg ahejlsberg merged commit dcb9e1c into master Sep 11, 2015
@ahejlsberg ahejlsberg deleted the destructuringInitializers branch September 11, 2015 16:13
@microsoft microsoft locked and limited conversation to collaborators Jun 19, 2018