
Attempt at implementing propagated inference for free type parameters (#9366) #24626

Closed
wants to merge 4 commits

Conversation

kpdonn
Contributor

@kpdonn kpdonn commented Jun 2, 2018

I started playing with this as a tangent off of #23809 after I started adding some type inference there and kept running into a lot of {} inferences. Initially I wasn't planning on getting anywhere near something worth sharing, but after I played with it for a while it started to feel like it would be possible to come up with something that could at least serve as a working straw-man implementation of #9366, which I think I ended up accomplishing.

Check out the updated baselines (especially inferringGenericFunctionsFromGenericFunctions2.ts, which is mostly borrowed from @gcnew's extremely helpful test suite) for full details on how it behaves, but it mostly does what it should for the examples I've tried:

// everything below compiles without error

// example from https://github.com/Microsoft/TypeScript/issues/9366
function flip<a, b, c>(f: (a: a, b: b) => c): (b: b, a: a) => c {
    return (b: b, a: a) => f(a, b);
}
function zip<T, U>(x: T, y: U): [T, U] {
    return [x, y];
}

const flipped = flip(zip);
var expected: <T, U>(y: U, x: T) => [T, U] = flipped;
const actualCallResult = flipped("test", 1234);
const expectedResult: [number, string] = actualCallResult;


// from https://github.com/Microsoft/TypeScript/issues/16414
declare function compose<A, B, C>(f: (x: A) => B, g: (y: B) => C): (x: A) => C;
declare function box<T>(x: T): { value: T };
declare function list<U>(x: U): U[];

const composed = compose(list, box);
const expectedComposed: <U>(u: U) => { value: U[] } = composed;
const callComposed = composed("test");
const expectedCallComposed: { value: string[] } = callComposed;

const composedContextual = compose(x => [x], x2 => ({ boxed: x2 }));
const expectedComposedContextual: <U>(u: U) => { boxed: U[] } = composedContextual;
const callComposedContextual = composedContextual("test");
const expectedCallComposedContextual: { boxed: string[] } = callComposedContextual;

It also makes only a few changes to existing baselines, and the changes it does make seemed positive or neutral to me.

For a high-level summary of the implementation that glosses over a lot of the details:

  • The first thing I did was change getInferredType() to return the type parameter being inferred if there are no inferences available for it instead of defaulting to {}. The inference does not become fixed if this happens.
  • In resolveCall(), immediately after chooseOverload() successfully returns a signature, I resolve the return type of the chosen signature and check whether it contains any free type parameters. (This is done eagerly for now; a polished version should probably be refactored to be lazy like everything else.)
  • If the return type does contain free type parameters, I check whether the return type is a type with a single call signature and nothing else. If it is, I clone the signature and the free type parameters and add the free type parameters as explicit type parameters of the new signature. A new return type is created from the new signature and replaces the old one.
  • If the return type contains free type parameters but is anything besides a type with a single call signature and nothing else, then the free type parameters are erased to {} like they would have been before this PR (see the sketch after this list).
  • All of the above logic about free type parameters is ignored for type parameters that are found in the outerTypeParameters of the call expression node. This avoids erasing type parameters for calls inside the body of a function with generics.
  • Also, if strictFunctionTypes is disabled then the old behavior of defaulting to {} when there are no inferences still happens. This is because I ran into assignability problems if contravariant inferences of type parameters were treated as covariant inferences, which is what happens without strictFunctionTypes.
  • inferFromSignatures() will try to instantiate a source signature with type parameters in the context of the target signature in the same way as compareSignaturesRelated(). This also only happens if strictFunctionTypes is enabled.
  • When chooseOverload() has multiple candidate signatures to choose from and has some contextually typed arguments, any contextual types that were assigned are reset when trying the next candidate signature.
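
To make the single-call-signature rule concrete, here is a small sketch of how calls would behave under the rules above. The declarations (id, wrap, tag) are hypothetical, and the types in the comments are what I would expect from these rules rather than actual baseline output:

declare function id<T>(x: T): T;

// The return type of wrap is a bare call signature, so a free T left in it
// after inference would be promoted to a type parameter of the result.
declare function wrap<A, B>(f: (a: A) => B): (a: A) => B;
const wrapped = wrap(id);   // expected: <T>(a: T) => T
const n = wrapped(42);      // expected: number

// The return type of tag mixes a call signature with extra properties, so it
// is not "a single call signature and nothing else"; free type parameters
// would still be erased to {} here.
declare function tag<A, B>(f: (a: A) => B): { fn: (a: A) => B; label: string };
const tagged = tag(id);     // expected: { fn: (a: {}) => {}; label: string }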

Plan from here

I don't have any specific plans for this from here; I just wanted to share what I have since it turned out pretty well. If anyone tries it out and runs into things that aren't working right, let me know and I'll see if I can handle them. Otherwise, I'm probably going to let this stand as it is unless the TS team expresses interest in this approach, which they may very well not, since this obviously makes significant changes to the core inference and signature resolution code.

kpdonn added 3 commits June 2, 2018 12:06
A summary of the bigger pieces I can think of:

* When strictFunctionTypes is true, if there are no inferences for a
type parameter then the type parameter itself will be returned instead
of defaulting to `{}`. After chooseOverload returns a successful
signature, the return type of that signature is checked, and if the
return type is a function type with a single call signature then any
free type parameters found in that call signature are converted to type
parameters of the call signature. If the return type is not a single
call signature then any free type parameters are converted to `{}` at
that point.

* inferFromSignatures will instantiate the source in the context of the
target if the source signature has type parameters and
strictFunctionTypes is true.

* strictFunctionTypes is required in both of the above cases because of
errors that pop up otherwise when contravariant inferences are treated
as covariant ones.

* Seemingly free type parameters that are outerTypeParameters of the
call being inferred are ignored instead of being erased. This is to
avoid erasing type parameters when calling functions inside the body of
a generic function (see the sketch after this summary).

* When choosing an overload with multiple signatures that have
contextually typed arguments, any contextual types that are assigned are
reset when trying a new candidate.

Updated baselines will be committed in the following commit.
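
As a sketch of the outerTypeParameters point above (the function names here are hypothetical, chosen only to illustrate the rule):

declare function identity<T>(x: T): T;

function insideGenericBody<U>(value: U): U {
    // U is an outer type parameter of this call site, so it is not treated as
    // a free type parameter to be promoted or erased; the call still has type U.
    return identity(value);
}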
@kpdonn kpdonn force-pushed the propagateFreeTypeParameters branch from 20264f0 to ccb5dd4 on June 2, 2018 20:33
@clayne11

clayne11 commented Jun 2, 2018

This is such an important missing feature in TS; I really hope this gets picked up.

@falsandtru
Contributor

@sandersn @ahejlsberg Can you take a look?

@KiaraGrouwstra
Contributor

best addition since conditional types! Ramda badly needs this. /cc @ikatyang

@joshburgess

Any updates from the TS team? TypeScript needs this feature.

@ahejlsberg
Member

@kpdonn Thanks for taking the time to experiment with this. It definitely would be nice to be able to infer free type variables, but it is quite complicated to do right, as I'm sure you've discovered. Your particular approach works for some scenarios, but there are several common ones that can really only be solved using a unification algorithm. For example:

declare class List<T> { }  // not part of the original snippet; added only so it is self-contained
declare function compose<A, B, C>(f: (x: A) => B, g: (y: B) => C): (x: A) => C;
declare function arrayToList<T>(x: Array<T>): List<T>;
declare function box<U>(x: U): { value: U };

const f = compose(arrayToList, box);  // Should infer <T>(x: Array<T>) => { value: List<T> }

Ideally this would infer the type <T>(x: Array<T>) => { value: List<T> } for f, but it requires a more capable unification algorithm to do so, along the lines of what @gcnew prototyped. A number of the challenges are outlined in #15016 (comment). It would definitely be complex to integrate unification in a non-breaking manner.

But I totally understand the value of the feature and it is certainly something I continue to think about.
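
(For readers less familiar with the term, the sketch below shows the kind of first-order unification being referred to, reduced to a toy term language. It is purely illustrative and not checker code; the Term and Subst shapes are made up for this example, and the occurs check is omitted for brevity.)

type Term =
    | { kind: "var"; name: string }
    | { kind: "ctor"; name: string; args: Term[] };

type Subst = Map<string, Term>;

// Walk a chain of variable bindings to the most concrete term currently known.
function resolve(t: Term, subst: Subst): Term {
    while (t.kind === "var" && subst.has(t.name)) t = subst.get(t.name)!;
    return t;
}

function bind(name: string, t: Term, subst: Subst): Subst {
    const next = new Map(subst);
    next.set(name, t);
    return next;
}

// Returns an extended substitution if the two terms can be made equal,
// or undefined if they cannot.
function unify(a: Term, b: Term, subst: Subst = new Map()): Subst | undefined {
    a = resolve(a, subst);
    b = resolve(b, subst);
    if (a.kind === "var") return bind(a.name, b, subst);
    if (b.kind === "var") return bind(b.name, a, subst);
    if (a.name !== b.name || a.args.length !== b.args.length) return undefined;
    let s: Subst | undefined = subst;
    for (let i = 0; i < a.args.length && s; i++) {
        s = unify(a.args[i], b.args[i], s);
    }
    return s;
}

// Matching Array<A> (compose's first parameter type) against Array<T>
// (arrayToList's parameter type) binds A to T; the same machinery later keeps
// nested constructors like List<T> intact instead of collapsing them to {}.
const s = unify(
    { kind: "ctor", name: "Array", args: [{ kind: "var", name: "A" }] },
    { kind: "ctor", name: "Array", args: [{ kind: "var", name: "T" }] }
);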

@kujon
Contributor

kujon commented Jul 27, 2018

I'd like to add that your typical FP library is not the only place where the lack of this feature makes things hard. Writing React components with generics, even though it's supported, is pretty much impossible because of this (as soon as you drop your component into a typical HOC, you'll get {} everywhere).
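
A minimal sketch of the situation being described, using plain functions as stand-ins for a generic component and an HOC (the names withLogging and Select are made up for illustration):

declare function withLogging<P>(component: (props: P) => string): (props: P) => string;
declare function Select<T>(props: { items: T[]; onPick(item: T): void }): string;

const WrappedSelect = withLogging(Select);
// Without propagation of free type parameters, T is erased and the wrapped
// component's props become { items: {}[]; onPick(item: {}): void } instead of
// staying generic in T.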

@clayne11

For React, we've moved away from HOCs and toward render props instead, since they play a lot better with TypeScript at the moment.

@RyanCavanaugh RyanCavanaugh added the Experiment label (a fork with an experimental idea which might not make it into master) on Sep 6, 2018
@RyanCavanaugh RyanCavanaugh added this to the Community milestone on Sep 17, 2018
@EvanCarroll

Is this patch dead? Perhaps TS could create some tests they want to see pass, and we can either get the job done or bounty it off?

@nateabele

I realize the degree of complexity involved here, but is there any way we could get an update from a TS core gatekeeper?

Despite the (already considerably high) level of sophistication & expressiveness of the type system, it is still effectively impossible to do real, type-safe FP beyond naïve function composition. A proper implementation of this feature would be a massive step forward.

If we could at least know what the goalposts are for getting this accepted, I'm sure more than a few of us (myself included) would be happy to step up and help, but it's hard to want to put up the effort when it feels like you might just as likely be shouting into the void, so to speak.

Anyway, I appreciate that this is a large open source project with many competing priorities, I appreciate the efforts made on it, and your time and attention. Thanks.

@RyanCavanaugh
Member

@nateabele

If we could at least know what the goalposts are for getting this accepted, I'm sure more than a few of us (myself included) would be happy to step up and help, but it's hard to want to put up the effort when it feels like you might just as likely be shouting into the void, so to speak.

Three things here.

First, we do have far too many open PRs at the moment, and we have "dropped the ball" on following up with community members. I apologize for that, and we're taking steps to address this: all open PRs have been assigned to a core team member for investigation and follow-up. For transparency, this backlog started building while I was on paternity leave, coinciding with two engineers taking other opportunities, and we haven't backfilled those positions yet, so we're quite short-staffed -- but we need to reprioritize here to keep things moving. My goal is that we don't have any actionable community PRs sitting open for more than a few days.

Second, a key aspect of this is "actionable" - we're basically staffed at a level where we can readily accept PRs for bugs and features which are already "approved", meaning that we understand the impact on the language and are comfortable accepting a correct PR implementing that change. We're also barely staffed to work through the current backlog of 1,200 open "Suggestions" before the heat death of the universe. A PR for a language feature with no approved issue tracking it is just not something we're equipped for in terms of process: The design needs to be hammered out and approved before the PR can be looked at, otherwise there are just too many free variables in play. Again for transparency, while there's substantial overlap between design and implementation on the team, in general there are different people and processes addressing design issues vs implementation issues.

This is basically to say that this PR is "outside our process", hence the "Experiment" tag and subsequent lack of responses. Without an issue describing the design being implemented, the only thing we (as code reviewers) can comment on is whether the code is facially correct (not useful), not whether or not we can merge it. IOW, we (as designers) can't take PRs to the design process because it's not useful to just put the code up on screen and reverse-engineer what's happening. A useful next step to make this more actionable would be a fresh issue outlining what proposal is being implemented and how.

Third, the comments above from @ahejlsberg effectively block the design being implemented here. Speaking with him offline, we're in agreement that only a full (or "nearly full") unification algorithm will produce correct inferences in all the cases of interest here. Going back to an issue outlining the proposed design would be a clearer venue for going over various examples and how a potential unification algorithm would handle them.

@sandersn
Member

sandersn commented Feb 3, 2020

@kpdonn, thanks for this prototype. I'm cleaning up our PRs and since the original bug was closed with #30215, I'm going to close this.

We continue to think about how to make inference better, so we would appreciate any more discussion on the topic. Prototypes are good too!

@sandersn sandersn closed this Feb 3, 2020