
Run-time impact #24

Closed
jbmonroe opened this issue Dec 13, 2015 · 6 comments

Comments

@jbmonroe

The proposed solution for functions with more than one required argument looks to me like a serious performance sink, because of the way JavaScript handles inlined functions at run time (at least as I understand it):

let newScore = person.score
  |> double
  |> _ => add(7, _)
  |> _ => boundScore(0, 100, _);

This would work reasonably well in C++, where we can think of it as a form of template metaprogramming that only impacts compile time; in JavaScript there ain't no such animal, and this is going to hurt. My jsPerf testing appears to support the idea that arrow functions are also slower than ordinary functions (http://jsperf.com/interior-function-performance), so unless current implementations get better, I wouldn't sacrifice performance on the altar of enhanced readability (which, to me, is a mileage-may-vary concept at best). [For some reason Opera has arrow functions out-performing the others...but it's not widely used. Ditto Vivaldi.] See http://www.incaseofstairs.com/2015/06/es6-feature-performance/ for further data.

I understand that this situation will not remain static and that we'll probably get better performance in the future. (But maybe we won't. So far forEach() is still a performance sink, and there's been a lot of time to work on that.)
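To make the cost concern concrete, here is a plain-JavaScript sketch of what the piped version above desugars to versus direct calls. `double`, `add`, and `boundScore` are assumed definitions matching the snippet; the piped form allocates a fresh closure per step, the direct form allocates none:

```javascript
// Assumed helper definitions matching the pipeline snippet above.
const double = (n) => n * 2;
const add = (a, b) => a + b;
const boundScore = (min, max, n) => Math.min(max, Math.max(min, n));

// Desugared pipeline: each `|> _ => ...` step is an immediately-invoked
// arrow function, i.e. a closure allocation per step.
const piped = (score) =>
  ((_) => boundScore(0, 100, _))(((_) => add(7, _))(double(score)));

// Equivalent direct calls, with no intermediate closures.
const direct = (score) => boundScore(0, 100, add(7, double(score)));

console.log(piped(40));  // 87
console.log(direct(40)); // 87
```

Both compute the same result; the question raised here is only whether engines make the left form as cheap as the right one.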

@jbmonroe changed the title from "Run-time impact, impact on first-time solution correctness and learning" to "Run-time impact" Dec 13, 2015
@n3dst4

n3dst4 commented Dec 13, 2015

I think it's been mentioned elsewhere, but this is something that can be optimised by the compiler (which does exist, just at runtime). The logic is: if the RHS of a pipeline operator is a unary arrow function, replace the whole expression with the body of the function, substituting the LHS of the pipeline for the argument wherever it appears.

The same optimisation could be done as a Babel transform, too.
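As a rough illustration of the rewrite, here is a toy string-level transform of a single pipeline step. This is not a real Babel plugin (which would operate on the AST, not on source text); it only assumes the `let NAME = LHS |> arg => body;` shape:

```javascript
// Toy source-to-source sketch of the inlining rewrite: turns
//   let NAME = LHS |> ARG => BODY;
// into
//   let ARG = LHS;
//   let NAME = BODY;
// A real Babel transform would do this on the AST instead.
function inlinePipeStep(src) {
  const m = src.match(/^let (\w+) = (.+?) \|> (\w+) => (.+);$/);
  if (!m) return src; // not the shape we handle; leave untouched
  const [, name, lhs, arg, body] = m;
  return `let ${arg} = ${lhs};\nlet ${name} = ${body};`;
}

console.log(inlinePipeStep("let result = calc() |> x => x + x;"));
// let x = calc();
// let result = x + x;
```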

@gilbert
Collaborator

gilbert commented Dec 13, 2015

@n3dst4 is correct. I've even implemented the optimization in the babel-plugin I wrote, and it works in more cases than you might expect.

The optimization is basically this:

// Before:
let result = calc() |> x => x + x;

// After:
let x = calc();
let result = x + x;

And it works regardless of how many arrows you have:

// Before:
let result = calc() |> x => more() |> y => x + y

// Intermediate step:
let x = calc();
let result = more() |> y => x + y

// Final step:
let x = calc();
let y = more();
let result = x + y;

This is a contrived example, but hopefully it shows how easily reducible arrow function usage is :)
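As a sanity check, the before/after forms above can be compared directly in plain JavaScript, with assumed stand-ins for `calc()` and `more()` and the pipeline written out as immediately-invoked arrows:

```javascript
// Assumed stand-ins for calc() and more().
const calc = () => 10;
const more = () => 32;

// Before (desugared): calc() |> x => more() |> y => x + y
const before = ((x) => ((y) => x + y)(more()))(calc());

// After the reduction: flat locals, no closure allocations.
const x = calc();
const y = more();
const after = x + y;

console.log(before, after); // 42 42
```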

@n3dst4

n3dst4 commented Dec 14, 2015

I like how the associativity means you can gather arguments at each step.

@gilbert
Collaborator

gilbert commented Jul 26, 2017

@bterlson @littledan I think it would be good to mention this compile-time optimization with arrow functions at the meeting, if you're not planning to already. The optimization also works when paired with the partial application proposal:

// Before:
let result = calc()
  |> handle(20, ?);

// After:
let x = calc();
let result = handle(20, x);
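In plain JavaScript, with a hypothetical `handle()` and `calc()` for illustration, the two forms are equivalent:

```javascript
// Hypothetical definitions for illustration only.
const handle = (base, n) => base + n;
const calc = () => 22;

// Desugared `calc() |> handle(20, ?)` without the optimization:
// `handle(20, ?)` builds a unary function that is applied to the LHS.
const unoptimized = ((x) => handle(20, x))(calc());

// With the optimization: a temporary plus a direct call.
const tmp = calc();
const optimized = handle(20, tmp);

console.log(unoptimized, optimized); // 42 42
```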

@littledan
Member

I agree; I was just discussing this with @bterlson actually. I think we should look carefully at how optimizable this pattern is.

Patterns like this should be optimizable, and Lisp-based systems have been optimizing them for years. But there are a few things that go into it as it would happen in JavaScript:

  • For an optimizing JIT compiler, if a sufficiently advanced JIT is invoked, I imagine that many engines would already be able to inline the locally created function and escape-analyze it to avoid the allocation.
  • In the baseline/interpreter case, code may run more slowly. This case is still important, e.g., if the pattern is used all over the place it could affect startup cost in code that runs only a few times. Still, in this case, I believe a language frontend/bytecode generator should be able to detect the specific pattern x |> f(?) and turn it into f(x) (with the right evaluation order, and including with additional arguments). It's pretty straightforward to recognize that this pattern doesn't escape.
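A sketch of the evaluation-order constraint in the second bullet, assuming LHS-first semantics: in `lhs() |> f(g(), ?)`, the naive rewrite `f(g(), lhs())` would evaluate `g()` before `lhs()`, so the rewrite has to hoist the LHS into a temporary first.

```javascript
// A log to make evaluation order observable.
const log = [];
const lhs = () => { log.push("lhs"); return 2; };
const g = () => { log.push("g"); return 3; };
const f = (a, b) => a * b;

// Rewrite of `lhs() |> f(g(), ?)` that preserves LHS-first order:
const tmp = lhs();          // LHS evaluated first...
const result = f(g(), tmp); // ...then the RHS call's other arguments.

console.log(result, log.join(",")); // 6 lhs,g
```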

I'd like to have implementations verify, between Stage 3 and Stage 4, that both of these optimization strategies work, so that programmers can be confident that this pattern won't be slow before we add it to the language. If it's not reliably optimizable, we might be better off with the "Elixir option", which doesn't depend on such optimization.

@tabatkins
Collaborator

Closing this issue, as the proposal has advanced to stage 2 with Hack-style syntax. This syntax should be optimizable with little or zero run-time impact.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 11, 2021