No mutate context when passing errors around #1156

Closed

Conversation

@mohawk2 (Contributor) commented Dec 16, 2017

Currently, errors are captured by mutating the ExecutionContext. This PR changes the passage of data and errors to use the same scheme as the final ExecutionResult: an object with keys data and errors. This also turns the "context" into a read-only entity.

As a bonus, there is also a MaybePromise<T> type, to avoid constantly declaring Promise<some stuff> | some stuff.
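For illustration, the helper is essentially a one-line Flow alias along these lines (a sketch of the idea, not necessarily the exact definition in the diff):

type MaybePromise<T> = Promise<T> | T;

// Before: every signature spells out the union.
//   function completeValue(...): Promise<mixed> | mixed
// After: the intent reads directly.
//   function completeValue(...): MaybePromise<mixed>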

The first commit ("fix bogus lint errors") may not be necessary, as I don't really know how to operate eslint. Please advise if you want it zapped. EDIT: it turns out it did in fact make Travis fail; I dropped it and now the CI passes.

@leebyron (Contributor) commented:

Could you share a little more about the intent? I'm concerned this adds some complexity and might have a performance cost due to the additional object allocations and merge steps; it's not immediately clear to me what new capability this seeks to enable.

@mohawk2 (Contributor, Author) commented Dec 17, 2017

Sure! The main idea is to achieve actual immutability of all the data structures. The only one that gets mutated currently is exeContext. That will eliminate side-effects and render (as far as I can tell) the whole of execute.js pure.
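To sketch the shape of the change (illustrative only; the function names and signatures here stand in for the real ones in execute.js): instead of pushing errors onto the shared context, each completion step returns its errors alongside its data, and the caller merges them.

// Current: errors escape via mutation of the shared execution context:
//   exeContext.errors.push(error);
// Proposed (sketch): each step returns data and errors together,
// leaving the context read-only.
function completeValueCatchingError(resolveFn) {
  try {
    return { data: resolveFn(), errors: [] };
  } catch (error) {
    return { data: null, errors: [error] };
  }
}

function mergeResults(partials) {
  // Combine sibling field results into one { data, errors } object.
  const data = {};
  const errors = [];
  for (const [name, partial] of Object.entries(partials)) {
    data[name] = partial.data;
    errors.push(...partial.errors);
  }
  return { data, errors };
}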

You raise a great point about the possibility of performance cost. Is there a recognised benchmark you folks use?

@leebyron (Contributor) commented:

I think running the introspection query is a reasonable benchmark test for execution overhead. I'd recommend trying it with a sufficiently complex schema, such that running the introspection query consumes more than a few milliseconds, and using that as a starting point to gather performance numbers.
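For example, a rough harness along these lines (assuming the benchmark npm package; at the time graphql-js exported the introspection query string as introspectionQuery, and someComplexSchema stands in for a sufficiently large schema of your choosing):

const Benchmark = require('benchmark');
const { graphql, introspectionQuery } = require('graphql');
const someComplexSchema = require('./some-complex-schema'); // hypothetical fixture

new Benchmark.Suite()
  .add('introspection query', {
    defer: true,
    // Resolve the deferred once the async execution completes.
    fn: deferred =>
      graphql(someComplexSchema, introspectionQuery).then(() => deferred.resolve()),
  })
  .on('cycle', event => console.log(String(event.target))) // e.g. "introspection query x 24.5 ops/sec"
  .run({ async: true });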

@leebyron (Contributor) commented:

Immutability and pure execution are a great theoretical accomplishment; however, I'd only like to merge this if we find real benefit from those properties in both simplification (where this currently feels like a regression) and performance (I'll wait for your benchmarking).

@mohawk2 (Contributor, Author) commented Dec 17, 2017

@leebyron I'll put together a benchmark. In the meantime, do you feel the MaybePromise idea is worth pulling out into a separate PR? It does rather beautifully remove a number of repetitive declarations.

@mohawk2 (Contributor, Author) commented Dec 17, 2017

@leebyron Good instincts! The benchmark I've submitted as #1163 produces about 24.5 ops/sec on my laptop. Cherry-picking it onto this branch now gives about 19.7 ops/sec, so there's clearly an impact on performance.

I believe that both the complexity you mention and the performance issue can be solved by not generating ExecutionPartialResult all the way down. Since only completeValueCatchingError and executeOperation actually catch and log errors, only functions above their level would need to produce EPRs. I will explore this.

@@ -108,6 +107,23 @@ export type ExecutionResult = {
data?: ObjMap<mixed>,
};

/**
* A partial result of GraphQL execution. `data` and `errors` as above, but
* tuple to avoid allocating/destroying objects. "NP" = "Not Promise".
A reviewer commented on this hunk:
Not sure if it matters (requires profiling) but a tuple is just an array, so will also incur allocation & GC.

@mohawk2 (Contributor, Author) replied:

True. This was an experiment to see whether objects were more expensive than tuples, and the difference (according to my benchmark) is negligible!
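For concreteness, the two shapes being compared are roughly these (names illustrative, not taken from the diff; GraphQLError is the library's error type):

import type { GraphQLError } from 'graphql';

// Object form: named fields, self-describing at call sites.
type PartialResultObject = { data: mixed, errors: Array<GraphQLError> };

// Tuple form: the same two pieces as a fixed-length array.
type PartialResultTuple = [mixed, Array<GraphQLError>];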

@leebyron (Contributor) commented:

I really like this latest version - it definitely has reduced complexity and I can see how the purity could help us out in the future.

If objects and tuples don't show a performance difference then I would prefer the use of objects since they're easier to understand in the codebase. I would imagine that JS VMs would quickly treat both cases as an optimized hidden class, so it's not too surprising to me that there's no visible performance difference between the two.

Another approach you should consider testing the performance of is to have a special class instance for return values with errors:

// Marker class: a completed value that carries its errors alongside its data.
class ExecutionResult {
  data: mixed;
  errors: $ReadOnlyArray<GraphQLError>;

  constructor(data, errors) {
    this.data = data;
    this.errors = errors;
  }
}

Then, when merging, check for that instance, and continue to return normal values otherwise:

function completeFields(
  results: ObjMap<ExecutionResult | mixed>,
): ExecutionResult | mixed {
  const responseNames = Object.keys(results);
  // Fast path: if no field produced an ExecutionResult, return the map as-is.
  let containsExecutionResult = false;
  for (const responseName of responseNames) {
    if (results[responseName] instanceof ExecutionResult) {
      containsExecutionResult = true;
      break;
    }
  }
  if (!containsExecutionResult) {
    return results;
  }
  // Otherwise unwrap each ExecutionResult, merging data and collecting errors.
  const data = Object.create(null);
  const errors = [];
  for (const responseName of responseNames) {
    if (results[responseName] instanceof ExecutionResult) {
      data[responseName] = results[responseName].data;
      errors.push(...results[responseName].errors);
    } else {
      data[responseName] = results[responseName];
    }
  }
  return new ExecutionResult(data, errors);
}

That would ensure that for the common case where no errors are encountered, no ExecutionResult instances would be created. However, there would still be some additional logic in iterating over lists and objects during completion. I'm curious what you find profiling that approach.

@mohawk2 (Contributor, Author) commented Dec 22, 2017

@leebyron I will look at that very soon! One thing that would really help is if you could take a look at #1163 and #1167 and see which parts of each approach you like. I feel like a standardised benchmark and/or profiler is an idea whose time has arrived. I've had another idea on these, which I'll comment on in the relevant PR.

@leebyron (Contributor) commented:

@mohawk2 are you interested in resurrecting this work? It looks like there are some merge conflicts to address where common utilities were factored out of the executor.

@mohawk2 (Contributor, Author) commented Feb 16, 2018

@leebyron I very much am! I've been busy on the Perl side, but I will come back to this very shortly. Thanks for asking!

@leebyron (Contributor) commented:

I think I terribly broke something :( I did not mean to close this PR; I tried pushing an update to it.

@mohawk2 I have a copy of your PR locally and have been iterating on it, but you may need to reintroduce it.
