
Performance of validate, in the context of graphql(_sync) #116


Closed
alexchamberlain opened this issue Nov 9, 2020 · 5 comments
Labels: investigate (Needs investigation or experimentation)

alexchamberlain commented Nov 9, 2020

I've benchmarked a few of our use cases, and up to 85% of the runtime is spent in validation.validate. As far as I can tell, this hasn't been discussed in the issue tracker before, so I'm opening this issue to ask whether it's a known problem.

I'll follow up with a PR that demonstrates the issue and includes some improvements, though it won't be production quality at this stage.
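
For context, here is a rough sketch of how such a split can be measured with graphql-core's public API; the schema and query are placeholders, not the actual use cases behind the 85% figure, so the ratio on this toy example will differ from a real workload:

```python
# Minimal illustration: compare the cost of validate() alone against a full
# graphql_sync() call. Schema and query are stand-ins for the real workload.
import timeit

from graphql import build_schema, graphql_sync, parse, validate

schema = build_schema("""
    type Query {
        hello: String
    }
""")
query = "{ hello }"
document = parse(query)

n = 5_000
total = timeit.timeit(lambda: graphql_sync(schema, query), number=n)
validation = timeit.timeit(lambda: validate(schema, document), number=n)
print(f"validate() takes roughly {validation / total:.0%} of a graphql_sync() call")
```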

Cito (Member) commented Nov 9, 2020

Hi @alexchamberlain. Not sure if we have explicitly discussed this already. One recurring question is when and how to skip the validation step and avoid validating the same query multiple times (#70 is related to that).
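
As an illustration of that kind of reuse (a hedged sketch only; the helper names are invented for this example and are not part of graphql-core), the parsed document and validation result can be cached per query string so the same query is never validated twice:

```python
# Illustrative only: cache parse + validate results per query string.
from functools import lru_cache

from graphql import build_schema, execute, parse, validate

schema = build_schema("""
    type Query {
        hello: String
    }
""")

@lru_cache(maxsize=1024)
def parse_and_validate(query: str):
    # Parse and validate once per distinct query string.
    document = parse(query)
    return document, tuple(validate(schema, document))

def run(query: str):
    document, errors = parse_and_validate(query)
    if errors:
        return list(errors)  # report validation errors without executing
    # With purely synchronous resolvers, execute() returns an ExecutionResult.
    return execute(schema, document)
```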

I welcome PRs that try to improve performance. However, the code should not deviate too much from the original code in GraphQL.js, since that would make it difficult to merge future changes from GraphQL.js back here. If you want to introduce larger algorithmic changes, it may make more sense to suggest them on the GraphQL.js issue tracker, implement them in JavaScript first, and then port them back here.

alexchamberlain (Author) commented

Thanks @Cito

I've opened #117 with the benchmark script I used. The main change is to memoise the lookup of visit functions, which I think is more of a Python-specific optimisation.
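
To illustrate the idea (this is a rough sketch, not the code in #117, and the names are assumptions rather than graphql-core internals): the enter/leave method lookup can be resolved once per visitor class and node kind instead of repeating the getattr chain for every AST node visited.

```python
# Sketch of memoising visit-function lookup per (visitor class, node kind).
from functools import lru_cache
from typing import Callable, Optional

@lru_cache(maxsize=None)
def get_visit_fn(visitor_cls: type, kind: str, is_leaving: bool) -> Optional[Callable]:
    # Resolve "enter_<kind>" / "leave_<kind>" once, falling back to the
    # generic "enter" / "leave" method if no kind-specific one exists.
    prefix = "leave" if is_leaving else "enter"
    return getattr(visitor_cls, f"{prefix}_{kind}", None) or getattr(visitor_cls, prefix, None)
```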

Cito (Member) commented Nov 9, 2020

Thanks. I will look into this when I find some time. Please note that there are already some benchmark scripts; it would make sense to add another one that explicitly focuses on validation.

Cito self-assigned this Nov 9, 2020
Cito added the investigate (Needs investigation or experimentation) label Nov 9, 2020
alexchamberlain (Author) commented

Ah, sorry, I completely missed that! I can look at refactoring my benchmarks into those if you like.

Cito (Member) commented Dec 11, 2021

Sorry for leaving this open for so long. I think this has now been addressed and resolved in ac3caf6 and #150.

Cito closed this as completed Dec 11, 2021