making fast AST rewriter faster #7726
Conversation
Looking good. Reviewed during pairing this morning. 👍
May I request a detailed PR description?
Which parts were slow, why, and how did you improve them?
Something along the lines of #7710 would be nice. That way everyone can understand what was done, and we can convert this into a blog post in the performance series we are planning.
Very nice description. Thank you @harshit-gangal and @systay!
The work started with #7716 to make the AST rewriter faster. That achieved a 20% improvement.
While doing the first part of the optimisation, we removed the use of `panic`/`recover` and made the rewriter methods return an explicit error instead. In this PR, we changed our minds, and now use `panic` without a `recover` to signal a bug in the rewriter framework. This is still safe to do, since Vitess recovers higher up in the stack anyway.

By this time, @vmg pointed us to the fact that we had a couple of variables that were escaping the methods they lived in and were being moved to the heap. After fixing that, we got some splendid numbers.
These optimisations resulted in an improvement of more than 17% in CPU time and 15% in memory allocations.
For future reference:
We can look at the compiler output to spot problems like these. In the `go/tools/asthelpergen/integration` directory, you can run the compiler with escape-analysis diagnostics enabled and then search the output for these kinds of variables; a sketch of the commands follows.
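The exact invocation isn't reproduced here; a typical way to get this information out of the Go toolchain (an assumption, not necessarily the commands used for this PR) is:

```sh
# Print the compiler's escape-analysis decisions; the diagnostics go to stderr.
go build -gcflags='-m' . 2> escape.txt

# List the variables the compiler decided must live on the heap.
grep 'moved to heap' escape.txt
```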
This will show that a bunch of `out` variables are moved to the heap; those are expected to be moved to the heap. But we also see a few lines like the ones sketched below, for variables that were escaping their blocks and needed to live on the heap.
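The actual output lines aren't reproduced here; escape-analysis diagnostics from the Go compiler for such variables generally look like this (the file name, line numbers, and columns below are made up purely for illustration):

```text
./ast_rewrite_gen.go:318:7: moved to heap: err
./ast_rewrite_gen.go:503:6: moved to heap: i
```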
`err` was easy to fix: we are no longer returning errors, so we don't need this variable any more. The `i`, however, was a little trickier to see. The old code captured the loop index in a closure; to fix it, we changed the code so that the index is passed through a function argument instead. A sketch of the before-and-after pattern is shown below.
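The actual generated Vitess code isn't shown here; the following is a minimal, hypothetical Go sketch of the pattern being described, with made-up names (`node`, `rewriteWithClosure`, `rewriteWithArg`):

```go
package main

import "fmt"

// node stands in for an AST node with a slice of children; all names here are
// made up for illustration, not the actual generated rewriter code.
type node struct{ children []int }

// Before: the per-child replacer closes over the loop variable i, which can
// force i onto the heap and costs a closure allocation on every iteration.
func rewriteWithClosure(n *node, visit func(child int, replace func(newVal int))) {
	for i, c := range n.children {
		visit(c, func(newVal int) {
			n.children[i] = newVal // i is captured by the closure
		})
	}
}

// After: the index travels as an ordinary argument, so the single replacer
// closes over nothing that changes per iteration and no per-iteration
// closure allocation is needed.
func rewriteWithArg(n *node, visit func(child, idx int, replace func(idx, newVal int))) {
	replace := func(idx, newVal int) { n.children[idx] = newVal }
	for i, c := range n.children {
		visit(c, i, replace)
	}
}

func main() {
	n := &node{children: []int{1, 2, 3}}
	rewriteWithClosure(n, func(child int, replace func(int)) { replace(child * 10) })
	rewriteWithArg(n, func(child, idx int, replace func(int, int)) { replace(idx, child+1) })
	fmt.Println(n.children) // [11 21 31]
}
```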
The difference is that before, `i` was being captured by a closure, and that is costly. By sending the index through a function argument, nothing is being closed over and we save on memory allocations.

Like always with performance optimisations, it's not intuitive that one version is faster than the other. The second version looks like it will need to create a function object for every call to `rewriteAST`, which sounds slow. Instead, we get a massive performance boost. ¯\_(ツ)_/¯
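One way to check a claim like this is to measure allocations with Go's benchmarking support. This hypothetical benchmark reuses the `node`, `rewriteWithClosure`, and `rewriteWithArg` definitions from the sketch above and is not part of the Vitess test suite:

```go
package main

import "testing"

// Run with: go test -bench=Rewrite -benchmem
// b.ReportAllocs makes the per-operation allocation count show up in the output,
// which is where the closure-per-iteration cost becomes visible.

func BenchmarkRewriteWithClosure(b *testing.B) {
	n := &node{children: make([]int, 64)}
	visit := func(child int, replace func(int)) { replace(child) }
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		rewriteWithClosure(n, visit)
	}
}

func BenchmarkRewriteWithArg(b *testing.B) {
	n := &node{children: make([]int, 64)}
	visit := func(child, idx int, replace func(int, int)) { replace(idx, child) }
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		rewriteWithArg(n, visit)
	}
}
```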