
Cache of AST tree by module path #153

Merged 1 commit into python-security:master on Jul 24, 2018

Conversation

@bcaller (Collaborator) commented Jul 24, 2018

The dependency graph of a large Python app can be quite a mess.

Previously, we were regenerating the AST of files many times, as they were
imported by different modules.

Adding a simple LRU cache [path -> AST tree] sped up pyt on one of my
code bases by 5x.

We currently only mutate newly created artificial nodes, so we can store
one copy of each module's AST and expect it to be static.

We can make it even faster by setting maxsize=None, so there is no eviction
logic, but for now I think it's already an improvement.

An alternative could be to rewrite the code which deals with imports.
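
For illustration, a minimal sketch of the idea (the helper name `parse_module` is hypothetical, not pyt's actual API): memoise parsing by path with `functools.lru_cache`, so repeated imports of the same module reuse a single AST.

```python
import ast
from functools import lru_cache


@lru_cache(maxsize=128)  # maxsize=None would drop the eviction logic entirely, as noted above
def parse_module(path):
    """Return the AST for the module at `path`, parsed at most once per path."""
    with open(path) as f:
        return ast.parse(f.read(), filename=path)
```

Callers would need to treat the returned tree as read-only, which matches the observation above that only newly created artificial nodes are mutated.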

@KevinHock KevinHock self-requested a review July 24, 2018 18:15
@KevinHock (Collaborator) commented:

You're the greatest! 💯 ❤️

@KevinHock KevinHock merged commit c6820ff into python-security:master Jul 24, 2018