Performance improvement for large data #146

Merged
3 commits merged into pantor:master from craigpepper:loop_context on Jun 16, 2020

Conversation

craigpepper
Contributor

The current implementation performs a deep copy of Renderer::m_data every time there is a loop statement in the template. This copying becomes a significant overhead with large input data and complex templates.
This change introduces Renderer::m_loop_data which is used to track the loop context and avoids the deep copy.
The updated benchmark shows no performance difference between the small and large input data. Run against the original code, the same benchmark takes 10x longer to render the large data than the small data. In real-world scenarios I have observed up to a 75x performance improvement from this change.
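The diff itself is not reproduced here, but the idea can be illustrated with a minimal sketch. The `Renderer` class below is hypothetical and heavily simplified: only the member names `m_data` and `m_loop_data` come from this description, and the `enter_loop`/`resolve` helpers are illustrative, assuming inja's `nlohmann::json` data model. Loop state goes into a small side object that is consulted before the untouched input data, instead of deep-copying the full input on every loop statement.

```cpp
#include <string>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Hypothetical sketch, not the actual inja code: only m_data and m_loop_data
// are named in the PR description; everything else is illustrative.
class Renderer {
public:
  explicit Renderer(const json& data) : m_data(&data) {}

  // Entering a loop only writes into the small loop-context object ...
  void enter_loop(const std::string& var, const json& value, size_t index) {
    m_loop_data[var] = value;
    m_loop_data["loop"] = {{"index", index}};
  }

  // ... and variable lookup checks the loop context first, then falls back
  // to the original, never-copied input data.
  const json* resolve(const std::string& name) const {
    if (m_loop_data.contains(name)) return &m_loop_data.at(name);
    if (m_data != nullptr && m_data->contains(name)) return &m_data->at(name);
    return nullptr;
  }

private:
  const json* m_data = nullptr;       // template input data, never deep-copied
  json m_loop_data = json::object();  // only the current loop variables
};
```

With this layout the per-iteration cost is proportional to the size of the loop state rather than to the size of the whole input, which is consistent with the benchmark gap between small and large inputs disappearing.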

@pantor pantor merged commit c85f9a3 into pantor:master Jun 16, 2020
@craigpepper craigpepper deleted the loop_context branch June 16, 2020 20:21