
Cap root module loading via worker pool #256

Merged: 2 commits merged into master from f-autoloading-cap on Aug 10, 2020

Conversation

radeksimko (Member) commented Aug 10, 2020

This represents a foundation for resolving #186

A capped worker pool is used when loading root modules, which helps spread the CPU usage, flatten the spikes, and therefore improve the UX.
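
The pattern is roughly the following (a minimal, self-contained Go sketch of a capped worker pool, not the actual terraform-ls code; loadModule and the module paths are stand-ins for the real work):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// loadModule is a stand-in for the real per-module work
// (e.g. running `terraform version` and parsing the module).
func loadModule(path string) {
	time.Sleep(100 * time.Millisecond)
	fmt.Println("loaded", path)
}

func main() {
	paths := []string{"mod1", "mod2", "mod3", "mod4", "mod5"}

	poolSize := 3 * runtime.NumCPU() // cap on concurrent loads
	jobs := make(chan string)
	var wg sync.WaitGroup

	// Start a fixed number of workers; at most poolSize loads run at
	// once, which spreads CPU usage over time instead of spiking it.
	for i := 0; i < poolSize; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range jobs {
				loadModule(p)
			}
		}()
	}

	for _, p := range paths {
		jobs <- p
	}
	close(jobs)
	wg.Wait()
}
```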

Charts generated via https://github.com/mkevac/debugcharts:

- CPU before: plot-cpu-without-cap
- CPU after: plot-cpu-with-cap
- pprof before: plot-pprof-without-cap
- pprof after: plot-pprof-with-cap

Deferred ideas

We can also make parallelism configurable, if deemed necessary, but I think that can be discussed and addressed separately.

radeksimko added the enhancement label Aug 10, 2020
radeksimko requested a review from a team Aug 10, 2020 13:08
radeksimko added this to the v0.6.0 milestone Aug 10, 2020
```
@@ -40,9 +43,14 @@ func NewRootModuleManager(fs tfconfig.FS) RootModuleManager {

func newRootModuleManager(fs tfconfig.FS) *rootModuleManager {
	d := &discovery.Discovery{}

	defaultSize := 3 * runtime.NumCPU()
```
A Contributor commented:

3 is the magic number I'm guessing?

radeksimko (Member, Author) replied:

3 is basically an educated guess. I did some simple testing, as shown above, with 1000 root modules in a hierarchy (which seemed to be the higher end of the range reported by folks in #186). That effectively triggers terraform version 1000 times; I ran it on my i7 MacBook with 4 cores (= pool size 12) and it finished within a few seconds.

I don't expect this to be the best configuration, nor the best way of testing it, but I do expect it to behave better than before, simply because in principle we trade time and memory for less CPU.

I do expect we will tweak the number in the near future, based on feedback from end users and further (more scoped) testing. This is also why I added the logging.

radeksimko (Member, Author) added:

Generally, performance will largely depend on the time it takes to finish loading. After terraform version, that mostly comes down to the size of the schema: the exec time of terraform providers schema -json alone may span from half a second to 10+ seconds in some worst-case scenarios, so there is no easy answer to this, I'm afraid.

We can, however, make it configurable and let users deal with edge cases (too slow a CPU and/or too many modules) that way. It would be even better to avoid calling terraform version where we don't need to, as discussed in #186 (comment).
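
If parallelism were made configurable, it could look something like the sketch below; the TF_LS_MODULE_LOAD_PARALLELISM environment variable and the poolSize helper are hypothetical names used purely for illustration, not actual terraform-ls settings:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
)

// poolSize returns the cap on concurrent root module loads.
// The default mirrors the 3 * runtime.NumCPU() value from this PR;
// the environment variable override is illustrative only.
func poolSize() int {
	defaultSize := 3 * runtime.NumCPU()
	if v := os.Getenv("TF_LS_MODULE_LOAD_PARALLELISM"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return defaultSize
}

func main() {
	fmt.Println("worker pool size:", poolSize())
}
```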

radeksimko merged commit ef82870 into master Aug 10, 2020
radeksimko deleted the f-autoloading-cap branch Aug 10, 2020 15:21
ghost commented Sep 9, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the context necessary to investigate further.

ghost locked and limited conversation to collaborators Sep 9, 2020