
Add cache for bazel to prove why bazel rocks #505

Merged
lalitb merged 4 commits into open-telemetry:master from bazelcache on Jan 6, 2021

Conversation

@bogdandrutu (Member) commented on Jan 5, 2021

Some initial numbers.

Before:
[screenshot: CI build times before the Bazel cache, taken Jan 5, 2021, 11:05 AM]

After:
[screenshot: CI build times after the Bazel cache, taken Jan 5, 2021, 11:24 AM]

Signed-off-by: Bogdan Drutu bogdandrutu@gmail.com
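
For readers following along, here is a minimal sketch of the kind of caching step this PR introduces. The cache path, key scheme, and action version are illustrative assumptions, not necessarily what the PR itself uses:

```yaml
# Illustrative sketch only: cache Bazel's local cache directory between runs.
- name: Cache Bazel
  uses: actions/cache@v2
  with:
    path: /home/runner/.cache/bazel
    # Key on the files that define the build graph, so the cache is
    # refreshed when the dependency set changes.
    key: bazel-${{ runner.os }}-${{ hashFiles('WORKSPACE', '**/BUILD') }}
    restore-keys: |
      bazel-${{ runner.os }}-
```

With a warm cache, Bazel re-validates its action cache instead of rebuilding from scratch, which is what the before/after numbers above are measuring.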

@bogdandrutu requested a review from a team on January 5, 2021 18:46
@bogdandrutu (Member, Author) commented:

Please do not press "Update branch"; I want to do that myself as a second round, to demonstrate the benefit of the cache :)

@codecov (bot) commented on Jan 5, 2021

Codecov Report

Merging #505 (af52891) into master (e8ef7ea) will increase coverage by 0.01%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #505      +/-   ##
==========================================
+ Coverage   94.36%   94.37%   +0.01%     
==========================================
  Files         189      189              
  Lines        8410     8410              
==========================================
+ Hits         7936     7937       +1     
+ Misses        474      473       -1     
Impacted Files | Coverage Δ
--- | ---
sdk/src/logs/batch_log_processor.cc | 95.06% <0.00%> (+1.23%) ⬆️

@bogdandrutu force-pushed the bazelcache branch 3 times, most recently from 0814e98 to 5ee3094, on January 5, 2021 19:42
@maxgolov (Contributor) commented on Jan 5, 2021

@bogdandrutu - What is the maximum amount of cached artifacts for the cache action, 5 GB? From the Cache Limits docs: "A repository can have up to 5GB of caches. Once the 5GB limit is reached, older caches will be evicted..." I think we need to assume that the other parts of the build could also be cached in that shared cache.

Do you know roughly how much space the Bazel build consumes? The vcpkg build with all deps, unfortunately, consumes about 8 GB on Windows, which exceeds the maximum allowed cache size.
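
A throwaway diagnostic step can answer the size question empirically; a sketch, assuming Bazel's default output user root under ~/.cache/bazel on the Linux runners:

```yaml
# Illustrative sketch: report how much of the 5 GB repository cache budget
# the Bazel output tree actually consumes on this runner.
- name: Report Bazel cache size
  run: du -sh ~/.cache/bazel || true
```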

@maxgolov (Contributor) commented on Jan 5, 2021

What I am trying to say is that once we start caching for all build loops (not just for Bazel), the cumulative net benefit will be diminished.

@jsuereth (Contributor) commented on Jan 5, 2021

Hey, this gives us more time to run benchmarks!

@bogdandrutu (Member, Author) commented:

@maxgolov right now it is ~1 GB per Bazel job, but with this change I think we can actually combine multiple jobs, and the total cache size will be lower.
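
One way to combine jobs, sketched under the assumption that actions/cache is in use: give every Bazel job the same restore-key prefix, so a cache saved by one job can seed the others:

```yaml
# Illustrative sketch: each job saves under its own exact key but can restore
# any sibling job's cache via the shared prefix, so work is not duplicated.
- uses: actions/cache@v2
  with:
    path: /home/runner/.cache/bazel
    key: bazel-${{ runner.os }}-${{ github.job }}-${{ github.sha }}
    restore-keys: |
      bazel-${{ runner.os }}-
```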

@maxgolov (Contributor) commented on Jan 5, 2021

@bogdandrutu - 1 GB is very good. I can apply similar logic for CMake and add Ninja as the default generator for CMake builds. There are a few pieces we can dramatically speed up, gRPC being the heaviest hitter (slowest to build) right now. We could build it with vcpkg and cache the entire (or most of the) vcpkg package cache, thus bringing CMake build times down to what you are quoting for Bazel.
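
A sketch of what that could look like on the CMake side, assuming vcpkg is checked out in the workspace and Ninja is available on the runner (paths and cache key are illustrative):

```yaml
# Illustrative sketch: cache the vcpkg artifacts that are slow to build
# (gRPC above all), then configure CMake with Ninja as the generator.
- name: Cache vcpkg packages
  uses: actions/cache@v2
  with:
    path: |
      vcpkg/installed
      vcpkg/packages
    key: vcpkg-${{ runner.os }}-${{ hashFiles('vcpkg/ports/**') }}
- name: Configure with Ninja
  run: cmake -G Ninja -B build -S .
```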

@ThomsonTan (Contributor) commented:

> @bogdandrutu - 1 GB is very good. I can apply similar logic for CMake and add Ninja as the default generator for CMake builds. […]

As you mentioned, gRPC installation with vcpkg could take much more than 5 GB. Also, vcpkg builds the debug and release flavors at the same time; perhaps we could separate them into different builds.
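
vcpkg can also be told to skip the debug flavor entirely via a custom triplet, which roughly halves the installed size; a sketch (the triplet name is made up, and the exact settings would need checking against our targets):

```yaml
# Illustrative sketch: a release-only triplet. VCPKG_BUILD_TYPE=release makes
# vcpkg skip the debug flavor, roughly halving what ends up in the cache.
- name: Install gRPC (release only)
  shell: bash
  run: |
    cat > vcpkg/triplets/x64-windows-release-only.cmake <<'EOF'
    set(VCPKG_TARGET_ARCHITECTURE x64)
    set(VCPKG_CRT_LINKAGE dynamic)
    set(VCPKG_LIBRARY_LINKAGE static)
    set(VCPKG_BUILD_TYPE release)
    EOF
    ./vcpkg/vcpkg install grpc --triplet x64-windows-release-only
```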

@maxgolov (Contributor) commented on Jan 6, 2021

> As you mentioned, gRPC installation with vcpkg could take much more than 5 GB. Also, vcpkg builds the debug and release flavors at the same time; perhaps we could separate them into different builds.

I'm almost thinking we now have to build and cache a Docker image nightly, or a similar setup that would preserve the prebuilt deps in the GitHub Actions cache. This should include a prebuilt fresh CMake 3.1x (for older distros), Google Test, Google Framework, and gRPC. We should be able to build the image on Linux, cache the image (e.g. using this approach: https://github.com/marketplace/actions/docker-layer-caching), then run docker-in-docker on Linux.

For Windows / vcpkg: we need to see which tools we can fetch vs. which to cache using the GitHub cache action. The huge size of the vcpkg package install dir is indeed an issue.
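
If we go the nightly-image route, the marketplace action linked above would slot in before the image build; a sketch (the action version, image tag, and Dockerfile path are illustrative):

```yaml
# Illustrative sketch: restore cached Docker layers before the build and
# save new layers afterwards, using the marketplace action linked above.
- uses: satackey/action-docker-layer-caching@v0.0.11
  # A cache miss or save failure should not fail the build itself.
  continue-on-error: true
- name: Build CI image
  run: docker build -t otel-cpp-ci -f ci/Dockerfile .
```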

@lalitb (Member) commented on Jan 6, 2021

@bogdandrutu - Sorry to ask, but would you be able to resolve the conflict so that we can merge this into master?

@lalitb (Member) commented on Jan 6, 2021

> I'm almost thinking we now have to build and cache a Docker image nightly, or a similar setup that would preserve the prebuilt deps in the GitHub Actions cache. […]

Agreed, it would be good to test how Docker layer caching performs here and switch to it if we see a benefit. @maxgolov has checked in Dockerfiles, so these can be reused.

@lalitb merged commit ff797b5 into open-telemetry:master on Jan 6, 2021
kxyr pushed a commit to open-o11y/opentelemetry-cpp that referenced this pull request Jan 8, 2021