
github action optimizations (builds repeated needlessly) #3688

Open
dseurotech opened this issue Jan 11, 2023 · 1 comment · May be fixed by #3787

Comments


dseurotech commented Jan 11, 2023

The current GitHub Actions workflow "kapua-ci.yml" is heavily under-optimized. A full build is performed as a first step; then, for each of the different subtasks, a near-full build is performed again (to recreate the docker images needed for tests), and a further `mvn verify` is run to execute the specific subset of tests. This is horribly inefficient.
A previous commit shows a change in behaviour that was introduced to mitigate an error in docker layer caching:
9b457b1

That commit needs to be partially reverted, at least to regain the ability to build once and reuse the resulting caches (maven artifacts / docker layers).
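As an illustration of the "build once, reuse caches" idea, a minimal sketch using the official actions/cache action to persist the local Maven repository across jobs could look like the following. The job name, Maven goals, and profile are placeholders, not Kapua's actual configuration:

```yaml
# Hypothetical sketch: cache ~/.m2 keyed on the POMs, so the full build runs
# once and later jobs restore its artifacts instead of rebuilding them.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: ~/.m2/repository
          key: maven-${{ hashFiles('**/pom.xml') }}
          restore-keys: maven-
      - run: mvn -B install -DskipTests   # placeholder goals
```

Note that actions/setup-java also offers a built-in `cache: maven` option, which covers the same ground with less configuration.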

The GitHub action originally used for layer caching (satackey/action-docker-layer-caching@v0.0.11) has since been forked and updated; the fork could be worth exploring:
original: https://github.com/marketplace/actions/docker-layer-caching
fork: https://github.com/marketplace/actions/docker-layer-caching2

Alternatively, an approach based on artifacts could be considered: https://github.com/orgs/community/discussions/26723#discussioncomment-3253091
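The artifact-based approach from the linked discussion can be sketched as follows: the build job saves the docker images to a tarball and uploads it as a workflow artifact, and each test job downloads and loads it. Image names, the Maven profile, and artifact names below are placeholders, not Kapua's actual ones:

```yaml
# Hypothetical sketch: share docker images between jobs via workflow artifacts
# instead of a cache, avoiding the cache size limit entirely.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: mvn -B install -DskipTests -Pdocker        # assumed profile that builds the images
      - run: docker save -o kapua-images.tar kapua/console kapua/broker   # placeholder image names
      - uses: actions/upload-artifact@v3
        with:
          name: docker-images
          path: kapua-images.tar
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: docker-images
      - run: docker load -i kapua-images.tar
```

Artifacts are scoped to the workflow run, so concurrent runs cannot evict each other's images, at the cost of upload/download time per job.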

Or maybe just use the built-in cache:
https://stackoverflow.com/questions/71348621/docker-re-use-container-image-by-caching

Also worth a look: https://github.com/marketplace/actions/multi-stage-docker-build


Agnul97 commented Jun 7, 2023

I've created a PR that re-introduces Maven artifact caching along with various tweaks; it's the one linked above (#3785).

I will now create a second PR that introduces docker image caching, but it is important to discuss a limitation that currently blocks its actual adoption in the project. The GitHub documentation on cache usage limits (https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy) states that the maximum cache size allowed by the platform is 10 GB, and this limit appears to be independent of "premium plans". This matters for our case because the combined size of the Kapua docker images exceeds that limit.

That said, in tests performed on my Kapua fork for this PR, I did manage to run a series of workflows that cached those images, so it may be that the service tolerates exceeding the limit for a single workflow run (or for a limited time). The real problem arises when two or more workflows run at once (for example, for two open PRs): I suspect the service will then start evicting caches according to the policy described in the link, interfering with the workflows and, ultimately, with the images the test runs need. In addition, on some rare occasions, some jobs threw a 429 error during the cache retrieval step (perhaps related to this 10 GB limit?).

One alternative that could bypass the cache size limitation would be to cache images via a registry such as Docker Hub (https://docs.docker.com/build/cache/backends/gha/) from within the GitHub workflows. However, this could be very costly in terms of the network activity required to upload/download images (consider that ~30 jobs would each have to download 10 GB of images). The network speed on the GitHub VMs is fast, but this is still a limitation to consider. I will not pursue this alternative route because, honestly, I'm not confident in it.
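For reference, the linked docker docs describe buildx's `gha` cache backend, which stores build layers in the GitHub Actions cache service directly. A minimal sketch, with a placeholder image tag and assuming the standard docker/build-push-action inputs:

```yaml
# Hypothetical sketch of the buildx "gha" cache backend from the linked docs.
# The 10 GB per-repository cache limit discussed above still applies here.
steps:
  - uses: docker/setup-buildx-action@v2
  - uses: docker/build-push-action@v4
    with:
      context: .
      tags: kapua/console:ci            # placeholder tag
      cache-from: type=gha
      cache-to: type=gha,mode=max
      load: true                        # load into the local daemon instead of pushing
```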

To provide a path for future developers interested in this kind of work, I will create that PR in draft mode. Maybe in the future these technical limitations will be lifted, so that the work can be merged.
