
Support docker multi architecture builds #379

Closed
wants to merge 1 commit into from

Conversation

@LeszekBlazewski commented Jan 31, 2022

As described in issue #375.

Just expanding the platforms on the build-and-push step should be enough. The change I made will increase the build time a little, but wetty is a pretty small app, so builds shouldn't take long once the cache is present. I included the most used architectures, since there probably won't be many users looking for the rest.

I tested the build locally; the base image in your Dockerfile supports all the listed architectures, so the pipeline should run correctly.

Here, after inspecting the logs for the QEMU step, we can see the available architectures: https://github.com/butlerx/wetty/runs/5005410338?check_suite_focus=true

Here are the docs on how to build for multiple platforms: https://github.com/docker/build-push-action/blob/master/docs/advanced/multi-platform.md

Here are more details on the platforms input:
https://github.com/docker/buildx/blob/master/docs/reference/buildx_build.md#platform
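For reference, the change boils down to something like the following workflow fragment, based on the docker/build-push-action docs linked above. The step names, action versions, and image tag are illustrative, not necessarily wetty's actual workflow:

```yaml
# Sketch of a multi-arch build-and-push job step, assuming the workflow
# already uses docker/build-push-action. QEMU and Buildx must be set up
# first so non-native platforms can be emulated.
- name: Set up QEMU
  uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v1
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    push: true
    # The platforms line is the core of the change.
    platforms: linux/amd64,linux/arm64,linux/arm/v7
    tags: wettyoss/wetty:latest
```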

Great app, let me know if you have any questions!

@butlerx (Owner) left a comment

Thanks for all the info, very interesting.

Once the run finishes successfully I'll merge to main.

@butlerx (Owner) commented Jan 31, 2022

All the arm builds appear to be stalling at yarn with the network error: `There appears to be trouble with your network connection. Retrying...`

@LeszekBlazewski (Author) commented Jan 31, 2022

Sorry for the force push, was testing stuff. Oh boy, the download issues strike again 😢

I didn't see this coming, since I built wetty for different architectures on my local Mac, which does not have the problem described here: nodejs/docker-node#1335.

tl;dr -> QEMU builds emulate the whole target architecture: processor instructions have to be translated constantly between our native builder (an Ubuntu GitHub runner on amd64) and the target (arm), which results in significantly longer build times. On top of that, installing node packages means downloading many small files, which increases the build time further and surfaces as the network error on the other arch builds (it simply takes yarn too long to download, and that shows up as a network error).

The issue is being worked on, but for now sadly the only solution is to increase the timeout by passing --network-timeout 1000000 to the first yarn command in the builder stage of the Dockerfile.
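Concretely, the builder stage would change along these lines. The base image and file layout here are illustrative, not necessarily wetty's actual Dockerfile:

```dockerfile
# Sketch of the timeout workaround: raise yarn's per-request network
# timeout so slow downloads under QEMU emulation don't surface as
# "trouble with your network connection" errors.
FROM node:lts-alpine AS builder
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --network-timeout 1000000
```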

I ran some benchmarks with that change on my local fork with GitHub Actions, and here are the results:

(screenshot: GitHub Actions build times per architecture)

As you can see, the other architecture builds take around 7 minutes each, about 20 minutes in total. Compared to the ~1.5 minute build you have right now, this is a big overhead. Sadly, there is not much we can do about it.

The other solution that came to mind is cross-compiling the node packages with yarn for the different architectures in a build stage that runs on the native architecture. I'm not sure this will work, since it all depends on the dependencies the wetty project uses, but I might try to work on it in some spare time. Here is a detailed guide on the approach: https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/.
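The idea from that guide looks roughly like this. Stage names, base images, and paths are illustrative, and this is only a sketch, not wetty's actual Dockerfile:

```dockerfile
# Sketch of the cross-compilation approach: pin the build stage to the
# runner's native architecture with --platform=$BUILDPLATFORM, so yarn
# runs at full speed instead of under QEMU emulation.
FROM --platform=$BUILDPLATFORM node:lts-alpine AS build
ARG TARGETPLATFORM
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# Only this final stage runs (emulated) on the target architecture,
# and it only copies files, so it stays fast.
FROM node:lts-alpine
WORKDIR /usr/src/app
COPY --from=build /usr/src/app ./
CMD ["yarn", "start"]
```

The caveat is exactly the dependency concern above: if any dependency ships native addons, node_modules built on amd64 won't run on arm, so those packages would need per-target handling.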

So, all in all, I won't be sad if you don't accept the changes proposed in this PR because of the long build times. Out of curiosity I will hopefully try to make the second approach work in some spare time and come back here with the results 😄

PS

RUN yarn && \
    yarn build && \
    yarn install --production --ignore-scripts --prefer-offline

Does this really need to run yarn so many times? As I understand it, the first yarn installs all packages, including snowpack, which is used by the yarn build command, and then only the prod modules are installed. Couldn't this be done the other way around, calling the last yarn install first (which would also install snowpack) and then triggering yarn build? I didn't check, but aren't the dev dependencies currently still present in the node_modules directory when it is copied to the prod image?
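One way to guarantee dev dependencies never reach the prod image is to install the production modules in a clean stage instead of pruning in place. This is a hypothetical restructuring, with stage names and paths made up for illustration:

```dockerfile
# Sketch: build with full deps in one stage, install prod-only deps from
# scratch in another, and copy only the clean node_modules into the
# final image.
FROM node:lts-alpine AS builder
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile        # dev + prod deps for the build
COPY . .
RUN yarn build

FROM node:lts-alpine AS prod-deps
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn install --production --ignore-scripts   # prod deps only, clean tree

FROM node:lts-alpine
WORKDIR /usr/src/app
COPY --from=prod-deps /usr/src/app/node_modules ./node_modules
COPY --from=builder /usr/src/app/build ./build
CMD ["node", "build"]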

PS2

Your docker build is already fast, but since you already use the build-push step, you can take a look at the available caching mechanisms to make it even faster (more info in the build-push-action docs). It might also be worth checking how adding a cache affects the multi-arch build time. I will check this in some spare time as well.
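For example, buildx supports the GitHub Actions cache backend directly. This fragment is a sketch based on the build-push-action caching docs; the action version is illustrative:

```yaml
# Sketch: persist buildx layer cache in the GitHub Actions cache so
# repeat builds (including multi-arch ones) can reuse layers.
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    push: true
    cache-from: type=gha
    cache-to: type=gha,mode=max
```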

So for now, could you maybe mark this PR as blocked? When I find time, I will update it with the relevant code and info 😃

Thanks for the great app 🚀

@butlerx (Owner) commented Mar 3, 2022

You appear to be correct, the last yarn install doesn't appear to clean up the dev dependencies.

The intention is to remove dev dependencies from prod containers, but that's clearly not working.

Thanks for the work and effort on this.
