
reduce OpenWrt build times #137

Closed
paidforby opened this issue May 17, 2018 · 6 comments
@paidforby

Currently, building just the firmware for the N600 takes upwards of an hour (see the full build report below). To reduce the build time, we should figure out how to stop rebuilding the OpenWrt toolchain every time we run a build. I'm guessing the solution is to be found somewhere in the OpenWrt config file created by the build script here.

On my personal computer (a five-year-old laptop with an i5-3210M CPU @ 2.50GHz and 12 GB of DDR3 RAM), the build ends with:

+ echo 'Building [ar71xx] done.'
Building [ar71xx] done.
+ echo 'Building firmware for [ar71xx] done.'
Building firmware for [ar71xx] done.

real	83m12.340s
user	61m5.468s
sys	10m7.264s
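
One way to test that hypothesis would be to only run the full build when no previous builder directory exists, and otherwise just re-run make inside it. This is a rough sketch, not the actual build script: the staging_dir check, the builder directory path (taken from later in this thread), and the build_pre/build_only invocation are all assumptions.

# Hypothetical sketch: reuse an existing builder directory instead of rebuilding the toolchain.
BUILDER_DIR=built_firmware/builder.ar71xx
if [ -d "$BUILDER_DIR/staging_dir" ]; then
    # A previous build already produced the toolchain (assuming the buildroot keeps it
    # under staging_dir); an incremental make only recompiles packages whose sources changed.
    ( cd "$BUILDER_DIR" && time make -j"$(nproc)" )
else
    # No previous build: run the full build from scratch.
    # The exact invocation/arguments of these scripts are an assumption.
    time ./build_pre && time ./build_only
fi
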
@paidforby
Author

While sitting and waiting for test builds to finish for #116, I had this idea: if we could figure out how to run rebuilds correctly, then we could run an entire initial build in a docker container, upload that to Docker Hub, and just perform rebuilds on it. It should be noted that the extender-node firmware takes about half as long to build as the initial home node firmware. I think this is because it is a partial rebuild, so a lot of the toolchain has already been set up. I'm going to test this theory by doing two targeted rebuilds of the firmware: one for the N600 and then one for the N750. I suspect the second build will take substantially less time than the first.
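
For reference, that idea would boil down to something like the following. This is only a sketch: the container and image names are placeholders, and whether running the 0.3.0 image kicks off a full build by default is an assumption.

# Sketch of the "build once, push, rebuild on top" idea. Names are placeholders.
docker run --name initial-build sudomesh/sudowrt-firmware:0.3.0   # run the full initial build once
docker commit initial-build sudomesh/sudowrt-builder:latest       # snapshot the container, toolchain included
docker push sudomesh/sudowrt-builder:latest                       # share the snapshot on Docker Hub
# Later, anyone can pull the snapshot and only rebuild what changed:
docker pull sudomesh/sudowrt-builder:latest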

@paidforby
Author

After talking to @jkilpatr at Our Networks and taking a stab at building a VM of our firmware (related to sudomesh/peoplesopen-dash#5), I've significantly reduced build times. The first build still takes forty-some minutes, but by logging directly into the docker container with

docker run -it --entrypoint=/bin/bash sudomesh/sudowrt-firmware:0.3.0 -i

I can trigger a rebuild like so:

root@a408fdd34ad2:/usr/local/sudowrt-firmware/built_firmware/builder.ar71xx# time make
 make[1] world
 make[2] target/compile
 make[3] -C target/linux compile
 make[2] package/cleanup
 make[2] package/compile
 make[3] -C package/libs/toolchain compile
 make[3] -C package/libs/libnl-tiny compile
 make[3] -C package/libs/libjson-c compile
 make[3] -C package/utils/lua compile
 make[3] -C package/libs/libubox compile
 make[3] -C package/system/ubus compile
 make[3] -C package/system/uci compile
 make[3] -C package/network/config/netifd compile
 make[3] -C package/system/opkg host-compile
 make[3] -C package/system/ubox compile
 make[3] -C package/libs/lzo compile
 make[3] -C package/libs/zlib compile
 make[3] -C package/libs/ncurses host-compile
 make[3] -C package/libs/ncurses compile
 make[3] -C package/utils/util-linux compile
 make[3] -C package/utils/ubi-utils compile
 make[3] -C package/system/procd compile
 make[3] -C package/system/usign host-compile
 make[3] -C package/utils/jsonfilter compile
 make[3] -C package/system/usign compile
 make[3] -C package/base-files compile
 make[3] -C package/boot/grub2 host-compile
 make[3] -C package/boot/grub2 compile
 make[3] -C package/firmware/linux-firmware compile
 make[3] -C package/kernel/linux compile
 make[3] -C package/network/utils/iptables compile
 make[3] -C package/network/config/firewall compile
 make[3] -C package/network/ipv6/odhcp6c compile
 make[3] -C package/network/services/dnsmasq compile
 make[3] -C package/network/services/dropbear compile
 make[3] -C package/network/services/odhcpd compile
 make[3] -C package/libs/libpcap compile
 make[3] -C package/network/utils/linux-atm compile
 make[3] -C package/network/utils/resolveip compile
 make[3] -C package/network/services/ppp compile
 make[3] -C package/system/fstools compile
 make[3] -C package/system/mtd compile
 make[3] -C package/libs/ocf-crypto-headers compile
 make[3] -C package/libs/openssl compile
 make[3] -C package/system/opkg compile
 make[3] -C package/utils/busybox compile
 make[3] -C package/utils/mkelfimage compile
 make[2] package/install
 make[3] package/preconfig
 make[2] target/install
 make[3] -C target/linux install
 make[2] package/index

real	1m0.072s
user	0m26.752s
sys	0m5.680s

As can be seen, it takes 1 min! I confirmed that this rebuild successfully included a new test file. Next, I plan on taking a look at our docker setup and build scripts to figure out how to trigger this type of rebuild from outside the container.
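
Triggering that same rebuild from outside the container should amount to something like the following. This is an untested sketch; it assumes the builder directory is already baked into the image at the path shown above.

# Hypothetical sketch: kick off the incremental rebuild without an interactive shell.
# Note that --rm discards the container afterwards, so a volume mount or a docker cp step
# would be needed to get the rebuilt binaries back out.
docker run --rm --entrypoint /bin/bash sudomesh/sudowrt-firmware:0.3.0 \
    -c "cd /usr/local/sudowrt-firmware/built_firmware/builder.ar71xx && time make"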

@bennlich
Collaborator

bennlich commented Aug 3, 2018

Holy crapsauce. That's incredible.

paidforby pushed a commit that referenced this issue Aug 4, 2018
@paidforby
Author

@bennlich holy crapsauce is right. After this revelation, I went about reworking the build process.
Initially, I thought to reapproach #116 and try running the entire build inside of Travis. After a few attempts (ending with 503c1f9), I found out that it is impossible to run the full build in Travis, since it will always time out after 50 minutes on a public repo.
But with the incentive of one-minute build times, I came up with a new plan.
So here was my thinking:

  1. Run the full build (i.e. build_pre and build_only) once on a local machine in a docker container (as though I were trying to build the firmware).
  2. Commit the changes made to that docker container in a new docker image, which I temporarily backed up here: https://hub.docker.com/r/sudomesh/sudowrt-builder/
  3. Push this new "builder" image to Docker Hub in place of the old image, in which only build_pre had run; see the difference in size here: https://hub.docker.com/r/sudomesh/sudowrt-firmware/tags/
  4. Modify the Travis script to pull the new image, copy in the latest changes to the repo, and rebuild the firmware (a rough sketch of what that amounts to follows this list), so now it looks like this: https://github.com/sudomesh/sudowrt-firmware/blob/master/.travis.yml
  5. Add openwrt_rebuilder and remove_images functions to build_lib to handle changes to the local copy of the repo, modifications to openwrt_config, and the "re-making" of the binaries. See those additions here: https://github.com/sudomesh/sudowrt-firmware/blob/master/build_lib#L310
  6. Push the new Travis and build scripts to master to trigger a rebuild that piggybacks off the initial local build that I pushed in step 3.
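
The Travis-side rebuild in step 4 is roughly equivalent to this sketch (not the real .travis.yml; the image tag, container name, sleep trick, and output path are just for illustration):

# Hypothetical sketch of the pull / copy / rebuild flow from step 4.
docker pull sudomesh/sudowrt-firmware:latest
docker run -d --name builder --entrypoint /bin/bash sudomesh/sudowrt-firmware:latest -c "sleep infinity"
docker cp ./. builder:/usr/local/sudowrt-firmware          # copy the latest repo changes into the container
docker exec builder bash -c "cd /usr/local/sudowrt-firmware/built_firmware/builder.ar71xx && make"
# The output location below is a guess; adjust to wherever the images actually end up.
docker cp builder:/usr/local/sudowrt-firmware/built_firmware/builder.ar71xx/bin ./built-images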

And guess what, it works!!! https://travis-ci.org/sudomesh/sudowrt-firmware/builds/411983666

Unfortunately, Travis CI takes ~25 minutes to complete the pull, update, and rebuild process, but the important thing is that it actually finishes!!! Also, I swear, it takes less than one minute to rebuild on my local machine!

Rebuilding [ar71xx] done.
Building firmware for [ar71xx] done.
+ echo 'Building firmware for [ar71xx] done.'

real	0m48.218s
user	0m20.808s
sys	0m4.428s

One thing left to figure out is whether we can deploy the binaries somewhere. Does anyone (maybe @jhpoelen?) know how to push files from Travis to Zenodo? I'm not sure how to access the SudoMesh Zenodo account.
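
I haven't tried it, but going from Zenodo's REST deposit API docs, the Travis-to-Zenodo upload might look roughly like this. Everything here is an untested assumption, including the endpoints; ZENODO_TOKEN would be a personal access token stored as a secure Travis environment variable, and the firmware filename is a placeholder.

# Untested sketch based on Zenodo's deposit API docs; endpoints, fields, and filenames are assumptions.
# 1. Create an empty deposition and grab its id (uses jq to parse the JSON response).
DEP_ID=$(curl -s -X POST "https://zenodo.org/api/deposit/depositions?access_token=$ZENODO_TOKEN" \
    -H "Content-Type: application/json" -d '{}' | jq '.id')
# 2. Upload a firmware image to that deposition (placeholder filename).
curl -s -X POST "https://zenodo.org/api/deposit/depositions/$DEP_ID/files?access_token=$ZENODO_TOKEN" \
    -F name=n600-sysupgrade.bin -F file=@n600-sysupgrade.bin
# 3. Publish it (required metadata would need to be filled in first for a real run).
curl -s -X POST "https://zenodo.org/api/deposit/depositions/$DEP_ID/actions/publish?access_token=$ZENODO_TOKEN"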

@paidforby
Author

This issue has served its purpose; I've opened a new issue for discussion regarding deployment of builds.

@bennlich
Collaborator

bennlich commented May 6, 2019

This is good reading.
