[Discussion] Heads buildstack directions, past and future #927
@tlaurion The first thing that should be nailed down is the minimal set of requirements for reproducibility:
This may seem like overkill, but given the number of times I have made incorrect assumptions, I think they should be stated and defined somewhere. Talking with @daym yesterday, I realized we were approaching a Dockerized Guix in two different ways: I was using the Guix package manager on top of Alpine, while he used Alpine as a base Docker image but replaced it with Guix. The direct quote from @daym was:
Which is possible but unlikely. The same can be said of any library used to compile Heads, and it reminds me of the Ken Thompson Hack: where should the bounds of trust be?

While Guix can be applied on top of other distros, it is not pretty. Even if the user installs GuixSD or the Guix package manager, the build would not be reproducible, as variables like username and timezone will alter it. This is fine if the user is not looking for reproducibility, or cannot reproduce the hashes of the CI builds because of modifications they have made.

At this point, I am sold on using Guix, and my suggestion would be to ultimately require Guix to build Heads. The details of how can be debated. Perhaps use the official Guix VM running on QEMU to generate Docker environments; in Docker, only Guix commit-pinned apps and utilities would be used to build Heads. Any module that can be reproducibly built independently can be published in Guix or in a special Heads/coreboot repo and pulled, instead of being rebuilt from scratch each time or relying on the CircleCI cache. Even if musl-cross is still used, this would not require the CI or the end user to build it from scratch, but would allow them to if they want. If all works as expected, every individual part of the build environment and every module will be reproducible, as well as the final build, but only the final build would need to be compiled.
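As a hedged sketch of the "Guix generates the Docker environment" idea: `guix pack` can emit a Docker image from a manifest, with the Guix revision itself pinned by commit. The commit hash and package list below are illustrative placeholders, not Heads' actual requirements:

```bash
# Sketch only: produce a Docker image of a pinned build environment with Guix.
# Package names and the commit hash are placeholders.
cat > manifest.scm <<'EOF'
(specifications->manifest
 (list "make" "gawk" "coreutils" "gcc-toolchain"))
EOF

# time-machine pins the Guix revision; pack emits a Docker image tarball
# under /gnu/store/ that can later be `docker load`ed.
guix time-machine --commit=0123456789abcdef0123456789abcdef01234567 -- \
  pack -f docker --manifest=manifest.scm
```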
@Thrilleratplay: absolutely, and thanks for your step back. On the more practical level (needed now), my PoC was able to build all modules with musl-cross-make except coreboot, and to build coreboot with coreboot's own per-version maintained musl-cross toolchain: https://app.circleci.com/pipelines/github/tlaurion/heads/656/workflows/79a8d811-f72f-4a91-bf4f-1499539587f4/jobs/707. I will test builds on the x230 personally, and I invite others to report problems. @MrChromebox @Tonux599: do you see any problem reverting to 4012895#diff-18936189b28399cf48703d0c1ec1df33e57c559de2a12f4438be00e6813bdb68 ?
@tlaurion As long as coreboot's own toolchain still produces reproducible builds, I think this is fine. The only open question is whether musl-cross-make starts throwing errors with other modules in the future, but I guess that's a bridge to cross when we get there.
Small script to find non-reproducibility by comparing local vs remote build output:
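(The script itself did not survive the copy; a minimal sketch of that kind of comparison, assuming `hashes.txt` files in `sha256sum` format on both sides and a hypothetical artifact URL, could look like this:)

```bash
#!/bin/bash
# Sketch: diff a local hashes.txt against the CI one to spot non-reproducible
# artifacts. Paths and the remote URL are hypothetical.
LOCAL=build/x230/hashes.txt
REMOTE_URL=https://example.com/ci-artifacts/x230/hashes.txt

curl -fsSL "$REMOTE_URL" -o /tmp/remote-hashes.txt
# Sort on the filename field so both sides line up, then show mismatches.
diff <(sort -k2 "$LOCAL") <(sort -k2 /tmp/remote-hashes.txt) \
  && echo "All hashes match" \
  || echo "Non-reproducible artifacts listed above"
```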
@tlaurion This brings up a good point. How should reproducible hashes be tracked and stored? Once there is a database of Heads build hashes, should a similar script be added to automatically verify a local build against the remote hash, or should users verify it themselves?
CI builds (whose reproducibility is required for LVFS upload and fwupd upgrades) should upload hashes.txt and the firmware, and confirm reproducibility. Users are supposed to be able to arrive at the same ROM hash, with hashes.txt showing where the differences are. The above was an unclean build; my surprise was that my local kernel and the remote kernel matched.
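(For the user-side check, something as simple as `sha256sum -c` against the published hashes.txt would do; this sketch assumes sha256sum's "HASH  path" format and a hypothetical artifact URL:)

```bash
# Sketch: verify a local build tree against the CI-published hashes.txt.
curl -fsSL https://example.com/ci-artifacts/x230/hashes.txt \
  | (cd build/x230 && sha256sum -c -)
```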
Normally I do something along the lines of: but for the sake of retesting everything... Redoing it cleanly will take a while.
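(The exact command did not survive the copy; a guess at the kind of clean redo meant here, with target names assumed rather than verified against Heads' Makefile:)

```bash
# Sketch: force a from-scratch rebuild so stale objects can't mask
# non-reproducibility. Target names are assumptions, not verified.
make BOARD=x230 real.clean   # drop build output and caches
make BOARD=x230              # rebuild everything from source
```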
No idea how to play nicely with the CircleCI API to get a board's hashes.txt programmatically. But let's suppose we can; it would then look like the following to compare the hashes of a local build vs CircleCI for the current a81ae6e
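(A hedged sketch of what that could look like with CircleCI's v2 API; the job number and artifact path are placeholders, and `jq` is assumed to be available:)

```bash
# Sketch: fetch a board's hashes.txt from a CircleCI job's artifacts, then diff.
SLUG=gh/osresearch/heads
JOB=700   # placeholder job number
curl -fsSL "https://circleci.com/api/v2/project/$SLUG/$JOB/artifacts" \
  | jq -r '.items[] | select(.path | endswith("hashes.txt")) | .url' \
  | xargs curl -fsSL -o /tmp/ci-hashes.txt
diff <(sort -k2 build/x230/hashes.txt) <(sort -k2 /tmp/ci-hashes.txt)
```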
Results to come, and follow-up issues to be opened; my intuition is that #892 will show its consequences...
@Tonux599 @Thrilleratplay @MrChromebox: comparing the master build vs a local build:
Here: gawk should not be measured (it is a host tool); flashrom is not reproducible (it is included in tools.cpio, which is consequently non-reproducible, making the whole ROM non-reproducible).
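(To localize that kind of difference, unpacking the two tools.cpio archives and comparing them, for example with diffoscope, is one way; paths below are hypothetical:)

```bash
# Sketch: unpack local and CI tools.cpio side by side to find the offending files.
mkdir -p /tmp/local /tmp/remote
(cd /tmp/local  && cpio -idm < "$HOME/heads/build/x230/tools.cpio")
(cd /tmp/remote && cpio -idm < /tmp/ci-tools.cpio)
diffoscope /tmp/local /tmp/remote   # or: diff -r plus per-file sha256sum
```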
Please read #571 (comment)
As of right now, I am waiting to find the cause of and solution to a build error in Guix (Guix issue 45165). As it seems to be related to a recent change in the Linux kernel, this may lead to a larger debate about expectations of reproducibility, assuming this is not just a standard bug. The current plan is as follows:
Once the concept has been proven, it can then be made more efficient. Any library, such as musl-cross-make, that can be built reproducibly could be cached rather than rebuilt each time. If all works as planned, the build environment will be bit-for-bit reproducible at any given time, based on a guix pull commit id and a specific version-controlled manifest. This should be a solid base for creating deterministic Heads builds going forward* (*still to be verified)
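(A sketch of what "pinned by pull commit id plus manifest" would mean in practice; the commit hash and board name are placeholders:)

```bash
# Sketch: enter a build environment fully determined by a Guix commit + manifest.
# Anyone supplying the same two inputs gets the same toolchain closure.
guix time-machine --commit=0123456789abcdef0123456789abcdef01234567 -- \
  environment --pure --manifest=manifest.scm -- make BOARD=x230
```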
For posterity and history, here is the PR that set the basis of where we are now, removing the Docker image we relied on and replacing it with a debian-10 one on which we build and reuse caches:
The libvirt + qemu path won't be helpful for CI builds. Any explanation of the fragility of deploying Guix on top of other Docker images? The small steps I see:
Any problem with that smaller-steps approach?
@tlaurion The first step in this plan would likely be manual and outside of the CI environment. Its output would be a Docker container cached in a registry. The idea is for this step to be reproducible for those who want or need to reproduce it, but not to be re-run for every build, just like the current cached build image. In theory, there is nothing wrong with the steps you have laid out. One possible issue is that a similar Debian library/command and a Guix library/command may both be installed but referenced in different scopes in makefiles. Thus far, my experience is that Guix is very finicky: I had an issue building one package where the suspected cause was a change in the kernel of the HOST machine; a few days later, I realized I should have been using the 32-bit libraries.
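(For the "cached in a registry" part, the `guix pack` output can be loaded and pushed like any image; the registry and image names below are placeholders:)

```bash
# Sketch: publish the packed environment so CI pulls it instead of rebuilding.
TARBALL=$(guix pack -f docker --manifest=manifest.scm)   # prints a /gnu/store path
IMAGE=$(docker load < "$TARBALL" | sed -n 's/^Loaded image: //p')
docker tag "$IMAGE" registry.example.com/heads/guix-env:pinned
docker push registry.example.com/heads/guix-env:pinned
```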
#1661 was merged; the next step is to have Heads depend on the musl and coreboot buildstack provided by Nix, and to go forward implementing things the right way.
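(Assuming the Nix route, builder usage could end up as simple as the following; the board name is just an example and the exact interface is not settled:)

```bash
# Sketch: build inside the flake-provided environment, if/when a flake.nix lands.
nix develop --command make BOARD=x230
```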
@daym @Thrilleratplay @osresearch @flammit @MrChromebox
Historically, Heads was built with musl.
It then changed from musl to musl-cross-make for better portability of the produced build toolchain.
Then, with coreboot 4.12 being integrated for newer boards, CI broke for coreboot 4.8.1 boards.
Patches were integrated into each coreboot version so that coreboot is built against musl-cross-make, neglecting coreboot's own musl-cross toolchain, which is well tested for each release. That was good, until boards tried to use coreboot's integrated measured boot patches and vboot+measured boot: since the musl-cross-make toolchain builds neither IASL nor gnat by default, those boards failed compilation with alignment errors.
I think it was an error to move away from coreboot being built by its own maintained musl-cross stack and the patches from each release.
I still think it's not a bad idea to have all the other modules besides coreboot built with musl-cross-make.
And I think the path we should take, be it NixOS or a Guix buildstack or coreboot-sdk (really not convinced), should try to replace the musl-cross-make toolchain (with versions selectable inside the Docker image if needed), but not necessarily replace coreboot's musl-cross. That base Docker image could be reused, with statements specifying changes of buildstack requirements where needed (those statements being declared in CI where/when needed); or the NixOS/Guix buildstack layer could be deployed on top of any Linux OS chosen by the end user, where the CI could spin up different base Docker images, retrieve the NixOS/Guix buildstack layer, and have the CI statements clearly declare what is needed.

That way, Heads could finally leave behind host-tool changes, the neglected part of reproducible builds, where the motto is "if we can build the same outcome, we don't care about replicating the buildstack in a reproducible way." Here again under Heads, we see that this comes with a lot of historical problems: added checks required to get the right make version and the right gawk version, and a growing number of questions and issues opened by users who think they can build Heads on top of their favorite OS, and fail.
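(As an illustration, the kind of host-tool guard meant above looks something like this; versions are illustrative, not Heads' actual checks:)

```bash
# Sketch: the host-tool version pinning a hermetic buildstack would make obsolete.
MAKE_VER=$(make --version | awk 'NR==1{print $3}')
case "$MAKE_VER" in
  4.*) : ;;   # assumed known-good major version
  *) echo "unsupported host make $MAKE_VER" >&2; exit 1 ;;
esac
```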
I'm attempting to revert those side changes, made at the time of coreboot 4.12 integration under Heads, that attempted to build the whole Heads ROM on top of musl-cross-make. The goal was to have vboot boards, vboot+measured boot boards, and measured boot without vboot all building, and to be able to move on.
Local individual builds worked (functionality untested), and I'm now building for all current and to-be-included boards under https://app.circleci.com/pipelines/github/tlaurion/heads/652/workflows/624bc858-1296-425a-82bd-9e875b3236e6/jobs/700 with my associated testing branch https://github.com/osresearch/heads/compare/master...tlaurion:9element_vboot_rebased?expand=1
Thoughts welcome.