ci: Move dist-aarch64-linux to an aarch64 runner #133809
Conversation
First let's see if it works, and what's the time impact :) @bors try |
ci: Move dist-aarch64-linux to an aarch64 runner Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on. r? `@Kobzol` try-job: dist-aarch64-linux
☀️ Try build successful - checks-actions |
@bors try |
ci: Move dist-aarch64-linux to an aarch64 runner Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on. r? `@Kobzol` try-job: dist-aarch64-linux
☀️ Try build successful - checks-actions |
The cached duration now seems to be ~1h 20m. Before, it was approximately 1h 50m, although on a free runner. |
Before meaning when it was running on the x86 runner? That would make sense given that it had to build the whole cross-compiling toolchain first I suppose. |
Yeah, before meaning when it was cross-compiling, and also running on a 4-core machine (vs now running on an 8-core machine). The cost of the ARM machine is quite different for us than the previous machine though, so we'll need to discuss this in the infra team. |
Fair enough, best of luck! :) |
Our documented glibc baseline for aarch64 is 2.17, while this is switching to Ubuntu 22.04, which is glibc 2.35.
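As background on how such a baseline is usually verified: the versioned glibc symbols a built binary actually references can be listed directly. A minimal sketch (the binary path is a placeholder, not part of this PR):

```sh
# Print the highest GLIBC_* symbol version a binary requires.
# The path is a placeholder; point it at an actual dist artifact.
objdump -T build/aarch64-unknown-linux-gnu/stage2/bin/rustc \
  | grep -o 'GLIBC_[0-9][0-9.]*' \
  | sort -Vu \
  | tail -n 1
```

If that prints anything newer than GLIBC_2.17, the artifact would no longer run on systems at the documented baseline.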
Ah, good point! To clarify, before |
I can switch the build to CentOS, though are we sure that's a good idea given that it's now past its end of life? EOL for the CentOS 7 used in the x86 dist build was June 2024. Afaict all distros' versions that ship 2.17 are now past their EOL. |
That is indeed an issue, although orthogonal to ARM, I suppose. @nikic CentOS 7 has indeed been EOL since June 2024 (https://www.redhat.com/en/topics/linux/centos-linux-eol). Should we do https://blog.rust-lang.org/2022/08/01/Increasing-glibc-kernel-requirements.html again? Our HPC center updated to glibc 2.28 last year, which I personally use as a litmus test to see where the current baseline is =D |
Debian Buster has glibc 2.28 and Extended LTS until 2029, how does trying that one sound? |
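For reference, the glibc shipped by each base image mentioned in this thread can be checked directly; a quick sketch:

```sh
# Print the glibc version bundled with each candidate base image.
for img in centos:7 debian:buster ubuntu:22.04; do
  printf '%s: ' "$img"
  docker run --rm "$img" ldd --version | head -n 1
done
```

This should report roughly 2.17, 2.28, and 2.35 respectively, matching the figures quoted in this thread.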
The extended LTS is a commercial offering; if I understand correctly, we don't currently make use of any such extended variants. Created https://rust-lang.zulipchat.com/#narrow/channel/242791-t-infra/topic/Increasing.20glibc.20baseline/near/486974115 to discuss this bump. |
I don't think we really care whether the base image we use for building is EOL. Lack of security updates shouldn't have a material effect on our artifacts -- I don't think we're linking anything statically from the host that we aren't building ourselves? Like, stuff like openssl in cargo comes from a vendored build, and we even build our own zstd on these. The last time we bumped the glibc requirement it was because it was not possible to both target old glibc and build new LLVM with crosstool-ng. I expect we're still a bit off from hitting that problem again (maybe next time LLVM bumps requirements...) cc @cuviper who probably has opinions on both glibc baseline and our use of centos:7 base images :) |
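One way to sanity-check that claim is to look at what a shipped binary actually declares as dynamic dependencies; a sketch, with a hypothetical path into an extracted dist tarball:

```sh
# List the shared libraries a shipped binary declares as NEEDED; a dynamically
# linked system OpenSSL or zstd from the build host would show up here, while
# vendored/statically built copies will not.
# The path is hypothetical; substitute a binary from an extracted dist tarball.
readelf -d cargo/bin/cargo | grep NEEDED
```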
From what I understand it's commercial in that a 3rd party gets paid to backport security fixes and such; it's still free to use. Besides, it's effectively an opt-in repo, so we just use an older official Debian Docker image, and if a security fix is needed it's just a matter of adding the repo and running an update. Whether this is relevant or not is a different question - I imagine not - but it's slightly better than just using a completely dead version, I suppose. Especially compared to CentOS, where relying on an archive of old builds of a dead distro seems pretty dodgy if we can avoid it. |
To clarify, the only concern I have here is that CentOS has been discontinued so I'm wondering whether it's still okay to rely on the repos being up. It's probably good to have at least a migration plan in case they get taken down. If we're confident that they'll stay available for the foreseeable future then there's no problem at all imo. |
Based on https://rust-lang.zulipchat.com/#narrow/channel/242791-t-infra/topic/Increasing.20glibc.20baseline, we're currently OK with using the EOL runner on CI, as we're not really using anything from it statically, we just use it to build the compiler and the stdlib. |
Force-pushed from 4122380 to c7e5ebb
Fine by me, let's try the CentOS variant then :) |
@bors try |
The job had an uncached Docker image, and apparently that times out now :/ I'll probably have to increase the timeout again. @bors try |
ci: Move dist-aarch64-linux to an aarch64 runner Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on. r? `@Kobzol` try-job: dist-aarch64-linux try-job: dist-x86_64-linux try-job: dist-i686-linux
☀️ Try build successful - checks-actions |
Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on.
Force-pushed from f1ed637 to 72b9d03
@mrkajetanp is away for the holidays so I've rebased and fixed the conflict |
Move shared helper scripts used by Docker builds under docker/scripts.
Force-pushed from 72b9d03 to 3afda7e
@bors try |
ci: Move dist-aarch64-linux to an aarch64 runner Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on. r? `@Kobzol` try-job: dist-aarch64-linux try-job: dist-x86_64-linux try-job: dist-i686-linux
💥 Test timed out |
@bors try |
ci: Move dist-aarch64-linux to an aarch64 runner Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on. r? `@Kobzol` try-job: dist-aarch64-linux try-job: dist-x86_64-linux try-job: dist-i686-linux
☀️ Try build successful - checks-actions |
The duplication is not ideal, but at least it was balanced a bit by the small CI script cleanup :) @bors r+ rollup=never |
☀️ Test successful - checks-actions |
Move the dist-aarch64-linux CI job to an aarch64 runner instead of cross-compiling it from an x86 one. This will make it possible to perform optimisations such as LTO, PGO and BOLT later on.
r? @Kobzol
try-job: dist-aarch64-linux
try-job: dist-x86_64-linux
try-job: dist-i686-linux
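For anyone who wants to reproduce the dist build outside CI, the repository's Docker wrapper script is the usual entry point; a sketch, assuming the image keeps the dist-aarch64-linux name after this move, and noting that the build now expects an aarch64 host rather than cross-compiling from x86_64:

```sh
# Run the same Docker image the dist-aarch64-linux CI job uses, from a
# checkout of rust-lang/rust. With this PR the build runs natively, so an
# aarch64 machine (or emulation) is needed.
./src/ci/docker/run.sh dist-aarch64-linux
```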