ci: add build image #5
Conversation
Thanks! I'm not really sure what went wrong there. Our repos essentially redirect to GitHub releases, so maybe it's an issue on GitHub's end? I was able to successfully run the script just now. That being said, there was an issue with the latest kernel packages (should be unrelated), but those should be fixed now. Can you try and restart the action? Otherwise, this looks good to me. I'm a bit concerned about the size (the final image is ~1GB), but from what I've read that should still be okay.
Just tried to restart the GH Actions pipeline. Sadly it leads to the same result :(
I have honestly no idea what's going wrong here... I'll merge it into a branch to test it here.
I have the same issue locally though. Maybe I can try to debug w/ tcpdump (or run tcpdump on the CI itself).
Something is a bit off with GitHub actions today...
It seems to have worked this time around: https://github.com/linux-surface/aarch64-arch-mkimg/actions/runs/4555577441. What I changed:
I somewhat doubt that any of that should really have an impact on downloading files via curl/pacman. For some reason, it still looks like it's running but all jobs have completed... so I guess GitHub actions is slightly broken today. I'll check if it behaves better tomorrow and will merge this then, if it does.
Also: Thank you very much for your support!
I'm puzzled as to why it didn't work 😅
Me too. It somehow looks like some transport issue (error 18 apparently is "transfer closed with outstanding read data remaining"), but the thing is that we don't host the files in our repo ourselves, so unless the redirect is going wrong there's nothing I can check on our end. Instead, we essentially redirect to GitHub releases, so pacman should directly download those from GitHub. So I can only guess that it's some kind of GitHub infrastructure problem. Given the way actions behave right now, I guess that might not be too far off. I mostly tried the qemu action thing because I thought maybe something weird is going on there, but that was really just a shot in the dark. Again, I don't think that this really fixes it and it might be more or less random. So I'll try without that again when things are more normal.
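For what it's worth, the redirect itself can be checked manually with something like the following (a rough sketch; `<repo-url>` and `<package-file>` are placeholders, not actual values from our setup):

```sh
# Print the status line and Location header for each hop of the redirect
# chain, to confirm that the repo really hands the download off to a
# GitHub releases URL.
curl -sSIL "<repo-url>/<package-file>" | grep -iE '^(HTTP|location)'
```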
Oh: This explains it, maybe.
Actions seem to be working again but it fails with the same issue. Setting up QEMU via the Docker action seems to work somehow... still absolutely no clue why. I'll push that change to this PR, wait for it to build and then merge it if it's successful.
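For context, registering QEMU user-mode emulation through Docker boils down to something like this (a minimal sketch, assuming the multiarch/qemu-user-static image; the actual workflow step may use the docker/setup-qemu-action wrapper instead):

```sh
# Register qemu-user-static binfmt handlers so aarch64 binaries can be
# executed transparently on the x86_64 CI runner.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```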
(Force-pushed from 6fbd5e9 to d57c33f.)
And it failed again... with the QEMU setup change, so I guess it's random?
I've now also tried building the image via the docker container (4398651). This also seems to fail or succeed randomly. I'm really not sure how to properly debug this...
Looks like the biggest issue is with the Arch Linux ARM mirror repos taking forever to actually load. Unfortunately, using DisableDownloadTimeout doesn't seem to work properly. Possibly using "pacman --disable-download-timeout" could work?
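For reference, the two variants mentioned above would look roughly like this (a sketch; whether either actually helps with the flaky downloads is unverified):

```sh
# Variant 1: pass the flag directly on the command line.
pacman -Syu --noconfirm --disable-download-timeout

# Variant 2: enable it permanently in /etc/pacman.conf instead,
# by adding the directive under the [options] section:
#
#   [options]
#   DisableDownloadTimeout
```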
The official Arch Linux ARM repos aren't the issue. It randomly failed with
That package stems from our repo, which in turn redirects to GitHub for the actual package downloads, so it really isn't clear to me why this sometimes fails. In addition, I haven't been able to reproduce this locally, making this even more difficult to debug. Hopefully with the new firmware package, things will work more reliably.
Huh, for me it is mainly just issue after issue with the ALARM (Arch Linux ARM) repo mirrors completely cutting out where I am.
I've made some changes on top of this PR (setting up QEMU and building the image via Docker) in the feature/ci branch and I've been re-running the CI action a couple of times. Unfortunately, it still fails fairly randomly. And with different files, e.g. the latest failure was:
I'll give your mirror a try on the off-chance that it somehow is an issue with the official mirrors. If that doesn't work, I'll probably merge that branch anyways and set it to only build on certain tags. That way it doesn't fail constantly and I can re-run it manually until it does succeed. Having a pre-built image is probably worth that.
Looks like the build passed and there is a .img file in the artifacts section.
I still don't trust it though. These failures are random, so I'll try rebuilding the exact same commit a couple more times over the next days. We'll see after that.
By chance do you have a Discord? It would be easier to message on there than on here. I am having some issues with missing some packages.
We have a Matrix space (https://matrix.to/#/#linux-surface:matrix.org) and an IRC channel (libera.chat/#linux-surface) for stuff like this.
And I'm not sure how well wifi works yet; in particular, there are issues like linux-surface/surface-pro-x#3 (e.g. some channels that are valid here in Germany but disallowed in the US won't work). I suspect there could be more issues similar to this, or issues applying only to specific setups.
Attempt 4 failed just now, with a similar error.
Just took a look at the logs and noticed that it seems to be an issue with reliably downloading the linux-surface packages. It could possibly need a more reliable host, or the downloads could go through GitHub's package hosting instead.
Server-side, our repo is essentially just a script that redirects to GitHub releases. So any package archives being downloaded are actually being downloaded directly from GitHub servers. That's why I can't really debug this well, since I have no insight into those.
Are we maybe rate-limited by GitHub? Should we try to install one package, sleep, and then install the other two?
I somehow think that would manifest differently, but we might as well try.
So I've added a retry mechanism with a backoff timeout in 0ed1b0a. I'm hoping that that is a good enough workaround, but I haven't been able to trigger that mechanism to test it yet.
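In spirit, such a retry-with-backoff wrapper looks roughly like the sketch below (this is illustrative only and not necessarily the exact code in 0ed1b0a; attempt count, delays, and the wrapped pacman command are assumptions):

```sh
#!/bin/bash
# Retry a command several times, doubling the delay after each failure.
retry() {
    local attempts=5 delay=10 i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        if ((i < attempts)); then
            echo "Attempt ${i}/${attempts} failed, retrying in ${delay}s..." >&2
            sleep "${delay}"
            delay=$((delay * 2))
        fi
    done
    echo "All ${attempts} attempts failed." >&2
    return 1
}

# Example usage: wrap the flaky package installation step.
retry pacman -Syu --noconfirm
```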
Okay, it triggered just now and it looks like it's working. I'll clean up that branch over the next days and then merge it.
I will also work on migrating over to a better dedicated host for the mirror hosting. Also, note that when using my mirror, the URL now has one extra path component: instead of the old https://mirror.fangshdow.trade/ it is now https://mirror.fangshdow.trade/archlinuxarm/.
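For anyone wanting to try that mirror, the pacman mirrorlist entry would presumably look something like this (a sketch; the $arch/$repo layout behind /archlinuxarm/ is an assumption based on the standard Arch Linux ARM mirror format):

```sh
# Prepend the mirror to the Arch Linux ARM mirrorlist so pacman tries it first.
# The path layout after /archlinuxarm/ is assumed, not confirmed.
sed -i '1i Server = https://mirror.fangshdow.trade/archlinuxarm/$arch/$repo' \
    /etc/pacman.d/mirrorlist
```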
Thanks @denysvitali and others for your help. I've merged this (with extensions) in #9.
Images will now be built on each tag release and published as a proper release (see e.g. https://github.com/linux-surface/aarch64-arch-mkimg/releases/tag/v0.1.3 in a couple of minutes).
This PR adds the ability to build the image using the CI.
Unfortunately the CI fails, but I think there is an issue with the linux-surface Arch package repository:
(see https://github.com/denysvitali/surface-pro-x-arch-linux-aarch64-images/actions/runs/4548499674/jobs/8019594944)