Join as maintainer to take care of migrations #839
Conversation
…nda-forge-pinning 2022.09.07.10.20.36
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
@h-vetinari Can you please also apply 6643cec here to limit parallelism on Azure? Otherwise, we will block the whole of conda-forge.
Fine in principle, but that's gonna make it very painful to move through all the migrations (though as soon as we do the abseil one, we'll be dropping half the jobs). I've worked concurrently on big CI matrices in conda-forge before (e.g. the BLAS-testing PRs in numpy/scipy) and never noticed that other feedstocks didn't start, so I'm not sure where the limit lies, but it seems to be pretty high. Do you know if the limit applies per feedstock or to the whole pipeline?
PS. The Windows builds here need conda-forge/admin-requests#482.
Looks like it's for the whole pipeline: https://github.com/conda-forge/conda-smithy/blob/v3.21.1/conda_smithy/configure_feedstock.py#L1343-L1345 Currently there are 70 jobs, and the aarch/ppc ones take 5-6h based on the last successful run. With a limit of 8, that would mean waiting >48h for the last CI job to finish, which is not reasonable IMO (considering the number of branches & migrations outstanding). I think 30 parallel jobs would be manageable, both for waiting time & the rest of conda-forge.
Then set it to 30 jobs for now. We should limit the parallelism on this feedstock though, as briefly summarised in #825.
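For reference, the Azure job cap discussed here is a per-feedstock setting in `conda-forge.yml`, which conda-smithy reads when re-rendering (a sketch; the exact key should be double-checked against the conda-smithy documentation for the smithy version in use):

```yaml
# conda-forge.yml (feedstock root)
# Limit how many Azure Pipelines jobs from this feedstock run at once,
# so large CI matrices don't starve the rest of conda-forge.
azure:
  max_parallel: 30
```

After editing, the change takes effect on the next re-render (e.g. `conda-smithy rerender` or asking the bot via `@conda-forge-admin, please rerender`).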
OK, that makes sense. I'll be mindful not to open/merge too many PRs concurrently, and I'll lower the limit on the abi-migration branches once the migrations are through, so that when the bot comes with 3-4 PRs at once for a migration, we're not starting 100+ jobs at the same time. |
…nda-forge-pinning 2022.09.08.12.52.26
@xhochy, regarding the clang version used here: I needed to bump away from 10 at least on Windows (VS2019 needs clang>=11), and decided to give it a shot with 14. It seems this is working without issue, but does this need to be checked somehow, or should I restrict this bump to Windows? I saw in the build scripts that there were some comments along the lines of "We need llvm 11+ support in Arrow for this" (for osx-arm), but since that commit is almost 2 years old, I guess it might be time to revisit this? Ideally, the osx-arm parts would be changed in a different PR. It's gonna be another ~10h until the aarch/ppc builds are through, and merging this one would allow me to tackle the other branches in the meantime.
All the failures/timeouts here are flaky and can be fixed by restarts after merging (if they reoccur). IMO this is good to go (pending the clang 10->14 question).
Hey all
There are currently a few tricky migrations pending (abseil, grpc, libprotobuf), and since arrow plays such a central role in the ecosystem nowadays, it's good to keep this moving along across all ABI branches. Since I also help co-maintain those other feedstocks, I'd be happy to help take care of these migrations, and this would be easier as a maintainer (I could push into bot PRs, with no need for review attention on trivial PRs).
I thought I'd propose joining first (considering the quantity of open migration PRs), but if desired I can also try raising independent PRs first.
PS. I'm not completely new to this feedstock, cf. #247 & the collaboration on conda-forge/pyarrow-feedstock#101. :)
PPS. A standard rerender seems to bring in the (now-closed) orc migration; if desired I can undo this part.
Closes #826