Enable building for Windows ARM64 on a non-ARM64 device #1144
Conversation
This replaces #1108. Note that it won't actually work until we get a setuptools release with pypa/distutils#153 integrated (which hopefully will be pretty soon). I'll add at least one test, just wanted to get the core of the change up first.

So the test is xfail right now (I haven't marked it), until we get a setuptools update out. Since it's going to fail for literally everyone all of the time, I'd understand not merging until it's going to work. Though anyone trying to cross-compile today is going to fail earlier (when it tries to run ARM64 Python on an x64 machine), so from the POV that this is technically better, I'd be happy to get it in. That also means I won't have to redo the PR if the code changes in the meantime ;) Up to the maintainers. I'm happy to tweak/adjust/document whatever else is needed, just lmk.
Thank you for submitting this @zooba! Cross-compilation is always difficult, but this one looks fairly manageable to me.

- We'll have to note in our documentation of this feature that it's setuptools-only.
- I suspect this isn't going to be compatible with `CIBW_BUILD_FRONTEND=build`, at least in its current form. The only workaround I can think of would be to preinstall setuptools in the venv and disable build isolation with `--no-isolation`. Or we could say that this cross-compilation method only supports `CIBW_BUILD_FRONTEND=pip`. But I was hoping to switch the default to `build` at some point, so maybe the `build`-specific workarounds would be a good idea.
cibuildwheel/windows.py
```python
for p in env["PATH"].split(os.pathsep):
    distutils_cfg = Path(p) / "Lib/site-packages/setuptools/_distutils/distutils.cfg"
    if distutils_cfg.parent.is_dir():
        break
else:
    log.warning("Did not find setuptools install to configure")
    return
```
On this method more broadly, I do worry that it would end up modifying something outside of the build venv if setuptools isn't installed there, but is in a venv higher up.
So I might propose a change to this logic, like
```diff
-for p in env["PATH"].split(os.pathsep):
-    distutils_cfg = Path(p) / "Lib/site-packages/setuptools/_distutils/distutils.cfg"
-    if distutils_cfg.parent.is_dir():
-        break
-else:
-    log.warning("Did not find setuptools install to configure")
-    return
+venv_python = call(
+    ["python", "-c", "import sys; sys.stdout.write(sys.executable)"],
+    env=env,
+    capture_stdout=True,
+)
+distutils_cfg = Path(venv_python).parent / "Lib/site-packages/setuptools/_distutils/distutils.cfg"
+if not distutils_cfg.parent.is_dir():
+    log.warning("Did not find setuptools install to configure for cross-compilation.")
+    return
```
The problem with this new logic is that `call(["python", ...])` doesn't actually search `PATH` to resolve `python`. It first looks in the current executable directory (normal Windows behaviour, assuming that you want the files you installed with your own app before searching other installs). So it only works if we're already running in the correct Python, in which case we may as well go straight to `sys.prefix`.
Also, if we're going to run Python and import it, I'd rather `import distutils; print(distutils.__file__)` and work from that to save a few assumptions. Or deliberately keep the venv path around from earlier - I tried that briefly and decided it wasn't worth the changes, but it's going to be the most reliable.
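The "ask the interpreter itself" idea can be sketched like this. This is a hypothetical helper, not the PR's code: it asks a given interpreter (by full path, for the Windows reason discussed below) where a module lives, and derives paths from the answer rather than guessing from `PATH` entries.

```python
import subprocess
from pathlib import Path

def locate_module_dir(python_exe: str, module: str) -> Path:
    """Ask the given interpreter (full path!) where `module` is installed.

    For the distutils.cfg case discussed here, the config path would be
    locate_module_dir(venv_python, "distutils") / "distutils.cfg".
    """
    out = subprocess.run(
        [python_exe, "-c", f"import {module}; print({module}.__file__)"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return Path(out).parent
```

Deriving the location from `__file__` saves assumptions about the site-packages layout, which differs between Windows (`Lib/site-packages`) and POSIX installs.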
Oh, and we also know that `env["PATH"]` contains the right venv first, because we just updated it ourselves.
> The problem with this new logic is that `call(["python", ...])` doesn't actually search `PATH` to resolve `python`. It first looks in the current executable directory (normal Windows behaviour, assuming that you want the files you installed with your own app before searching other installs).

🤯 whoa really? When you say the 'current executable directory', do you mean the working directory, which would be the project dir in this case? We do `call(["python", ...])` all over the place and we've never had an issue with the wrong one being invoked. Maybe because people don't tend to have a `python.exe` in the root of their project.
> Also, if we're going to run Python and import it, I'd rather `import distutils; print(distutils.__file__)` and work from that to save a few assumptions.

I'd prefer that, I think. We probably won't preinstall setuptools in the build venv forever. Though I think it should be `import setuptools._distutils; print(setuptools._distutils.__file__)`, right? (Though, depending on the outcome of the isolated build-backend venv conversation, it might be a moot point...)
> whoa really? when you say the 'current executable directory', do you mean the working directory, which would be the project dir in this case?

No, it's more like (don't reuse this code) `os.path.split(sys.orig_argv[0])`. The current-working-directory lookup is a "feature" of the old cmd shell, but looking in the application directory is a "feature" of CreateProcess (the underlying OS API call). Best way to disable it is to pass a full path.
> Though I think it should be `import setuptools._distutils; print(setuptools._distutils.__file__)`, right?

That isn't forward compatible, though. `setuptools._distutils` is an internal name, and I expect setuptools to deprecate their `distutils` shim almost immediately and get people using their commands directly.
But yeah, isolated backends are going to break this approach anyway. Setuptools is currently broken anyway because of another change, so if they're amenable to having their `build*` commands pick up default values from environment variables, we can probably get that in (though I'm about to be AFK for at least a week, and then at EuroPython the week after that, so it might have to wait until the sprints).
It is very important to me that scikit-build can support this (without its current dependence on setuptools), as well as mesonpy and Maturin, and maybe enscons too. I think it's fine to set setuptools-specific variables, maybe write out a config file, but not to hack setuptools itself.
I don't think we have to (or even can) support every possible …

This would be a huge downgrade for … Wouldn't it be possible to add a cross-compile setting to setuptools? Needing a newer version of setuptools seems better than hacking it. Or the user setting file isn't that bad - we could include a note that changing the user profile directory will break Windows ARM cross-compiles. If you have such a setup.py, you need to natively compile on ARM. :)
So say we all! Maybe it's time to resurrect https://discuss.python.org/t/towards-standardizing-cross-compiling/10357 and @benfogle's draft PEP benfogle/cross-compile-pep-draft#1? I'm certainly no expert, but this PEP makes a lot of sense to me. I do wonder why there are so many options, though. I'm still a little fuzzy on why it's more complicated than passing just the arch to the build backend, but that'll be my ignorance speaking, I'm sure! My other thought is that we do this on macOS using …
Well, technically, it's hacking distutils :) Honestly, I also wish it was possible to do this stuff without such hacks. But it seems that cross-compiling in Python really is this hard. It's certainly no more hacky than the stuff that crossenv does to its virtualenvs!
Hacking distutils is even worse; I would expect setuptools wants to start cleaning up the distutils/setuptools distinction after 3.12. :) I'd love a standardized way to cross-compile, or if setuptools could cross-compile on Windows with …
Yeah, adding the …

Anyone relying on stdlib distutils is broken as of 3.12, and unsupported as of 3.10. `setuptools._distutils` is required because it has bugfixes that are not in the stdlib. Adding …
Technically it's using supported configuration files ;) Nothing has been hacked; setuptools just has additional feature development to support this. Adding two more env variables to setuptools for setting …
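The "supported configuration files" route can be sketched as follows. This assumes the `DIST_EXTRA_CONFIG` mechanism from pypa/distutils#153 (pointing the build at an extra config file via an environment variable); treat the exact variable and option names as assumptions against your setuptools version.

```python
import os
import tempfile
from pathlib import Path

# Write a distutils-style config asking the build commands to target
# win-arm64, then point the build at it through an environment variable
# rather than patching setuptools itself.
cfg = Path(tempfile.mkdtemp()) / "cross-build.cfg"
cfg.write_text(
    "[build]\n"
    "plat_name = win-arm64\n"
    "[build_ext]\n"
    "plat_name = win-arm64\n"
)
env = dict(os.environ)
env["DIST_EXTRA_CONFIG"] = str(cfg)
# A subsequent build run with env=env (e.g. pip wheel) would then pick
# up these values as its defaults.
```

Because nothing inside the setuptools install is modified, the same venv can still do native builds by simply not setting the variable.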
I guess it's also worth mentioning that I think this is step 1 of the process, and I expect this change to evolve over time as libraries update. If we get the current state of the world working in cibuildwheel, users can use it, and for them any future updates are transparent. My worry is that if people can't use cibuildwheel for this platform today(ish), they won't use it at all and won't benefit from later updates. I genuinely believe cibuildwheel is the best way forward for Python packaging, and want people to be able to adopt it.
Given that this current method isn't going to work with pypa/build or with projects that have a pyproject.toml, we should explore some other options. How about adding an env var option to distutils, like …

in this code: …
I think the option should probably be a setuptools one - it's only really bugfixes that go into their distutils copy, and anything new belongs in setuptools. So I'd suggest it goes around https://github.com/pypa/setuptools/blob/main/setuptools/command/build_ext.py#L136 (and similar places for all the …

Adding @jaraco as well, since there's no point designing a feature for him without him here.
Thank you @zooba!
Thanks! Hopefully we won't see any issues appear immediately from this (and any issues that do appear won't be our fault), but feel free to ping me for anything that comes up. I at least would like to know about it, even if someone else fixes it.
When adding …

I think I'm following the docs, but …
You'll also need to add ARM64 to `CIBW_ARCHS`, since this is a cross-compilation.
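For example, using cibuildwheel's documented configuration options (adjust to your own setup), the equivalent `pyproject.toml` setting would be:

```toml
[tool.cibuildwheel.windows]
# Build native AMD64 wheels plus cross-compiled ARM64 wheels.
archs = ["AMD64", "ARM64"]
```

or the same thing via the environment: `CIBW_ARCHS_WINDOWS="AMD64 ARM64"`.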
Thanks for getting back so quickly! So declaring …

Edit: I was wrong, macOS doesn't already do that by default; it was explicitly stated in the …

Thanks!
Rather than using `BUILD`, you can do …
No, in this case `archs` is set elsewhere in `pyproject.toml`.
If you want this behavior, you can set `archs` to `all` in your config file. Then you will control it entirely with `BUILD`. But you have to be more careful not to build things your system doesn't support (Linux may not have QEMU set up, it will be slow, etc). But you can; it's an easy opt-in.
What about setting …

Currently if I want to use this in …
My thinking was that other build backends would want to use their own variables anyway, and …

If we were to add something more universal, I'd make it, well, more universal :) But in general, ISTM that the aim is to pull backend-specific logic into cibuildwheel, rather than pushing cibuildwheel logic down into the backends (in the absence of standardised logic, that is, such as in this situation).
The only potential problem I can think of is if a tool sees the variable and assumes that it's in a VS environment, rather than only having that single variable set.
Considering Microsoft and Qualcomm are making a big push towards Windows Arm64, how would we describe the state of Arm64 Windows wheel building in cibuildwheel? What is in there and what's missing, and is there any low-hanging fruit that can help increase the availability of Windows Arm64 wheels?
cibuildwheel is ready for setuptools-based and setuptools_rust-based projects, but as far as I know no other build backends support cross-compilation on Windows yet. The main thing we're missing is maintainers who want to support Windows ARM64, and by "want to" I mean they want to badly enough to get their own hardware (possibly Azure-based VMs, which are available) or to ship blindly (e.g. pywin32 do this already). I don't blame maintainers for not wanting to do either of these things, but that's the next step and has been the next step for a couple of years now.

(Disclaimer, maybe: I work at Microsoft and one of my responsibilities is getting the Python ecosystem to support Windows ARM64. Possibly they chose the wrong person to do this job, because I refuse to abuse the OSS community over it 😆 But hopefully enough pieces are in place that anyone who wants to, can, and once users start asking for it and the CI platforms offer it natively, they will.)
Thanks for the quick and insightful reply!
I can imagine having the hardware would be the first step for many maintainers. Would it be possible for maintainers of large open source projects to get Snapdragon Dev Kits (or other equivalent Windows Arm hardware) shipped? I think they can also be configured fine as CI runners. Talking about CI runner availability, I think that's also a major limiting factor. Tools like cibuildwheel work best on native platforms, and currently GitHub doesn't offer a Windows Arm runner (see actions/runner-images#768 and github/roadmap#955). Is there anything you can share on that front?

This is a very healthy and admirable stance! Thanks for holding the fort.
Not at a scale that's worthwhile; we already looked into it. Our current energy is going towards conda-forge, because that gives us one organisation to provide for, and it will at least get build coverage for a large part of the ecosystem. I expect some projects will be happy enough to say "our tests passed on conda-forge, so we're okay to ship an experimental cross-compiled build on PyPI".

Only because the scripts that cibuildwheel runs (belonging to the projects) assume they're only going to run on native platforms. We've spent some time working through the most central dependencies and providing patches to break some of those assumptions, so that cross-compiling should be much easier, but it's ultimately down to each individual project. cibuildwheel is in good shape here.

Nothing more than what they're saying on those issues. Except that when they do offer them, everyone had better use them or I'll have some serious egg on my face with my management!
I believe it works with Maturin, and it at least partially works with scikit-build-core. There's some work toward better cross-compiling, starting with an informational PEP; I'd love to see that move forward.
I'm curious: we're a little bit further along now, has any of this changed? For example, does this help? …
I sure hope so! Of course, "generally available" doesn't mean freely or sufficiently available, and I believe they're still paywalled. But I do hope that when projects have easy access to test on these machines, they'll be more confident about shipping releases.
@zooba thanks for getting back! Do you have a position at Microsoft where you can help advocate for some of the most important / largest projects to get access to these runners?
I've been doing that for the last few years 😆 Unfortunately it's not been possible. We managed to get conda-forge access, as they're one easily negotiated organisation, but progress over there has been slow. It's starting to move now, though whether we'll start seeing build/test results from their infrastructure sooner than from GitHub Actions is still unknown.
Yeah, I know them well. They don't (yet) share infrastructure among their projects, unfortunately, so it hasn't been as easy to set them up with anything. They've said they're working on it, though, and they know how to reach me when it's a good time for us to provide assistance.
The PyPA org would be a good start so we could add testing for it. ;) I thought I read somewhere the target was to have OSS access / general availability by the end of the year (which isn't that far away).
I expect at this point GitHub will get their stuff available before we can procure more physical hardware to set up a build pool. But I hadn't realised the PyPA had an infrastructure manager and the ability (and trust) to share (persistent) build pools across the projects? That's the level of offer we've got right now (and by right now, I probably mean 6-8 months, given we're into December and it's impossible to get stuff moving in December). It would look a lot more like PCs under your own desks than virtual machines, just plugged into GitHub Actions rather than letting you have direct access.

Besides, we've tested most of the PyPA projects ourselves and are happy that you're fine ;-) And you're all pure Python, so there's nothing stopping users from using them. Start adding some native extension modules and we can prioritise it, but as it is, you've already released packages, so we're not worried that you're holding up the ecosystem.