Avoid downloading ninja source #127
If you know that you have a ninja installed globally as a C++ command-line utility, maybe you can just package dummy metadata for it to satisfy meson-python? Alternatively, patch meson-python to not depend on ninja if you know it's already satisfiable...
I was hoping there was a way to generate the metadata or let this package use an existing build.
That's what I did now. I doubt there will be many other packages that will use it aside from …
The pipx run isn't really relevant to distro packaging, obviously.
FYI, see scikit-build/scikit-build#731 - a build backend can specify dynamic dependencies; scikit-build-core will probably do this to only add …
Any news on this? We are trying to install …
Mesonpy should list ninja via the `get_requires_for_build_wheel` hook and only include it if it's not already installed. CC @FFY00. Should have something like this:

```python
ninja = shutil.which("ninja")
# (could check version here)
if ninja is None:
    packages.append("ninja")
```

(Also assuming it can get ninja via …)
I disagree with this approach. Listing requirements should not be impure, that is, it should not depend on the actual build machine. These requirements are actual requirements; it is just that they can be fulfilled in different ways. In my opinion the right approach here is to have a build system flag for this.
Exactly, that is what I think this package should provide.
The actual requirement is not a python package. It's the command line tool "ninja". That's why there's a complaint that "it's already installed". The Python package is just one way to get it.
No. If someone installs this version, they should get this version. It is up to the build tool to specify what it needs (via …). The build tool can decide if it wants to allow configuration; it can via this dynamic hook - like …
A compiled build system in Python is never pure. You're calling a compiler with ninja. We don't pip install GCC or Clang. Your proposed dummy package would not be pure, either; it would depend on the system-installed version of ninja. Only now, instead of being controlled by the build system, which should know exactly what it requires, it's just happening as part of installing an sdist. Two similar systems should produce the same wheel! Accessing the Internet during the build process (by default) is probably something we should fix. I'm also OK to have opt-in workarounds for some things if needed - though I'd say this one isn't needed. Mesonpy should declare exactly what it needs: it should check to see if there's an installed ninja and check the version of the installed ninja to verify it is adequate; if it is, it should not add the dependency. Or it might check to see if it's on a system that has wheels, and always add it if it is. Etc.
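A minimal sketch of that version check, for illustration only (the helper names `parse_ninja_version` and `system_ninja_satisfies` are hypothetical, and the `(1, 8, 2)` floor is an assumption; a real backend would track whatever Meson actually requires):

```python
import shutil
import subprocess

# Assumed minimum; adjust to whatever meson actually requires.
MINIMUM = (1, 8, 2)


def parse_ninja_version(text):
    """Keep only the leading dotted-integer components,
    e.g. "1.11.1.git.kitware.jobserver-1" -> (1, 11, 1)."""
    parts = []
    for piece in text.strip().split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)


def system_ninja_satisfies(minimum=MINIMUM):
    """True if a ninja binary on PATH reports a sufficient version."""
    ninja = shutil.which("ninja")
    if ninja is None:
        return False
    result = subprocess.run([ninja, "--version"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return False
    return parse_ninja_version(result.stdout) >= minimum
```

Tuple comparison makes the version check trivial, and truncating at the first non-numeric component copes with fork suffixes like the Kitware jobserver builds.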
My apologies, I'm having an exceedingly hard time parsing this paragraph. I proposed that @FRidh could solve distro integration concerns by performing third-party patching upon this repository and installing dummy "ninja" metadata for Python via an OS package manager, using the OS package manager metadata to record a dependency on an OS package manager version of ninja. Can you clarify how one might …?

I would also be interested to know, in the context of "this package should always install exactly the version it claims to install", what possible errors you are afraid might occur if someone tries to …
I think that was what the issue was about originally, though I read things like:
This is a suggestion I'm not against: a distributor providing the "ninja" package by third-party patching of this repo. In fact, I can't be, as I'm not a third party. :) Though having talked with conda-forge (I'm a ninja maintainer there too, actually - maybe I'm also a third party in that case), I agree that ….

Distributions don't/shouldn't be using isolated builds anyway (for one, isolated builds use the internet during install; for a second, they ignore distribution packaging), so this doesn't really matter for distributions - just don't pip install ninja. But there's an underlying problem that's actually easily fixable with meson-python. In that case, the exact version of ninja and the jobserver support isn't a problem at all. In fact, if ninja's already on the system, you can save a tiny bit of time & storage by simply checking for the presence of a valid version of ninja, and only then adding ninja in the get_requires_for_build_wheel hook. I've thought about this extensively for the last couple of months, and have come to the conclusion that it's the right path for common command-line tools needed for building. And later I found the completely unrelated example https://github.com/tttapa/py-build-cmake/blob/6e583bb7f6813794abefb1ef990fb8d20e47bd8f/src/py_build_cmake/build.py#L39, so it's not just me. Command-line build requirements should be declared in the build hooks.
I'm not sure what this refers to. The other path is if you deactivate isolated installs, in which case it simply tries to build in the existing environment, and both the ….

It should be noted that in the default "isolated" build case, adding a ninja "python package" to the host environment will not fix the problem at all, since it's building an isolated environment and using pip, not your package manager. So I'd worry you might not actually be fixing the problem I think you are trying to fix with this dummy package approach.
Do you know the minimum version of ninja required for meson? I can make a PR to meson-python that fixes this to show what I mean.
@henryiii According to meson's Release Notes, from version 0.54 it requires ninja 1.7 at least.
Thanks! Turns out it's 1.8.2, nicely listed in the Meson error message. :) (Though maybe that's a more recent version - guessing this should track whatever's required in the latest meson; that should be good enough, I think.) PR's here: https://github.com/FFY00/meson-python/pull/175
Ah, I overlooked that -- but it's also nothing to do with the quote of mine that you replied to. :)
Well, I would argue that in such a case, the dummy python metadata should be created with a version matching the ninja version that it depends on via an OS package manager. I'm ambivalent to whether installing the "ninja" sdist should detect a system ninja and declare that version but not install it, though I do agree that it would be convenient if the sdist could be built for a distro without patching, only by passing an option. It doesn't really concern me as I'd always just pass the option, personally.
I don't really like this, never have, and prefer that such things be done under the aegis of a forked name. Heck, samurai exists along similar lines -- it has a subset of tools, but goes in the other direction also by supporting a MAKEFLAGS-alike (SAMUFLAGS), supporting transparent forwarding of color (even when the output is not a console, but the compiler has been told to force color on) and is explicitly open to adding jobserver client support although the work hasn't actually landed. It's valuable to people to know when they are actually running samurai, so, samurai has its own name.
and
Isolated builds aren't actually the issue here. Running a non-isolated build with network disabled will often still verify that build-system.requires are installed, but error out instead of attempting to download them if they cannot be met. So if projects build-depend on ninja, @FRidh likely still gets errors trying to build those projects, even when using a non-isolated build. It's just that the error is "build dependency not available" rather than "error installing XXX: network unavailable". For example, …
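The check a frontend performs in that non-isolated case is conceptually just a lookup against the installed distribution metadata. A rough sketch of the idea (the crude name parsing is an assumption for illustration, not pip's actual resolver, and `missing_build_requires` is a hypothetical helper name):

```python
from importlib import metadata


def missing_build_requires(requires):
    """Return the build-system.requires entries that have no installed
    distribution. Ignores version specifiers; illustrative only."""
    missing = []
    for req in requires:
        # Crude: strip markers, extras, and specifiers to get the bare name.
        name = req.split(";")[0].split("[")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "!="):
            name = name.split(sep)[0]
        name = name.strip()
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(req)
    return missing
```

A frontend running with build isolation disabled does essentially this, then errors out if the returned list is non-empty, which is why the failure mode changes but the failure itself remains.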
I agree that that's probably the best solution, but in the interest of discussing how to work around projects that don't do that (which may include projects using ninja-generating build systems other than meson and cmake, as well as projects that hand-roll their own cmake/meson bindings by running subprocess.run() in custom setuptools build steps, then copy files around …)
It referred to my assumption that the only sdist which builds dummy metadata is one that exists in the working directory of a distro package builder! I was still somehow under the impression that the relevant context was just my post where I suggested that @FRidh could patch this sdist to do so, meaning that pip install cannot ever replicate that. The only thing that would publicly exist is an installed wheel.
Very nice! 👍
There's no way for the ninja 1.10.2.4 sdist to detect how it was specified; it cannot tell if it was pinned with a hash, or just installed as ….

I'm okay to support preparing a distribution version of this (and cmake, too, see scikit-build/cmake-python-distributions#227 - just no one has had time to work on this).
It's a historical oddity that build has an opt-out flag and pip has an opt-in flag. But distros need to opt out; that's exactly why this opt-out exists for build (and why it was not possible to change the default for pip: it broke distro packaging and conda-forge). As I've said, making your "ninja" package depend on "python" simply so you can install a dummy package into "site-packages" is something no distro I'm aware of will do - most users installing ninja do not want Python as a dependency. So then you'd need a special ninja-python package, in which case you could either provide the custom copy of Ninja too, or you could just add the ignore flag (which is exactly what everyone does today, including conda-forge). And, as I've pointed out, this doesn't fix isolated builds at all, so the only thing you buy is being able to run this check - which is opt-in for pip, which is what most distros use, and they haven't opted in.
I was going to reply to this earlier, but got a bit overwhelmed and did not manage to finish my reply. I agree with Henry: checking if ninja is available on the system and skipping asking for that dependency is the best approach.

The issue here stems from ninja being a native dependency, which cannot be properly declared in Python metadata. The ninja Python package is just a hack to be able to provide this native dependency via the Python packaging ecosystem, but we do not need the ninja package itself, just a ninja binary. We currently have a hard dependency on the ninja package (https://github.com/FFY00/meson-python/blob/d18930f11b1a08a718905d5263eb9e32c4eb5a1d/pyproject.toml#L30). I thought this was the best way, since people packaging outside the Python ecosystem can safely ignore it and depend on their native ninja package, but if that is giving issues, we can replace it with a comment noting that ninja is a native dependency and ask for it at runtime if ninja is missing.

@FRidh I'd like to hear exactly how this is affecting you as a downstream, so I can better understand your issue and try to find an alternative that works better for everyone.
The same issue. I tried to package the ninja Python wrapper to test SCons' experimental ninja support. The main problem is that after the build it wants to replace the system-installed ninja binary under /usr/bin with the provided Python wrapper file, and adds its own ninja binary under another path. It's OK for a PyPI-distributed package installed via pip to ship the ninja binary into the user environment. But for a system-wide installation of the ninja Python wrapper, it's better to rely on the system ninja as a runtime dependency. There is no problem specifying an exact or minimal system ninja version as a dependency when packaging.
Then don't install the ninja package system-wide. The correct solution is to use PEP 517 requirements to decide if ninja is required during a build and only add it if a sufficient version is not present. scikit-build-core and meson-python do this. For things other than building other Python packages, it's harder to do, but the same basic idea should apply - only add the dependency if a system dependency is not present. And generally don't install packages with pip system wide, use pipx or virtualenvs.
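The PEP 517 hook shape being referred to looks roughly like this. This is a sketch of the idea, not scikit-build-core's or meson-python's actual code, and the `ninja>=1.8.2` floor is an assumption; a real backend would also run the binary and check its version:

```python
import shutil


def get_requires_for_build_wheel(config_settings=None):
    """PEP 517 dynamic-requirements hook: only ask the frontend to
    install the ninja PyPI package when no binary is found on PATH."""
    requires = []
    if shutil.which("ninja") is None:  # real backends also check the version
        requires.append("ninja>=1.8.2")
    return requires
```

The frontend calls this hook before building the wheel and installs whatever it returns into the (isolated or existing) build environment, so systems with a suitable ninja never touch PyPI for it.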
Does this issue relate at all to the download of Kitware/ninja tarballs? I'm working to package this for distribution (as a required dependency of another project), and it's serving as a blocker. On build, it tries to fetch …. If this is unrelated, I can raise a separate issue. If it is related, any suggestions from anyone who knows this project well?
Can you clarify why this is a required dependency of another project? It should not be. |
I thought it was odd myself. The dependency chain is as follows: spaCy -> thinc -> ml-datasets -> scipy, which has a test dependency on ninja (this project). 'Required' was not the right word to use here, as it's a test dependency, but as tests are almost always required to be run on builds when they are made available by the upstream, it's required in the current context.
Incorrect, it depends on https://ninja-build.org/ which is not this project at all.
It uses it at build time, not test time?
Scipy's pyproject.toml lists ….

Edit: Seems that it requires ninja at BOTH test and build, via generated requires files. Still pulls from the PyPI distribution, however.
This is... not how the python ecosystem works. The dependency group "test" does not exist for any purpose other than to serve as a developer convenience, whereby one can install ….

They do this because they want to make it extremely easy to onboard people trying to hack on the code, and installing dozens of linting tools is better than having them be confused when their PRs fail a CI linter. You'll notice it also installs pytest-cov. I promise you, you are not required to generate coverage reports of scipy in order to run its testsuite.
yes, I already ceded that this project uses generated files in my edit after reviewing the pyproj. It's possible that the edit was made after you started replying, however, in which case you probably wouldn't have seen it
ehm... it kinda does. Using the pyproject file to generate extraneous requirements files isn't the standard and expected use of the pyproject file; that's one project wanting to do things differently. And yes, it's for developer convenience, but there's a reason why a group would be called test, just like there's a reason why a group would be called dev. Including devel depends in the test group is also against "python ecosystem" norms. Neither of the above changes the fact that this repository is in fact the dependency in question, and not the project that you linked above. Everything else is just bikeshedding and pedantry. The question still stands for those that are reading.
No, it quite literally is the ecosystem norm for the "test" optional dependency group to contain items extraneous to requirements, which exist solely to ensure the convenience of project developers. Generating extraneous requirements files isn't necessarily the standard, but having completely unnecessary items in … is.

The python ecosystem has never had a real standard for running tests at all. You cannot even find out what command to run to execute a testsuite without consulting documentation and/or checking the contents of tox/nox/github-workflows configuration files. There is certainly no standard for what the dependencies are.

And people frequently install linter plugins in CI, and enable them by default in test harness config files (mainly pytest), so you have to sed those out in order to reduce the fragility of testsuite dependencies and stop checking whether the codebase is compliant with the latest of many black standards, or whether the bugfix patch you applied maintains consistent 100% branch coverage, even though you don't aggregate branch coverage for 9 different python versions and 5 different operating systems and therefore it's meaningless anyway.
I am saying you do not need the dependency in question. It is not bikeshedding, nor pedantry.
In fact, the testsuite doesn't even run ninja at all AFAIK -- it is only installed by …
Maybe not to build outright, but in order to meet standards for certain packaging guidelines, you need to at least make an effort to have them available. You could just as easily skip every single not absolutely-required depend and have a successful build, but now, dependents that rely on these optionals can be unexpectedly encumbered or broken. Your notes about the C++ build system as a source will be investigated, as I'm much more closely following what you're saying now. That is not what it was read as originally, rather, as correcting that the PyPI depend wasn't this repository. On everything else, I think a lot of it is just us getting wires crossed, because I'm with you otherwise, particularly after the added detail. A very long few days over here, about to call it for the night, so it's probably on my end tbh |
scipy uses, as its build system: …. This is listed in pyproject.toml as: …
meson-python in turn depends on (I'm an upstream maintainer of this): …. meson depends on: …
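For reference, a generic build-system table for a project using meson-python looks roughly like this (a sketch, not SciPy's exact file; any version pins you add would be project-specific):

```toml
[build-system]
build-backend = "mesonpy"
requires = ["meson-python"]
```

Note that ninja does not need to appear here: meson-python adds it dynamically through `get_requires_for_build_wheel` only when no suitable binary is found on the build machine.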
This project is a PyPI repackaging of https://packages.debian.org/sid/ninja-build for people that cannot ….

There's a lot of software you can get from pip.
There are many things one can get from PyPI or from dpkg, but as a distro packager you only want to depend on the version from the distro package manager.
I, too, am a distro package maintainer. I have been down this road before and decided that it is not worth trying to get dependencies from PyPI when they are the same dependencies already available in the distro. The depends are there either way, so this isn't about standards.
If pyproject.toml files list the PyPI version as a mandatory dependency anyway, then it's better to simply patch the software to use the system copy. Every distro has rules about bundled code copies, and this is pretty similar. The PyPI redistribution still provides functionality for Windows and macOS users, as well as Linux users of workstations who don't have root access and need to quickly build their data science project without asking management for packages to be installed. We don't need that for ourselves.
Yes, this is the essence of it. No distro packager should repackage this repo ever - you already have a ….

Fedora has some tooling to parse dependencies from ….

Some context for the SciPy question: we recently added …. Hence, if you're adding a SciPy build recipe to a distro and you want to express test dependencies, depend on the distro's …
Having revisited this a day later, I'm with y'all. mb. What you're saying about the ninja vs ninja situation is 100% correct on all counts After a fairly long week I needed a bit of a reset, I guess. Things weren't adding up. Probably going to take the weekend off and go fishing, come back fresh |
I have been thinking about this lately w.r.t. building and packaging a pure CMake project. I will come around to how this relates to …
These packages work in such a way that if …, you should be able to use it for the following dependent projects: …
The key point here is having a …. So how does this fit in with …?
In order to support version parity there are a few points here: …
This may not apply to all packaging environments, but in cases like RPM-based environments that have tight integration with Python (e.g. through …)
There's an additional complication for ninja, that @henryiii has previously raised as a concern. The pypi package contains different patched functionality not present in the official ninja releases, and it's argued that projects depending on the pypi "ninja" package should be able to assume those patches are there, as it's part of the pypi package's documented functionality. The argument is that projects which don't care which ninja they get should check for the C++ program, and projects which do care shouldn't be tricked into thinking they have the patched edition when they don't -- so, packaging "just metadata" would then trick them.
Fair point - could you point to some of those so I can check the nature of those patches? I couldn't see anything in the current ….

In this case I would say it is an issue for the packagers, because similarly they would also have patched versions on their side for relevant CVEs and more. If we limit the usage of …
It's right there? See ninja-python-distributions/CMakeLists.txt, line 50 at commit 6694535.
It downloads a fork, not a series of patch files.
Patching a CVE isn't a new feature. People don't usually worry about bug-for-bug compatibility. |
The missing jobserver functionality looks like it's getting added to Ninja in ninja-build/ninja#2474, but currently for Unix only. So the fork will likely live on for a while longer, until Windows support is also added.

I feel like the whole point of this (to make adding PyPI wrappers to the …) …. Ninja is a bit of a special case due to the fact it's using a fork, but a) if you want to provide the fork, you still don't need to provide the Python files, and b) the fork should eventually go away once the jobserver support is added and supports Windows.
I confirm that we will keep the fork until job server support is integrated upstream for both Linux and Windows. |
Hmm, iiuc the python files do not make use of this feature right? In which case, even if we patch the dependencies the build itself would not succeed.
Wait, maybe I've misspoken at some point. It's not about making …
Good analogy - I also didn't check that it points to a Kitware fork instead. But in this case it's slightly different, because this part is about packaging the Python files. It would be more related to two different aspects: debundling dependencies, like how a bundled libxz source is stripped out and replaced with linkage to the system package, and aligning with other package managers, like if an optional dependency is available we try to make it available as well, or if Fortran bindings are built on conda we would try to provide those as well. The Fedora Python packaging guidelines encourage alignment with the PyPI environment as much as possible.
It is an unfortunate fact of life that the Python developer ecosystem is pretty bad about truthfully describing dependencies. The best-known example of this is any project that uses python-poetry (https://iscinumpy.dev/post/bound-version-constraints/), although there are lots of other examples. It is generally wise to have a mechanism for overriding automatic parsing of dependencies for cases where the dependency is wrong, such as here. The simplest approach, maybe, is to apply a patch to their pyproject.toml deleting that line. :)
Yeah, but on the other side of the spectrum, with golang it's a nightmare for distro packagers.
Ooph yeah, whenever I see poetry I run.
Indeed, and it can be opted out of, and there are a few clever ways of patching it without maintaining a manual patch file, using …
That's not the other side of the spectrum, it's the same side. ;) "We only support vendored dependencies from a lockfile downloaded at build time" vs "we only support virtualenvs from a lockfile downloaded at build time". |
I'm trying to package this package as it is a dependency of meson-python. ninja-python-distributions tries to download a ninja source from the web. How can we avoid that and let it use a local version of that?