[FR] Support a way to specify an additional test step between sdist and wheel #856
Why would you want to run tests against the sdist instead of the installed wheel? Sdists are intermediate artifacts; tests can fail when run against an extracted sdist by design. I'd understand wanting to run tests against an sdist-built wheel during development (instead of a pip-built wheel from source, which is what you get with tox), but what's the benefit otherwise?
Sorry, perhaps I wasn't entirely clear. If people include tests in source distributions, they expect tests to work there. So making sure that tests fail as part of CI is precisely the point — they should fail for us, so that we know we have made a mistake, rather than for users who run tests once the release is out. Of course, one can implement such a workflow manually, but given that …
I'm not entirely sure that's a valid assumption to make, is what I was getting at. I assume it's gonna work in the majority of cases, but sdists provide no guarantee that the installed packages are gonna look anything like the packages in the sdist. For example, tests might attempt to access an (un)installed console script, or they might need to read the package's …
Well, I'm not saying that …
I'm not in favour of running tests against an sdist because it's fundamentally antithetical to PyPA's package build and distribution model. Tests should always be run against an installable - if not an installed - package. I'm also personally not in favour of providing a testing option in build, because it's outside the tool's scope as I imagine it. If your aim is to run tests in CI, why does that need to happen before installation? Could you not block merging a change request unless the tests pass, for instance? If the aim is to avoid installing broken packages on user machines, I think you'd have a higher chance of success if you were to run the tests against the installed package in a temporary env, with the package's dependencies exposed from the outer env. That could go something like this:
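A minimal sketch of that idea, assuming a pytest-based suite with pytest available in the outer env (paths are illustrative):

```sh
# Build the distributable artifacts (sdist + wheel) into dist/.
python -m build

# Temporary environment; --system-site-packages exposes the outer env's
# packages, so the project's dependencies need not be reinstalled.
python -m venv --system-site-packages /tmp/testenv

# Install the freshly built wheel and run the suite against it.
/tmp/testenv/bin/pip install dist/*.whl
/tmp/testenv/bin/python -m pytest tests/
```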
Maybe there are more facets to this I'm not understanding — if you could explain the problem you're having in a little more detail, we could see if another alternative would work.
The goal is to provide a working test suite in the source distribution, so that people using the source distribution (such as downstreams) can successfully run tests. Therefore, it is important to make sure that the source distribution actually includes all the files needed to run tests — and the simplest way of verifying that is to run the tests from the source distribution. I'm not saying the tests should use the package sources from the source distribution — I'm talking about using the tests from the source distribution. The way I see it, a convenient way of doing that would be to run something along these lines:
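(The option name below is hypothetical; `build` has no such flag today.)

```sh
# Hypothetical flag, standing in for "run this command on top of the
# unpacked sdist, and only proceed to the wheel if it succeeds":
python -m build --test-command "tox" .
```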
which would mean:
1. build the sdist;
2. unpack it and run `tox` on top of the unpacked sdist (where `tox` builds and installs its own wheel to run the tests);
3. if `tox` succeeded, build the wheel from the sdist.
Admittedly, the wheel gets built twice here, but that's a minor problem — it happens anyway with the current workflows where …
I think this is backwards. There's no good reason to let tox build a wheel for you that you won't be distributing, besides the fact that it's wasteful. Firstly, tox uses pip as a wheel builder; pip does not use build. tox might be configured to install an editable wheel, and pip might call different hooks in a different sequence. Secondly, wheel builds are typically not deterministic. The wheel you'll be testing will differ, even if it's only in the timestamps of its files. You are just adding entropy.
It does, but it'd be better if it didn't. A much better workflow would be along these lines:
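A minimal sketch, assuming twine for publishing; the point is that the tested wheel and the published wheel are the same file:

```sh
# One build: the sdist, then a wheel built from that very sdist, both in dist/.
python -m build

# Test the exact wheel that will be shipped, in a clean environment.
python -m venv /tmp/env
/tmp/env/bin/pip install dist/*.whl pytest
/tmp/env/bin/python -m pytest

# Publish precisely the artifacts that were tested.
twine upload dist/*
```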
I think that, as of now, this is a non-starter, since test running is not standardized. Also, I think this is out of scope for the main CLI, though if test running gets standardized, I am open to maybe providing such functionality in a …
For context, one reason such functionality could be desired is downstream packaging, where packaging Python projects could then be fully automated. E.g., in Arch Linux we currently have to manually write instructions for each package: https://gitlab.archlinux.org/archlinux/packaging/packages/rstcheck/-/blob/35395c175954852074a02cd7426144ac83a9c2e5/PKGBUILD#L34-L39
I think providing a way to communicate the built wheel names would help here without expanding build's (narrow) scope. It's just a bit tricky to do, since a build can produce stdout and stderr, limiting the options for where we write out the build info (unless using the API instead of the CLI, where you can pass data around much more easily).
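For illustration, a sketch of the API route: `build.ProjectBuilder` returns the path of the artifact it builds, so the wheel name never has to be scraped from output (build isolation and error handling are omitted here):

```sh
python - <<'EOF'
# build's Python API hands back the artifact path directly.
# Note: no build isolation here, so the build backend must already be
# importable in this environment.
from build import ProjectBuilder

builder = ProjectBuilder(".")            # project source directory
wheel = builder.build("wheel", "dist/")  # returns the built wheel's path
print(wheel)                             # pass the exact filename downstream
EOF
```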
I think that introduces a fair bit of complexity, at which point I think the better way to deal with this use case is to write a small script that does exactly what is desired, which is something we make easy to do. The main use case for that would be script automation anyway. This is a use case where wildcards are generally available, so reading the name from some output to use it in another command will probably be even more complex than just using wildcards (e.g. …
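The kind of wildcard use meant here, as it typically appears in a downstream packaging script (illustrative; `$pkgdir` is the staging directory in an Arch PKGBUILD):

```sh
# No need to learn the wheel's exact filename: let the shell glob match it.
python -m installer --destdir="$pkgdir" dist/*.whl
```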
By default, `build` first builds an sdist and then uses it to build a wheel — which is a good way to ensure that the sdist actually works. However, for projects that also include tests in the sdist, it would be awesome to have an option to also run tests on top of the sdist, to make sure that they work correctly as well.

What I'm thinking about is a new option that specifies a command to call on top of the sdist before wheels are built, and that needs to succeed for the build to progress. Something akin to:
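(Purely hypothetical syntax, for illustration only:)

```sh
# Hypothetical option; build has nothing like this today.
python -m build --test-command "tox" .
```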
That would:
1. run `tox` on top of the unpacked sdist;
2. if `tox` succeeded, build wheels.

I can write a patch if you agree with the idea.