use aggressive forking when encountering markers #5733
Conversation
Performance does regress with this change in at least one example. I tried @konstin's transformers test:

```toml
[project]
name = "i5344-transformers-without-markers"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.9.0"
dependencies = [
    "datasets",
    "dill<0.3.5",
    "tensorboard",
    "tensorflow-text<2.16",
    "tensorflow>=2.6,<2.16",
    "torch",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

And a benchmark:
Performance on Home Assistant regresses too:
I haven't done any investigation on this yet, but to a certain extent, some regression is expected since we are now forking more aggressively. My hope is that there is some low-hanging fruit here, but it may require better marker simplification/disjointness checks to achieve it. And it is indeed possible that there will be some regression here that is fundamentally tied to an increase in the number of forks.
crates/uv/tests/pip_compile.rs (Outdated)

```diff
 # via trio
-sortedcontainers==2.4.0 ; sys_platform == 'darwin' or sys_platform == 'win32'
+sortedcontainers==2.4.0 ; (sys_platform == 'darwin' and (implementation_name == 'pypy' or os_name != 'nt' or (implementation_name != 'pypy' and os_name == 'nt') or (implementation_name != 'pypy' and os_name == 'nt' and ()))) or (sys_platform == 'win32' and ((os_name == 'nt' and sys_platform != 'darwin') or (os_name != 'nt' and sys_platform != 'darwin')))
```
Is `and ()` valid?
I'll add a proper review later, but we need to update some packse scenario descriptions with this, such as
```diff
-success: false
-exit_code: 1
+success: true
+exit_code: 0
```
Should the lockfile be included here now that it succeeds?
Yeah. Requires a packse update.
```rust
// Ideally we wouldn't do this here forcefully since if
// the input requirements change (i.e., `pyproject.toml`),
// then it could be correct to introduce a new fork.
```
We need to remove this after we have better markers and before shipping to avoid failing in those cases.
```diff
@@ -2572,16 +2592,13 @@ fn fork_non_local_fork_marker_direct() -> Result<()> {
     cmd.env_remove("UV_EXCLUDE_NEWER");
     cmd.arg("--index-url").arg(packse_index_url());
     uv_snapshot!(filters, cmd, @r###"
-    success: false
-    exit_code: 1
+    success: true
```
We need to update the scenario descriptions.
```rust
self.forks.push(Fork {
    dependencies: vec![dep],
    markers,
});
```
This is a major change from our previous design: we now always fork when we see a marker, not just when we see a package being used multiple times. Imo, we should decide on this change independently of the overlapping-markers work. This change makes fork-non-local-fork-marker-direct pass, but it's also responsible for the performance regression.
```rust
        dependencies: fork.dependencies.clone(),
    });
}
if !is_definitively_empty_set(&markers) && !is_disjoint(&fork.markers, &markers) {
```
Haven't we guaranteed with the `is_disjoint(&fork.markers, &markers)` check that this can never be empty?
```rust
let mut new = vec![];
let mut found_overlap = false;
for mut fork in std::mem::take(&mut self.forks) {
```
Suggested change:

```rust
for mut fork in std::mem::take(&mut self.forks) {
    // Invariant: The union of the markers of all forks is always the universe.
    // Let
    //   m_f := markers of the fork
    //   m_d := the remaining markers of the dependency
    // If `m_f ∩ m_d = ∅`, i.e., they are disjoint, we know that the dependency does
    // not occur in this fork, and we keep the fork as it was. Otherwise, `m_d` either
    // intersects with or covers `m_f`.
    // Should `m_f \ m_d` be empty, `m_d` covers `m_f`, and we add `m_f` verbatim and
    // set `m_d := m_d \ m_f`.
    // Otherwise, we split this fork into two new forks, `m_f \ m_d` and `m_f ∩ m_d`.
    // This is relevant if we have e.g. `python_version >= "3.9"` and
    // `python_version >= "3.10"`, so that we get forks `<3.9`, `>=3.9,<3.10` and
    // `>=3.10`. It is a sound operation since `(m_f \ m_d) ∪ (m_f ∩ m_d) = m_f` and
    // since both new forks are subsets of `m_f`, meaning we miss no overlap.
```
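To make the invariant concrete, here is a simplified, self-contained sketch of the partition-refinement step, modeling a marker as the finite set of environments it matches. This is an illustration only: uv's actual `Fork` and `MarkerTree` types differ, and `refine` is a hypothetical name.

```rust
use std::collections::BTreeSet;

/// Model a marker as the set of environments it matches.
type Marker = BTreeSet<&'static str>;

struct Fork {
    markers: Marker,
    deps: Vec<&'static str>,
}

/// Refine an existing partition of the universe with a dependency `dep`
/// whose marker is `m_d`. The union of all fork markers is preserved.
fn refine(forks: Vec<Fork>, dep: &'static str, mut m_d: Marker) -> Vec<Fork> {
    let mut new = Vec::new();
    for mut fork in forks {
        let inter: Marker = fork.markers.intersection(&m_d).copied().collect();
        if inter.is_empty() {
            // Disjoint: the dependency does not occur in this fork.
            new.push(fork);
            continue;
        }
        let diff: Marker = fork.markers.difference(&m_d).copied().collect();
        if diff.is_empty() {
            // `m_d` covers `m_f`: add the dependency and shrink `m_d`.
            m_d = m_d.difference(&fork.markers).copied().collect();
            fork.deps.push(dep);
            new.push(fork);
        } else {
            // Split into `m_f \ m_d` (without the dependency) and
            // `m_f ∩ m_d` (with it). Their union is exactly `m_f`.
            let mut with_dep = Fork { markers: inter, deps: fork.deps.clone() };
            with_dep.deps.push(dep);
            new.push(Fork { markers: diff, deps: fork.deps });
            new.push(with_dep);
        }
    }
    new
}
```

Starting from a single fork covering the universe and refining with markers corresponding to `python_version >= "3.9"` and then `python_version >= "3.10"` yields the three forks `<3.9`, `>=3.9,<3.10`, and `>=3.10` described in the comment, with the union of all fork markers still the universe.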
## Summary

This PR rewrites the `MarkerTree` type to use algebraic decision diagrams (ADD). This has many benefits:

- The diagram is canonical for a given marker function. It is impossible to create two functionally equivalent marker trees that don't refer to the same underlying ADD. This also means that any trivially true or unsatisfiable markers are represented by the same constants.
- The diagram can handle complex operations (conjunction/disjunction) in polynomial time, as well as constant-time negation.
- The diagram can be converted to a simplified DNF form for user-facing output.

The new representation gives us a lot more confidence in our marker operations and simplification, which is proving to be very important (see #5733 and #5163). Unfortunately, it is not easy to split this PR into multiple commits because it is a large rewrite of the `marker` module. I'd suggest reading through the `marker/algebra.rs`, `marker/simplify.rs`, and `marker/tree.rs` files for the new implementation, as well as the updated snapshots to verify how the new simplification rules work in practice.

However, a few other things were changed:

- [We now use release-only comparisons for `python_full_version`, where we previously only did for `python_version`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/algebra.rs#L522). I'm unsure how marker operations should work in the presence of pre-release versions if we decide that this is incorrect.
- [Meaningless marker expressions are now ignored](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/parse.rs#L502). This means that a marker such as `'x' == 'x'` will always evaluate to `true` (as if the expression did not exist), whereas we previously treated this as always `false`. Its negation, however, remains `false`.
- [Unsatisfiable markers are written as `python_version < '0'`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/tree.rs#L1329).
- The `PubGrubSpecifier` type has been moved to the new `uv-pubgrub` crate, shared by `pep508-rs` and `uv-resolver`. `pep508-rs` also depends on the `pubgrub` crate for the `Range` type; we probably want to move `pubgrub::Range` into a separate crate to break this, but I don't think that should block this PR (cc @konstin).

There is still some remaining work here that I decided to leave for now for the sake of unblocking some of the related work on the resolver:

- We still use `Option<MarkerTree>` throughout uv, which is unnecessary now that `MarkerTree::TRUE` is canonical.
- The `MarkerTree` type is now interned globally and can potentially implement `Copy`. However, it's unclear if we want to add more information to marker trees that would make it `!Copy`. For example, we may wish to attach extra and requires-python environment information to avoid simplifying after construction.
- We don't currently combine `python_full_version` and `python_version` markers.
- I also have not spent too much time investigating performance and there is probably some low-hanging fruit. Many of the test cases I did run actually saw large performance improvements due to the markers being simplified internally, reducing the stress on the old `normalize` routine, especially for the extremely large markers seen in `transformers` and other projects.

Resolves #5660, #5179.
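The canonicality property described above rests on hash-consing: structurally identical nodes are interned exactly once, and redundant tests are reduced away, so two equivalent diagrams are the same object. Below is a minimal BDD-style sketch of that idea (illustrative only; the names here are hypothetical and the actual ADD code in `marker/algebra.rs` is more involved):

```rust
use std::collections::HashMap;

/// A node identifier in the interned decision-diagram table.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct NodeId(usize);

const FALSE: NodeId = NodeId(0);
const TRUE: NodeId = NodeId(1);

/// (variable index, low child, high child)
type Node = (usize, NodeId, NodeId);

#[derive(Default)]
struct Interner {
    table: HashMap<Node, NodeId>,
    nodes: Vec<Node>,
}

impl Interner {
    /// Return the unique id for `(var, low, high)`, creating it if needed.
    /// The reduction rule `low == high` collapses redundant tests, which,
    /// together with the unique table, makes the representation canonical.
    fn mk(&mut self, var: usize, low: NodeId, high: NodeId) -> NodeId {
        if low == high {
            return low;
        }
        let node = (var, low, high);
        if let Some(&id) = self.table.get(&node) {
            return id;
        }
        let id = NodeId(2 + self.nodes.len());
        self.nodes.push(node);
        self.table.insert(node, id);
        id
    }
}
```

Because ids are canonical, checking whether two markers are equivalent (or whether one is trivially true/false) becomes a constant-time id comparison rather than a structural walk.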
This adds a new top-level directory with bare-bones directories for a sampling of ecosystem projects. The idea is for each directory to have enough that `uv lock` can run. The point of these tests is to 1) ensure resolution works in common cases and 2) track changes to resolutions (and the lock file) in real world projects.

Unfortunately, it does look like in some cases, re-running `uv lock` results in changes to the lock file. For those cases, I've disabled the deterministic checking in exchange for getting the lock files tracked in tests. I haven't investigated yet whether either of #5733 or #5887 fix the deterministic problem.

There is probably a better way to go about integrating ecosystem projects. In particular, it would be really nice if there was a good flow for upgrading ecosystem packages to their latest version. The main complexity is that some projects require edits to their `pyproject.toml` (or a complete migration from non-`pyproject.toml` to `pyproject.toml`). Although, the projects added here in this initial set were limited to those that didn't require any changes.
Previously, since the markers weren't disjoint, this didn't result in a fork. But now that we deal with overlapping markers, this is fine. And seems correct.
…rker

The test in this case has this comment:

```
/// If a dependency requests a prerelease version with an overlapping marker expression,
/// we should prefer the prerelease version in both forks.
```

With this setup:

```
let pyproject_toml = context.temp_dir.child("pyproject.toml");
pyproject_toml.write_str(indoc! {r#"
    [project]
    name = "example"
    version = "0.0.0"
    dependencies = [
      "cffi >= 1.17.0rc1 ; os_name == 'Linux'"
    ]
    requires-python = ">=3.11"
"#})?;

let requirements_in = context.temp_dir.child("requirements.in");
requirements_in.write_str(indoc! {"
    cffi
    .
"})?;
```

The change in this commit _seems_ more correct than what we had, although it does seem to contradict the comment. Namely, in the `os_name != "Linux"` fork, we don't prefer the pre-release version since the `cffi >= 1.17.0rc1` bound doesn't apply. It's not quite clear what to do in this instance.
…dn't! I wasn't expecting these scenarios to suddenly work, but because of the aggressive forking strategy of overlapping markers, these both now produce a valid resolution. Previously, we *only* forked when there were disjoint markers for the same dependency specification. But now we potentially fork whenever there is any single marker, and this results in different forks reaching what were conflicting dependency specifications. And thus, things work here.
From looking at this carefully, I believe the result here is still correct but "different" from the status quo based on the order in which forks are processed. (At time of writing, we do sort forks, but we don't use comprehensive criteria. So at least in some cases, the order depends on the order that the forks were created. Since the overlapping markers code completely changes how we generate forks, that seems plausible here.)
This is an interesting update that previously used a message without a fork occurring, but now notes a split. The dependencies in this case are:

```
dependencies = [
  '''fork-marker-disjoint-a>=2; sys_platform == "linux"''',
  '''fork-marker-disjoint-a<2; sys_platform == "linux"''',
]
```

In this case, the markers are the same for conflicting dependency specifications. So the "no resolution" result is correct. But previously, we wouldn't fork here, because we wouldn't consider the marker expressions to be disjoint. But with overlapping markers, we are actually forking here. Namely, we consider a fork with `sys_platform == "linux"` and another with `sys_platform != "linux"`. I actually don't think this is even related to overlapping markers. The previous behavior seems not quite right (even though it led to the same result), because it implies we weren't really considering the "linux" and "not linux" cases as distinct in resolution.
This comes from this dependency specification:

```
dependencies = ["requests", "requests[socks] ; python_version < '3.10'"]
```

Previously, the `pysocks` dependency was unconditionally included when the `socks` extra was enabled. But it should only be included when `python_version < '3.10'`, which is now reflected here via a marker.
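As a sketch of the expected effect (hypothetical output; version pins omitted, and the exact formatting of the compiled output may differ), the extra's marker now travels with the dependency it activates:

```
requests
pysocks ; python_version < '3.10'
    # via requests[socks]
```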
This commit is meant to group together test updates that are "uninteresting." That is, most updates are just adding marker expressions that are "always true" or establishing the forks in the lock file. I've tried to split out the interesting snapshot changes into subsequent commits. Some of the marker changes here are quite striking. I checked most of them. Every one I checked, including the big ones, was just a fancy way of saying, "always true."
These changes all look wrong and would need to be resolved before merging.
Compare: 50eade6 to 23832ff
At a high level, this PR adds a smattering of new tests that effectively snapshot the output of `uv lock` for a selection of "ecosystem" projects, that is, real Python projects that we expect `uv` to work well with. The main idea with these tests is to get a better idea of how changes in `uv` impact the lock files of real world projects. For example, we're hoping that these tests will help give us data for how #5733 differs from #5887. This has already revealed some bugs. Namely, re-running `uv lock` for a second time will produce a different lock file for some projects. So to prioritize getting the tests added, for those projects, we don't do the deterministic checking.
I opened a new PR with this same change rebased on latest
This PR completely rewrites forking in the resolver to follow the
approach to dealing with overlapping markers as described in #4732.
The resulting code is much simpler: forks are no longer constructed
by looking at package names and being limited to disjoint marker
expressions. Instead, we consider all dependencies and iteratively
construct a partitioning of the marker environment universe from the
marker expressions we find on dependencies.
This does generally result in more aggressive forking overall. And
importantly, because of that, it puts more pressure on our marker
routines like `is_disjoint` and `is_definitively_empty_set`.

A weakness of this approach, particularly in the absence of perfect
marker simplification, is that marker expressions can get even
bigger than they were before. The snapshot updates will show some
potential low hanging fruit here. I decided not to spend time on marker
simplification given Ibraheem's ongoing work in that space.
There are a lot of updates to the snapshot tests. I did my best to
split "uninteresting but possibly unfortunate" updates into one commit.
And then any interesting updates into a commit-by-commit breakdown
where I looked at the specific results to see if it was correct or
not. There is at least one test update that I am unsure of (the one
involving cffi and pre-releases). So please review this commit by
commit.
Closes #4732, Fixes #4640, Fixes #4668
But notably does not seem to address #4959.