Require that a target's interpreter_constraints are a subset of their dependencies' #15241

Comments
@Eric-Arellano: I know that we have discussed this a few times, but I wasn't able to find a ticket covering the idea.

(Title changed from "Require that a target's interpreter_constraints are compatible with their dependencies" to "Require that a target's interpreter_constraints are a subset of their dependencies'".)
After all of our discussion this past year, I am now on board with this. I have no idea how we handle our deprecation policy though.
`CoarsenedTarget`s are structure shared, and because they preserve their internal structure, they can service requests for transitive targets for different roots from the same datastructure. Concretely: Mypy and Pylint can consume `CoarsenedTargets` to execute a single `@rule`-level graph walk, and then compute per-root closures from the resulting `CoarsenedTarget` instances. This does not address #11270 in a general way (and it punts on #15241, which means that we still need per-root transitive walks), but it might provide a prototypical way to solve that problem on a case-by-case basis.

Performance wise, this moves cold `check ::` for ~1k files from:

* `main`: 32s total, and 26s spent in partitioning
* `branch`: 19s total, and 13s spent in partitioning

The rest of the time is wrapped up in #15241.
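The structure-sharing idea can be sketched as follows. These are hypothetical types, not Pants' actual API: because coarsened nodes are shared between roots, a single memo dict lets one graph walk service closure requests for many roots.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True, eq=False)  # identity-based hashing: memoize per shared node
class CoarsenedTarget:
    """Illustrative stand-in for Pants' CoarsenedTarget: a coarsened, cycle-free node."""
    members: tuple[str, ...]
    dependencies: tuple["CoarsenedTarget", ...] = ()


def closure(root: CoarsenedTarget, memo: dict[int, frozenset[str]]) -> frozenset[str]:
    """Compute the transitive member set for one root.

    Shared nodes are computed once and reused across roots via `memo`, so
    requesting closures for many roots amounts to a single graph walk.
    """
    key = id(root)
    if key not in memo:
        members = set(root.members)
        for dep in root.dependencies:
            members |= closure(dep, memo)
        memo[key] = frozenset(members)
    return memo[key]
```

In this model, a tool like Mypy or Pylint would walk the graph once and then call `closure` per root target while reusing the same `memo`, rather than re-walking the graph for each root.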
#15141 brought partitioning time down to 9s for […]

From a deprecation perspective, we already know that all targets have "valid but potentially over-broad" ICs. So I think that we could add the code that would check the edgewise condition as a deprecation that explains "IC is […]"
(The commit above was cherry-picked as #15244.)
#15301 is also caused by this, although it is not in a hot path. IMO, we should fix both this and that one together. It's likely that we can fall back to partitioning only if the condition from this issue does not hold (i.e.: if we render the deprecation warning, then we need to fall back to the slow implementation), so that we get the performance benefit immediately for users who are already following the constraint.
I'll start this today.
…ir dependencies' (#15373) As described in #15241: we currently compute per-target interpreter constraints by doing per-target graph walks, which is both a scalability bottleneck (because you can almost never use the constraints directly on a target: you must compute them) and complex for users to reason about.

This change adds a `ValidateDependenciesRequest` union, which allows backends to validate the computed dependencies of a target. The python backend uses validation to deprecate the condition from #15241. When that deprecation triggers, most/all callsites which currently use `create_from_compatibility_fields`, `create_from_targets`, or the new `compute_from_targets` can instead directly consume the ICs of a root target in the graph.

This change also (temporarily) adds an `InterpreterConstraints.compute_from_targets` method which checks (in transitive-dependency-linear time) that the dependencies of a target have a superset of its own `interpreter_constraints`. This method allows us to (again, temporarily: see above) apply the optimization of avoiding set merging if targets are already valid.

Reduces the runtime of un-memoized-but-cached `./pants check ::` by 10%. Fixes #15241, fixes #15301, fixes #11072.
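The fast-path optimization this enables can be sketched like so. All names here are illustrative, not the actual Pants implementation, and constraints are modeled as plain sets of interpreter versions: when every dependency edge already satisfies the subset condition, the root's own constraints are the transitive answer and set merging is skipped entirely.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True, eq=False)
class Tgt:
    """Toy target: constraints modeled as a set of interpreter versions."""
    constraints: frozenset[str]
    dependencies: tuple["Tgt", ...] = ()


def _walk(root: Tgt):
    """Yield each node in the transitive closure exactly once."""
    seen, stack = set(), [root]
    while stack:
        t = stack.pop()
        if id(t) in seen:
            continue
        seen.add(id(t))
        yield t
        stack.extend(t.dependencies)


def effective_constraints(root: Tgt) -> frozenset[str]:
    nodes = list(_walk(root))
    # Fast path: if every edge satisfies the subset condition, the root's own
    # constraints already equal the merged transitive constraints.
    if all(t.constraints <= d.constraints for t in nodes for d in t.dependencies):
        return root.constraints
    # Slow path: fall back to AND'ing (intersecting) over the whole closure.
    merged = root.constraints
    for t in nodes:
        merged &= t.constraints
    return merged
```

The edgewise check costs one subset comparison per dependency edge, while the fallback reproduces the old merge-over-closure behavior for targets that have not yet been fixed.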
(The change above was cherry-picked as #15407.)
(relates to #12652 and #11072)
Currently we do not require that the title of this ticket holds. Instead, the `interpreter_constraints` of a target are treated as the constraints of the sources of the target, rather than necessarily of the target and its dependencies (docs). In order to calculate the actual constraints of a particular target, we currently merge the `interpreter_constraints` of a collection of targets (with `InterpreterConstraints.create_from_targets`): either the target itself, its direct dependencies, or its transitive dependencies.

This is challenging from a few perspectives:
* Although `MyPy` and `Pylint` partition inputs via `CoarsenedTarget` (#15141) and can avoid target graph walks at the `@rule` graph level, they cannot avoid re-walking in order to compute the full transitive set of ICs for each root target without additionally doing a memoized graph-aware merge of `InterpreterConstraints`.
* Although the way `interpreter_constraints` are merged is well documented, it remains non-intuitive that a field declared in a `BUILD` file is not necessarily something that your code will run or be linted with: only something that applies to the direct sources.

To apply the requirement from the title, rather than ever computing transitive AND'd constraints (which require memoized set merging to make efficient), we would instead eagerly fail if a target's constraints were not a subset of each of its dependencies' constraints (which only requires memoized pairwise subset checks). It would still be the case though that code which succeeds in some cases (where none, or only some, of your dependencies are used) may fail in others (when the full transitive set is used).
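To make the contrast concrete, here is a minimal sketch of the two models side by side. The names are invented, and constraints are modeled as sets of interpreter versions rather than real IC strings: the current model intersects constraints over the transitive closure per root, while the proposed model only needs a pairwise subset check on each dependency edge.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True, eq=False)
class Target:
    name: str
    constraints: frozenset[str]  # e.g. {"3.7", "3.8", "3.9"}
    dependencies: tuple["Target", ...] = ()


def merged_constraints(target: Target) -> frozenset[str]:
    """Current model: AND (intersect) constraints over the transitive closure.

    Needs a graph walk (and, to be efficient, memoized set merging) per root.
    """
    result = target.constraints
    for dep in target.dependencies:
        result &= merged_constraints(dep)
    return result


def subset_violations(target: Target) -> list[str]:
    """Proposed model: eagerly check each dependency edge for the subset
    condition. Only pairwise checks are needed (a real implementation would
    memoize so that each node is visited once)."""
    errors = []
    for dep in target.dependencies:
        if not target.constraints <= dep.constraints:
            errors.append(f"{target.name} is not a subset of {dep.name}")
        errors.extend(subset_violations(dep))
    return errors
```

Under the proposed model, a target claiming `{"3.9", "3.10"}` while depending on a library constrained to `{"3.7", "3.8", "3.9"}` fails eagerly at the edge, instead of silently narrowing to `{"3.9"}` during a transitive merge.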
The argument in favor of the current behavior of `interpreter_constraints` is that it allows targets to begin claiming compatibility with newer versions of the language before their dependencies do (described in this post).