Is it possible to force consistent versions for a library and an associated build tool with new-build? #5105
Unfortunately, we lack the ability to express this kind of constraint; ironically, the motivation for introducing qualified goals was to become more liberal, allow decoupled install-plans, and avoid having a "single version of every dependency", and that is now working against you here. So I'm afraid, for the case of …
I believe that this type of constraint is not that hard to implement (I hope @grayjay, who is the expert in this area, agrees), so it'd help if someone came up with a proposal for an extension of the constraint syntax.
This is an interesting question. I think the right generalization is that a given build-tool generates code that expects a given version of a given library to be in scope. So executables should be able to declare that, when used as build-tools, they force transitive deps on certain libs. Perhaps the idea could be to add an …
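To make the shape of that idea concrete, here is a minimal sketch of what such a declaration might look like in the build tool's own package description. The field name `executable-requires` is borrowed from later in this thread; it is not an existing Cabal field, and the dependency list and exact syntax here are illustrative assumptions, not a worked-out design.

```cabal
-- Hypothetical sketch (assumed syntax): the hspec-discover package declares
-- that code generated by its executable expects a matching version of the
-- hspec library to be in scope in the consuming component.
name:    hspec-discover
version: 2.4.8

executable hspec-discover
  main-is:       hspec-discover.hs
  build-depends: base, directory, filepath   -- abbreviated, illustrative
  -- Proposed, not implemented: when this executable is pulled in via
  -- build-tool-depends, constrain (or add) this dependency in the using
  -- component.
  executable-requires: hspec == 2.4.8
```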
If we are already at the point where we agree that this requires code changes to cabal, may I be so bold as to request that we try to find a solution that does not require changes to thousands of packages? (read: Can we change …)
My suggestion, if it works as I think it should, would not require changes to any downstream packages, just to the …
(i.e. it would result in adding …)
Hmm, I think that would still require changes to every affected package.
It would be good to work out the impact here. While there are indeed a ton of packages declaring a dep on …, if the problem is only on the order of 50 packages total, then revision fixups seem feasible. If the problem is closer to 1k packages, it may be worth looking at a specific backwards-compat fix for just this case. I wouldn't like any general-purpose solution whose default behavior continues to be to assume that you can just go rooting around in the path for build-tools.
In the past … Packages that use … (e.g. https://github.com/hspec/hspec-example/blob/master/test/Spec.hs)
Well, the current behavior is a breaking change + there were no better solutions in the past + I'm not aware of any problems that were caused by using executables from (transitive) dependencies as build tools. What I know caused trouble in the past is that it was not possible to (a) depend on a package that only provides an executable or (b) specify a version for a build tool. In that regard, …
So one concrete suggestion would be that whenever … I'm not sure how people feel about that in general. One thought might be that for cabal files specced prior to, say, 2.0, this behavior would be in place, but for files specced to a newer version (one that includes …) it would not. That way we can ease in the new semantics, but remain backwards-compatible...
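As I read this suggestion, the legacy behavior would be keyed off the cabal-version a package declares. A rough sketch of the intended contrast follows; the cutoff version and the field layout are assumptions on my part, not anything decided in this thread.

```cabal
-- Sketch of the proposed gating (assumed semantics): a package that still
-- declares an old spec version, e.g. `cabal-version: >=1.10`, would keep the
-- legacy behavior of picking up build tools from the executables of its
-- (transitive) build-depends, while a package opting into 2.0 or later would
-- have to name the tool explicitly:

cabal-version: 2.0
name:          some-consumer
version:       0.1.0.0

test-suite spec
  type:               exitcode-stdio-1.0
  main-is:            Spec.hs
  hs-source-dirs:     test
  build-depends:      base, hspec == 2.*
  build-tool-depends: hspec-discover:hspec-discover
  default-language:   Haskell2010
```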
That would work for me. We would still need to solve the "consistent versions for a library with associated build tool" scenario when …

Another option, that would be even more seamless for the user and solve both issues at the same time: add a field

```cabal
provide-build-tool: hspec-discover

executable hspec-discover
  ...
```

Older versions of cabal would ignore … If we don't like build tools to take effect transitively, we would need something like a … What I prefer about this approach is that it would be more DRY for the end user.

All that said, any solution that addresses the problem is probably fine with me!
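For contrast, here is a sketch of what the consuming side would look like under that proposal. It is only an illustration of the proposal's intent, not existing Cabal behavior: the downstream package names just the library, and the matching tool would come along through the dependency.

```cabal
-- Hypothetical consumer under the provide-build-tool proposal: no
-- build-tool-depends line, because depending on hspec (which in turn depends
-- on an exact version of hspec-discover, whose .cabal file would carry
-- `provide-build-tool: hspec-discover`) would bring the matching executable
-- into scope automatically.
test-suite spec
  type:             exitcode-stdio-1.0
  main-is:          Spec.hs
  hs-source-dirs:   test
  build-depends:    base, hspec == 2.*
  default-language: Haskell2010
```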
I'm very against "if you transitively depend on X" => "something happens". If you need something, you should explicitly specify it. Similarly, modules of transitive dependencies aren't in scope; you have to depend explicitly on the package. That's IMO a very good thing, and we don't want to make that worse. This has to be emphasized: explicit behavior is better. Anyone could make an … Per-component builds make it possible to not build executables at all. I don't need … Having a constraint between a build-tool and its companion library's versions is a valid thing to ask. I have no concrete proposal for how that would look in syntax.
ok, so this sounds like we can do the "whenever new-build encounters a dependency that includes an executable component, then the executable itself should be brought in-scope to the path" thing only as a backward-compat hack. Then we could use something like …
Note that there are other expressivity issues and proposals:

…
I'd be very careful not to make a local optimisation, because every change to Cabal-the-spec has to be supported forever. I'm unsure that …
@gbaz I agree with adding a new thing like …

NB: there's almost no package that breaks here (otherwise we would have considered doing something about this; I've been constantly monitoring the …
Fair point on this not triggering significant breaks. That's an important consideration when putting in backwards-compat shims. Nonetheless, for clarity, my suggestion would not break the "finally we can have library+executable packages which don't force the executable to be built and brought into scope" property -- you could still have it, it just would require using a newer cabal-version field. I can see why this might limit the packages that benefit from this too drastically, however.

@phadej a skim over the other expressivity proposals reveals nothing that can solve this. My suggestion of … Executables can and often do generate code. The code they generate will either A) assume some library lying around or B) be completely self-contained and reproduce the common-lib functions. Often we lean towards the second, at the cost of lots of "vendoring" of the common portions of generated code. It also makes such tools much harder to write. Or consider something like https://github.com/awakesecurity/proto3-suite -- here, it makes no assumptions about utility libs, but clearly the generated code requires a bunch of packages to be in scope, not least …

So an executable being able to state what packages need to be in scope for the code it generates to actually be able to compile seems pretty reasonable to me? And I imagine in the long run it will actually let a lot of code-gen tools be slightly less fragile and builds with them be more reliable and maintainable. Put another way -- it seems to coincide with the general goals of the …
@gbaz I'm not saying that something already proposed would solve this; I want us to pause and think in peace about whether some other old but as yet unsolved problem could also be solved by the solution to this one, i.e. whether the solution is generalisable.
I'd also point out that we need to think very carefully about the consequences of …
I confess I don't understand the concern. Anything in …
Hey, thanks a lot for all your input. However, may I request that we restart the discussion? What I suggest is that we base claims on evidence and try to find a practical solution. Rationale: at least in my experience, making unfounded claims and reciting mantras does not yield the best possible results. To make my point, I'll now try to back up two claims with evidence.

First claim: Almost no packages break
I was unable to support this claim with evidence. Here is what I did: To exclude possibly old, outdated packages that don't build anyway, I only looked at the subset of packages that are on Stackage. Stackage Nightly 2018-01-29 includes 2651 packages.
This leaves us with 233 packages that break with …, 94% of the packages that use ….

Second claim: This wasn't intended to work in the first place
I was unable to support this claim with evidence. The corresponding code, git commit 7dc0a10, and issue #1120 give a different picture:

…
One more thing: the discussion on this feature and a code comment use the term temporarily. I want to point out that temporarily in this context means "temporarily, while the action passed to …"
@sol I think the claim about package breakage was intended to refer to installation of the packages themselves, not the running of their test-suites. I.e. while many packages use hspec-discover, they will continue to be able to new-build; the problem will come with new-test. I lean towards wanting to keep the test-suites working as well. But I can see the argument that this is less of an issue than if the installs themselves all went kaput -- in the latter case, I think that some way of ensuring backwards-compat would almost certainly be necessary. In this case, it seems a bit more debatable. That said, I'd like to see what @23Skidoo has to say, and we should probably look for a wider range of input among cabal devs as a whole, since if we do choose to break backwards-compat, it should be done very consciously and with our eyes open.
I'd also point out that I'd like to see a concrete, specced-out proposal on how to extend our expressibility to address the problem, and even more importantly, we should define what problems we're trying to solve, in order to be able to evaluate the possible solutions we come up with. And to be frank, …

That being said, I'd be happy to help with articulating the larger problem statement and evaluating proposed solutions, if anyone here is willing to invest the time and effort to tackle this non-trivial problem, which sooner or later will need to be addressed anyway.
Herbert, yes, you've made very clear where you stand w/r/t backward-compat and test suites. I think it is important nonetheless to get input from a wider range of cabal devs, as this really is a policy question in a sense. I think what you say makes perfect sense for hackage trustee revisions -- but it is less clear to me that the exact same considerations should apply to how the cabal tool manages the migration to new-build, where streamlining such things could really help with smooth adoption. Another sort of middle ground that would leave everyone a bit grumpy is (assuming we have the new tool constraint we want, of the sort I suggested or otherwise) to detect when a cabal file falls victim to this problem in …
Sorry for not following this discussion, I have a temperature and my brain is fried.
This discussion has gotten a bit confused because there are actually two distinct issues underlying the top-level problem "hspec/hspec-discover don't work with new-build":

…

Let's talk about (2) first, because it's a core problem which I kicked down the road when I added … So let me first state that … If this truly is unacceptable, I'm willing to be convinced that some packages should get special dispensation, whereby a …

Issue (1) also has some unique challenges:

…

Supposing that … There are probably other ways to solve this problem, but figuring out how to solve constraint (3) is the crux of the issue. HTH.
@ezyang I think the confusion between the two issues infected your understanding of what I was proposing (or perhaps my expression of it). On issue #2, I did not suggest having a "special dispensation toggleable from hspec-discover's Cabal file". Rather, I agree with you -- the solution, should we choose to implement it, would be a special workaround that's gated by … On issue #1, I did propose a special field -- I understand there are some reservations about this, but there is as of yet nothing concrete -- just the thought that maybe there is potentially something more general or better. So I think you confused my …
I guess the weird thing about putting …

and then have two versions of foo, one which has …
@ezyang fwiw, your scenario even holds if …
My understanding of qualified goals is not amazing, and I'm mainly working off this post: https://www.well-typed.com/blog/2015/03/qualified-goals/

That said, I don't quite understand the issue described by @ezyang here:
I mean... the referenced package isn't solved in a qualified goal, as I understand the post -- rather, its dependencies are. So the idea is that, transitively, I guess, all qualified goals themselves have qualified dependencies. But in this case, an …

So on the whole, the install-plan of the target would still be exactly what it was before -- but being able to depend on that target would also depend on the non-modular fact that you need to depend on a particular version of a particular library as well. In particular, in what I'm envisioning, the provided executable package would not necessarily depend on the libraries listed in its …
I don't know what you mean about …, so let's talk about setup dependencies. In this case, when I write … But maybe that is not what you are thinking about, because you go on to say:
But this is still pretty weird. If …
...and when would you actually want different solutions for these?
Fair enough. I was thinking about … But on to the main issue. You write:
And you ask when I would want that. Well, in the concrete set of cases I'm imagining, I don't see the issue. In particular, this handles the case where a build-tool is a code generator and the generated code needs to make use of functionality provided by a certain library. Upthread these were called "companion libraries." So imagine I have a code generator that, among other things, provides instances for …

So let me turn the question around -- what is the use-case you can imagine where we would ever want the same solution? :-)
I tried to draw a diagram of my understanding of the relationship between qualified goals and … The circles represent different goal qualifiers. (Currently, every goal is qualified in cabal, but build targets usually have the "top-level" qualifier.) This example assumes that the two test suites' packages already have different qualifiers, as if they were used by different build tools that aren't shown in the diagram. Qualifiers:

…

In this example, the two … As I understand it, the … I can see at least two possible meanings for …:
I think that the first option would be easier to implement, but it would require the users of build tools to know what dependencies the build tools add to their code. I also don't think that either option would significantly change the complexity for the solver.
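To make the two readings concrete, here is how they would differ from the perspective of a package using the tool. `executable-requires` is the proposed field being discussed, not an implemented one, and the behavior described in the comments reflects my reading of the two options rather than settled semantics.

```cabal
-- Option 1 (version constraint only): the user still lists the companion
-- library explicitly; the tool's executable-requires merely forces the chosen
-- hspec version to be consistent with the tool's version.
test-suite spec-option-1
  type:               exitcode-stdio-1.0
  main-is:            Spec.hs
  build-depends:      base, hspec          -- user must name hspec themselves
  build-tool-depends: hspec-discover:hspec-discover

-- Option 2 (implicit dependency): the tool's executable-requires would also
-- inject the companion library into this component's build-depends, so the
-- user would not have to write it at all.
test-suite spec-option-2
  type:               exitcode-stdio-1.0
  main-is:            Spec.hs
  build-depends:      base                 -- hspec added implicitly by the tool
  build-tool-depends: hspec-discover:hspec-discover
```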
@grayjay To clarify, … The meaning 2 (implicit bound) feels very powerful. Won't the first case work where …? I.e. … OTOH, that breaks if I use the (non-existing) … Please correct me if I understood something wrong.
Thank you for the very nice diagram! I hadn't considered option 1 before, but it seems like it adequately addresses the use-case, and it is also more explicit in the way that people seem to like. Option 1 does seem like it would necessitate a good error message if there was an …

As for "As I understand it, the executable-requires dependency only needs to constrain versions across different qualifiers, it doesn't need to merge the qualifiers." -- yes, that is what I think is the case too.
I thought that both test suites could use the same …
Yes, I think that only adding a version constraint would also handle the hspec/hspec-discover case, and it would probably be easier to implement. I didn't consider that the generated code's dependencies might only be known once the build tool runs. It seems hard to track dependencies perfectly in that case. Maybe we should distinguish between optional and required dependencies for the build tool's generated code.
I'm not sure I understand. What would break?
Someone could use … I think that the build-tool author could provide hints on what constraints should be put on the dependencies needed to use the generated code, but the user should explicitly (which doesn't exclude conveniently) take them into use. The reasoning: as the "code generation" is dynamic, the dependencies might be too. I think for now only adding constraints as hinted by the build tool … Say if I need to draw a graph. I think I should, but I first have to learn how to make such pictures quickly :)
@phadej I see what you mean. I think it makes sense to give the build tool user control over the packages to depend on, and allow the build tool to constrain those dependencies.
Let me just throw out that once I teach Cabal about cross-compilation, we'll need to keep executable and setup dependencies as qualified goals, because they are built with a different GHC than the libraries are! So @grayjay's point about adding a version constraint while keeping the qualification is quite necessary.
Simplified scenario: We have a library that ships with a build tool. The library and the build tool are meant to be used together and must be of the same version. With sandboxed builds you would just depend on the library and `cabal` would make sure that the build tool is available when building.

With `new-build` this approach will lead to a build failure. The user is required to specify `build-tool-depends: <package>:<executable>` to make it work. However, as far as I can tell, there is no guarantee that the build tool and the library will be of the same version.

Is there a way to achieve this without specifying exact dependency versions on both the library and the build tool (say I would want to continue to use e.g. `== 2.*` instead of specifying `== 2.4.0` in two places)?

For my real use case, the library and the build tool are actually two separate packages, `hspec` and `hspec-discover`, where `hspec` depends on an exact version of `hspec-discover`.

Scope: 1.3k packages on Hackage depend on `hspec` + an unknown number of in-house projects.
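To make the reported scenario concrete, here is a minimal sketch of the kind of stanza being described, with illustrative version numbers. As far as this thread establishes, the bound in `build-tool-depends` is independent of the one in `build-depends`, so the only way to guarantee matching versions today is to repeat an exact version in both places, which is exactly what the reporter wants to avoid.

```cabal
-- Status quo being discussed (illustrative versions): nothing ties the
-- version of the hspec-discover tool to the version of the hspec library,
-- so the exact version has to be repeated to keep them in sync.
test-suite spec
  type:               exitcode-stdio-1.0
  main-is:            Spec.hs
  hs-source-dirs:     test
  build-depends:      base, hspec == 2.4.0
  build-tool-depends: hspec-discover:hspec-discover == 2.4.0
  default-language:   Haskell2010
```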