
What is our limit in terms of low level packaging #81

Open
jakirkham opened this issue Apr 9, 2016 · 7 comments

Comments

@jakirkham
Member

I have opted to break this out into its own issue, as it is a real-world problem that deserves careful consideration before proceeding. In particular, we sometimes need newer versions of system tools. This was inspired by conda-forge/staged-recipes#300.

One question is whether we package assemblers. This is interesting because we already package yasm. We need yasm to build other dependencies (x264, ffmpeg, and possibly other things in the future). Also, in yasm's case, it is packaged cross-platform (all platforms, even Windows), so I don't know that there is a way around it. We may need to add nasm in the future too.
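For readers less familiar with conda-build, this is roughly what depending on the packaged assembler looks like in a recipe. A minimal, hypothetical meta.yaml sketch (the package name and version are illustrative, not copied from an actual conda-forge recipe):

```yaml
# Hypothetical meta.yaml fragment: an x264 recipe using the packaged
# yasm as a build-time-only dependency (version string is illustrative).
package:
  name: x264
  version: "20160409"

requirements:
  build:
    - yasm      # assembler needed to build the sources
  run: []       # yasm is not needed at runtime
```

Because yasm only appears under `build`, it never becomes a runtime dependency of the resulting package, which is what makes packaging such low-level tools comparatively safe.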

There are also some cases where we have found it better to package our own build tools, like m4, bison, flex, libtool, automake, autoconf, pkg-config, etc. There are many reasons for this, ranging from the system versions being too old (often the case on Mac, sometimes Linux too), to having more consistency across platforms, to having more control over the build process. The line between too low-level and acceptable to package is still pretty fuzzy here. For example, should ar be packaged? It isn't that complicated, and it could be useful to have a newer version in some cases. Similar arguments could be made for the other binutils too.

One thought might be that we don't package OS-specific tools. However, I don't expect that to hold, as I think we definitely need to package patchelf, and we will want it in conda-forge so that we can ensure we have the latest version (it still hasn't hit 1.0, but it is getting close). There has also been discussion of having a newer version of clang for Mac, which would presumably require packaging. Though that point is far from settled.
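For context on why patchelf is hard to live without on Linux: it can rewrite a binary's RPATH after the build, which is what makes packages relocatable. An illustrative build-script fragment (the library name and paths are hypothetical):

```shell
# Illustrative post-build step (hypothetical library path): point the
# shared library's RPATH at the environment's lib directory, relative
# to the binary itself, so the package relocates cleanly with its env.
patchelf --set-rpath '$ORIGIN/../lib' "$PREFIX/lib/libexample.so"

# Verify the change took effect.
patchelf --print-rpath "$PREFIX/lib/libexample.so"
```

Since this kind of rewriting has to happen at package build time, an old system patchelf (or none at all) is a real problem, which argues for packaging it ourselves.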

Another thought might be that we don't want to package standard system development tools. However, this point could be somewhat contentious, as we use gcc from a package at present. I expect this will be a topic of a fair amount of debate, especially as we engage other conda-recipe communities that have opted for different build strategies.

Please feel free to share your thoughts on this point.

@jakirkham
Member Author

cc @frol @msarahan @ocefpaf

@PythonCHB
Contributor

Let's not forget that a pretty large class of users want pre-built packages, but also need to build their own code.

And it'd be really nice if conda supplied the complete build environment, at least when it's NOT the standard one they are likely to have on their machines.

So it would be nice to have all this packaged up.

@ocefpaf
Member

ocefpaf commented Apr 9, 2016

@jakirkham thanks for organizing this issue. I will re-state my opinion from the original issue here.

I agree with @msarahan that we should avoid using low-level build tools when packaging, but I am not against letting contributors package them. However, as you pointed out in many of the examples above, we might even break that rule and end up using them out of necessity.

Basically, I am OK with merging conda-forge/staged-recipes#300. I have no idea when, or even if, we (conda-forge) will use the packages from conda-forge/staged-recipes#300 here.

@frol
Member

frol commented Apr 10, 2016

I haven't suggested using my Binutils package as a build/run dependency; in fact, I personally pre-install the package into the root environment in our company's Docker build image and expose the binaries at the system level (symlinked into /usr/local/bin/), so I don't use it as a build dependency anyway (the same story with GCC 5.x).
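A rough sketch of the setup described here, for readers who want to replicate it. The image, channel, and paths are assumptions for illustration, not the actual company Dockerfile:

```dockerfile
# Hypothetical Dockerfile sketch (base image, channel, and symlink
# targets are assumptions): install newer toolchain packages into the
# root conda environment, then expose them system-wide.
FROM continuumio/miniconda:latest

# Pre-install the toolchain packages into the root environment.
# Channel name is illustrative.
RUN conda install -y -c some-channel binutils gcc

# Symlink the binaries into /usr/local/bin so ordinary builds pick
# them up without the packages ever appearing as build dependencies.
RUN ln -s /opt/conda/bin/ld /opt/conda/bin/as /opt/conda/bin/gcc /usr/local/bin/
```

The key point of this approach is that the conda packages act as a delivery mechanism for the image itself; recipes built inside the container never list them as dependencies.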

@ocefpaf
Member

ocefpaf commented Apr 10, 2016

> I haven't suggested using my Binutils package as a build/run dependency; in fact, I personally pre-install the package into the root environment in our company's Docker build image and expose the binaries at the system level (symlinked into /usr/local/bin/), so I don't use it as a build dependency anyway (the same story with GCC 5.x).

To be honest, I am not concerned about your use of those packages. As a power user, I am sure you know what you are doing. I am concerned about people, like me, who might see these packages available and then submit a recipe that uses them. Or another type of person, like me 😜, who will miss those packages in the dependency list when reviewing a PR and merge it without noticing them.

We need some sort of rule to ensure they will be used by power users and/or for local experiments only. We could split these into another channel, but I don't really like that idea... I'd prefer that we strengthen our review process to avoid the cases I mentioned above.

@frol
Member

frol commented Apr 10, 2016

@ocefpaf I can't agree more! I love Conda-Forge exactly because of the quality and automation it provides! Please, take your time to find the best solution.

@jakirkham
Member Author

Thanks @frol for being so patient and understanding. I know it is not fun to have things held up in the queue.

We certainly appreciate that old binutils and compilers are a pain, and we want to work around this as well. There is some good discussion happening in issue #29, where we hope to come to a workable resolution for all parties. We are currently leaning towards having newer versions in a Docker container. At this point, it is more a question of how we do this.
