C++14 #125
Here are some very cursory thoughts as I haven't actually done any deep research into this.
The idea is not to use an arbitrary CentOS 5 image that you have to do a bunch of work to configure, but rather to use the manylinux1 image directly, which should have all the right things set up for you. Then you just need to install/compile any additional binary dependencies you have into that image.
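As a sketch of that flow (the official manylinux1 image name is real; the mounted path, Python version, and output directories are placeholder assumptions):

```shell
# Hypothetical build flow inside the official manylinux1 image.
# /io is assumed to be your project checkout mounted into the container;
# cp36-cp36m selects the CPython 3.6 interpreter shipped in the image.
docker run --rm -v "$(pwd)":/io quay.io/pypa/manylinux1_x86_64 bash -c '
    /opt/python/cp36-cp36m/bin/pip wheel /io -w /io/dist/ &&
    auditwheel repair /io/dist/*.whl -w /io/wheelhouse/
'
```

`auditwheel` is preinstalled in the manylinux images, and `repair` is what grafts any extra shared-library dependencies into the wheel.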
While -static-libstdc++ is probably reasonable as it's not part of the manylinux base, I'm not sure why you'd need to statically link gcc. Is that required for doing the C++ static linking? Does libstdc++ require symbols too new for those provided in the manylinux1 policy? To answer your questions:
I don't know about current recommended practice, but yeah, bundling libstdc++ is going to be unavoidable as it's not part of the standard manylinux base.
It's just... being slow. IIRC there is a meta-issue tracking it in the manylinux repo, pypa/manylinux#179. Slowness in getting the auditwheel PRs for this merged is certainly a part of that, but I am going to try real hard to get that done this weekend and cut a release soon. Until there is support in pip and the manylinux2010 image is released, though, it won't be super useful. But I'm hoping auditwheel support might be a kick in the butt...
That's detailed in PEP 571:
The manylinux PEPs are pretty accessible as far as PEPs go, fwiw.
I suspect we should hold off on documentation for this until we ship manylinux2010 but no, unfortunately docs are a bit lacking and I would love to see more. I suspect this would best belong in the Python Packaging Guides, where we currently have 0 manylinux docs (which is a shame). I did give a talk about this at PyGotham but that recording isn't up yet. I am hoping to have a reprise at PyCon US 2019.
Yes, the idea is not to support CentOS 5, per se, but rather that CentOS 5 has a sufficiently ancient compiler toolchain to support most modern Linuxes, since core ABIs maintain strict backwards compatibility.
Thanks! Just a few clarifications: gcc 4.8.2, installed on the manylinux1 image, doesn't have (proper) support for C++14. That's why I had to look for another image with a newer compiler. The toolchain on the manylinux1 image isn't an option for my project. Likewise, while it looks like libstdc++.so is bundled in manylinux1, the version is too old to support C++14. I don't believe it is possible to use a newer compiler with newer language support with an older libstdc++. According to PEP 571, the libstdc++ version that will be supported in manylinux2010 is that of gcc 4.4, which still doesn't have full C++14 support. I guess that's not surprising; CentOS 6.9 uses gcc 4.4.7. 😞 That's unfortunate, as it looks like manylinux2010 won't improve the situation for me. It sounds like statically linking libstdc++ is the path forward? For individual extension modules, it's not such a big deal, though it does bloat the modules somewhat. But there might be problems in the long run importing multiple extension modules that each link libstdc++; I'm not sure.
All that makes sense; sounds like you might be waiting for the CentOS 7-based policy to get native GCC support in manylinux. I'd agree that you will need to continue to statically link libstdc++ (or potentially, dynamically link it and bundle it). I am not super surprised that it's challenging to make C++14 work with the manylinux policy, because it's designed to ensure compatibility even on super old production Linux systems. This allows developers to build one wheel, distribute it on PyPI, and have it work for almost everyone. But it's going to cause challenges when you need the latest and greatest C++ support!
Not really "latest and greatest", unfortunately. C++17 has been out for a while now. GCC 5.2, the first to support C++14, dates back to around the time of Python 3.4. I don't think I'm waiting for a CentOS 7 policy, to be honest; by then C++ will have moved forward too, and I'm no more likely to code in older C++ than in Python 2.7. It's unfortunate that there isn't better support for Python extensions written in C++. Not only are there lots of useful C++ libraries wrapped in Python, but for many developers C++ is the systems programming language of choice.
Actually you're right, you're not waiting for a CentOS 7 policy, but not for the reason you think! CentOS 6 mostly does support the latest-and-greatest C++, because RH is maintaining specialized gcc backports for it. (These include internal magic so that the resulting binaries can use modern features but still work with the old libgcc/libstdc++ on CentOS 6.) So the manylinux2010 docker image will ship with gcc 7.3.1 or better, and will keep getting updates as long as RH keeps releasing them.
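Once the image ships, a quick way to confirm which compiler it carries (the image name below follows the quay.io naming convention used for manylinux1; confirm the official tag in the pypa/manylinux repo):

```shell
# Print the devtoolset gcc version shipped in the manylinux2010 image
# (image name is an assumption; check pypa/manylinux for the official one).
docker run --rm quay.io/pypa/manylinux2010_x86_64 gcc --version
```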
There's absolutely nothing Python-specific about this; the challenge is entirely in figuring out how to produce any kind of pre-built binary that just works across most Linux distros. It's unfortunate that shipping software is such a dark art, but that's the reality we live with. No-one's more aware of the limitations of the manylinux toolchain than its maintainers, but frankly it already works miraculously better than anyone thought was possible a few years ago, and that's despite having zero resources, zero influence on compiler or distro vendors, and relying entirely on extremely busy volunteers squeezing out a few hours here and there. If you want to help or pay for better C++ support then that'd be awesome.
It's not so much that there isn't better support, but that it isn't possible to support it. The point of manylinux is to provide a build environment in which you can assume a common minimal core for nearly all Linux systems. glibc is that core. In this sense, more recent releases of C++ are in the same situation that Fortran is: if you want to ship a Python extension that has that as a dependency to all PyPI users, you can't assume it's already installed on the base system, so you'll have to bundle the runtime. I don't think there's any way to avoid that need; everyone has to solve this problem in some way to portably distribute software on any OS. If your concern is that the documentation/tooling/support for more recent releases of C++ is not great, I apologize for the bad experience, and we'd appreciate assistance with that! I work on auditwheel in my spare time and I don't develop C++ Python extensions, so I don't really have the expertise to fix that. :(
Well, how can I help?
Since this is a top Google hit for this problem, I'd like to add that it is very important to hide the libstdc++ symbols when linking it in statically. If you don't do this, you will run into trouble when importing modules that dynamically link the older libstdc++ (e.g., segfaults that only happen if import statements are in a certain order).
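One way to do that with GNU ld is `-Wl,--exclude-libs,ALL`, which keeps symbols from statically linked archives (including libstdc++.a) out of the module's dynamic symbol table. A minimal sketch, in which the module name, source file, and entry point are all hypothetical placeholders:

```shell
# Hypothetical extension module; the source and entry point are placeholders.
cat > mymodule.cpp <<'EOF'
#include <string>
extern "C" const char* greet() {
    static std::string s = "hello";  // pulls in libstdc++ code
    return s.c_str();
}
EOF

# Link libstdc++/libgcc statically and hide their symbols from the
# dynamic symbol table so they can't clash with another module's copy.
g++ -std=c++14 -fPIC -shared -o mymodule.so mymodule.cpp \
    -static-libstdc++ -static-libgcc \
    -Wl,--exclude-libs,ALL

# Sanity check: no libstdc++ (_ZSt.../_ZNSt...) symbols should be exported.
nm -D mymodule.so | grep '_ZNSt' || echo "no libstdc++ symbols exported"
```

A linker version script that exports only your module's init symbol achieves the same effect, with finer control.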
I'm trying to build a Python 3.6 manylinux1 wheel of a package implemented in C++14. I perused #39, and also recent traffic on wheel-builders as suggested in that issue, but didn't find too much to guide me.
After some trial and error, I managed to get a working wheel (I think!) via the following process:

1. Compile and link with `-static-libstdc++ -static-libgcc`.
2. Run `auditwheel repair` on the resulting wheel.

I was wondering, is this current recommended practice? `-static-libstdc++` or bundling libstdc++.so seems unavoidable for C++14 given the CXXABI version restriction.

FWIW I'm not particularly interested in supporting CentOS 5 with my package. I'm just looking to create a binary Linux wheel that works on reasonably modern systems that I can actually upload to PyPI.
Thanks much in advance.