[FEA] PyPi release #10382
After some digging I found this article, but it is outdated. You obviously didn't try hard enough. The folks over at PyTorch are fine publishing to PyPI from their own channel. I believe you can do it too.
@caniko
There is no PyPI release planned as far as I know, @jiapei100. I am trying to advocate for it in this issue.
Hi both -- thanks for reporting. While we don't have plans to support pip packages in the near term, we are evaluating the longer-term feasibility. Many of the issues discussed in the blog post are still relevant. From my understanding, PyTorch hosts its own wheels outside of PyPI and we have no plans of doing so. Please also see the discussion in #6979, and the initial work in rapidsai/rmm#976.
Thank you for the response. This project is inaccessible to my packages until you release on PyPI. If PyTorch can release to and host their own PyPI index, so can you. How large are your binaries? You may not need to create your own channel. The article is outdated, and the fact that you are switching to scikit-build shows that your build system is outdated. Personal opinion: in my scientific circle, engineers/developers use pip and end users use conda. Conda is not the right place for this package, as it is not for the end user.
Thanks @caniko - I agree that pip is more familiar and accessible to most Python users. There are many problems we need to solve before we're able to publish pip packages and we are actively working towards that goal. |
Original author of the referenced blog here, although I am no longer an NVIDIA employee. Producing proper wheel packages is arguably as bad now as it was when I originally wrote it, and there are challenges for cuDF that don't exist for PyTorch. For example, libcudf depends on Apache Arrow's CUDA module, which in turn uses the CUDA driver API. The CUDA driver library only ships as a shared library, so it cannot be statically linked into libcudf. This means that to ship a compliant wheel for PyPI we'd need to include the CUDA driver shared library in our wheel, which is against the terms of the NVIDIA CUDA EULA. PyTorch used to get around this problem by shipping non-compliant wheels that didn't include everything they linked against, but they may have since moved towards dynamically loading the CUDA libraries as a workaround. Someone could try introducing a similar workaround in Apache Arrow, but that's a development and maintenance burden most projects wouldn't want to take on.

On top of that, when CUDA 12 releases and is supported, you end up needing separate packages for separate CUDA versions, which the pip solver can't take into account; you can end up loading a CUDA 11 PyTorch and a CUDA 12 cuDF and getting symbols clobbered at runtime. The Python wheel packaging system was built for Python packages with small C extensions, and it has been retrofitted for projects like TensorFlow, PyTorch, RAPIDS, etc. that are essentially C++ packages with small Python wrappers. If you ask those projects about their attempts at supporting wheels and PyPI, everyone will share the same pain and misery.
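For readers unfamiliar with the dynamic-loading workaround mentioned above, the gist is to open the CUDA driver library at runtime with `dlopen` instead of linking against it at build time, so the wheel never has to bundle it. This is an illustrative Python sketch using `ctypes`, not how PyTorch or Arrow actually implement it; the soname candidates and fallback behavior are assumptions for the example.

```python
import ctypes


def load_cuda_driver():
    """Try to open the CUDA driver at runtime instead of link time.

    Returns a library handle, or None when the driver is not installed
    (e.g. on a CPU-only machine), letting the caller fall back gracefully
    rather than failing at import with a missing-symbol error.
    """
    # Candidate names: 'libcuda.so.1' is the usual Linux soname,
    # 'nvcuda.dll' covers Windows. These are illustrative, not exhaustive.
    for name in ("libcuda.so.1", "libcuda.so", "nvcuda.dll"):
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    return None


driver = load_cuda_driver()
if driver is not None:
    # cuInit must be called before any other driver API function.
    print("cuInit returned", driver.cuInit(0))
else:
    print("CUDA driver not found; taking a CPU-only code path")
```

A C++ library would do the equivalent with `dlopen`/`dlsym` and function-pointer tables, which is exactly why it is a maintenance burden: every driver entry point used has to be resolved by hand.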
Thankfully RAPIDS projects are open source, so you can contribute towards this effort if it's something that would enable your use cases! I will personally be happy to review any contributions you make. If you aren't willing to contribute code, I'm sure someone else would happily take funding to work on this problem. In lieu of that, please do not downplay the amount of effort made by the maintainers of cuDF and other RAPIDS packages to support users; they work day and night to try to do right by their users.
Overall a very informative comment from an ex-insider's perspective. Thank you. Nobody is downplaying anything except the PyPI publication effort. Obviously a lot of work went into this package; kudos to the authors. With that said, to say that the Python wheel packaging system wasn't made for this is ridiculous. You obviously don't know exactly what PyTorch did to fix their CUDA problem, so why make assumptions for argument's sake? Let us push the project to new heights instead of arguing and holding it back. Open source or not, the availability of this package to the Python community is limited because of this decision. Luckily one person is trying to solve the problem, alone; kudos to him.
My entire stack is currently Poetry, and I can't switch to conda.
I wanted to use cuSpatial, and was sad to find out that you only release source + conda binaries. Could you also release on PyPI?
I am assuming that your binaries are too large for PyPI, so you will have to set up your own PyPI index, similar to having your own channel in conda. I hope you do this; mixing conda with Poetry isn't ideal.
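For concreteness, a self-hosted index works with stock pip via `--index-url`/`--extra-index-url`; PyTorch's wheel index is a real example of this pattern, while the RAPIDS URL below is purely hypothetical.

```shell
# PyTorch's self-hosted CUDA wheel index (real, shown for illustration):
#   pip install torch --index-url https://download.pytorch.org/whl/cu118
#
# A hypothetical RAPIDS equivalent: cudf resolves from the extra index,
# everything else still comes from pypi.org:
#   pip install cudf --extra-index-url https://pypi.example.com/simple
#
# The extra index can also live in pip's config, so users don't need a flag.
# (This prints nothing if no extra index is configured.)
python -m pip config get global.extra-index-url 2>/dev/null || true
```

Poetry supports the same idea through secondary package sources in `pyproject.toml`, so a self-hosted index would fit the commenter's Poetry-based stack as well.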