Python Binary Distribution with multiple configurations #605
Comments
Using cibuildwheel might be a reasonable way to build PyPI-compliant wheels? Would need investigation.
This is also relevant to the CUDA version used to build the wheel. The
This is to be expected. If we build with 11.2 instead, this would be compatible with CUDA 11.2+ (but presumably not 12.x) due to changes in how NVRTC is packaged from CUDA 11.3 onwards. See this NVIDIA blog post.
We can potentially provide 11.0 wheels for Colab, and 11.2 wheels for more recent CUDA support (with tweaks to reduce the build matrix, potentially?). Again, this would lead to more breakage of wheel filename conventions to differentiate the different CUDA versions (and dependencies on external .so's are breakages anyway).
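For context on the filename-convention point: a wheel filename only carries the distribution name, version, an optional build tag, and the Python/ABI/platform tags, so there is no standard slot for a CUDA version. A small sketch using the `packaging` library (the filename is a made-up example, not a real release artifact):

```python
# Sketch: the components pip understands in a wheel filename -- note there is
# no field for a CUDA version. The filename below is a hypothetical example.
from packaging.utils import parse_wheel_filename

name, version, build_tag, tags = parse_wheel_filename(
    "pyflamegpu-2.0.0rc0-cp38-cp38-linux_x86_64.whl"
)
print(name)                          # pyflamegpu
print(version)                       # 2.0.0rc0
print(build_tag)                     # ()  -- optional build tag, rarely used
print(sorted(str(t) for t in tags))  # ['cp38-cp38-linux_x86_64']
```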
Rather than renaming wheels, we could publish separate distributions which all provide the same import package, i.e. setuptools.setup(name="pyflamegpu-console", packages=["pyflamegpu"]). In this case, the generated wheel would be named after the distribution. If we were uploading these to pip, it would be a separate package, available under its own name. Package extras may be another option (e.g. an optional vis extra?). So if we do go down the multiple packages route…
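A minimal sketch of the two options mentioned above, with hypothetical names and versions (not a committed layout):

```python
# setup.py sketch -- option 1: a separately-named distribution ("pyflamegpu-console")
# that still installs the same "pyflamegpu" import package.
import setuptools

setuptools.setup(
    name="pyflamegpu-console",   # what users would `pip install`
    version="2.0.0rc0",          # hypothetical version
    packages=["pyflamegpu"],     # what users would `import`
)

# Option 2 sketch: extras on a single distribution, e.g. `pip install pyflamegpu[vis]`.
# Extras can only add optional *dependencies*; they cannot swap in a differently
# compiled extension, cannot be made mutually exclusive, and have no default.
#
# setuptools.setup(
#     name="pyflamegpu",
#     version="2.0.0rc0",
#     packages=["pyflamegpu"],
#     extras_require={"vis": ["pyflamegpu-vis"]},  # hypothetical extra / dependency
# )
```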
Multiple CUDA versions might be more challenging / don't fit as an extra, when you cannot specify a default or make extras mutually exclusive. We need to provide CUDA 11.0 for Google Colab, but I'd much prefer that 11.2 become the main / default option, as this will work with all future 11.x releases due to changes in CUDA shared object versioning / compatibility going forwards. When CUDA 12 turns up we would then want to also provide CUDA 12 images, and probably maintain support / pre-built binaries for multiple CUDA versions? I do still (potentially) prefer conda for distribution, for labels etc., but I still need to read more on how conda recipes work, and how binary distribution actually works w.r.t. older glibc etc. We don't want users having to do source builds of pyflamegpu given how long the pyflamegpu target takes to compile.
From the CUDA Toolkit EULA:
In the manylinux container, I'm only installing the stub library of libcuda.so, as the driver is not being distributed anyway? Need to investigate more (still).
PyTorch appear to use conda as the primary distribution now, based on their get-started page which has a nice table for getting installation instructions. To provide multiple CUDA version support via pip, they provide CUDA 10.2 releases on PyPI directly, and CUDA 11.1 is available via pip's …
We could potentially do something similar by generating an HTML page on the website which points to the various wheels attached to releases, through some funky GitHub API usage. Shouldn't be too difficult to do (possibly as a separate gh-pages repo). Not sure this helps with seatbelts on/off still. Conda might still be cleaner / easier though, as an alternative.
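A hedged sketch of that "funky GitHub API usage": list release assets via the GitHub REST API and emit a flat HTML index that pip can consume with `-f` / `--find-links`. The repository slug, output filename, and hosting URL below are assumptions for illustration only.

```python
# Sketch: build a find-links style HTML index from GitHub release assets.
import json
import urllib.request

# Assumed repository slug; pagination and authentication are ignored for brevity.
RELEASES_URL = "https://api.github.com/repos/FLAMEGPU/FLAMEGPU2/releases"

with urllib.request.urlopen(RELEASES_URL) as response:
    releases = json.load(response)

links = []
for release in releases:
    for asset in release.get("assets", []):
        if asset["name"].endswith(".whl"):
            links.append(
                f'<a href="{asset["browser_download_url"]}">{asset["name"]}</a><br/>'
            )

with open("wheels.html", "w") as index:
    index.write("<html><body>\n" + "\n".join(links) + "\n</body></html>\n")

# Users could then: pip install pyflamegpu -f https://example.com/wheels.html
# (the URL is a placeholder; the actual hosting location is undecided above).
```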
The CUDA 11.4 early access to the official CUDA Python bindings doesn't look like it will be of any use to us, as all our CUDA is handled within the CUDA C++ library. It will be available as a package on PyPI and conda in the future, so it might be able to become a dependency for better package management, but that's unclear at this stage.
https://packaging.python.org/glossary/#term-Project
I.e. it's convention that …
Regarding the use of Python local version identifiers (and separate pip sources), PEP 440 specifies that:
I.e. downstream projects can redistribute builds with local versions, but they must be API compatible. The CUDA local version is …
Personally, I'd prefer … So using local versions for the CUDA variants seems very appropriate, but PyPI versions shouldn't/can't have a local version number, so for a given PyPI package only a single (base) CUDA version can be supported this way. Personally, I'd be keen for this to be 11.2 for forwards compatibility, with … We could also potentially include the minimum SM that the build supports in there as a separate point-separated section, i.e. … However, using local versions for … For now, I think I'm going to go with:
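As an illustrative aside, this is roughly how such local version identifiers behave with the `packaging` library; the `+cuda112.sm60` label below is an assumed example of the kind of scheme being discussed, not a final decision.

```python
# Sketch: PEP 440 local version identifiers using the `packaging` library.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

v = Version("2.0.0+cuda112.sm60")

print(v.public)  # 2.0.0         -- what could be uploaded to PyPI
print(v.local)   # cuda112.sm60  -- CUDA version + minimum SM as dot-separated parts

# A plain "==" specifier ignores the local label, so a locally-versioned build
# still satisfies requirements written against the public version ...
print(Version("2.0.0+cuda112.sm60") in SpecifierSet("==2.0.0"))  # True
# ... but PyPI rejects uploads carrying a local version label, hence only a
# single (base) CUDA variant per package can live on PyPI itself.
```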
Colab has now moved to CUDA 11.2.
We still probably want to have 11.0 dists as our minimum supported version, then 11.2 which is generally ABI-compatible with other 11.x releases (some wiggle room till 11.3 IIRC in some libs / NVRTC). We would have to do 11.8 as well if we want to provide an SM 90 (Hopper) pre-compiled release, though we do embed PTX so it'll be usable on those devices anyway via JIT (just not leveraging SM 90 optimisations). Each new CUDA version does bloat the release matrix even more. The main concern in this issue is the separation of vis and non-vis builds. A (painful for us, nice for everyone once done) option is to always do vis builds, with the CUDA version etc. handled similarly to other packages via the local version identifier.
The current renaming of the visualisation wheels to … If we move the vis/non-vis status into the local build version (like the CUDA version), that would solve both of those issues, but it would make installing the vis version less nice.
From 2.0.0rc1, we will have CUDA 11.2 and 12.0 wheels being built on CI, available as non-manylinux-compliant wheels from the GitHub releases page and via … Vis and non-vis builds are available. The local Python package version is used to differentiate the builds, which is not PyPI compatible (i.e. we could only provide a single one on PyPI). Users can either very explicitly request something like … Still no belts-off pre-built Python packages are available, although this could be something we add to the wheelhouse if we want. Conda is probably still the better choice; at the very least it makes multiple CUDA versions easier to select between. Looks like 12.x might need different handling than 11.x though (we can depend on only the parts of CUDA needed, making installs lighter).
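As a hedged sketch (not an existing pyflamegpu API), downstream code could inspect the installed distribution's local version segment to see which pre-built variant it got:

```python
# Sketch: read the installed pyflamegpu version and report its local segment,
# which (under the scheme above) encodes the CUDA / vis variant.
# Purely illustrative -- the exact labels depend on the chosen naming scheme.
from importlib.metadata import version
from packaging.version import Version

def installed_variant(dist_name: str = "pyflamegpu") -> str:
    v = Version(version(dist_name))
    return v.local or "no local segment (e.g. a plain PyPI build)"

if __name__ == "__main__":
    print(installed_variant())
```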
The initial Python binary distribution(s) will be / are Release mode, seatbelts ON only.
This is the default, sensible choice for python users to be able to develop models that run reasonably well (i.e. not terribly slow debug builds) while also getting sane error messages if there are issues with agent functions. If python users require faster builds, they will have to build from source initially.
Longer term, it would be nice to provide pre-built binaries for seatbelts=OFF, or to support two major CUDA versions, or other options that result in a larger build matrix. Doing this in a wheel-compliant way is non-trivial.
Python namespace packages are a potential solution, or multiple packages which users can import under alternate names (i.e. pyflamegpu-seatbeltsoff, which users can then import as pyflamegpu with a one-line change).
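For illustration, that one-line change would look something like the following; the module name pyflamegpu_seatbeltsoff is hypothetical.

```python
# Hypothetical seatbelts-off distribution that installs its extension module
# under a different name; existing model scripts only change their import line.
import pyflamegpu_seatbeltsoff as pyflamegpu  # was: import pyflamegpu

model = pyflamegpu.ModelDescription("example_model")  # rest of the script is unchanged
```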