Release Binary distribution #514
Python/PyPI references:
In general, uploading to PyPI seems relatively simple. It should be possible to automate this as part of an action (this will require the use of GitHub org secrets, plus the associated changes to the actions to accomplish it); a rough sketch of such an upload step is below. We still need to consider what will be uploaded.
The majority of Python users probably want release builds, potentially with and without SEATBELTS. Possibly a separate PyPI package? Visualisation support will be additional faff too. |
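A rough sketch of what the automated upload step might look like, not the project's actual workflow: it assumes built wheels have been collected in a dist/ directory, that twine is used as the upload tool, and that a PyPI API token is exposed to the job as an environment variable named PYPI_API_TOKEN (a hypothetical secret name).

```python
# Sketch only: upload built wheels to PyPI with twine, reading the API token
# from the environment (how a GitHub org secret would typically be injected).
# "PYPI_API_TOKEN" is a hypothetical secret name; "dist/" is an assumed path.
import glob
import os
import subprocess

env = dict(os.environ)
env["TWINE_USERNAME"] = "__token__"  # PyPI API tokens always use this username
env["TWINE_PASSWORD"] = os.environ["PYPI_API_TOKEN"]

wheels = glob.glob("dist/*.whl")
subprocess.run(["twine", "upload", *wheels], env=env, check=True)
```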
Another thing to consider: create multiple Python packages as a way of providing different configurations of FLAME GPU via PyPI. See https://packaging.python.org/guides/packaging-namespace-packages/.
Users could then install just the variant they need (see the sketch below). |
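A sketch of what this could look like from the user's side, assuming hypothetical variant package names (none of these are published packages) that all provide the same top-level pyflamegpu module via a namespace-package layout:

```python
# Sketch only: the package names in the comments are hypothetical variants.
#
#   pip install pyflamegpu-cuda112        # console-only build for CUDA 11.2
#   pip install pyflamegpu-cuda112-vis    # visualisation-enabled build
#
# Whichever variant is installed, user code keeps the same import:
import pyflamegpu

# Show which installed variant actually provided the module.
print(pyflamegpu.__file__)
```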
Also worth considering alternatives to PyPI due to issues with CUDA. The RAPIDS team have opted for conda, Docker, or source builds as the preferred methods: https://rapids.ai/start.html#get-rapids, https://medium.com/rapids-ai/rapids-0-7-release-drops-pip-packages-47fc966e9472. Conda's recent licence change is worth bearing in mind, although in practice it's probably not an issue.
|
As a quick test, building the pyflamegpu wheel on one machine (Ubuntu 1804, Python 3.8, CUDA 11.2, SM61,70) and copying the .whl onto another machine (Ubuntu 2004, Python 3.8, CUDA 11.2, Pascal GPU) works, via pip install <filename>.whl into a venv. Changing the version of CUDA on my path to 11.4 still works, with the jitify cache file name still referencing 11.2. After uninstalling nvrtc 11.2 it still runs (including after purging the cache), implying the cache identifier is based on the version used to build the library, not the current nvrtc version (which is probably fair enough, if a little incorrect). If I adjust my path to contain CUDA 10.0, it also still works... |
If I adjust my path to contain CUDA 10.0, it also still works…
This seems like one of those things where there are likely to be hidden bugs due to ABI changes, so it's probably not to be recommended. We don't want people reporting bugs that we can't repro.
I'm interested to try this with Windows on a clean machine that has never had Visual Studio installed.
|
For now, we will manually attach the wheels. |
Wheel filenames follow a fixed naming convention, and it isn't clear whether there is room to encode the CUDA version in the wheel name. TF doesn't, and as it only supports a single CUDA version per wheel that's not an issue (so we can probably do the same). CuPy instead includes the CUDA version in the package name (see the sketch below). |
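For reference, a minimal sketch using the packaging library to show the fields a wheel filename can carry (the pyflamegpu filename here is hypothetical); none of the fields is a CUDA version, which is why CuPy pushes it into the distribution name instead (e.g. cupy-cuda112):

```python
# Sketch only: parse a hypothetical pyflamegpu wheel filename into its PEP 427
# fields: distribution, version, build tag, and python/abi/platform tags.
from packaging.utils import parse_wheel_filename

name, version, build, tags = parse_wheel_filename(
    "pyflamegpu-0.1.3b0-cp38-cp38-linux_x86_64.whl"
)
print(name, version, build, sorted(str(t) for t in tags))
# There is no field left over in which to encode a CUDA version.
```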
Building for all non-Tegra CUDA compute capabilities produces very large binaries: if we provided 2 SEATBELTS configs × 2 platforms × 1 CUDA version, that's 10GB+ per release. We could potentially reduce the architectures we build for to only one per major compute capability (52, 60, 70, 80), which would provide decent performance on all supported devices, but would not include optimisations in some cases (i.e. consumer cards). For now, I'm going to hold off on binary C++ releases due to this. Longer term it may be worth us providing a reduced set of examples in the pre-built binary (and in the core repo); i.e. we don't need to be shipping 5 variants of boids. The full-fat wheel build is ~200M, compared to 40M for a single arch. |
Enabling the vis causes Windows CI to fail due to the warnings-as-errors settings in the main repo CI, but not the vis repo. These warnings are in third-party code, so this might be a little bit fun. See FLAMEGPU/FLAMEGPU2-visualiser#71. |
The initial binary distribution will just be pyflamegpu, for a single CUDA version, with a single build configuration of release, seatbelts=on, vis=on (subject to vis builds not causing issues for headless nodes), and for major CUDA architectures only. Subsequent issues have been created which may expand on this in the future (for post-alpha releases?): #603, #604 & #605. |
Visualisation enabled wheels are now building and being attached to draft releases! See https://github.com/ptheywood/FLAMEGPU2/releases/tag/v0.1.3-beta for an example. Remaining steps are:
|
Running vis Python wheels on Linux boxes which do not have the vis shared libraries available (libGLEW.so for instance) results in an error.
This will be an issue on HPC systems. |
Yeah, this is to be expected (although I thought we might have been able to get away with trying to run a sim without vis). Not sure how we could resolve it without doing something grim like runtime loading (a sketch of what such a runtime probe might look like follows below).
|
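For illustration, a sketch of the kind of runtime probe that would be involved; this is not something FLAME GPU currently does, and libGLEW is used only because it is the library named above:

```python
# Sketch only: check whether libGLEW (one of the vis build's shared-library
# dependencies) can be located before touching any visualisation functionality,
# so a headless HPC node could fall back to console-only behaviour.
import ctypes.util

def vis_runtime_available() -> bool:
    # find_library searches the standard shared-library locations on Linux.
    return ctypes.util.find_library("GLEW") is not None

if __name__ == "__main__":
    if vis_runtime_available():
        print("libGLEW found; a vis-enabled build should be loadable here.")
    else:
        print("libGLEW not found; expect a vis-enabled wheel to fail on this node.")
```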
After discussing with @mondus, the plan is to just break wheel naming conventions for the alpha releases. This will need to be explained in the release notes. |
Need to figure out how to do binary distributions.
C++ should be fine, similar to F1, but automated via an action (on push to tags which match a pattern?).
Python will be more difficult. PyQuest is potentially a very useful reference.
Related: