Improve PyTorch support #18293
I have recently spent some time adding PyTorch, and here is the hack I had to apply:
The result is that we have a working version for Linux/Mac (I hope it's a temporary solution). PS: I think the real issue is not PyTorch specifically, but adding support for multiple platforms, or explaining how to do it right; that could then be applied to all libraries that require different versions on different platforms.
It goes beyond platform. For example, what if some devs want `torch+cpu` while others want `torch+cu113`? And both of them have supporting libraries that are also API-tagged, so if you use torch+cu113 you also need torchvision+cu113. I think the same occurs with the Tensorflow distribution packages, and maybe others. And now for my CI I want to run tests with CPU, but I actually want to publish Docker images with GPU support. It's madness. (Not to mention that all these local version tags are screwy when it comes to PEP 440, and force users to use …)
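One concrete way to model that CPU-vs-GPU split in Pants is its multiple-resolves support, so CI and image builds can use different lockfiles. A minimal sketch, assuming a Pants version with `enable_resolves`; the resolve names and lockfile paths here are hypothetical:

```toml
# pants.toml (sketch): one resolve per torch flavour.
[python]
enable_resolves = true

# Hypothetical resolve names and lockfile paths.
[python.resolves]
cpu = "3rdparty/python/cpu.lock"
gpu = "3rdparty/python/gpu.lock"
```

Targets then opt into a resolve via their `resolve=` field, so tests can depend on the `cpu` resolve while the Docker-bound binaries use `gpu`.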
The solution we are exploring now is to use multiple .toml files for Pants, depending on the operating system in use: a separate lockfile and a separate .toml file for each environment that we support in our monorepo. This way we can easily set things like Python indexes separately per environment. It's just something we thought of today, and we still have to see whether it works in practice.
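A minimal sketch of that layout, with hypothetical file names; `PANTS_CONFIG_FILES` is the standard mechanism for layering an extra config file over pants.toml:

```toml
# pants.linux-gpu.toml (hypothetical name): overrides for GPU Linux machines.
[python]
# Hypothetical per-environment lockfile path.
resolves = { python-default = "locks/linux-gpu.lock" }

[python-repos]
indexes = ["https://pypi.org/simple/", "https://download.pytorch.org/whl/cu117"]
```

Each machine would then select its file, e.g. `PANTS_CONFIG_FILES=pants.linux-gpu.toml pants test ::`.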
I'm a little stumped at the moment. I've been adding …

I get the following issue: …

Is this the same issue as what is being reported here? My computer is Linux, with a CUDA-enabled GPU, running … I've tried Pants versions 2.15.0, 2.16.0a0, and 2.16.0a1.
My solution for now is to not use …
In my environment, I was able to resolve the problem by manually adding the dependencies that caused the error. Ugly, but it worked.
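A minimal sketch of that workaround, assuming the failures were on torch's CUDA transitive dependencies; the package names and pins below are illustrative, not from the original comment, so take the exact set from your own resolver error:

```
# requirements.txt (sketch): explicitly pin the transitive CUDA deps the
# resolver failed on. Names/versions are illustrative for a torch==2.0.x resolve.
torch==2.0.1
nvidia-cublas-cu11==11.10.3.66  # illustrative pin
nvidia-cudnn-cu11==8.5.0.96     # illustrative pin
```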
A user on Slack shared their script for handling PyTorch too:
@SimonBiggs @minato-ellie It can be solved simply by adding this option:

```toml
[python-repos]
indexes = ["https://pypi.org/simple/", "https://download.pytorch.org/whl/cu117"]
```

Then Pants automatically resolves the PyTorch transitive dependencies (this example is for CUDA version 11.7). Well, this is my case :)
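One follow-up worth noting (my addition, not from the comment above): after editing `[python-repos].indexes`, the lockfile generally has to be regenerated before the new index takes effect:

```shell
# Re-resolve against the updated index list.
pants generate-lockfiles
```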
This issue seems to be specific to torch==2.0.1, and it appears to be a known problem; see pytorch/pytorch#100974 (comment) for more information.
All changes:
- https://github.com/pantsbuild/pex/releases/tag/v2.1.153
- https://github.com/pantsbuild/pex/releases/tag/v2.1.154
- https://github.com/pantsbuild/pex/releases/tag/v2.1.155

Highlights:
- `--no-pre-install-wheels` (and `--max-install-jobs`), which likely helps with:
  - #15062
  - (the root cause of) #20227
  - _maybe_ arguably #18293, #18965, #19681
- improved shebang selection, helping with #19514, but probably not the full solution (#19925)
- performance improvements
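For context, `--no-pre-install-wheels` and `--max-install-jobs` are Pex CLI flags; a minimal sketch of using them directly (the requirement and output name are placeholders):

```shell
# Build a PEX that ships raw .whl files and installs them in parallel
# on first boot, rather than pre-installing them at build time.
pex torch --no-pre-install-wheels --max-install-jobs 4 -o torch.pex
```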
https://github.com/pantsbuild/pex/releases/tag/v2.1.156

Continuing from #20347, this brings additional performance optimisations, particularly for large wheels like PyTorch, and so may help with #18293, #18965, #19681.
Survey submitter said: …