
docs: Add PyTorch installation guide #6523

Merged · 10 commits into astral-sh:main from docs2024/pytorch-guide · Nov 19, 2024

Conversation

@baggiponte (Contributor) commented Aug 23, 2024

Hello there! First real docs PR for uv.

  1. I expect this will be rewritten a gazillion times to have a consistent tone with the rest of the docs, despite my trying to stick to it as best I could. Feel free to edit!
  2. I went super verbose, while also providing a callout with a TL;DR on top. Scrap anything you feel is redundant!
  3. I placed the guide under integrations since Charlie added the FastAPI integration there.

Summary

Addresses #5945

Test Plan

I just looked at the docs on the mkdocs dev server to check that they looked nice.

I could not test that the commands I wrote work outside of macOS. If someone among the contributors has a Windows/Linux laptop, that should be enough, even for the GPU-supported versions: I expect the installation will just break once torch checks for CUDA (perhaps even at runtime).

@zanieb self-assigned this on Aug 23, 2024
@zanieb added the documentation label on Aug 23, 2024
@FishAlchemist (Contributor):

Have you considered including a way to specify package versions?
This part is the difference between uv and pip.
https://github.com/astral-sh/uv/blob/main/docs/pip/compatibility.md#local-version-identifiers

@baggiponte (Contributor, Author):

Have you considered including a way to specify package versions? This part is the difference between uv and pip. main/docs/pip/compatibility.md#local-version-identifiers

Like uv add -- "torch==2.4.0+cpu"? Didn't think about that: having tried on macOS, it simply fails. I just went with "let's just port to uv the pip commands that the torch docs recommend". Any suggestion on how I could try that?

@zanieb (Member) commented Aug 23, 2024

This will definitely require validation from folks on other platforms. Thanks for starting though!

@FishAlchemist (Contributor) commented Aug 23, 2024

Have you considered including a way to specify package versions? This part is the difference between uv and pip. main/docs/pip/compatibility.md#local-version-identifiers

Like uv add -- "torch==2.4.0+cpu"? Didn't think about that: having tried on macOS, it simply fails. I just went with "let's just port to uv the pip commands that the torch docs recommend". Any suggestion on how I could try that?

On macOS, packages are obtained directly from PyPI, which eliminates the need for local version identifiers.
Therefore, on macOS alone we cannot test whether the project usage (uv add/remove) is correct.
I don't know how to correctly provide the PyTorch version with CUDA to uv.

@baggiponte (Contributor, Author) commented Aug 23, 2024

Have you considered including a way to specify package versions? This part is the difference between uv and pip. main/docs/pip/compatibility.md#local-version-identifiers

Like uv add -- "torch==2.4.0+cpu"? Didn't think about that: having tried on macOS, it simply fails. I just went with "let's just port to uv the pip commands that the torch docs recommend". Any suggestion on how I could try that?

On macOS, packages are obtained directly from PyPI, which eliminates the need for local version identifiers. Therefore, on macOS alone we cannot test whether the project usage (uv add/remove) is correct. I don't know how to correctly provide the PyTorch version with CUDA to uv.

Exactly! That's what I wrote in the macOS section. Sorry if I wasn't being clear.


I realised I can test this out on Colab for Linux 😈

  1. Go to colab.new
  2. Run:
!curl -LsSf https://astral.sh/uv/install.sh | sh
!source $HOME/.cargo/env bash
!/root/.cargo/bin/uv --version # doesn't find it on the $PATH, I guess I should restart the shell? idk
!/root/.cargo/bin/uv venv

Then:

# installs GPU cu12
!/root/.cargo/bin/uv pip install -- torch # works

# fails
!/root/.cargo/bin/uv pip install -- "torch==2.4.0+cpu" # fails

# works
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cpu -- torch
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cu118 -- torch
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cu121 -- torch
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cu124 -- torch

@FishAlchemist (Contributor):

Has uv considered writing the index-url into the pyproject.toml file when using uv add?
After adding PyTorch, when I added another package from PyPI, the lockfile switched PyTorch to be installed from PyPI.
lockfile.zip
Command:

 uv add --extra-index-url=https://download.pytorch.org/whl/cu121 torch torchvision torchaudio --no-sync
 uv add deep-translator
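
For reference, a quick way to see the switch described above is to check which registry uv.lock records for torch after each step (a sketch; the registry values in the comments are the ones reported in this thread and in the attached lockfile.zip):

grep -A 2 'name = "torch"' uv.lock
# after the first uv add:         source = { registry = "https://download.pytorch.org/whl/cu121" }
# after uv add deep-translator:   source = { registry = "https://pypi.org/simple" }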

@zanieb (Member) commented Aug 23, 2024

@FishAlchemist yes, it's on the roadmap #171

@FishAlchemist (Contributor) commented Aug 23, 2024

@FishAlchemist yes, it's on the roadmap #171

@zanieb
If it's not yet supported, it seems like we can't include the project API part in the PR's document.
After all, when the source is not PyPI, the lock file might be unexpected.

@baggiponte force-pushed the docs2024/pytorch-guide branch from 1eb215b to 94a6c95 on August 23, 2024 18:12
@baggiponte (Contributor, Author):

@FishAlchemist yes, it's on the roadmap #171

@zanieb If it's not yet supported, it seems like we can't include the project API part in the PR's document. After all, when the source is not PyPI, the lock file might be unexpected.

Uh yeah, just pushed a commit to remove all the mentions of modifying pyproject.toml.

So:

  1. We might want to give this a spin on a Windows machine to make sure it works.
  2. Given that there's currently no mechanism to bind a specific package to a specific source, the only thing we can document is running uv pip install --extra-index-url=... or uv add --extra-index-url=..., am I right?

@FishAlchemist (Contributor):

@baggiponte
I think there's no problem with downloading PyTorch using "uv pip install" on Windows. Although I've only run CUDA 12.1, I was able to do simple tests using the installation method provided by PyTorch, just with the difference of using "uv pip".
For more complex tasks, I switched to Linux because my Windows computer has insufficient memory.
As for the project, although the command can run on Windows, the locked file results are not what I expected.

For PyTorch, I still recommend including the specific version in the documentation. I remember seeing some issues in the past where problems only occurred when a specific version was specified, and I'm not sure if they have been fixed.

@baggiponte (Contributor, Author):

For more complex tasks, I switched to Linux because my Windows computer has insufficient memory.

Do you think I should try and/or cover some of those?

As for the project, although the command can run on Windows, the locked file results are not what I expected.

Uhm, I guess this deserves an issue of its own?

For PyTorch, I still recommend including the specific version in the documentation. I remember seeing some issues in the past where problems only occurred when a specific version was specified, and I'm not sure if they have been fixed.

Were those issues uv-related or just generic torch version problems? Because otherwise I would not be super inclined to add this kind of recommendation to the docs.


Unrelated: perhaps I could create a new repo and use GitHub Actions on various runners to see if everything works, if we need more complex installation tests.

@FishAlchemist (Contributor) commented Aug 24, 2024

@baggiponte If the lockfile doesn't have a macOS wheel, I'm unsure whether uv sync can successfully execute on a Mac.

Command (uv 0.3.3 (deea602 2024-08-23))

uv init torch_uv -p 3.10
# Remember to enter the directory
uv python pin 3.10
uv add --extra-index-url=https://download.pytorch.org/whl/cu121 torch --no-sync

Note: Created on Windows 11 (x86-64)

Part of uv.lock for torch

[[package]]
name = "torch"
version = "2.4.0+cu121"
source = { registry = "https://download.pytorch.org/whl/cu121" }
dependencies = [
    { name = "filelock" },
    { name = "fsspec" },
    { name = "jinja2" },
    { name = "networkx" },
    { name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "sympy" },
    { name = "triton", marker = "python_full_version < '3.13' and platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "typing-extensions" },
]
wheels = [
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp310-cp310-linux_x86_64.whl", hash = "sha256:28bfba084dca52a06c465d7ad0f3cc372c35fc503f3eab881cc17a5fd82914e7" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp310-cp310-win_amd64.whl", hash = "sha256:9244bdc160d701915ae03e14cc25c085aa11e30d711a0b64bef0ee427e04632c" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp311-cp311-linux_x86_64.whl", hash = "sha256:a9fff32d365e0c74b6909480548b2e291314a204adb29b6bb6f2c6d33f8be26c" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp311-cp311-win_amd64.whl", hash = "sha256:bada31485e04282b9f099da39b774484d3e4c431b7ea0df3663817295ae764e4" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp312-cp312-linux_x86_64.whl", hash = "sha256:49ac55a6497ddd6d0cdd51b5ea27d8ebe20c9273077855e9c96eb0dc289f07c3" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp312-cp312-win_amd64.whl", hash = "sha256:b5c27549daf5f3209da6e07607f2bb8d02712555734fcd8cd7a23703a6e7d639" },
]

project_source.zip

According to the documentation:

uv.lock is a universal or cross-platform lockfile that captures the packages that would be installed across all possible Python markers such as operating system, architecture, and Python version.

If a uv.lock generated on Windows cannot be used on other platforms, then it is not a uv.lock as documented.
Therefore, when the documentation mentions using the project API and depending on PyTorch, the uv.lock should conform to the documentation's specification.
Or should we note that the generated lockfile is not a universal file?
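
One way to check that concern concretely (a sketch; it assumes the Windows-generated uv.lock from project_source.zip above has been copied into a checkout on a macOS machine):

uv sync --frozen
# --frozen installs from uv.lock as-is, without re-resolving,
# so a missing macOS wheel should surface directly as an install error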

@FishAlchemist (Contributor) commented Aug 24, 2024

@baggiponte
As for describing the installation method for a specific version: it's because uv installs PyTorch from sources other than PyPI, which requires not only the version number but also the local version identifier.
Or, mentioning local version identifiers in the document might be another way to help people understand how to install a specific version.

pip: [screenshot omitted]

uv pip:

  • 2.4.0: [screenshot omitted]
  • 2.4.0+cu121: [screenshot omitted]
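
In text form, the comparison in those screenshots comes down to something like the following (hedged; exact behavior depends on the uv version, and the version and index are the ones used elsewhere in this thread). With pip, a bare torch==2.4.0 can be satisfied by the 2.4.0+cu121 build from the extra index, whereas with uv pip the local version identifier generally has to be spelled out against the index that hosts it:

uv pip install --index-url=https://download.pytorch.org/whl/cu121 "torch==2.4.0+cu121"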

@baggiponte (Contributor, Author):

Hey there, was away for the weekend. Thank you very much for the explanation 😊 Will get back to this after work, later today.

In the meanwhile, to recap:

  1. I should investigate lockfiles generated by torch installation and document, at least to say that they might not be cross-platform, in this case.
  2. Cover the local version identifiers differences between pip and uv.

Did I get everything?

Thank you again for taking the time to steer me through this!

@FishAlchemist (Contributor) commented Aug 26, 2024

Hey there, was away for the weekend. Thank you very much for the explanation 😊 Will get back to this after work, later today.

In the meanwhile, to recap:

  1. I should investigate lockfiles generated by torch installation and document, at least to say that they might not be cross-platform, in this case.
  2. Cover the local version identifiers differences between pip and uv.

Did I get everything?

Thank you again for taking the time to steer me through this!

While this is generally correct, there's a potential issue when using uv add to install PyTorch.
If not configured properly, uv add might overwrite your existing PyTorch installation with a version from PyPI that lacks CUDA support, even if you previously had a GPU-accelerated version.
As I mentioned in this comment: #6523 (comment)

Note: Since PyPI's PyTorch offers wheels for macOS, Linux, and Windows, if we switch the source to PyPI and remove the Local version identifiers, there will be no errors. However, the version will possibly switch from CUDA to CPU only.
Note: The Linux version of PyTorch CUDA 12.1 on PyPI already supports CUDA.
Note: The Linux version of PyTorch CUDA 12.4 on PyPI already supports CUDA.

You're welcome. I know how frustrating these issues can be, so I wanted to save other users some time.
Providing good documentation is a great service to users, and I appreciate you taking the time to do so.

@inflation:

It's pretty annoying: when a package with the same name exists on the extra index but some versions are missing, the extra-index-url effectively overrides the default index, and uv simply does not fall back to the default index for the missing versions.

@zanieb (Member) commented Aug 27, 2024

@inflation there are details on that behavior in the documentation. Please don't complain about it in someone's pull request.

@inflation:

@inflation there are details on that behavior in the documentation. Please don't complain about it in someone's pull request.

This is precisely where it happens the most. Installing PyTorch using its index introduces the problem. pixi has a similar issue and contains a nice example and explanation.

@seemethere left a comment:

+1 from the PyTorch side!


* If you want PyTorch on macOS, just run `uv add torch`/`uv pip install torch`.
* If you want PyTorch on Windows *with CPU support*, just run `uv add torch`/`uv pip install torch`.
* If you want to install PyTorch on Linux with CPU support, add `--extra-index-url=https://download.pytorch.org/whl/cpu` to the `uv add`/`uv pip install` command.
@albanD commented Sep 12, 2024:

Would we be able to do this with --index-url instead of --extra-index-url throughout?
The extra version of this flag was already the cause of a bad security event for PyTorch, and we would not want to repeat that here: https://pytorch.org/blog/compromised-nightly-dependency/

Because we do use --index-url for all the pip commands, you can rely on the fact that the URL will contain all dependencies needed to install torch.
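
For concreteness, the two forms under discussion differ in whether PyPI stays in the search path (a sketch using the CPU index mentioned earlier in this thread; albanD's point is that the PyTorch index is expected to carry everything torch needs):

# --index-url replaces PyPI entirely, so every dependency must come from the PyTorch index:
uv pip install --index-url=https://download.pytorch.org/whl/cpu torch
# --extra-index-url keeps PyPI in play as well; uv's handling of that combination
# differs from pip's, as noted in the next comment:
uv pip install --extra-index-url=https://download.pytorch.org/whl/cpu torch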

Member:

Note we have different behavior than pip to try to avoid such issues https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes

I think the real answer is that we need #171 though

@baggiponte (Contributor, Author):

Uh, he has a point. I'll add a link to explain this behaviour.

Reply:

I agree the default is better. But it still sounds quite dangerous if we make a mistake (don't realize we forgot to push a version of that package on our index). Looking forward to the pinned package index!

Thanks for the update.

@baggiponte force-pushed the docs2024/pytorch-guide branch from 94a6c95 to 283d193 on September 13, 2024 07:10
@baggiponte (Contributor, Author) commented Sep 13, 2024

Hello there! Sorry for disappearing, but as if it was not enough already, we got hit by a spate of floods here too.

I edited a couple of things with the last commit I pushed.

  1. As @albanD rightfully pointed out, I added a small callout to point to the relevant bits of the uv docs.
  2. I reworked the TL;DR section a bit to make it more straightforward.
  3. I added a mention of "Add support for pinning a package to a specific index" (#171).
  4. I tried to explain there might be issues with the cross-compatible lockfile. I am not sure I explained correctly what @FishAlchemist meant, though. I guess what they mean is:
    • If someone does uv add torch --extra-index-url=...
    • Then uv add foobar
    • Then the pytorch version might be replaced with the PyPI one?
      If so, how can I phrase this correctly?

@zanieb let me know if it makes sense, suggest edits or make them directly.

@FishAlchemist (Contributor):

@baggiponte
The primary issue with file locking is that the extra-index-url specified on the CLI is not written to pyproject.toml (Nor should it be written automatically). As a result, the next time you lock your dependencies, it won't remember to search the extra-index-url. Therefore, before adding PyTorch using the project API, it's recommended to manually add the extra-index-url to pyproject.toml instead of providing it on the CLI.
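
A minimal sketch of that manual step (the index URL is the CUDA 12.1 one used throughout this thread, and it assumes pyproject.toml doesn't already define a [tool.uv] table; whether the guide should recommend this is exactly what is being discussed):

# write the extra index into pyproject.toml so later uv add / uv lock runs keep
# consulting it, instead of passing it only on the command line:
cat >> pyproject.toml <<'EOF'

[tool.uv]
extra-index-url = ["https://download.pytorch.org/whl/cu121"]
EOF
uv add torch --no-sync  # the resolver now sees the PyTorch index without any CLI flag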

@baggiponte (Contributor, Author):

@baggiponte The primary issue with file locking is that the extra-index-url specified on the CLI is not written to pyproject.toml (Nor should it be written automatically). As a result, the next time you lock your dependencies, it won't remember to search the extra-index-url. Therefore, before adding PyTorch using the project API, it's recommended to manually add the extra-index-url to pyproject.toml instead of providing it on the CLI.

Makes perfect sense!

I guess it might be a good idea to mention that you should add [tool.uv.sources] to your pyproject. What do you think?

@FishAlchemist (Contributor):

@baggiponte The primary issue with file locking is that the extra-index-url specified on the CLI is not written to pyproject.toml (Nor should it be written automatically). As a result, the next time you lock your dependencies, it won't remember to search the extra-index-url. Therefore, before adding PyTorch using the project API, it's recommended to manually add the extra-index-url to pyproject.toml instead of providing it on the CLI.

Makes perfect sense!

I guess it might be a good idea to mention that you should add [tool.uv.sources] to your pyproject. What do you think?

According to the documentation (version 0.4.10), [tool.uv.sources] only supports a fixed set of source types [screenshot of the supported sources omitted].
Therefore, using [tool.uv.sources] requires you to find the sources yourself.

I'm trying to use extra-index-url, but as a result, there are no macOS wheels available.

[tool.uv]
extra-index-url = ["https://download.pytorch.org/whl/cu121"]

I've yet to find a solution for using [tool.uv.sources] that can support Windows, Linux, macOS, and CUDA.

@baggiponte force-pushed the docs2024/pytorch-guide branch from 283d193 to 2556d40 on September 14, 2024 15:49
@baggiponte (Contributor, Author):

@baggiponte The primary issue with file locking is that the extra-index-url specified on the CLI is not written to pyproject.toml (Nor should it be written automatically). As a result, the next time you lock your dependencies, it won't remember to search the extra-index-url. Therefore, before adding PyTorch using the project API, it's recommended to manually add the extra-index-url to pyproject.toml instead of providing it on the CLI.

Makes perfect sense!
I guess it might be a good idea to mention that you should add [tool.uv.sources] to your pyproject. What do you think?

According to the documentation (version 0.4.10), [tool.uv.sources] only supports a fixed set of source types. Therefore, using [tool.uv.sources] requires you to find the sources yourself.

I'm trying to use extra-index-url, but as a result, there are no macOS wheels available.

[tool.uv]
extra-index-url = ["https://download.pytorch.org/whl/cu121"]

I've yet to find a solution for using [tool.uv.sources] that can support Windows, Linux, macOS, and CUDA.

Very clear. Pushed another minor edit mentioning this. Would love to hear your feedback on the phrasing.

@baggiponte (Contributor, Author):

I also added torchvision to the example in the docs given #8344.

@zanieb (Member) commented Oct 22, 2024

I believe that @charliermarsh plans to look into PyTorch this month so I've assigned him. Great to see all the collaboration here!

@vvuk commented Oct 30, 2024

I'm running into the same original issue as #7202 when following these docs (the "Distribution .. can't be installed because it doesn't have a source distribution..." error). My setup:

workspace pyproject.toml:

[project]
name = "workspace"
version = "0.0.0"
requires-python = "==3.12.*"
dependencies = []

[tool.uv.workspace]
members = [
    "foo",
]

[tool.uv]
managed = true
override-dependencies = [
  # mediapipe wrongly depends on protobuf<5
    "protobuf==5.27.5",
    "mediapipe==0.10.14",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", marker = "platform_system != 'Darwin'"},
]
torchvision = [
  { index = "pytorch-cu124", marker = "platform_system != 'Darwin'"},
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

And foo/pyproject.toml:

[project]
name = "foo"
version = "0.0.0"
dependencies = [
    "torch",
    "torchsde",
    "torchvision",
    "torchaudio",
    "einops",
    "transformers>=4.28.1",
    "tokenizers>=0.13.3",
    "sentencepiece",
    "safetensors>=0.4.2",
    "aiohttp",
    "pyyaml",
    "Pillow",
    "scipy",
    "tqdm",
    "psutil",
    "kornia>=0.7.1",
    "spandrel",
    "soundfile",
]
requires-python = ">=3.12"
readme = "README.md"

Trying to run uv sync --project foo results in:

error: Distribution `torch==2.5.1 @ registry+https://download.pytorch.org/whl/cu124` can't be installed because it doesn't have a source distribution or wheel for the current platform

@charliermarsh (Member):

@vvuk -- Are you running on macOS or non-macOS (per the markers)?

@vvuk commented Oct 30, 2024

non-macOS (Windows). I've also noticed some docs (e.g. here) have platform_system != 'Darwin' and others have sys_platform != 'Darwin' -- which is correct? Or are they interchangeable? Also perhaps odd: a uv sync in the workspace root succeeds and a lockfile gets created... but now that I look in the lockfile, there's no resolution for "torch" for Windows -- just for Darwin and for... hmm, I'm not sure how to interpret this:

[[package]]
name = "torch"
version = "2.5.1"
source = { registry = "https://download.pytorch.org/whl/cu124" }
resolution-markers = [
    "platform_machine == 'aarch64' and platform_system == 'Linux'",
    "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system
 != 'Linux')",
]
...
wheels = ... linux_aarch64 ...

[[package]]
name = "torch"
version = "2.5.1"
source = { registry = "https://pypi.org/simple" }
resolution-markers = [
    "platform_system == 'Darwin'",
]
...
wheels = .. macosx_11_arm64 ...

So it looks like there was only a resolution for linux-aarch64 and for macos-arm64.

@charliermarsh (Member):

That's discussed at length in a few places: #5182 (comment), #8536 (comment). (There's no way for us to know if the set of wheels covers the entire space you care about.)

@charliermarsh (Member):

You probably need to define your dependencies like:

torch==2.5.1+cu124 ; platform_system != 'Darwin'
torch==2.5.1 ; platform_system == 'Darwin'

Confusingly, the wheels you want from the PyTorch index are tagged as 2.5.1+cu124 -- there are some wheels that are just 2.5.1, but not for Windows: https://download.pytorch.org/whl/torch/
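
A rough way to confirm what the index actually publishes for 2.5.1 (the URL is the one linked above; the page is a plain "simple"-style HTML index, so grepping the filenames is enough):

curl -s https://download.pytorch.org/whl/torch/ | grep -o 'torch-2.5.1[^<"]*' | sort -u
# expect +cu124 (and other local-variant) wheels for Windows and Linux,
# but no plain 2.5.1 wheel for Windows, as described above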

@vvuk commented Oct 30, 2024

Yeah, I was hoping to not have to declare explicit versions; that seems to be the only workable solution though. Maybe one way to resolve this mess would be a uv config for "prioritize these local tags in this order for resolution". e.g.:

local-priority = [
   "cu124 ; platform_system != 'Darwin'",
   "foo ; platform_system == 'Darwin'",
   "bar"
]

To state that on non-macOS, prioritize +cu124 above any other matching version. On macOS, prioritize +foo. And on all platforms prioritize +bar above anything else. The above would also be in order, so on Windows +cu124 would be picked before +bar. (Maybe an optional syntax to say "require cu124" vs "prioritize cu124", i.e. whether to pick 2.6.0 over 2.5.5+cu124 or not.)

@charliermarsh (Member):

Yeah, it's not a great situation. I'd really love to just fix / improve the local version handling entirely.

@vvuk commented Oct 30, 2024

I'm trying with this in my workspace pyproject.toml:

[tool.uv]
override-dependencies = [
  "torch==2.5.1+cu124 ; platform_system != 'Darwin'",
  "torch==2.5.1 ; platform_system == 'Darwin'",

  "torchvision==0.20.1+cu124 ; platform_system != 'Darwin'",
  "torchvision==0.20.1 ; platform_system == 'Darwin'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", marker = "platform_system != 'Darwin'" }
]
torchvision = [
  { index = "pytorch-cu124", marker = "platform_system != 'Darwin'" }
]

A workspace member has a dependency on just torch. Running sync on Windows:

  × No solution found when resolving dependencies for split (python_full_version == '3.12.*' and platform_machine == 'aarch64' and platform_system ==
  │ 'Linux'):
  ╰─▶ Because there is no version of torch{platform_system != 'Darwin'}==2.5.1+cu124 and comfyui depends on torch{platform_system !=
      'Darwin'}==2.5.1+cu124, we can conclude that comfyui's requirements are unsatisfiable.
      And because your workspace requires comfyui, we can conclude that your workspace's requirements are unsatisfiable.

Okay, I can believe there's no CUDA version for aarch64. But I've tried every variant I can think of and can't get a resolution, including this, which I think should 100% work:

override-dependencies = [
  "torch==2.5.1+cu124 ; platform_system == 'Windows'",
  "torch==2.5.1 ; platform_system != 'Windows'",
  "torchvision==0.20.1+cu124 ; platform_system == 'Windows'",
  "torchvision==0.20.1 ; platform_system != 'Windows'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", marker = "platform_system == 'Windows'" },
  # added these because I wasn't sure if sources was constraining the indexes completely, i.e.
  # it wouldn't consider any additional ones if explicit ones are specified
  { index = "pytorch-cpu", marker = "platform_system != 'Windows'" },
]
torchvision = [
  { index = "pytorch-cu124", marker = "platform_system == 'Windows'" },
  { index = "pytorch-cpu", marker = "platform_system != 'Windows'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

I get:

  × No solution found when resolving dependencies for split (python_full_version == '3.12.*' and platform_system == 'Windows'):
  ╰─▶ Because there is no version of torch{platform_system == 'Windows'}==2.5.1+cu124 and comfyui depends on torch{platform_system ==
      'Windows'}==2.5.1+cu124, we can conclude that comfyui's requirements are unsatisfiable.
      And because your workspace requires comfyui, we can conclude that your workspace's requirements are unsatisfiable.

If I do uv sync --no-cache -vv, I actually see zero hits to the PyTorch index URLs. If I remove explicit = true, I get further, but then I get into a mess because I'm picking up random packages from the PyTorch indexes that are too old, and the versions on PyPI aren't considered. If I use --index-strategy unsafe-best-match, I end up picking up a random package that is too new and somehow exists for only one platform, so it can't be found anywhere else.

@vvuk commented Oct 30, 2024

This seems to be an interaction between dependency overrides and sources: it looks like tool.uv.sources is not considered when something is in override-dependencies. If I turn my workspace into a non-virtual one:

[project]
name = "foo"
version = "0.0.0"
dependencies = [
  "torch==2.5.1+cu124 ; platform_system == 'Windows'",
  "torch==2.5.1 ; platform_system != 'Windows'",
  "torchaudio==2.5.1+cu124 ; platform_system == 'Windows'",
  "torchaudio==2.5.1 ; platform_system != 'Windows'",
  "torchvision==0.20.1+cu124 ; platform_system == 'Windows'",
  "torchvision==0.20.1 ; platform_system != 'Windows'",
]

and don't put those in override-dependencies, keeping the uv sources pointing at the CPU repo for non-Windows, I can resolve. If I make the dependencies generic ("torch", "torchaudio", "torchvision") and restore the overrides, I get the same error about not being able to resolve +cu124.

... but if I do this (dependencies), then I'm back to a non-virtual workspace, and then I run into #5727. So, I created a dummy pin-pytorch package that contains just a pyproject.toml like above and added that to my (virtual) workspace. This seems to work!
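
For readers landing here later, a rough sketch of that dummy "pin" member (names are illustrative, and depending on the uv version the member may also need a [build-system] table):

mkdir -p pin-pytorch
cat > pin-pytorch/pyproject.toml <<'EOF'
[project]
name = "pin-pytorch"
version = "0.0.0"
dependencies = [
  "torch==2.5.1+cu124 ; platform_system == 'Windows'",
  "torch==2.5.1 ; platform_system != 'Windows'",
]
EOF
# ...then list "pin-pytorch" under [tool.uv.workspace].members in the root pyproject.toml,
# keeping the [tool.uv.sources] / [[tool.uv.index]] entries from the earlier comment as-is.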

@Leon0402 commented Nov 1, 2024

I am missing the following use cases in the docs:

  1. What happens if we have transitive dependencies that specify different versions of torch? Imagine one dependency that depends on torch CPU and another on torch CUDA X, while you actually need CUDA Y. I think it would be quite valuable to give some detail on how the resolver behaves in such cases (even though that is not torch-specific) and how to resolve potential conflicts. I have not tried that use case and I am no uv expert, so I don't really know what happens here, but I guess it goes in the direction of the override-dependencies mentioned earlier.
  2. How can we deal with the situation where project members need different torch versions for the same platform? Imagine one person only has a CPU, but another has a GPU, or two people need different CUDA versions on the same platform. AFAIK there is no super nice solution here, as we do not have CUDA markers because Python cannot reliably detect this. So I imagine this needs to be solved somehow with extras or something similar. Mentioning limitations and possible solutions would be beneficial.

And perhaps it would be nice to have some sort of FAQ section with common mistakes. For instance, the docs specify !!! tip "Supported Python versions" at the beginning, but I think people might miss that or cannot relate the error message directly to it. So it might make sense to show explicitly the kind of error that happens when you use an incompatible Python version. I guess there are more examples of common mistakes / error messages.

@zanieb (Member) commented Nov 1, 2024

Perhaps a useful example: #8746 (comment)

@FishAlchemist (Contributor) commented Nov 2, 2024

Hi everyone,
I think we need to confirm the minimum supported PyTorch version in this PR's document, and whether we should support all its target architectures.

Operating Systems

https://github.com/pytorch/pytorch/blob/main/RELEASE.md#operating-systems
If we want to support versions below 2.3.0, such as 2.2.2, then we need to be able to install it on macOS Intel (x86-64).
Alternatively, is it outside the scope of our project API to accommodate PyTorch execution on macOS Intel (x86-64)?

Release Compatibility Matrix

https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix
According to the compatibility matrix linked above, since uv currently supports Python 3.8 as the minimum version, we only need to ensure support for PyTorch 2.0 or above.

Co-authored-by: Santiago Castro <bryant1410@gmail.com>
@charliermarsh (Member):

Now that we have all the pieces in place I'm going to invest in comprehensive PyTorch docs. I will fold this work into those docs -- thank you so much for writing it up. I may even merge this first, then make my PR atop it so that this is preserved.

@charliermarsh (Member):

(These docs are really great, thank you @baggiponte.)

@charliermarsh (Member):

I've built on this work in #9210. This PR will merge first, then #9210 would follow.

@charliermarsh merged commit a88a3e5 into astral-sh:main on Nov 19, 2024
53 checks passed
charliermarsh added a commit that referenced this pull request Nov 19, 2024
## Summary

Now that we have all the pieces in place, this PR adds some dedicated
documentation to enable a variety of PyTorch setups.

This PR is downstream of #6523 and builds on the content in there; #6523
will merge first, and this PR will follow.