
[Mac] Unable to find installation candidates for torch #1

Open
ALonelySheep opened this issue Jun 28, 2024 · 10 comments

Comments

@ALonelySheep

Hi Lucas, thanks for this tutorial! I ran into this error while trying to replicate your installation process:

Package operations: 1 install, 1 update, 0 removals

  - Updating torch (2.2.2 -> 2.2.2+cpu): Failed

  RuntimeError

  Unable to find installation candidates for torch (2.2.2+cpu)

I had installed torch by adding torch = "^2.2.2" to the TOML file before trying your method. Do you have any idea what is happening here?

Thanks

@ALonelySheep ALonelySheep changed the title from "nable to find installation candidates for torch" to "Unable to find installation candidates for torch" on Jun 28, 2024
@ALonelySheep
Author

The only significant difference between our TOML files is that I'm using python = 3.8 instead of 3.10.

@lucaspar
Owner

Hey, the repo's TOML already has an entry for torch, so you shouldn't need to add one. Just change the versions in the original TOML from v2.3 to v2.2.

@lucaspar
Owner

I created a new branch that works on Python 3.8.

See the TOML file.

@ALonelySheep
Author

ALonelySheep commented Jul 2, 2024

Hey Lucas, thank you for taking the time to address this. The issue seems to be system-specific.
I'm using macOS, and this structure seems to work:

torch = [
    { version = "~2.2.2", source = "pypi", platform = "darwin", markers = "extra=='cpu' and extra!='gpu'" },
    { version = "~2.2.2+cpu", source = "pytorch_cpu", platform = "linux", markers = "extra=='cpu' and extra!='gpu'" },
    { version = "~2.2.2+cpu", source = "pytorch_cpu", platform = "win32", markers = "extra=='cpu' and extra!='gpu'" },
]
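
For completeness, the pytorch_cpu name in the snippet above refers to a custom package source that has to be declared elsewhere in the same pyproject.toml. A minimal sketch of such a declaration, assuming PyTorch's public CPU wheel index is the intended source:

```toml
[[tool.poetry.source]]
name = "pytorch_cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
```

With priority = "explicit", Poetry only consults this source for dependencies that name it via source = "pytorch_cpu", leaving everything else on the default index.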

@ALonelySheep
Author

Also, as a caveat: if we add another package that depends on torch, it's possible to break the hack mentioned in this repo. I think at its core this is because Poetry does not support conflicting dependencies. More discussion can be found here: python-poetry/poetry#6419

@lucaspar lucaspar changed the title from "Unable to find installation candidates for torch" to "[Mac] Unable to find installation candidates for torch" on Jul 3, 2024
@lucaspar
Owner

lucaspar commented Jul 3, 2024

Right, I thought the platform fields were enough for Macs; are you also installing it in an environment with NVIDIA cards, such that you need the GPU markers?

As for dependencies that have torch itself as a requirement, that's something I haven't explored yet, thanks for bringing it up.

@ALonelySheep
Author

Yes, I'm trying to create a consistent TOML file that works on both dev and production machines.

@ALonelySheep
Author

Sadly, I feel like I've wasted a lot of time on Poetry, and the solution I ended up with is very unreliable. In the end, I opted for a bash script that renames the correct file into place. I hope Poetry gains better support for ML environments in the future.
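
The thread doesn't show the script, but a minimal sketch of that kind of workaround might look like this (file names are hypothetical; the idea is one pyproject variant per platform, copied into place before locking/installing):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the renaming workaround: keep one pyproject
# variant per platform and copy the right one into place before
# running `poetry lock && poetry install`.
set -euo pipefail

case "$(uname -s)" in
    Darwin) variant="pyproject.macos.toml" ;;
    *)      variant="pyproject.linux.toml" ;;
esac

echo "Selected ${variant}"
# cp "${variant}" pyproject.toml && poetry lock && poetry install
```

The actual copy and install steps are left commented out here, since the real file names and project layout aren't shown in the thread.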

@lucaspar
Owner

lucaspar commented Jul 4, 2024

True, it seems all we have now are workarounds.

This is not even a "Poetry" issue: AFAIK no Python package manager today reliably replicates an environment across different platforms with conditional hardware acceleration. Conda's environment.yaml files perhaps get the closest, but without a proper .lock file I've had them fail miserably when it comes to reproducibility.

@pschoen-itsc

pschoen-itsc commented Oct 1, 2024

Here is a solution that works for me:

torch = [
    { version = "==2.4.0", source = "pypi", markers = "sys_platform=='darwin'" },
    { version = "==2.4.0", source = "pytorch-cpu", markers = "sys_platform!='darwin' and extra=='cpu' and extra!='cuda'" }
]

[tool.poetry.group.cuda]
optional = true

[tool.poetry.group.cuda.dependencies]
torch = { version = "==2.4.0", markers = "sys_platform!='darwin' and extra=='cuda' and extra!='cpu'" }

The pypi source here is my own proxy of the public PyPI, but that should not make a difference. It is also my primary source, so the CUDA torch should be pulled from there as well. I don't know for sure whether every part of this is needed, but it does work.
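
Assuming the group and extras names from the snippet above, installation would then presumably be selected per target along these lines (hypothetical invocations, not shown in the original comment):

```
# CPU-only environments (macOS, or Linux/Windows without CUDA):
poetry install --extras cpu

# CUDA machines: enable the optional group and the cuda extra:
poetry install --with cuda --extras cuda
```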
