Cannot install the CPU version of torch 2.6.0 #11406
Sorry, there must be something else here that isn't captured by your example. The lockfile generated by this:

[project]
name = "foo"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
]
[project.optional-dependencies]
# e.g. torch-2.6.0+cpu.cxx11.abi-cp311-cp311-linux_x86_64.whl
cpu = [
"torch>=2.6.0",
"torchvision>=0.21.0",
]
cu124 = [
"torch>=2.6.0",
"torchvision>=0.21.0",
]
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "cu124" },
],
]
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cu124" },
]
torchvision = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cu124" },
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

This only includes:

[[package]]
name = "torch"
version = "2.6.0"
source = { registry = "https://download.pytorch.org/whl/cpu" }
As an aside,
Fair enough, I quickly glanced at the wheels, but I ultimately wanted
Are you able to share the rest of the pyproject.toml, or at least the dependencies field? It's missing details necessary to reproduce the issue.
Sure, here is the full one.
I think this is a preference problem. We end up choosing 2.6.0 rather than 2.6.0+cpu.
I think part of the issue is that you also have torch defined as a non-optional dependency. Where do you expect torch to come from when no extra is enabled?
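For clarity, the pattern being described is roughly the following (a hypothetical sketch, since the original pyproject.toml is not shown in this thread; the project name and version bounds are placeholders):

[project]
name = "example"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    # A non-optional torch is resolved even when neither extra is enabled,
    # so it is not tied to either PyTorch index under [tool.uv.sources].
    "torch>=2.6.0",
]

[project.optional-dependencies]
cpu = ["torch>=2.6.0"]
cu124 = ["torch>=2.6.0"]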
E.g., I think this would do what you want:

[project]
name = "venv"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11, <3.12"
dependencies = [
"einops",
"fire>=0.7.0",
"gradio",
"huggingface-hub",
"hydra-core>=1.3.2",
"imageio[ffmpeg]",
"iopath>=0.1.10",
"loguru>=0.7.3",
"numpy==1.24.3",
"omegaconf",
"opencv-python",
"pillow>=11.1.0",
"plotly",
"poselib>=2.0.2",
"pre-commit>=4.1.0",
"pyceres==2.3",
"pycolmap==3.10.0",
"pyqt5-sip>=12.17.0",
"pyqtchart==5.12",
"pyrender==0.1.45",
"roma==1.4.1",
"scikit-image>=0.25.1",
"scikit-learn",
"scipy==1.11.2",
"setuptools>=75.8.0",
"tqdm",
"trimesh",
"visdom>=0.2.4",
]
[project.optional-dependencies]
cpu = [
"torch>=2.6.0",
"torchvision>=0.21.0",
"timm==0.6.7",
]
cu124 = [
"torch>=2.6.0",
"torchvision>=0.21.0",
"timm==0.6.7",
]
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "cu124" },
],
]
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cu124" },
]
torchvision = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cu124" },
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
Hmm. Yes, I agree that it is not the most fool-proof way to set up the dependencies. I updated the pyproject.toml accordingly.
Yeah, unfortunately I think you have to do this for now, until we fix the "preference" bug I described above:

cpu = [
"torch==2.6.0+cpu",
"torchvision==0.21.0+cpu",
"timm==0.6.7",
]
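If the CUDA extra runs into the same preference issue, the analogous pinning would look roughly like this (a hypothetical sketch, not something stated in the thread; the +cu124 local tags are an assumption about what the cu124 index publishes):

cu124 = [
    # Hypothetical counterpart: pin the local variants so the resolver cannot
    # fall back to the bare build preferred by another fork.
    "torch==2.6.0+cu124",
    "torchvision==0.21.0+cu124",
    "timm==0.6.7",
]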
Hey @charliermarsh, wasn't this issue resolved in #11546? I am still facing it.
Yup, the issue is resolved. You should open a new issue with a complete reproduction if you're facing the same problem. (You'll definitely need to
## Summary

This PR fixes a subtle issue arising from our propagation of preferences. When we resolve a fork, we take the solution from that fork and mark all the chosen versions as "preferred" as we move on to the next fork. In this specific case, the resolver ended up solving a macOS-specific fork first, which led us to pick `2.6.0` rather than `2.6.0+cpu`. This in itself is correct; but when we moved on to the next fork, we preferred `2.6.0` over `2.6.0+cpu`, despite the fact that `2.6.0` _only_ includes macOS wheels, and that branch was focused on Linux. Now, in preferences, we prefer local variants (if they exist). If the local variant ends up not working, we'll presumably backtrack to the base version anyway. Closes astral-sh#11406.
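For illustration, here is a rough sketch of the torch entry the Linux-focused fork ends up locking, written in the [[package]] format quoted earlier in this thread (a sketch only; the exact fields and fork layout in uv.lock may differ):

# Before the fix: the macOS fork's choice leaks into the Linux fork as a preference.
[[package]]
name = "torch"
version = "2.6.0"
source = { registry = "https://download.pytorch.org/whl/cpu" }

# After the fix: the local variant is preferred when it exists on the index.
[[package]]
name = "torch"
version = "2.6.0+cpu"
source = { registry = "https://download.pytorch.org/whl/cpu" }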
Summary
I am facing similar issues as described in #1497 (comment) after following your instructions and trying to get both the CPU and CUDA extras working: the CPU build for debugging on a laptop and the CUDA build for actually running the code.
uv sync --extra cu124 works.
uv pip install torch==2.6.0+cpu --index-url https://download.pytorch.org/whl/cpu works.
uv sync --extra cpu fails with the following message:

I would say that torch-2.6.0+cpu.cxx11.abi-cp311-cp311-linux_x86_64.whl seems like the correct wheel? Can I somehow force uv to use this wheel?
My pyproject.toml
uv.lock
Platform
Ubuntu 22.04, x86_64 (Linux 6.8.0-52-generic)
Version
uv version: 0.5.29
Python version
Python 3.11.10