Closed
Labels
bug (Something isn't working)
Description
🐛 Describe the bug
Build Error on AArch64 due to old version of torch in cpu-build.txt
Reproducer:
pip install -r vllm/requirements/cpu-build.txt
VLLM_TARGET_DEVICE=cpu python3 setup.py bdist_wheel
Error:
[249/260] Building CXX object CMakeFiles/_C.dir/csrc/moe/dynamic_4bit_int_moe_cpu.cpp.o
FAILED: [code=1] CMakeFiles/_C.dir/csrc/moe/dynamic_4bit_int_moe_cpu.cpp.o
ccache /usr/bin/c++ -DARM_BF16_SUPPORT -DPy_LIMITED_API=3 -DTORCH_EXTENSION_NAME=_C -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_C_EXPORTS -I/home/fadara01/vllm-prepack-weights/vllm/csrc -I/home/fadara01/vllm-prepack-weights/vllm/.deps/onednn-src/include -I/home/fadara01/vllm-prepack-weights/vllm/.deps/onednn-build/include -I/home/fadara01/vllm-prepack-weights/vllm/.deps/onednn-src/src/../include -isystem /usr/include/python3.10 -isystem /home/fadara01/vllm-prepack-weights/venv/lib/python3.10/site-packages/torch/include -isystem /home/fadara01/vllm-prepack-weights/venv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -Wl,-rpath,/home/fadara01/vllm-reproduce/ComputeLibrary/build/ -O2 -g -DNDEBUG -std=gnu++17 -fPIC -fopenmp -DVLLM_CPU_EXTENSION -march=armv8.2-a+bf16+dotprod+fp16 -D_GLIBCXX_USE_CXX11_ABI=1 -MD -MT CMakeFiles/_C.dir/csrc/moe/dynamic_4bit_int_moe_cpu.cpp.o -MF CMakeFiles/_C.dir/csrc/moe/dynamic_4bit_int_moe_cpu.cpp.o.d -o CMakeFiles/_C.dir/csrc/moe/dynamic_4bit_int_moe_cpu.cpp.o -c /home/fadara01/vllm-prepack-weights/vllm/csrc/moe/dynamic_4bit_int_moe_cpu.cpp
/home/fadara01/vllm-prepack-weights/vllm/csrc/moe/dynamic_4bit_int_moe_cpu.cpp:7:12: fatal error: ATen/ops/_dyn_quant_matmul_4bit.h: No such file or directory
7 | #include <ATen/ops/_dyn_quant_matmul_4bit.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
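A quick runtime probe can confirm the root cause (a sketch, assuming the missing per-op header `ATen/ops/_dyn_quant_matmul_4bit.h` corresponds to the aten operator of the same name, which is how ATen's generated per-op headers are laid out):

```python
def has_dyn_quant_matmul_4bit() -> bool:
    """Report whether the installed PyTorch exposes aten::_dyn_quant_matmul_4bit.

    Per this issue, torch 2.6.0 lacks the op while 2.8.0 provides it.
    """
    try:
        import torch
    except ImportError:
        # No torch in this environment; treat the op as unavailable.
        return False
    # Looking up a nonexistent op raises AttributeError, which hasattr absorbs.
    return hasattr(torch.ops.aten, "_dyn_quant_matmul_4bit")

if __name__ == "__main__":
    print("_dyn_quant_matmul_4bit available:", has_dyn_quant_matmul_4bit())
```

Running this inside the build virtualenv before invoking `setup.py` would distinguish the version mismatch from other build problems.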
Reason for Error:
On AArch64, requirements/cpu-build.txt pins torch to v2.6.0, which mismatches the version pinned in requirements/cpu.txt (v2.8.0). Note that v2.6.0 does not contain the ops (introduced upstream in PyTorch) that are used in #23809.
The fix should be to update the PyTorch version in requirements/cpu-build.txt for AArch64 to match the one in requirements/cpu.txt.
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
Your current environment

The output of `python collect_env.py`:
cpu = _conversion_method_template(device=torch.device("cpu"))
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (aarch64)
GCC version : (Ubuntu 12.3.0-1ubuntu1~22.04.2) 12.3.0
Clang version : 16.0.6 (++20231112100510+7cbf1a259152-1~exp1~20231112100554.106)
CMake version : version 4.1.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.6.0+cpu
Is debug build : False
CUDA used to build PyTorch : None
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.8.0-1036-aws-aarch64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: ARM
Model name: Neoverse-V2
Model: 1
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 1
Stepping: r0p1
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti
L1d cache: 6 MiB (96 instances)
L1i cache: 6 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] torch==2.6.0+cpu
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.0rc2.dev69+g164299500 (git sha: 164299500)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
Could not collect
==============================
Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1