Your current environment
The output of python collect_env.py
Collecting environment information...
==============================
        System Info
==============================
OS                           : CentOS Stream 9 (x86_64)
GCC version                  : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11)
Clang version                : Could not collect
CMake version                : version 3.31.6
Libc version                 : glibc-2.34
==============================
       PyTorch Info
==============================
PyTorch version              : 2.10.0.dev20250916+rocm6.4
Is debug build               : False
CUDA used to build PyTorch   : N/A
ROCM used to build PyTorch   : 6.4.43484-123eb5128
==============================
      Python Environment
==============================
Python version               : 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (64-bit runtime)
Python platform              : Linux-6.4.3-0_fbk20_zion_2830_g3e5ab162667d-x86_64-with-glibc2.34
==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : Could not collect
CUDA_MODULE_LOADING set to   : 
GPU models and configuration : AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version        : Could not collect
cuDNN version                : Could not collect
HIP runtime version          : 6.4.43484
MIOpen runtime version       : 3.4.0
Is XNNPACK available         : True
==============================
          CPU Info
==============================
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             384
On-line CPU(s) list:                0-383
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 9654 96-Core Processor
CPU family:                         25
Model:                              17
Thread(s) per core:                 2
Core(s) per socket:                 96
Socket(s):                          2
Stepping:                           1
Frequency boost:                    enabled
CPU(s) scaling MHz:                 86%
CPU max MHz:                        3707.8120
CPU min MHz:                        1500.0000
BogoMIPS:                           4792.43
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                     AMD-V
L1d cache:                          6 MiB (192 instances)
L1i cache:                          6 MiB (192 instances)
L2 cache:                           192 MiB (192 instances)
L3 cache:                           768 MiB (24 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-95,192-287
NUMA node1 CPU(s):                  96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
==============================
Versions of relevant libraries
==============================
[pip3] conch-triton-kernels==1.2.1
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pytorch-triton-rocm==3.5.0+git5ae38bdb
[pip3] pyzmq==27.0.2
[pip3] torch==2.10.0.dev20250916+rocm6.4
[pip3] torchao==0.14.0.dev20250917+rocm6.4
[pip3] torchaudio==2.8.0.dev20250917+rocm6.4
[pip3] torchvision==0.25.0.dev20250917+rocm6.4
[pip3] transformers==4.55.4
[pip3] triton==3.3.0
[conda] Could not collect
==============================
         vLLM Info
==============================
ROCM Version                 : 6.4.43484-123eb5128
vLLM Version                 : 0.10.1rc2.dev544+g81c53ef55.d20250925 (git sha: 81c53ef55, date: 20250925)
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
  ============================ ROCm System Management Interface ============================
================================ Weight between two GPUs =================================
       GPU0         GPU1         GPU2         GPU3         GPU4         GPU5         GPU6         GPU7         
GPU0   0            15           15           15           15           15           15           15           
GPU1   15           0            15           15           15           15           15           15           
GPU2   15           15           0            15           15           15           15           15           
GPU3   15           15           15           0            15           15           15           15           
GPU4   15           15           15           15           0            15           15           15           
GPU5   15           15           15           15           15           0            15           15           
GPU6   15           15           15           15           15           15           0            15           
GPU7   15           15           15           15           15           15           15           0            
================================= Hops between two GPUs ==================================
       GPU0         GPU1         GPU2         GPU3         GPU4         GPU5         GPU6         GPU7         
GPU0   0            1            1            1            1            1            1            1            
GPU1   1            0            1            1            1            1            1            1            
GPU2   1            1            0            1            1            1            1            1            
GPU3   1            1            1            0            1            1            1            1            
GPU4   1            1            1            1            0            1            1            1            
GPU5   1            1            1            1            1            0            1            1            
GPU6   1            1            1            1            1            1            0            1            
GPU7   1            1            1            1            1            1            1            0            
=============================== Link Type between two GPUs ===============================
       GPU0         GPU1         GPU2         GPU3         GPU4         GPU5         GPU6         GPU7         
GPU0   0            XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         
GPU1   XGMI         0            XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         
GPU2   XGMI         XGMI         0            XGMI         XGMI         XGMI         XGMI         XGMI         
GPU3   XGMI         XGMI         XGMI         0            XGMI         XGMI         XGMI         XGMI         
GPU4   XGMI         XGMI         XGMI         XGMI         0            XGMI         XGMI         XGMI         
GPU5   XGMI         XGMI         XGMI         XGMI         XGMI         0            XGMI         XGMI         
GPU6   XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         0            XGMI         
GPU7   XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         XGMI         0            
======================================= Numa Nodes =======================================
GPU[0]          : (Topology) Numa Node: 0
GPU[0]          : (Topology) Numa Affinity: 0
GPU[1]          : (Topology) Numa Node: 0
GPU[1]          : (Topology) Numa Affinity: 0
GPU[2]          : (Topology) Numa Node: 0
GPU[2]          : (Topology) Numa Affinity: 0
GPU[3]          : (Topology) Numa Node: 0
GPU[3]          : (Topology) Numa Affinity: 0
GPU[4]          : (Topology) Numa Node: 1
GPU[4]          : (Topology) Numa Affinity: 1
GPU[5]          : (Topology) Numa Node: 1
GPU[5]          : (Topology) Numa Affinity: 1
GPU[6]          : (Topology) Numa Node: 1
GPU[6]          : (Topology) Numa Affinity: 1
GPU[7]          : (Topology) Numa Node: 1
GPU[7]          : (Topology) Numa Affinity: 1
================================== End of ROCm SMI Log ===================================
==============================
     Environment Variables
==============================
CUDA_CACHE_PATH=/data/users/lifans/.nv/ComputeCache
CUDA_NVCC_EXECUTABLE=/home/lifans/local/ccache/cuda/nvcc
LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64/:/usr/local/cuda-12.4/lib64/:
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
🐛 Describe the bug
A large max_num_seqs hurts performance on MI300X due to excessive cudagraph captures.
Mean ITL (ms):
- ITL at a real batch size of 1 increased from 12.05 ms to 17.35 ms when max-num-seqs is changed from 32 to 128
Extra Findings:
- H100 does not show this regression
- Limiting the maximum cuda_graph_sizes when max-num-seqs = 128 can mitigate the issue (manually change self.cuda_graph_sizes from [min(self.max_num_seqs * 2, 512)] to [min(64, 512)]); see the sketch after this list.
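A minimal sketch of the point above, assuming the default capture cap in vLLM's SchedulerConfig is computed as [min(self.max_num_seqs * 2, 512)] as quoted; the helper function below is illustrative only and is not vLLM's actual capture schedule:
def default_capture_cap(max_num_seqs: int) -> int:
    # Assumed default upper bound on cudagraph capture size
    # (mirrors self.cuda_graph_sizes = [min(self.max_num_seqs * 2, 512)]).
    return min(max_num_seqs * 2, 512)

for max_num_seqs in (32, 128):
    print(f"max_num_seqs={max_num_seqs:>3} -> capture cap={default_capture_cap(max_num_seqs)}")

# Expected output:
#   max_num_seqs= 32 -> capture cap=64
#   max_num_seqs=128 -> capture cap=256
#
# Workaround used in the experiment: hard-code the cap to 64, i.e.
#   self.cuda_graph_sizes = [min(64, 512)]
# instead of
#   self.cuda_graph_sizes = [min(self.max_num_seqs * 2, 512)]
So raising max-num-seqs from 32 to 128 quadruples the largest captured size (64 to 256), which is consistent with the extra capture overhead observed only in the max-num-seqs=128 run.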
Test 1:
# start the server with --max-num-seqs=32
MODEL=meta-llama/Llama-3.3-70B-Instruct
HF_HUB_DISABLE_XET=1 VLLM_USE_V1=1 with-proxy python -m vllm.entrypoints.openai.api_server --model $MODEL --disable-log-requests -tp 8 --port 8001 --no-enable-prefix-caching --max-model-len=8192 --max-num-seqs=32 --gpu_memory_utilization=0.8
# perf command
MODEL=meta-llama/Llama-3.3-70B-Instruct
python -m vllm.entrypoints.cli.main bench serve --model $MODEL --tokenizer $MODEL --port 8001  --dataset-name random  --ignore-eos  --num-prompts 20  --request-rate inf  --random-input-len 2048  --random-output-len 100  --max-concurrency 1
# result
---------------Inter-token Latency----------------
Mean ITL (ms):                           12.05     
Median ITL (ms):                         11.94     
P99 ITL (ms):                            14.53     
==================================================
Test 2:
#  start the server with --max-num-seqs=128 
MODEL=meta-llama/Llama-3.3-70B-Instruct
HF_HUB_DISABLE_XET=1 VLLM_USE_V1=1 with-proxy python -m vllm.entrypoints.openai.api_server --model $MODEL --disable-log-requests -tp 8 --port 8001 --no-enable-prefix-caching --max-model-len=8192 --max-num-seqs=128 --gpu_memory_utilization=0.8
# perf command
MODEL=meta-llama/Llama-3.3-70B-Instruct
python -m vllm.entrypoints.cli.main bench serve --model $MODEL --tokenizer $MODEL --port 8001  --dataset-name random  --ignore-eos  --num-prompts 20  --request-rate inf  --random-input-len 2048  --random-output-len 100  --max-concurrency 1
# result
---------------Inter-token Latency----------------
Mean ITL (ms):                           17.35     
Median ITL (ms):                         12.03     
P99 ITL (ms):                            20.82     
==================================================
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.