
[5/N][torch.compile] torch.jit.script --> torch.compile #10406

Merged: 2 commits into vllm-project:main from the no_jit branch on Nov 18, 2024

Conversation

@youkaichao (Member) commented Nov 17, 2024

fixes #8536

I also observe a performance gain:

python benchmarks/benchmark_throughput.py --input-len 1024 --output-len 256 --model meta-llama/Llama-3.1-8B -tp 2 --load-format dummy

main branch: 
Throughput: 17.91 requests/s, 22928.01 total tokens/s, 4585.60 output tokens/s

this PR:
Throughput: 18.69 requests/s, 23918.95 total tokens/s, 4783.79 output tokens/s
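
For context, a minimal sketch of the kind of swap this PR series makes, using a hypothetical helper rather than the actual vLLM code:

import torch

# Before (TorchScript): compiled eagerly at definition time.
# @torch.jit.script
# def scale_logits(logits: torch.Tensor, temperature: float) -> torch.Tensor:
#     return logits / temperature

# After (Dynamo): compiled lazily on first call; dynamic=True
# avoids recompiling for every new batch shape.
@torch.compile(dynamic=True)
def scale_logits(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    return logits / temperature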

Signed-off-by: youkaichao <youkaichao@gmail.com>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@@ -1770,7 +1770,7 @@ def capture(
     # Run the model a few times without capturing the graph.
     # This is to make sure that the captured graph does not include the
     # kernel launches for initial benchmarking (e.g., Triton autotune).
-    # Note one iteration is not enough for torch.jit.script
+    # Note one iteration is not enough for torch.compile
@youkaichao (Member, Author) commented:

I think one warmup iteration should be enough for torch.compile, but we can investigate and confirm later.
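
For reference, a rough sketch of how the warmup interacts with capture, using the public torch.cuda graph APIs rather than the actual vLLM capture code. Iterations run before capture keep torch.compile's compilation and any Triton autotuning launches out of the recorded graph:

import torch

def capture_with_warmup(run_step, num_warmup: int = 2):
    # run_step is a hypothetical callable that executes one forward
    # pass on pre-allocated, static input buffers.
    # Warm up on a side stream so that compilation and autotuning
    # kernel launches happen before, not inside, the captured graph.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(num_warmup):
            run_step()
    torch.cuda.current_stream().wait_stream(s)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        out = run_step()
    # Calling graph.replay() later re-runs the recorded kernels
    # against the same static buffers.
    return graph, out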

@youkaichao youkaichao changed the title [5/N] replace torch.jit.script with torch.compile [5/N][torch.compile] torch.jit.script --> torch.compile Nov 17, 2024
Signed-off-by: youkaichao <youkaichao@gmail.com>

@WoosukKwon WoosukKwon left a comment


LGTM. Thanks!

@youkaichao added the ready label (ONLY add when PR is ready to merge/full CI is needed) Nov 18, 2024
@DarkLight1337 DarkLight1337 merged commit 7851b45 into vllm-project:main Nov 18, 2024
64 checks passed
@youkaichao youkaichao deleted the no_jit branch November 18, 2024 17:10
mikejuliet13 pushed a commit to mikejuliet13/vllm that referenced this pull request Nov 19, 2024
coolkp pushed a commit to coolkp/vllm that referenced this pull request Nov 20, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
mfournioux pushed a commit to mfournioux/vllm that referenced this pull request Nov 20, 2024
rickyyx pushed a commit to rickyyx/vllm that referenced this pull request Nov 20, 2024
tlrmchlsmth pushed a commit to neuralmagic/vllm that referenced this pull request Nov 23, 2024
prashantgupta24 pushed a commit to opendatahub-io/vllm that referenced this pull request Dec 3, 2024
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
@suneeta-mall commented:

Hey @youkaichao, I am running Python 3.11 and CUDA 12.4 and do a full build of vLLM in my environment. My env spec is as follows:

Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.31

Python version: 3.11.11 (main, Dec  4 2024, 08:55:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1045-nvidia-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.154.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      43 bits physical, 48 bits virtual
CPU(s):                             256
On-line CPU(s) list:                0-255
Thread(s) per core:                 2
Core(s) per socket:                 64
Socket(s):                          2
NUMA node(s):                       8
Vendor ID:                          AuthenticAMD
CPU family:                         23
Model:                              49
Model name:                         AMD EPYC 7742 64-Core Processor
Stepping:                           0
Frequency boost:                    enabled
CPU MHz:                            2250.000
CPU max MHz:                        2250.0000
CPU min MHz:                        1500.0000
BogoMIPS:                           4491.10
Virtualization:                     AMD-V
L1d cache:                          4 MiB
L1i cache:                          4 MiB
L2 cache:                           64 MiB
L3 cache:                           512 MiB
NUMA node0 CPU(s):                  0-15,128-143
NUMA node1 CPU(s):                  16-31,144-159
NUMA node2 CPU(s):                  32-47,160-175
NUMA node3 CPU(s):                  48-63,176-191
NUMA node4 CPU(s):                  64-79,192-207
NUMA node5 CPU(s):                  80-95,208-223
NUMA node6 CPU(s):                  96-111,224-239
NUMA node7 CPU(s):                  112-127,240-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] mypy==1.11.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==3.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.6.0.dev20241215+cu124
[pip3] torchvision==0.20.1
[pip3] transformers==4.47.0
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.1.0

When I launch the vLLM server, I get this error irrespective of the model type:

ERROR 12-17 02:48:52 registry.py:333]   File "/wks/vllm/vllm/model_executor/layers/vocab_parallel_embedding.py", line 136, in <module>
ERROR 12-17 02:48:52 registry.py:333]     @torch.compile(dynamic=True)
ERROR 12-17 02:48:52 registry.py:333]      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-17 02:48:52 registry.py:333]   File "/wks/vllm/.venv/lib/python3.11/site-packages/torch/__init__.py", line 2424, in fn
ERROR 12-17 02:48:52 registry.py:333]     return compile(
ERROR 12-17 02:48:52 registry.py:333]            ^^^^^^^^
ERROR 12-17 02:48:52 registry.py:333]   File "/wks/vllm/.venv/lib/python3.11/site-packages/torch/__init__.py", line 2447, in compile
ERROR 12-17 02:48:52 registry.py:333]     return torch._dynamo.optimize(
ERROR 12-17 02:48:52 registry.py:333]            ^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-17 02:48:52 registry.py:333]   File "/wks/vllm/.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 716, in optimize
ERROR 12-17 02:48:52 registry.py:333]     return _optimize(rebuild_ctx, *args, **kwargs)
ERROR 12-17 02:48:52 registry.py:333]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-17 02:48:52 registry.py:333]   File "/wks/vllm/.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 790, in _optimize
ERROR

I have realised that it is coming from the torch.compile decorator on vocab_parallel_embedding.get_masked_input_and_mask. If I switch the decorator back to torch.jit.script, it seems to work fine. Do you see any reason why I would have this issue? I am conscious that the issue could be related to the PyTorch stack version, but my env is up to date per the requirement files in the repo, at least as of commit 69ba344de8683ec4d3d42d11ae4e147a2a302da8.
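
One hypothetical way to isolate this (the function body below is illustrative, not the actual vocab_parallel_embedding code) is to decorate the same helper both ways and check whether torch.compile already fails at decoration/import time in the affected environment:

import torch

def masked_shift(input_: torch.Tensor, start: int, end: int):
    # Illustrative stand-in for get_masked_input_and_mask.
    mask = (input_ >= start) & (input_ < end)
    return torch.where(mask, input_ - start, torch.zeros_like(input_)), mask

# Old path: TorchScript compiles eagerly here and reportedly works.
scripted = torch.jit.script(masked_shift)

# New path: in the reported environment, the traceback above fires at
# this point (module import), before the function is ever called.
compiled = torch.compile(masked_shift, dynamic=True)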
