
[RFC] Initial support for Intel GPU #3725

Closed · 6 tasks

jikunshang opened this issue Mar 29, 2024 · 1 comment
jikunshang (Contributor) commented Mar 29, 2024

Anything you want to discuss about vllm.

Progress

  • CMake and build system for Intel XPU/SYCL
  • vLLM custom op implementation in SYCL source code
  • Integrate Intel XPU backend for basic model inference
  • Support tensor parallelism with Ray for the XPU backend
  • Integrate with IPEX-optimized kernels (e.g., paged attention) for better performance
  • Quantization support

Target Intel GPU device and models

For Intel GPU devices (named xpu in the PyTorch context), we are trying to make vLLM natively support Intel Xe architecture graphics cards, including data center Max GPUs (such as the PVC 1550 and PVC 1100) and client GPUs (such as the Arc A770).

For models, we will make sure vLLM + XPU works well with all models that vLLM already supports.

Design

Python API

Since Intel GPUs have a similar API (via IPEX) and similar behavior to CUDA devices, we only introduce two new classes:

  1. XPUExecutor (extends ExecutorBase) behaves like GpuExecutor; LLMEngine and AsyncLLMEngine dispatch to this executor class based on the configured device type.

  2. XPUWorker (extends the Worker class) initializes the environment; most of its code is shared with the parent class.
    [architecture diagram: xpu_vllm]
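The dispatch described above can be sketched as follows. This is a minimal illustration of selecting an executor class by device type, not vLLM's actual internals; the class and function names here (other than XPUExecutor and ExecutorBase, mentioned above) are hypothetical.

```python
# Hypothetical sketch: device-type-based executor dispatch.
# Real vLLM executors carry model/parallel/scheduler configs; omitted here.

class ExecutorBase:
    def __init__(self, device_type: str):
        self.device_type = device_type


class GPUExecutor(ExecutorBase):
    """Executor for CUDA devices."""


class XPUExecutor(ExecutorBase):
    """Mirrors the GPU executor but targets torch's 'xpu' device via IPEX."""


def create_executor(device_type: str) -> ExecutorBase:
    # The engine would pick the executor class from the configured device.
    registry = {"cuda": GPUExecutor, "xpu": XPUExecutor}
    try:
        cls = registry[device_type]
    except KeyError:
        raise ValueError(f"unsupported device type: {device_type}") from None
    return cls(device_type)


print(type(create_executor("xpu")).__name__)  # XPUExecutor
```

Keeping XPUExecutor behind the same factory as the CUDA path means no call sites change when a new device type is added.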

Torch API

Meanwhile, we introduce the torch_sdpa backend (reusing torch's scaled_dot_product_attention from the CPU backend) to compute attention over prompt tokens, since xformers and flash_attn are not supported on Intel GPUs.
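For reference, the computation that torch.nn.functional.scaled_dot_product_attention performs (without masking or dropout) is softmax(QKᵀ/√d)V. A pure-Python sketch of that math, using nested lists instead of tensors purely for illustration:

```python
import math


def scaled_dot_product_attention(q, k, v):
    """Reference of the math behind torch's scaled_dot_product_attention
    (no mask, no dropout). q, k, v are seq_len x head_dim lists of lists."""
    d = len(q[0])
    out = []
    for qi in q:
        # Attention scores: dot(q, k) / sqrt(head_dim).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output row: attention-weighted sum of the value vectors.
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out
```

The torch_sdpa backend simply delegates this computation to PyTorch, which dispatches to an optimized kernel for the active device.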

Custom Op

vLLM implements many efficient CUDA kernels, packaged as the _C library via pybind11. We port these kernels to SYCL with the same function signatures, so they can replace the CUDA kernels directly. The SYCL custom kernel build procedure is integrated into vLLM's CMake build system.
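As a concrete example of what "same function signatures" buys: one of vLLM's custom _C ops is the fused silu_and_mul activation, and a SYCL port must reproduce its semantics exactly. A pure-Python reference of that semantics (the real op works on torch tensors of shape [..., 2*d]; this flat-list version is only an illustration):

```python
import math


def silu_and_mul(x):
    """Pure-Python reference for vLLM's fused silu_and_mul activation op.

    The input x is a flat list of length 2*d; the output has length d with
    output[i] = silu(x[i]) * x[d + i], where silu(v) = v * sigmoid(v).
    A SYCL port of the CUDA kernel must match this element-wise result.
    """
    d = len(x) // 2
    silu = lambda v: v / (1.0 + math.exp(-v))
    return [silu(x[i]) * x[d + i] for i in range(d)]
```

Because the SYCL kernel keeps the CUDA signature, the Python-side call sites need no changes; only the compiled backend differs.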

Background & References

Intel Max series GPU: https://www.intel.com/content/www/us/en/products/docs/processors/max-series/overview.html
You can try to get Intel GPU access via Intel Developer Cloud.
Intel Extension for PyTorch: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main


This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
