Add documentation on how to do incremental builds (vllm-project#2796)
pcmoritz authored and jimpang committed Feb 22, 2024
1 parent 88483a6 commit 7a0823f
Showing 2 changed files with 15 additions and 0 deletions.
10 changes: 10 additions & 0 deletions docs/source/getting_started/installation.rst
@@ -67,3 +67,13 @@ You can also build and install vLLM from source:
$ # Use `--ipc=host` to make sure the shared memory is large enough.
$ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3
.. note::
    If you are developing the C++ backend of vLLM, consider building vLLM with

    .. code-block:: console

        $ python setup.py develop

    since it will give you incremental builds. The downside is that this method
    is `deprecated by setuptools <https://github.com/pypa/setuptools/issues/917>`_.
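
The incremental workflow this note describes can be sketched as the following console session (the clone path is hypothetical; it assumes a local checkout of the vLLM repository with its build dependencies installed):

```console
$ cd vllm                  # hypothetical path to your local clone
$ python setup.py develop  # first run: full build of the C++/CUDA extensions
$ # ...edit a C++ source file, then re-run:
$ python setup.py develop  # subsequent runs recompile only changed files
```

Because develop mode links the checkout into the environment in place, Python-side edits are picked up without any reinstall; only changes to the compiled backend require re-running the build.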
5 changes: 5 additions & 0 deletions setup.py
@@ -15,6 +15,11 @@

ROOT_DIR = os.path.dirname(__file__)

# If you are developing the C++ backend of vLLM, consider building vLLM with
# `python setup.py develop` since it will give you incremental builds.
# The downside is that this method is deprecated, see
# https://github.com/pypa/setuptools/issues/917

MAIN_CUDA_VERSION = "12.1"

# Supported NVIDIA GPU architectures.
