From be553c848c2a73d9e247e84c30939bf7efa992b6 Mon Sep 17 00:00:00 2001 From: David Xia Date: Tue, 10 Jun 2025 21:13:25 -0400 Subject: [PATCH] [doc] fix "Other AI accelerators" getting started page * add `configure-a-new-environment` snippet tag to all included sub-pages to fix malformed page layout and headers * restore TPU parameter descriptions truncated in #18145 [link] * list in "Provision Cloud TPUs with GKE" requires newline before start * indent line continuation for `gcloud` command in "Provision a Cloud TPU with the queued resource API" [link]: https://github.com/vllm-project/vllm/pull/18145/files#diff-5c4190821389df4c03e59603ce0143a517beb8ae241103b83914ebb000a2b9ba Signed-off-by: David Xia --- .../ai_accelerator/hpu-gaudi.inc.md | 6 +- .../installation/ai_accelerator/neuron.inc.md | 55 ++++++++++--------- .../installation/ai_accelerator/tpu.inc.md | 32 ++++++----- 3 files changed, 50 insertions(+), 43 deletions(-) diff --git a/docs/getting_started/installation/ai_accelerator/hpu-gaudi.inc.md b/docs/getting_started/installation/ai_accelerator/hpu-gaudi.inc.md index 00935a37417e..71ec7e2cc2c6 100644 --- a/docs/getting_started/installation/ai_accelerator/hpu-gaudi.inc.md +++ b/docs/getting_started/installation/ai_accelerator/hpu-gaudi.inc.md @@ -19,7 +19,8 @@ to set up the execution environment. To achieve the best performance, please follow the methods outlined in the [Optimizing Training Platform Guide](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Optimization_in_Training_Platform.html). -## Configure a new environment +# --8<-- [end:requirements] +# --8<-- [start:configure-a-new-environment] ### Environment verification @@ -56,7 +57,7 @@ docker run \ vault.habana.ai/gaudi-docker/1.18.0/ubuntu22.04/habanalabs/pytorch-installer-2.4.0:latest ``` -# --8<-- [end:requirements] +# --8<-- [end:configure-a-new-environment] # --8<-- [start:set-up-using-python] # --8<-- [end:set-up-using-python] @@ -183,7 +184,6 @@ Currently in vLLM for HPU we support four execution modes, depending on selected | 0 | 0 | torch.compile | | 0 | 1 | PyTorch eager mode | | 1 | 0 | HPU Graphs | -
vLLM execution modes
!!! warning In 1.18.0, all modes utilizing `PT_HPU_LAZY_MODE=0` are highly experimental and should be only used for validating functional correctness. Their performance will be improved in the next releases. For obtaining the best performance in 1.18.0, please use HPU Graphs, or PyTorch lazy mode. diff --git a/docs/getting_started/installation/ai_accelerator/neuron.inc.md b/docs/getting_started/installation/ai_accelerator/neuron.inc.md index 86c12472fb36..3649cd328088 100644 --- a/docs/getting_started/installation/ai_accelerator/neuron.inc.md +++ b/docs/getting_started/installation/ai_accelerator/neuron.inc.md @@ -1,8 +1,8 @@ # --8<-- [start:installation] -[AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) is the software development kit (SDK) used to run deep learning and - generative AI workloads on AWS Inferentia and AWS Trainium powered Amazon EC2 instances and UltraServers (Inf1, Inf2, Trn1, Trn2, - and Trn2 UltraServer). Both Trainium and Inferentia are powered by fully-independent heterogeneous compute-units called NeuronCores. +[AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) is the software development kit (SDK) used to run deep learning and + generative AI workloads on AWS Inferentia and AWS Trainium powered Amazon EC2 instances and UltraServers (Inf1, Inf2, Trn1, Trn2, + and Trn2 UltraServer). Both Trainium and Inferentia are powered by fully-independent heterogeneous compute-units called NeuronCores. This tab describes how to set up your environment to run vLLM on Neuron. !!! warning @@ -17,11 +17,12 @@ - Accelerator: NeuronCore-v2 (in trn1/inf2 chips) or NeuronCore-v3 (in trn2 chips) - AWS Neuron SDK 2.23 -## Configure a new environment +# --8<-- [end:requirements] +# --8<-- [start:configure-a-new-environment] ### Launch a Trn1/Trn2/Inf2 instance and verify Neuron dependencies -The easiest way to launch a Trainium or Inferentia instance with pre-installed Neuron dependencies is to follow this +The easiest way to launch a Trainium or Inferentia instance with pre-installed Neuron dependencies is to follow this [quick start guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/neuron-setup/multiframework/multi-framework-ubuntu22-neuron-dlami.html#setup-ubuntu22-multi-framework-dlami) using the Neuron Deep Learning AMI (Amazon machine image). - After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html) to connect to the instance @@ -30,14 +31,14 @@ The easiest way to launch a Trainium or Inferentia instance with pre-installed N source /opt/aws_neuronx_venv_pytorch_2_6_nxd_inference/bin/activate ``` -Refer to the [NxD Inference Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/nxdi-setup.html) +Refer to the [NxD Inference Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/nxdi-setup.html) for alternative setup instructions including using Docker and manually installing dependencies. !!! note - NxD Inference is the default recommended backend to run inference on Neuron. If you are looking to use the legacy [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx) - library, refer to [Transformers NeuronX Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/transformers-neuronx/setup/index.html). + NxD Inference is the default recommended backend to run inference on Neuron. 
If you are looking to use the legacy [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx) + library, refer to [Transformers NeuronX Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/transformers-neuronx/setup/index.html). -# --8<-- [end:requirements] +# --8<-- [end:configure-a-new-environment] # --8<-- [start:set-up-using-python] # --8<-- [end:set-up-using-python] @@ -59,14 +60,14 @@ pip install -U -r requirements/neuron.txt VLLM_TARGET_DEVICE="neuron" pip install -e . ``` -AWS Neuron maintains a [Github fork of vLLM](https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2) at - [https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2](https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2), which contains several features in addition to what's +AWS Neuron maintains a [Github fork of vLLM](https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2) at + [https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2](https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2), which contains several features in addition to what's available on vLLM V0. Please utilize the AWS Fork for the following features: - Llama-3.2 multi-modal support -- Multi-node distributed inference +- Multi-node distributed inference -Refer to [vLLM User Guide for NxD Inference](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/vllm-user-guide.html) +Refer to [vLLM User Guide for NxD Inference](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/vllm-user-guide.html) for more details and usage examples. To install the AWS Neuron fork, run the following: @@ -101,11 +102,11 @@ Make sure to use in place of the default Dock [](){ #feature-support-through-nxd-inference-backend } ### Feature support through NxD Inference backend -The current vLLM and Neuron integration relies on either the `neuronx-distributed-inference` (preferred) or `transformers-neuronx` backend - to perform most of the heavy lifting which includes PyTorch model initialization, compilation, and runtime execution. Therefore, most - [features supported on Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html) are also available via the vLLM integration. +The current vLLM and Neuron integration relies on either the `neuronx-distributed-inference` (preferred) or `transformers-neuronx` backend +to perform most of the heavy lifting which includes PyTorch model initialization, compilation, and runtime execution. Therefore, most +[features supported on Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html) are also available via the vLLM integration. -To configure NxD Inference features through the vLLM entrypoint, use the `override_neuron_config` setting. Provide the configs you want to override +To configure NxD Inference features through the vLLM entrypoint, use the `override_neuron_config` setting. Provide the configs you want to override as a dictionary (or JSON object when starting vLLM from the CLI). 
For example, to disable auto bucketing, include ```console override_neuron_config={ @@ -117,33 +118,33 @@ or when launching vLLM from the CLI, pass --override-neuron-config "{\"enable_bucketing\":false}" ``` -Alternatively, users can directly call the NxDI library to trace and compile your model, then load the pre-compiled artifacts -(via `NEURON_COMPILED_ARTIFACTS` environment variable) in vLLM to run inference workloads. +Alternatively, users can directly call the NxDI library to trace and compile your model, then load the pre-compiled artifacts +(via `NEURON_COMPILED_ARTIFACTS` environment variable) in vLLM to run inference workloads. ### Known limitations - EAGLE speculative decoding: NxD Inference requires the EAGLE draft checkpoint to include the LM head weights from the target model. Refer to this [guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/feature-guide.html#eagle-checkpoint-compatibility) for how to convert pretrained EAGLE model checkpoints to be compatible for NxDI. -- Quantization: the native quantization flow in vLLM is not well supported on NxD Inference. It is recommended to follow this - [Neuron quantization guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/custom-quantization.html) +- Quantization: the native quantization flow in vLLM is not well supported on NxD Inference. It is recommended to follow this + [Neuron quantization guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/custom-quantization.html) to quantize and compile your model using NxD Inference, and then load the compiled artifacts into vLLM. -- Multi-LoRA serving: NxD Inference only supports loading of LoRA adapters at server startup. Dynamic loading of LoRA adapters at +- Multi-LoRA serving: NxD Inference only supports loading of LoRA adapters at server startup. Dynamic loading of LoRA adapters at runtime is not currently supported. Refer to [multi-lora example](https://github.com/aws-neuron/upstreaming-to-vllm/blob/neuron-2.23-vllm-v0.7.2/examples/offline_inference/neuron_multi_lora.py) - Multi-modal support: multi-modal support is only available through the AWS Neuron fork. This feature has not been upstreamed to vLLM main because NxD Inference currently relies on certain adaptations to the core vLLM logic to support this feature. - Multi-node support: distributed inference across multiple Trainium/Inferentia instances is only supported on the AWS Neuron fork. Refer to this [multi-node example](https://github.com/aws-neuron/upstreaming-to-vllm/tree/neuron-2.23-vllm-v0.7.2/examples/neuron/multi_node) to run. Note that tensor parallelism (distributed inference across NeuronCores) is available in vLLM main. -- Known edge case bug in speculative decoding: An edge case failure may occur in speculative decoding when sequence length approaches - max model length (e.g. when requesting max tokens up to the max model length and ignoring eos). In this scenario, vLLM may attempt - to allocate an additional block to ensure there is enough memory for number of lookahead slots, but since we do not have good support - for paged attention, there isn't another Neuron block for vLLM to allocate. A workaround fix (to terminate 1 iteration early) is +- Known edge case bug in speculative decoding: An edge case failure may occur in speculative decoding when sequence length approaches + max model length (e.g. 
when requesting max tokens up to the max model length and ignoring eos). In this scenario, vLLM may attempt + to allocate an additional block to ensure there is enough memory for number of lookahead slots, but since we do not have good support + for paged attention, there isn't another Neuron block for vLLM to allocate. A workaround fix (to terminate 1 iteration early) is implemented in the AWS Neuron fork but is not upstreamed to vLLM main as it modifies core vLLM logic. ### Environment variables -- `NEURON_COMPILED_ARTIFACTS`: set this environment variable to point to your pre-compiled model artifacts directory to avoid +- `NEURON_COMPILED_ARTIFACTS`: set this environment variable to point to your pre-compiled model artifacts directory to avoid compilation time upon server initialization. If this variable is not set, the Neuron module will perform compilation and save the artifacts under `neuron-compiled-artifacts/{unique_hash}/` sub-directory in the model path. If this environment variable is set, but the directory does not exist, or the contents are invalid, Neuron will also fallback to a new compilation and store the artifacts diff --git a/docs/getting_started/installation/ai_accelerator/tpu.inc.md b/docs/getting_started/installation/ai_accelerator/tpu.inc.md index d0b168120137..9ac660a897f6 100644 --- a/docs/getting_started/installation/ai_accelerator/tpu.inc.md +++ b/docs/getting_started/installation/ai_accelerator/tpu.inc.md @@ -58,11 +58,13 @@ assigned to your Google Cloud project for your immediate exclusive use. ### Provision Cloud TPUs with GKE For more information about using TPUs with GKE, see: + - - - -## Configure a new environment +# --8<-- [end:requirements] +# --8<-- [start:configure-a-new-environment] ### Provision a Cloud TPU with the queued resource API @@ -70,23 +72,23 @@ Create a TPU v5e with 4 TPU chips: ```console gcloud alpha compute tpus queued-resources create QUEUED_RESOURCE_ID \ ---node-id TPU_NAME \ ---project PROJECT_ID \ ---zone ZONE \ ---accelerator-type ACCELERATOR_TYPE \ ---runtime-version RUNTIME_VERSION \ ---service-account SERVICE_ACCOUNT + --node-id TPU_NAME \ + --project PROJECT_ID \ + --zone ZONE \ + --accelerator-type ACCELERATOR_TYPE \ + --runtime-version RUNTIME_VERSION \ + --service-account SERVICE_ACCOUNT ``` | Parameter name | Description | |--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | QUEUED_RESOURCE_ID | The user-assigned ID of the queued resource request. | -| TPU_NAME | The user-assigned name of the TPU which is created when the queued | +| TPU_NAME | The user-assigned name of the TPU which is created when the queued resource request is allocated. | | PROJECT_ID | Your Google Cloud project | -| ZONE | The GCP zone where you want to create your Cloud TPU. The value you use | -| ACCELERATOR_TYPE | The TPU version you want to use. Specify the TPU version, for example | -| RUNTIME_VERSION | The TPU VM runtime version to use. For example, use `v2-alpha-tpuv6e` for a VM loaded with one or more v6e TPU(s). For more information see [TPU VM images](https://cloud.google.com/tpu/docs/runtimes). | -
Parameter descriptions
+| ZONE | The GCP zone where you want to create your Cloud TPU. The value you use depends on the version of TPUs you are using. For more information, see [TPU regions and zones] | +| ACCELERATOR_TYPE | The TPU version you want to use. Specify the TPU version, for example `v5litepod-4` specifies a v5e TPU with 4 cores, `v6e-1` specifies a v6e TPU with 1 core. For more information, see [TPU versions]. | +| RUNTIME_VERSION | The TPU VM runtime version to use. For example, use `v2-alpha-tpuv6e` for a VM loaded with one or more v6e TPU(s). For more information see [TPU VM images]. | +| SERVICE_ACCOUNT | The email address for your service account. You can find it in the IAM Cloud Console under *Service Accounts*. For example: `tpu-service-account@.iam.gserviceaccount.com` | Connect to your TPU using SSH: @@ -94,7 +96,11 @@ Connect to your TPU using SSH: gcloud compute tpus tpu-vm ssh TPU_NAME --zone ZONE ``` -# --8<-- [end:requirements] +[TPU versions]: https://cloud.google.com/tpu/docs/runtimes +[TPU VM images]: https://cloud.google.com/tpu/docs/runtimes +[TPU regions and zones]: https://cloud.google.com/tpu/docs/regions-zones + +# --8<-- [end:configure-a-new-environment] # --8<-- [start:set-up-using-python] # --8<-- [end:set-up-using-python]
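As an illustrative aside, the queued-resource and SSH commands documented above can be filled in as in the following sketch. The project, zone, and service-account values are hypothetical placeholders; the accelerator type and runtime version reuse the v6e examples given in the parameter table.

```console
# A filled-in sketch of the queued-resource request documented above.
# Project, zone, and service account are hypothetical placeholders;
# the accelerator type and runtime version are the v6e examples from the table.
gcloud alpha compute tpus queued-resources create my-queued-resource \
  --node-id my-tpu-v6e \
  --project my-gcp-project \
  --zone us-east5-b \
  --accelerator-type v6e-1 \
  --runtime-version v2-alpha-tpuv6e \
  --service-account tpu-service-account@my-gcp-project.iam.gserviceaccount.com

# Once the queued resource is allocated, connect to the TPU VM over SSH.
gcloud compute tpus tpu-vm ssh my-tpu-v6e --zone us-east5-b
```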