diff --git a/inference/a4x/single-host-serving/tensorrt-llm/README.md b/inference/a4x/single-host-serving/tensorrt-llm/README.md
new file mode 100644
index 0000000..d3ef41a
--- /dev/null
+++ b/inference/a4x/single-host-serving/tensorrt-llm/README.md
@@ -0,0 +1,384 @@
+# Single Host Model Serving with NVIDIA TensorRT-LLM (TRT-LLM) on A4x GKE Node Pool
+
+This document outlines the steps to serve and benchmark various Large Language Models (LLMs) using the [NVIDIA TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) framework on a single [A4x GKE Node pool](https://cloud.google.com/kubernetes-engine).
+
+This guide walks you through setting up the necessary cloud infrastructure, configuring your environment, and deploying a high-performance LLM for inference.
+
+
+## Table of Contents
+
+* [1. Test Environment](#test-environment)
+* [2. High-Level Flow](#architecture)
+* [3. Environment Setup (One-Time)](#environment-setup)
+ * [3.1. Clone the Repository](#clone-repo)
+ * [3.2. Configure Environment Variables](#configure-vars)
+ * [3.3. Connect to your GKE Cluster](#connect-cluster)
+ * [3.4. Get Hugging Face Token](#get-hf-token)
+ * [3.5. Create Hugging Face Kubernetes Secret](#setup-hf-secret)
+* [4. Run the Recipe](#run-the-recipe)
+ * [4.1. Inference benchmark for DeepSeek-R1 671B](#serving-deepseek-r1-671b)
+* [5. Monitoring and Troubleshooting](#monitoring)
+ * [5.1. Check Deployment Status](#check-status)
+ * [5.2. View Logs](#view-logs)
+* [6. Cleanup](#cleanup)
+
+
+## 1. Test Environment
+
+[Back to Top](#table-of-contents)
+
+The recipe uses the following setup:
+
+* **Orchestration**: [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine)
+* **Deployment Configuration**: A [Helm chart](https://helm.sh/) is used to configure and deploy a [Kubernetes Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). This deployment encapsulates the inference of the target LLM using the TensorRT-LLM framework.
+
+This recipe has been optimized for and tested with the following configuration:
+
+* **GKE Cluster**:
+ * A [regional standard cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/configuration-overview) version: `1.33.4-gke.1036000` or later.
+ * A GPU node pool with 1 [a4x-highgpu-4g](https://cloud.google.com/compute/docs/gpus) machine.
+ * [Workload Identity Federation for GKE](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) enabled.
+ * [Cloud Storage FUSE CSI driver for GKE](https://cloud.google.com/kubernetes-engine/docs/concepts/cloud-storage-fuse-csi-driver) enabled.
+ * [DCGM metrics](https://cloud.google.com/kubernetes-engine/docs/how-to/dcgm-metrics) enabled.
+ * [Kueue](https://kueue.sigs.k8s.io/docs/reference/kueue.v1beta1/) and [JobSet](https://jobset.sigs.k8s.io/docs/overview/) APIs installed.
+ * Kueue configured to support [Topology Aware Scheduling](https://kueue.sigs.k8s.io/docs/concepts/topology_aware_scheduling/).
+* A regional Google Cloud Storage (GCS) bucket to store logs generated by the recipe runs.
+
+> [!IMPORTANT]
+> To prepare the required environment, see the [GKE environment setup guide](../../../../docs/configuring-environment-gke-a4x.md).
+> Provisioning a new GKE cluster is a long-running operation and can take **20-30 minutes**.
+
+
+## 2. High-Level Flow
+
+[Back to Top](#table-of-contents)
+
+Here is a simplified diagram of the flow that we follow in this recipe:
+
+```mermaid
+---
+config:
+ layout: dagre
+---
+flowchart TD
+ subgraph workstation["Client Workstation"]
+ T["Cluster Toolkit"]
+ B("Kubernetes API")
+ A["helm install"]
+ end
+ subgraph huggingface["Hugging Face Hub"]
+ I["Model Weights"]
+ end
+ subgraph gke["GKE Cluster (A4x)"]
+ C["Deployment"]
+ D["Pod"]
+ E["TensorRT-LLM container"]
+ F["Service"]
+ end
+ subgraph storage["Cloud Storage"]
+ J["Bucket"]
+ end
+
+ %% Logical/actual flow
+ T -- Create Cluster --> gke
+ A --> B
+ B --> C & F
+ C --> D
+ D --> E
+ F --> C
+ E -- Downloads at runtime --> I
+ E -- Write logs --> J
+
+
+ %% Layout control
+ gke
+```
+
+* **helm:** A package manager for Kubernetes to define, install, and upgrade applications. It's used here to configure and deploy the Kubernetes Deployment.
+* **Deployment:** Manages the lifecycle of your model server pod, ensuring it stays running.
+* **Service:** Provides a stable network endpoint (a DNS name and IP address) to access your model server.
+* **Pod:** The smallest deployable unit in Kubernetes. The TensorRT-LLM container runs inside this pod on a GPU-enabled node.
+* **Cloud Storage:** A Cloud Storage bucket to store benchmark logs and other artifacts.
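+
+After you run `helm install` in section 4, you can list these objects together. A minimal check, assuming the DeepSeek-R1 release name used later in this guide (the chart labels the Deployment and its Pod with `app: <release name>-serving` and names the Service `<release name>-svc`):
+
+```bash
+# Deployment and Pod created by the release
+kubectl get deployment,pods -l app=$USER-serving-deepseek-r1-model-serving
+# Service created by the release
+kubectl get service $USER-serving-deepseek-r1-model-svc
+```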
+
+
+## 3. Environment Setup (One-Time)
+
+[Back to Top](#table-of-contents)
+
+First, you'll configure your local environment. These steps are required once before you can deploy any models.
+
+
+### 3.1. Clone the Repository
+
+```bash
+git clone https://github.com/ai-hypercomputer/gpu-recipes.git
+cd gpu-recipes
+export REPO_ROOT=$(pwd)
+export RECIPE_ROOT=$REPO_ROOT/inference/a4x/single-host-serving/tensorrt-llm
+```
+
+
+### 3.2. Configure Environment Variables
+
+This is the most critical step. These variables are used in subsequent commands to target the correct resources.
+
+```bash
+export PROJECT_ID=
+export CLUSTER_REGION=
+export CLUSTER_NAME=
+export KUEUE_NAME=
+export GCS_BUCKET=
+export TRTLLM_VERSION=1.2.0rc2
+
+# Set the project for gcloud commands
+gcloud config set project $PROJECT_ID
+```
+
+Replace the following values:
+
+| Variable | Description | Example |
+| --------------------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------- |
+| `PROJECT_ID` | Your Google Cloud Project ID. | `gcp-project-12345` |
+| `CLUSTER_REGION` | The GCP region where your GKE cluster is located. | `us-central1` |
+| `CLUSTER_NAME` | The name of your GKE cluster. | `a4x-cluster` |
+| `KUEUE_NAME` | The name of the Kueue local queue. The default queue created by the cluster toolkit is `a4x`. Verify the name in your cluster. | `a4x` |
+| `GCS_BUCKET` | Name of your GCS bucket (do not include `gs://`). | `my-benchmark-logs-bucket` |
+| `TRTLLM_VERSION`       | The tag/version of the TensorRT-LLM release container image. Other versions are listed in the [NGC catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release). | `1.2.0rc2`                                               |
+
+
+
+### 3.3. Connect to your GKE Cluster
+
+Fetch credentials for `kubectl` to communicate with your cluster.
+
+```bash
+gcloud container clusters get-credentials $CLUSTER_NAME --region $CLUSTER_REGION
+```
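+
+Optionally, confirm that `kubectl` is pointing at the intended cluster and that the Kueue local queue referenced by `KUEUE_NAME` exists (this assumes Kueue is installed in the cluster as described in the [Test Environment](#test-environment) section):
+
+```bash
+# Show the active kubectl context and the Kueue local queues in the current namespace
+kubectl config current-context
+kubectl get localqueues
+```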
+
+
+### 3.4. Get Hugging Face token
+
+To access models through Hugging Face, you'll need a Hugging Face token.
+ 1. Create a [Hugging Face account](https://huggingface.co/) if you don't have one.
+ 2. If you plan to serve a **gated model**, ensure you have requested and been granted access on Hugging Face before proceeding.
+ 3. Generate an Access Token: Go to **Your Profile > Settings > Access Tokens**.
+ 4. Select **New Token**.
+ 5. Specify a Name and a Role of at least `Read`.
+ 6. Select **Generate a token**.
+ 7. Copy the generated token to your clipboard. You'll use this later.
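+
+Optionally, if you have the `huggingface_hub` CLI installed on your workstation, you can confirm the token is valid before creating the Kubernetes Secret (this check is not required by the recipe):
+
+```bash
+# whoami reads the token from the HF_TOKEN environment variable
+HF_TOKEN=<your token> huggingface-cli whoami
+```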
+
+
+
+### 3.5. Create Hugging Face Kubernetes Secret
+
+Create a Kubernetes Secret with your Hugging Face token to enable the pod to download model checkpoints from Hugging Face.
+
+```bash
+# Paste your Hugging Face token here
+export HF_TOKEN=
+
+kubectl create secret generic hf-secret \
+--from-literal=hf_api_token=${HF_TOKEN} \
+--dry-run=client -o yaml | kubectl apply -f -
+```
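+
+You can verify that the secret was created (the token value itself is not printed):
+
+```bash
+kubectl get secret hf-secret
+```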
+
+
+## 4. Run the recipe
+
+[Back to Top](#table-of-contents)
+
+This recipe supports the deployment of the following models:
+
+1. [DeepSeek-R1-NVFP4-v2](#serving-deepseek-r1-671b)
+
+> [!NOTE]
+> After running the recipe with `helm install`, it can take **up to 30 minutes** for the deployment to become fully available. This is because the GKE node must first pull the Docker image and then download the model weights from Hugging Face.
+
+
+### 4.1. Inference benchmark for DeepSeek-R1 671B Model
+
+[Back to Top](#table-of-contents)
+
+The recipe runs an inference benchmark for the [DeepSeek-R1 671B NVFP4 model](https://huggingface.co/nvidia/DeepSeek-R1-NVFP4-v2), which is NVIDIA's pre-quantized NVFP4 checkpoint of the original [DeepSeek-R1 671B model](https://huggingface.co/deepseek-ai/DeepSeek-R1).
+
+The recipe performs the following steps to run the benchmark:
+
+1. Downloads the DeepSeek-R1 NVFP4 model checkpoint from [Hugging Face](https://huggingface.co/nvidia/DeepSeek-R1-NVFP4-v2).
+2. Runs the throughput benchmark with `trtllm-bench`.
+
+The recipe uses [`trtllm-bench`](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/performance/perf-benchmarking.md), a command-line tool from NVIDIA for benchmarking the performance of TensorRT-LLM. For more information about `trtllm-bench`, see the [TensorRT-LLM documentation](https://github.com/NVIDIA/TensorRT-LLM).
+
+> [!NOTE]
+> The config file directly exposes the settings in TensorRT-LLM's `llm_args.py`, which are passed to `trtllm-bench` or `trtllm-serve`. You can modify them as needed in [`src/frameworks/a4x/trtllm-configs/deepseek-r1-nvfp4.yaml`](../../../../src/frameworks/a4x/trtllm-configs/deepseek-r1-nvfp4.yaml).
+
+1. Install the Helm chart to prepare and benchmark the model using the [`trtllm-bench`](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/performance/perf-benchmarking.md) tool:
+
+ ```bash
+ cd $RECIPE_ROOT
+ helm install -f values.yaml \
+ --set-file workload_launcher=$REPO_ROOT/src/launchers/trtllm-launcher.sh \
+ --set-file serving_config=$REPO_ROOT/src/frameworks/a4x/trtllm-configs/deepseek-r1-nvfp4.yaml \
+ --set queue=${KUEUE_NAME} \
+ --set "volumes.gcsMounts[0].bucketName=${GCS_BUCKET}" \
+ --set workload.model.name=nvidia/DeepSeek-R1-NVFP4-v2 \
+ --set workload.image=nvcr.io/nvidia/tensorrt-llm/release:${TRTLLM_VERSION} \
+ --set workload.framework=trtllm \
+ $USER-serving-deepseek-r1-model \
+ $REPO_ROOT/src/helm-charts/a4x/inference-templates/deployment
+ ```
+
+   This creates a Helm release named `$USER-serving-deepseek-r1-model`, along with a Deployment of the same name and a Service named `$USER-serving-deepseek-r1-model-svc`.
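+
+   To inspect the release after installation, you can use standard Helm commands, for example:
+
+   ```bash
+   # Show the release status and the resources it manages
+   helm status $USER-serving-deepseek-r1-model
+   ```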
+
+2. **Check the deployment status.**
+
+ ```bash
+ kubectl get deployment/$USER-serving-deepseek-r1-model
+ ```
+
+ Wait until the `READY` column shows `1/1`. See the [Monitoring and Troubleshooting](#monitoring) section to view the deployment logs.
+
+ > [!NOTE]
+   > - This Helm chart is configured to run a single benchmarking experiment with 1,000 requests and an input/output length of 128 tokens each. To run additional experiments, add more combinations to the `benchmarks.experiments` list in the [values.yaml](values.yaml) file, as sketched below.
+   > - The deployment process can take **up to 30 minutes** because it must download the model weights from Hugging Face and then load them into the GPUs.
+
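+The `benchmarks.experiments` list in [values.yaml](values.yaml) controls which input/output length and request-count combinations are run. A minimal sketch of adding a second, hypothetical experiment (the specific values are illustrative only):
+
+```yaml
+workload:
+  benchmarks:
+    experiments:
+      - isl: 128            # experiment shipped with the recipe
+        osl: 128
+        num_requests: 1000
+      - isl: 1024           # hypothetical additional experiment
+        osl: 1024
+        num_requests: 500
+```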
+
+
+## 5. Monitoring and Troubleshooting
+
+[Back to Top](#table-of-contents)
+
+After the model is deployed via Helm as described in the sections [above](#run-the-recipe), use the following steps to monitor the deployment and interact with the model. Replace `<deployment name>` and `<service name>` with the appropriate names from the model-specific deployment instructions (for example, `$USER-serving-deepseek-r1-model` and `$USER-serving-deepseek-r1-model-svc`).
+
+
+
+### 5.1. Check Deployment Status
+
+Check the status of your deployment. Replace the name if you deployed a different model.
+
+```bash
+# Example for DeepSeek-R1 671B
+kubectl get deployment/$USER-serving-deepseek-r1-model
+```
+
+Wait until the `READY` column shows `1/1`. If it shows `0/1`, the pod is still starting up.
+
+> [!NOTE]
+> In the GKE UI on Cloud Console, you might see a status of "Does not have minimum availability" during startup. This is normal and will resolve once the pod is ready.
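+
+If the deployment stays at `0/1` for a long time, inspecting the pod usually shows why (for example, image pull, scheduling, or GPU availability issues). The chart labels its pods with `app: <release name>-serving`:
+
+```bash
+# List and describe the pods created for the DeepSeek-R1 release
+kubectl get pods -l app=$USER-serving-deepseek-r1-model-serving
+kubectl describe pods -l app=$USER-serving-deepseek-r1-model-serving
+```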
+
+
+### 5.2. View Logs
+
+To see the logs from the TensorRT-LLM workload (useful for debugging), use the `-f` flag to follow the log stream:
+
+```bash
+kubectl logs -f deployment/$USER-serving-deepseek-r1-model
+```
+
+You should see logs showing the model being prepared and then the throughput benchmark running, similar to the following:
+
+```bash
+Running benchmark for nvidia/DeepSeek-R1-NVFP4-v2 with ISL=128, OSL=128, TP=4, EP=4, PP=1
+
+===========================================================
+= PYTORCH BACKEND
+===========================================================
+Model: nvidia/DeepSeek-R1-NVFP4-v2
+Model Path: /ssd/nvidia/DeepSeek-R1-NVFP4-v2
+TensorRT LLM Version: 1.2
+Dtype: bfloat16
+KV Cache Dtype: FP8
+Quantization: NVFP4
+
+===========================================================
+= REQUEST DETAILS
+===========================================================
+Number of requests: 1000
+Number of concurrent requests: 985.9849
+Average Input Length (tokens): 128.0000
+Average Output Length (tokens): 128.0000
+===========================================================
+= WORLD + RUNTIME INFORMATION
+===========================================================
+TP Size: 4
+PP Size: 1
+EP Size: 4
+Max Runtime Batch Size: 2304
+Max Runtime Tokens: 4608
+Scheduling Policy: GUARANTEED_NO_EVICT
+KV Memory Percentage: 85.00%
+Issue Rate (req/sec): 8.3913E+13
+
+===========================================================
+= PERFORMANCE OVERVIEW
+===========================================================
+Request Throughput (req/sec): X.XX
+Total Output Throughput (tokens/sec): X.XX
+Total Token Throughput (tokens/sec): X.XX
+Total Latency (ms): X.XX
+Average request latency (ms): X.XX
+Per User Output Throughput [w/ ctx] (tps/user): X.XX
+Per GPU Output Throughput (tps/gpu): X.XX
+
+-- Request Latency Breakdown (ms) -----------------------
+
+[Latency] P50 : X.XX
+[Latency] P90 : X.XX
+[Latency] P95 : X.XX
+[Latency] P99 : X.XX
+[Latency] MINIMUM: X.XX
+[Latency] MAXIMUM: X.XX
+[Latency] AVERAGE: X.XX
+
+===========================================================
+= DATASET DETAILS
+===========================================================
+Dataset Path: /ssd/token-norm-dist_DeepSeek-R1-NVFP4-v2_128_128_tp4.json
+Number of Sequences: 1000
+
+-- Percentiles statistics ---------------------------------
+
+ Input Output Seq. Length
+-----------------------------------------------------------
+MIN: 128.0000 128.0000 256.0000
+MAX: 128.0000 128.0000 256.0000
+AVG: 128.0000 128.0000 256.0000
+P50: 128.0000 128.0000 256.0000
+P90: 128.0000 128.0000 256.0000
+P95: 128.0000 128.0000 256.0000
+P99: 128.0000 128.0000 256.0000
+===========================================================
+```
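+
+The benchmark output file is also copied by the launcher script to the `benchmark_logs/trtllm/` folder of your GCS bucket. You can list the results with:
+
+```bash
+# List benchmark result files uploaded to the GCS bucket
+gcloud storage ls gs://${GCS_BUCKET}/benchmark_logs/trtllm/
+```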
+
+
+## 6. Cleanup
+
+To avoid incurring further charges, clean up the resources you created.
+
+1. **Uninstall the Helm Release:**
+
+ First, list your releases to get the deployed models:
+
+ ```bash
+ # list deployed models
+ helm list --filter $USER-serving-
+ ```
+
+ Then, uninstall the desired release:
+
+ ```bash
+ # uninstall the deployed model
+    helm uninstall <release name>
+ ```
+    Replace `<release name>` with one of the release names listed by the previous command.
+
+2. **Delete the Kubernetes Secret:**
+
+ ```bash
+ kubectl delete secret hf-secret --ignore-not-found=true
+ ```
+
+3. (Optional) Clean up the benchmark logs in your GCS bucket.
+4. (Optional) Delete the [test environment](#test-environment) provisioned for this recipe, including the GKE cluster.
\ No newline at end of file
diff --git a/inference/a4x/single-host-serving/tensorrt-llm/values.yaml b/inference/a4x/single-host-serving/tensorrt-llm/values.yaml
new file mode 100644
index 0000000..2560ff8
--- /dev/null
+++ b/inference/a4x/single-host-serving/tensorrt-llm/values.yaml
@@ -0,0 +1,63 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+queue:
+
+dwsSettings:
+ maxRunDurationSeconds:
+
+huggingface:
+ secretName: hf-secret
+ secretData:
+ token: "hf_api_token"
+
+volumes:
+ gcsVolumes: true
+ ssdMountPath: "/ssd"
+ gcsMounts:
+ - bucketName:
+ mountPath: "/gcs"
+
+service:
+ type: ClusterIP
+ ports:
+ http: 8000
+
+workload:
+ model:
+ name:
+ gpus: 4
+ image:
+ framework:
+ configFile: serving-args.yaml
+ configPath: /workload/configs
+ envs:
+ - name: HF_HUB_ENABLE_HF_TRANSFER
+ value: "1"
+ - name: LAUNCHER_SCRIPT
+ value: "/workload/launcher/launch-workload.sh"
+ - name: SERVER_ARGS_FILE
+ value: "/workload/configs/serving-args.yaml"
+ benchmarks:
+ experiments:
+ - isl: 128
+ osl: 128
+ num_requests: 1000
+
+network:
+ subnetworks[]:
+ gibVersion: us-docker.pkg.dev/gce-ai-infra/gpudirect-gib/nccl-plugin-gib-arm64:v1.0.7
+ ncclSettings:
+ - name: NCCL_DEBUG
+ value: "VERSION"
\ No newline at end of file
diff --git a/src/frameworks/a4x/trtllm-configs/deepseek-r1-nvfp4.yaml b/src/frameworks/a4x/trtllm-configs/deepseek-r1-nvfp4.yaml
new file mode 100644
index 0000000..5ebdf9d
--- /dev/null
+++ b/src/frameworks/a4x/trtllm-configs/deepseek-r1-nvfp4.yaml
@@ -0,0 +1,35 @@
+tp_size: 4
+ep_size: 4
+pp_size: 1
+backend: pytorch
+kv_cache_free_gpu_mem_fraction: 0.85
+llm_api_args:
+ cuda_graph_config:
+ batch_sizes:
+ - 1
+ - 2
+ - 4
+ - 8
+ - 16
+ - 20
+ - 24
+ - 32
+ - 64
+ - 96
+ - 128
+ - 160
+ - 192
+ - 256
+ - 320
+ - 384
+ - 512
+ enable_padding: true
+ enable_attention_dp: true
+ enable_chunked_prefill: true
+ kv_cache_config:
+ dtype: auto
+ enable_block_reuse: false
+ free_gpu_memory_fraction: 0.85
+ moe_config:
+ backend: CUTLASS
+ print_iter_log: true
\ No newline at end of file
diff --git a/src/helm-charts/a3ultra/inference-templates/deployment/templates/serving-launcher.yaml b/src/helm-charts/a3ultra/inference-templates/deployment/templates/serving-launcher.yaml
index ad4feb5..7fa8ab0 100644
--- a/src/helm-charts/a3ultra/inference-templates/deployment/templates/serving-launcher.yaml
+++ b/src/helm-charts/a3ultra/inference-templates/deployment/templates/serving-launcher.yaml
@@ -182,6 +182,9 @@ spec:
value: "{{ $root.Values.workload.model.name }}"
- name: MODEL_DOWNLOAD_DIR
value: "/ssd/{{ $root.Values.workload.model.name }}"
+ # A3-Ultra recipe is based on the TensorRT image, which puts tensorrt_llm in a different path than default
+ - name: TRTLLM_DIR
+ value: "/workspace/tensorrtllm_backend/tensorrt_llm"
{{- if $root.Values.workload.envs }}
{{- toYaml .Values.workload.envs | nindent 12 }}
{{- end }}
diff --git a/src/helm-charts/a4x/inference-templates/deployment/Chart.yaml b/src/helm-charts/a4x/inference-templates/deployment/Chart.yaml
new file mode 100644
index 0000000..4f584cc
--- /dev/null
+++ b/src/helm-charts/a4x/inference-templates/deployment/Chart.yaml
@@ -0,0 +1,20 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: v2
+name: single-host-serving-deployment-template
+description: single-host-serving-deployment-template
+type: application
+version: 0.1.0
+appVersion: "1.16.0"
diff --git a/src/helm-charts/a4x/inference-templates/deployment/templates/serving-config-configmap.yaml b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-config-configmap.yaml
new file mode 100644
index 0000000..a17bdf4
--- /dev/null
+++ b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-config-configmap.yaml
@@ -0,0 +1,25 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: "{{ .Release.Name }}-config"
+data:
+ serving-configuration: |-
+{{- if .Values.serving_config }}
+{{ .Values.serving_config | nindent 4 }}
+{{- else }}
+{{ "config: null" | nindent 4 }}
+{{- end }}
\ No newline at end of file
diff --git a/src/helm-charts/a4x/inference-templates/deployment/templates/serving-launcher-configmap.yaml b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-launcher-configmap.yaml
new file mode 100644
index 0000000..b111553
--- /dev/null
+++ b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-launcher-configmap.yaml
@@ -0,0 +1,27 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: "{{ .Release.Name }}-launcher"
+data:
+ launch-workload.sh: |-
+{{- if .Values.workload_launcher }}
+{{ .Values.workload_launcher | nindent 4 }}
+{{- else }}
+ #!/bin/bash
+ echo "No workload launcher specified"
+ exit 1
+{{- end }}
\ No newline at end of file
diff --git a/src/helm-charts/a4x/inference-templates/deployment/templates/serving-launcher.yaml b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-launcher.yaml
new file mode 100644
index 0000000..6a0a2cf
--- /dev/null
+++ b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-launcher.yaml
@@ -0,0 +1,282 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+{{ $nodes := div .Values.workload.gpus 4 | max 1 }}
+{{ $gpusPerNode := min .Values.workload.gpus 4 }}
+
+{{ $root := . }}
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: "{{ .Release.Name }}"
+ namespace: default
+ labels:
+ app: {{ .Release.Name }}-serving
+ {{- if $root.Values.queue }}
+ kueue.x-k8s.io/queue-name: "{{ $root.Values.queue }}"
+ {{- end }}
+spec:
+ replicas: {{ $nodes }}
+ selector:
+ matchLabels:
+ app: {{ .Release.Name }}-serving
+ template:
+ metadata:
+ labels:
+ app: {{ .Release.Name }}-serving
+ annotations:
+ kubectl.kubernetes.io/default-container: serving
+ {{- if $root.Values.volumes.gcsVolumes }}
+ gke-gcsfuse/volumes: "true"
+ gke-gcsfuse/cpu-limit: "0"
+ gke-gcsfuse/memory-limit: "0"
+ gke-gcsfuse/ephemeral-storage-limit: "0"
+ {{- end }}
+ {{- if and $root.Values.queue $root.Values.dwsSettings.maxRunDurationSeconds }}
+ provreq.kueue.x-k8s.io/maxRunDurationSeconds: "{{ $root.Values.dwsSettings.maxRunDurationSeconds }}"
+ {{- end }}
+ {{- if not $root.Values.network.hostNetwork }}
+ networking.gke.io/default-interface: "eth0"
+ # networking.gke.io/interfaces: |
+ # {{- if $root.Values.network.subnetworks }}
+ # [
+ # {{- range $i, $subnetwork := $root.Values.network.subnetworks }}
+ # {"interfaceName":"eth{{ $i }}","network":"{{ $subnetwork }}"}{{ eq $i 5 | ternary "" ","}}
+ # {{- end }}
+ # ]
+ # {{- else }}
+ # [
+ # {"interfaceName":"eth0","network":"default"},
+ # {"interfaceName":"eth1","network":"gvnic-1"},
+ # {{- range $i := until 4 }}
+ # {"interfaceName":"eth{{ add 2 $i }}","network":"rdma-{{ $i }}"}{{ eq $i 3 | ternary "" ","}}
+ # {{- end }}
+ # ]
+ # {{- end }}
+ {{- end }}
+ spec:
+ {{- if $root.Values.network.hostNetwork }}
+ hostNetwork: true
+ dnsPolicy: ClusterFirstWithHostNet
+ {{- end }}
+ subdomain: "{{.Release.Name}}"
+ restartPolicy: Always
+ {{- if $root.Values.targetNodes }}
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kubernetes.io/hostname
+ operator: In
+ values:
+ {{ range $hostname := $root.Values.targetNodes }}
+ - {{ $hostname }}
+ {{ end }}
+ {{- end }}
+ tolerations:
+ - operator: "Exists"
+ key: nvidia.com/gpu
+ - operator: "Exists"
+ key: cloud.google.com/impending-node-termination
+ - key: "kubernetes.io/arch"
+ operator: "Equal"
+ value: "arm64"
+ effect: "NoSchedule"
+ volumes:
+ {{- if $root.Values.network.gibVersion }}
+ - name: gib
+ emptyDir: {}
+ {{- end }}
+ - name: serving-configuration
+ configMap:
+ name: "{{.Release.Name}}-config"
+ items:
+ - key: serving-configuration
+ path: {{ $root.Values.workload.configFile | default "serving-args" }}
+ - name: serving-launcher
+ configMap:
+ name: "{{.Release.Name}}-launcher"
+ defaultMode: 0700
+ - name: shared-memory
+ emptyDir:
+ medium: "Memory"
+ sizeLimit: 250Gi
+ {{- range $gcs := $root.Values.volumes.gcsMounts }}
+ - name: "{{ $gcs.bucketName }}"
+ csi:
+ driver: gcsfuse.csi.storage.gke.io
+ volumeAttributes:
+ bucketName: "{{ $gcs.bucketName }}"
+ {{- if $gcs.mountOptions }}
+ mountOptions: "{{ $gcs.mountOptions }}"
+ {{- end }}
+ {{- end }}
+ {{- if $root.Values.volumes.ssdMountPath }}
+ - name: local-ssd
+ hostPath:
+ path: /mnt/stateful_partition/kube-ephemeral-ssd
+ {{- end }}
+
+ initContainers:
+ {{- if $root.Values.network.gibVersion }}
+ - name: nccl-plugin-installer
+ image: {{ $root.Values.network.gibVersion }}
+ imagePullPolicy: Always
+ args:
+ - |
+ set -ex
+ /scripts/container_entry.sh install --install-nccl
+ cp -R /var/lib/gib/lib64/. /target/usr/local/gib/lib64
+ cp -R /var/lib/gib/. /target/usr/local/gib
+ command:
+ - /bin/sh
+ - -c
+ volumeMounts:
+ - mountPath: /target/usr/local/gib
+ name: gib
+ {{- end }}
+
+ containers:
+ {{- if $root.Values.workload.gcsSidecarImage }}
+ - name: gke-gcsfuse-sidecar
+ image: {{ $root.Values.workload.gcsSidecarImage }}
+ - name: gke-gcsfuse-metadata-prefetch
+ image: {{ $root.Values.workload.gcsSidecarImage }}
+ {{- end }}
+ - name: serving
+ image: "{{ $root.Values.workload.image }}"
+ imagePullPolicy: Always
+ {{- if $root.Values.network.hostNetwork }}
+ securityContext:
+ privileged: true
+ {{- end }}
+ env:
+ - name: HF_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: "{{ $root.Values.huggingface.secretName }}"
+ key: "{{ $root.Values.huggingface.secretData.token }}"
+ # Pass NCCL settings to the container
+ {{- if $root.Values.network.ncclSettings }}
+ {{- toYaml .Values.network.ncclSettings | nindent 12 }}
+ {{- end }}
+ - name: NCCL_PLUGIN_PATH
+ value: /usr/local/gib/lib64
+ - name: LD_LIBRARY_PATH
+ value: /usr/local/gib/lib64:/usr/local/nvidia/lib64
+ {{- if $root.Values.network.gibVersion }}
+ - name: NCCL_INIT_SCRIPT
+ value: "/usr/local/gib/scripts/set_nccl_env.sh"
+ {{- end }}
+ # Workload specific environment variables
+ - name: MODEL_NAME
+ value: "{{ $root.Values.workload.model.name }}"
+ - name: MODEL_DOWNLOAD_DIR
+ value: "/ssd/{{ $root.Values.workload.model.name }}"
+ - name: TRTLLM_DIR
+ value: "/app/tensorrt_llm"
+ {{- if $root.Values.workload.envs }}
+ {{- toYaml .Values.workload.envs | nindent 12 }}
+ {{- end }}
+
+ workingDir: /workload
+ command: ["/bin/bash", "-c"]
+ args:
+ - |
+ #!/bin/bash
+ pip install pyyaml hf_transfer
+ if [ ! -f "$LAUNCHER_SCRIPT" ]; then
+ echo "Error: Launcher script $LAUNCHER_SCRIPT not found!"
+ exit 1
+ fi
+
+ ARGS=()
+ EXTRA_ARGS_FILE="/tmp/extra_llm_api_args.yaml"
+
+ # Use Python to parse the main config file, extract llm_api_args,
+ # and generate the command-line arguments.
+ python -c "
+ import yaml
+ import sys
+
+ args = []
+ llm_api_args = {}
+ config_file = sys.argv[1]
+ extra_args_file = sys.argv[2]
+
+ try:
+ with open(config_file, 'r') as f:
+ config = yaml.safe_load(f)
+
+ if 'llm_api_args' in config:
+ llm_api_args = config.pop('llm_api_args')
+ with open(extra_args_file, 'w') as f:
+ yaml.dump(llm_api_args, f)
+
+ for key, value in config.items():
+ if value is True:
+ args.append(f'--{key}')
+ elif value is not False:
+ args.append(f'--{key}')
+ args.append(str(value))
+
+ # Print the arguments for the shell script to capture
+ print(' '.join(args))
+
+ except Exception as e:
+ print(f'Error parsing config file: {e}', file=sys.stderr)
+ sys.exit(1)
+ " "$SERVER_ARGS_FILE" "$EXTRA_ARGS_FILE" > /tmp/launcher_args.txt
+
+ # Read the generated arguments into the ARGS array
+ mapfile -t ARGS < <(tr ' ' '\n' < /tmp/launcher_args.txt)
+ rm /tmp/launcher_args.txt
+
+ {{ if eq $root.Values.workload.framework "trtllm" }}
+          # Run each configured experiment in sequence; exec is intentionally not used here,
+          # since it would replace the shell after the first experiment.
+          {{- range $root.Values.workload.benchmarks.experiments }}
+          echo "Running: $LAUNCHER_SCRIPT --model_name $MODEL_NAME --isl {{ .isl }} --osl {{ .osl }} --num_requests {{ .num_requests }} -- ${ARGS[@]}"
+          "$LAUNCHER_SCRIPT" --model_name $MODEL_NAME --isl {{ .isl }} --osl {{ .osl }} --num_requests {{ .num_requests }} -- "${ARGS[@]}"
+ {{- end }}
+ {{ else }}
+ echo "Running: $LAUNCHER_SCRIPT ${ARGS[@]}"
+ exec "$LAUNCHER_SCRIPT" "${ARGS[@]}"
+ {{- end }}
+
+ volumeMounts:
+ {{- if $root.Values.network.gibVersion }}
+ - name: gib
+ mountPath: /usr/local/gib
+ {{- end }}
+ - name: serving-configuration
+ mountPath: {{ $root.Values.workload.configPath | default "/workload/configs" }}
+ - name: serving-launcher
+ mountPath: /workload/launcher
+ - name: shared-memory
+ mountPath: /dev/shm
+ {{- range $gcs := $root.Values.volumes.gcsMounts }}
+ - name: "{{ $gcs.bucketName }}"
+ mountPath: "{{ $gcs.mountPath }}"
+ {{- end }}
+ {{- if $root.Values.volumes.ssdMountPath }}
+ - name: local-ssd
+ mountPath: "{{ $root.Values.volumes.ssdMountPath }}"
+ {{- end }}
+
+ resources:
+ requests:
+ nvidia.com/gpu: {{ $gpusPerNode }}
+ limits:
+ nvidia.com/gpu: {{ $gpusPerNode }}
\ No newline at end of file
diff --git a/src/helm-charts/a4x/inference-templates/deployment/templates/serving-svc.yaml b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-svc.yaml
new file mode 100644
index 0000000..3d1363b
--- /dev/null
+++ b/src/helm-charts/a4x/inference-templates/deployment/templates/serving-svc.yaml
@@ -0,0 +1,26 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ .Release.Name }}-svc
+spec:
+ selector:
+ app: {{ .Release.Name }}-serving
+ ports:
+ - name: http
+ port: {{ .Values.service.ports.http }}
+ targetPort: {{ .Values.service.ports.http }}
+ type: {{ .Values.service.type }}
\ No newline at end of file
diff --git a/src/launchers/trtllm-launcher.sh b/src/launchers/trtllm-launcher.sh
index 97372d6..5e8ee09 100644
--- a/src/launchers/trtllm-launcher.sh
+++ b/src/launchers/trtllm-launcher.sh
@@ -117,6 +117,9 @@ parse_serving_config() {
tp_size=${SERVING_CONFIG_DICT["tp_size"]:=8}
pp_size=${SERVING_CONFIG_DICT["pp_size"]:=1}
+ ep_size=${SERVING_CONFIG_DICT["ep_size"]:=1}
+ backend=${SERVING_CONFIG_DICT["backend"]:="tensorrt"}
+ kv_cache_free_gpu_mem_fraction=${SERVING_CONFIG_DICT["kv_cache_free_gpu_mem_fraction"]:=0.95}
}
print_configuration() {
@@ -131,6 +134,9 @@ print_configuration() {
echo "number of requests: $num_requests"
echo "tensor parallel size: $tp_size"
echo "pipeline parallel size: $pp_size"
+ echo "expert parallel size: $ep_size"
+ echo "backend: $backend"
+ echo "kv_cache_free_gpu_mem_fraction: $kv_cache_free_gpu_mem_fraction"
echo "--------------------------------"
}
@@ -148,14 +154,22 @@ run_benchmark() {
local num_requests=$4
local tp_size=$5
local pp_size=$6
+ local ep_size=$7
+ local backend=$8
+ local kv_cache_free_gpu_mem_fraction=$9
- echo "Running benchmark for $model_name with ISL=$isl, OSL=$osl, TP=$tp_size, PP=$pp_size"
+ echo "Running benchmark for $model_name with ISL=$isl, OSL=$osl, TP=$tp_size, PP=$pp_size, EP=$ep_size, backend=$7"
dataset_file="/ssd/token-norm-dist_${model_name##*/}_${isl}_${osl}_tp${tp_size}.json"
output_file="/ssd/output_${model_name##*/}_isl${isl}_osl${osl}_tp${tp_size}.txt"
+ extra_args_file="/tmp/extra_llm_api_args.yaml"
+ extra_args=""
+ if [ -f "$extra_args_file" ]; then
+ extra_args="--extra_llm_api_options $extra_args_file"
+ fi
echo "Preparing dataset"
- python3 /workspace/tensorrtllm_backend/tensorrt_llm/benchmarks/cpp/prepare_dataset.py \
+ python3 $TRTLLM_DIR/benchmarks/cpp/prepare_dataset.py \
--tokenizer=$model_name \
--stdout token-norm-dist \
--num-requests=$num_requests \
@@ -164,25 +178,39 @@ run_benchmark() {
--input-stdev=0 \
--output-stdev=0 >$dataset_file
- echo "Building engine"
- trtllm-bench \
- --model $model_name \
- --model_path /ssd/${model_name} \
- --workspace /ssd build \
- --tp_size $tp_size \
- --quantization FP8 \
- --dataset $dataset_file
-
- engine_dir="/ssd/${model_name}/tp_${tp_size}_pp_${pp_size}"
-
- # Save throughput output to a file
- echo "Running throughput benchmark"
- trtllm-bench \
+ if [[ $backend == "pytorch" ]]; then
+ echo "Running throughput benchmark"
+ trtllm-bench \
--model $model_name \
--model_path /ssd/${model_name} throughput \
--dataset $dataset_file \
- --engine_dir $engine_dir \
- --kv_cache_free_gpu_mem_fraction 0.95 >$output_file
+ --tp $tp_size \
+ --pp $pp_size \
+ --ep $ep_size \
+ --backend "pytorch" \
+ --kv_cache_free_gpu_mem_fraction $kv_cache_free_gpu_mem_fraction $extra_args >$output_file
+ else
+ echo "Building engine"
+ trtllm-bench \
+ --model $model_name \
+ --model_path /ssd/${model_name} \
+ --workspace /ssd build \
+ --tp_size $tp_size \
+ --pp_size $pp_size \
+ --quantization FP8 \
+ --dataset $dataset_file
+
+ engine_dir="/ssd/${model_name}/tp_${tp_size}_pp_${pp_size}"
+
+ # Save throughput output to a file
+ echo "Running throughput benchmark"
+ trtllm-bench \
+ --model $model_name \
+ --model_path /ssd/${model_name} throughput \
+ --dataset $dataset_file \
+ --engine_dir $engine_dir \
+ --kv_cache_free_gpu_mem_fraction $kv_cache_free_gpu_mem_fraction $extra_args >$output_file
+ fi
cat $output_file
gsutil cp $output_file /gcs/benchmark_logs/trtllm/
@@ -205,7 +233,7 @@ main() {
# run benchmark
mkdir -p /gcs/benchmark_logs/trtllm
echo "Running benchmarks"
- run_benchmark "$model_name" $isl $osl $num_requests $tp_size $pp_size
+ run_benchmark "$model_name" $isl $osl $num_requests $tp_size $pp_size $ep_size $backend $kv_cache_free_gpu_mem_fraction
}
# Set environment variables