
Commit b4bcc0a

Fix indentation of code in list
Signed-off-by: Leah Karasek <karasek@google.com>
1 parent 2fd02da commit b4bcc0a

File tree

1 file changed, +11 −11 lines changed


_posts/2025-10-15-vllm-tpu.md

Lines changed: 11 additions & 11 deletions
@@ -51,23 +51,23 @@ For this reason, vLLM TPU now uses JAX as the lowering path for all vLLM models,
 
 1. Installation
 
-```shell
-pip install vllm-tpu # a single install path
-```
+   ```shell
+   pip install vllm-tpu # a single install path
+   ```
 
-Because Torchax and JAX are essentially just JAX under the hood, we can leverage the same install path regardless of whether the model code was written in PyTorch or JAX. This ensures dependencies remain consistent and users don’t have to worry about managing different requirements for different models.
+   Because Torchax and JAX are essentially just JAX under the hood, we can leverage the same install path regardless of whether the model code was written in PyTorch or JAX. This ensures dependencies remain consistent and users don’t have to worry about managing different requirements for different models.
 
 2. Serving a Model
 
-```shell
-MODEL_ID="google/gemma3-27b-it" # model registered in tpu-inference or vllm
-vllm serve $MODEL_ID
-```
+   ```shell
+   MODEL_ID="google/gemma3-27b-it" # model registered in tpu-inference or vllm
+   vllm serve $MODEL_ID
+   ```
 
-When serving a model on TPU, there are 2 model registries to pull model code from:
+   When serving a model on TPU, there are 2 model registries to pull model code from:
 
-1) tpu-inference (*default, [list](https://github.com/vllm-project/tpu-inference/tree/main/tpu_inference/models/jax)*)
-2) vllm (maintained in *vLLM upstream, [list](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/registry.py)*)
+   1) tpu-inference (*default, [list](https://github.com/vllm-project/tpu-inference/tree/main/tpu_inference/models/jax)*)
+   2) vllm (maintained in *vLLM upstream, [list](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/registry.py)*)
 
 Let’s take a closer look at what’s happening under the hood:
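The patched text describes a two-registry lookup: model code is pulled from tpu-inference first (the default), falling back to vLLM upstream. That ordering can be sketched as plain dictionary-fallback logic. This is a hypothetical illustration only: `TPU_REGISTRY`, `VLLM_REGISTRY`, `resolve_model`, and the registry entries are invented for the sketch and are not the actual registry APIs of either project.

```python
# Hypothetical sketch of the lookup order described in the diff:
# the tpu-inference registry is consulted first, then vLLM upstream's.
# Entries map a model ID to an implementation path (values invented here).
TPU_REGISTRY = {"google/gemma3-27b-it": "tpu_inference.models.jax.Gemma3"}
VLLM_REGISTRY = {"meta-llama/Llama-3-8b": "vllm.model_executor.models.Llama"}

def resolve_model(model_id: str) -> str:
    """Return the implementation path for model_id, preferring tpu-inference."""
    for registry in (TPU_REGISTRY, VLLM_REGISTRY):
        if model_id in registry:
            return registry[model_id]
    raise KeyError(f"{model_id} not registered in tpu-inference or vllm")

print(resolve_model("google/gemma3-27b-it"))
```

A model registered in both places would resolve to the tpu-inference implementation, which matches the "default" labeling in the patched list.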

0 commit comments
