35 changes: 23 additions & 12 deletions benchmarks/README.md
@@ -1,41 +1,44 @@
# Introduction
This document outlines the benchmarking process for vllm-ascend, designed to evaluate its performance under various workloads. The primary goal is to help developers assess whether their pull requests improve or degrade vllm-ascend's performance.To maintain consistency with the vllm community, we have reused the vllm community [benchmark](https://github.com/vllm-project/vllm/tree/main/benchmarks) script.
This document outlines the benchmarking methodology for vllm-ascend, aimed at evaluating its performance under a variety of workloads. The primary goal is to help developers assess whether their pull requests improve or degrade vllm-ascend's performance. To maintain alignment with vLLM, we use the [benchmark](https://github.com/vllm-project/vllm/tree/main/benchmarks) scripts provided by the vLLM project.

# Overview
**Benchmarking Coverage**: We measure latency, throughput, and fixed-QPS serving on the Atlas 800I A2 (see [quick_start](../docs/source/quick_start.md) for the list of supported devices), with different models (more coming soon).
- Latency tests
  - Input length: 32 tokens.
  - Output length: 128 tokens.
  - Batch size: fixed (8).
  - Models: llama-3.1 8B.
  - Models: Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct.
  - Evaluation metrics: end-to-end latency (mean, median, p99).

- Throughput tests
  - Input length: randomly sample 200 prompts from the ShareGPT dataset (with fixed random seed).
  - Output length: the corresponding output length of these 200 prompts.
  - Batch size: dynamically determined by vllm to achieve maximum throughput.
  - Models: llama-3.1 8B.
  - Models: Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct.
  - Evaluation metrics: throughput.
- Serving tests
  - Input length: randomly sample 200 prompts from the ShareGPT dataset (with fixed random seed).
  - Output length: the corresponding output length of these 200 prompts.
  - Batch size: dynamically determined by vllm and the arrival pattern of the requests.
  - **Average QPS (query per second)**: 1, 4, 16 and inf. QPS = inf means all requests come at once. For other QPS values, the arrival time of each query is determined using a random Poisson process (with fixed random seed); see the sketch after this list.
  - Models: llama-3.1 8B.
  - Models: Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct.
  - Evaluation metrics: throughput, TTFT (time to first token, with mean, median and p99), ITL (inter-token latency, with mean, median and p99).
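For illustration, the sketch below shows how a Poisson arrival schedule with a fixed seed can be generated for the serving tests. It is a simplified stand-in for the sampling done by the vLLM benchmark script, not the exact implementation; the `num_requests` and `qps` values in the example are arbitrary.

```python
import numpy as np

# Illustrative sketch: for a Poisson arrival process with rate `qps`,
# inter-arrival gaps are exponentially distributed with mean 1 / qps.
# A fixed seed keeps the generated schedule reproducible across runs.
rng = np.random.default_rng(seed=0)

def poisson_arrival_times(num_requests: int, qps: float) -> list[float]:
    """Return cumulative send times (in seconds) for `num_requests` queries."""
    if qps == float("inf"):
        return [0.0] * num_requests  # QPS = inf: all requests sent at once
    gaps = rng.exponential(scale=1.0 / qps, size=num_requests)
    return list(np.cumsum(gaps))

# Example: schedule 200 prompts at an average of 4 queries per second.
print(poisson_arrival_times(200, 4.0)[:5])
```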

**Benchmarking Duration**: about 800senond for single model.
**Benchmarking Duration**: about 800 seconds for a single model.


# Quick Use
## Prerequisites
Before running the benchmarks, ensure the following:

- vllm and vllm-ascend are installed and properly set up in an NPU environment, as these scripts are specifically designed for NPU devices.

- Install necessary dependencies for benchmarks:
```
pip install -r benchmarks/requirements-bench.txt
```

- Models and datasets are cached locally to accelerate execution. Modify the paths in the JSON files located in benchmarks/tests accordingly. Feel free to add your own models and parameters to these JSON files to run customized benchmarks.
- For performance benchmarks, it is recommended to set the [load-format](https://github.com/vllm-project/vllm-ascend/blob/5897dc5bbe321ca90c26225d0d70bff24061d04b/benchmarks/tests/latency-tests.json#L7) to `dummy`. This constructs random weights for the given model instead of downloading them from the internet, which greatly reduces benchmark time. A sketch of adding a custom test entry is shown after this list.
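As an illustration of the expected JSON layout, the sketch below appends a hypothetical latency test entry to `benchmarks/tests/latency-tests.json`. The `latency_my_model_tp1` name and `my-org/my-model` path are placeholders, not real models; the parameter keys mirror the entries already in that file.

```python
import json
from pathlib import Path

# Hypothetical example: add a custom latency test entry to the existing
# latency-tests.json. Field names mirror the entries already in the file;
# the model and parameter values below are placeholders.
tests_file = Path("benchmarks/tests/latency-tests.json")
tests = json.loads(tests_file.read_text())

tests.append({
    "test_name": "latency_my_model_tp1",  # also used to name the result file
    "parameters": {
        "model": "my-org/my-model",       # replace with your model path
        "tensor_parallel_size": 1,
        "load_format": "dummy",           # random weights, no download
        "max_model_len": 16384,
        "num_iters_warmup": 5,
        "num_iters": 15,
    },
})

tests_file.write_text(json.dumps(tests, indent=4) + "\n")
```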

## Run benchmarks
The provided scripts automatically execute performance tests for serving, throughput, and latency. To start the benchmarking process, run the following command from the vllm-ascend root directory:
@@ -44,11 +47,19 @@ bash benchmarks/scripts/run-performance-benchmarks.sh
```
Once the script completes, you can find the results in the benchmarks/results folder. The output files may resemble the following:
```
|-- latency_llama8B_tp1.json
|-- serving_llama8B_tp1_sharegpt_qps_1.json
|-- serving_llama8B_tp1_sharegpt_qps_16.json
|-- serving_llama8B_tp1_sharegpt_qps_4.json
|-- serving_llama8B_tp1_sharegpt_qps_inf.json
|-- throughput_llama8B_tp1.json
.
|-- serving_qwen2_5_7B_tp1_qps_1.json
|-- serving_qwen2_5_7B_tp1_qps_16.json
|-- serving_qwen2_5_7B_tp1_qps_4.json
|-- serving_qwen2_5_7B_tp1_qps_inf.json
|-- serving_qwen2_5vl_7B_tp1_qps_1.json
|-- serving_qwen2_5vl_7B_tp1_qps_16.json
|-- serving_qwen2_5vl_7B_tp1_qps_4.json
`-- serving_qwen2_5vl_7B_tp1_qps_inf.json
```
These files contain detailed benchmarking results for further analysis.

To view the results more intuitively, you can use this [script](./scripts/convert_json_to_markdown.py) to convert the JSON files to markdown:
```bash
python benchmarks/scripts/convert_json_to_markdown.py
```
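If you want to inspect a single result file without generating the full report, a minimal sketch is shown below. The filename is just one of the examples listed above, and the printed keys follow the fields that `convert_json_to_markdown.py` reads; adjust both to your own run.

```python
import json

# Minimal sketch: peek at one serving result without building the markdown
# report. The filename is an example; the field names follow the keys used
# by benchmarks/scripts/convert_json_to_markdown.py.
with open("benchmarks/results/serving_qwen2_5_7B_tp1_qps_4.json") as f:
    result = json.load(f)

for key in ("request_throughput", "output_throughput",
            "median_ttft_ms", "median_tpot_ms", "median_itl_ms"):
    print(f"{key}: {result.get(key)}")
```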
183 changes: 183 additions & 0 deletions benchmarks/scripts/convert_json_to_markdown.py
@@ -0,0 +1,183 @@
import argparse
import json
import os
from pathlib import Path

import pandas as pd
from tabulate import tabulate

CUR_PATH = Path(__file__).parent.resolve()
# latency results and the keys that will be printed into markdown
latency_results = []
latency_column_mapping = {
    "test_name": "Test name",
    "avg_latency": "Mean latency (ms)",
    "P50": "Median latency (ms)",
    "P99": "P99 latency (ms)",
}

# throughput tests and the keys that will be printed into markdown
throughput_results = []
throughput_results_column_mapping = {
    "test_name": "Test name",
    "num_requests": "Num of reqs",
    "total_num_tokens": "Total num of tokens",
    "elapsed_time": "Elapsed time (s)",
    "requests_per_second": "Tput (req/s)",
    "tokens_per_second": "Tput (tok/s)",
}

# serving results and the keys that will be printed into markdown
serving_results = []
serving_column_mapping = {
    "test_name": "Test name",
    "request_rate": "Request rate (req/s)",
    "request_throughput": "Tput (req/s)",
    "output_throughput": "Output Tput (tok/s)",
    "median_ttft_ms": "TTFT (ms)",
    "median_tpot_ms": "TPOT (ms)",
    "median_itl_ms": "ITL (ms)",
}


def read_markdown(file):
    if os.path.exists(file):
        with open(file) as f:
            return f.read() + "\n"
    else:
        return f"{file} not found.\n"


def results_to_json(latency, throughput, serving):
    return json.dumps({
        'latency': latency.to_dict(),
        'throughput': throughput.to_dict(),
        'serving': serving.to_dict()
    })


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Process the results of the benchmark tests.")
    parser.add_argument(
        "--results_folder",
        type=str,
        default="../results/",
        help="The folder where the benchmark results are stored.")
    parser.add_argument(
        "--output_folder",
        type=str,
        default="../results/",
        help="The folder where the benchmark results are stored.")
    parser.add_argument("--markdown_template",
                        type=str,
                        default="./perf_result_template.md",
                        help="The template file for the markdown report.")
    parser.add_argument("--tag",
                        default="main",
                        help="Tag to be used for release message.")
    parser.add_argument("--commit_id",
                        default="",
                        help="Commit ID to be used for release message.")

    args = parser.parse_args()
    results_folder = (CUR_PATH / args.results_folder).resolve()
    output_folder = (CUR_PATH / args.output_folder).resolve()
    markdown_template = (CUR_PATH / args.markdown_template).resolve()

    # collect results
    for test_file in results_folder.glob("*.json"):

        with open(test_file) as f:
            raw_result = json.loads(f.read())

        if "serving" in str(test_file):
            # this result is generated via `benchmark_serving.py`

            # update the test name of this result
            raw_result.update({"test_name": test_file.stem})

            # add the result to raw_result
            serving_results.append(raw_result)
            continue

        elif "latency" in f.name:
            # this result is generated via `benchmark_latency.py`

            # update the test name of this result
            raw_result.update({"test_name": test_file.stem})

            # get different percentiles
            for perc in [10, 25, 50, 75, 90, 99]:
                # Multiply 1000 to convert the time unit from s to ms
                raw_result.update(
                    {f"P{perc}": 1000 * raw_result["percentiles"][str(perc)]})
            raw_result["avg_latency"] = raw_result["avg_latency"] * 1000

            # add the result to raw_result
            latency_results.append(raw_result)
            continue

        elif "throughput" in f.name:
            # this result is generated via `benchmark_throughput.py`

            # update the test name of this result
            raw_result.update({"test_name": test_file.stem})

            # add the result to raw_result
            throughput_results.append(raw_result)
            continue

        print(f"Skipping {test_file}")
    serving_results.sort(key=lambda x: (len(x['test_name']), x['test_name']))

    latency_results = pd.DataFrame.from_dict(latency_results)
    serving_results = pd.DataFrame.from_dict(serving_results)
    throughput_results = pd.DataFrame.from_dict(throughput_results)

    raw_results_json = results_to_json(latency_results, throughput_results,
                                       serving_results)

    # remapping the key, for visualization purpose
    if not latency_results.empty:
        latency_results = latency_results[list(
            latency_column_mapping.keys())].rename(
                columns=latency_column_mapping)
    if not serving_results.empty:
        serving_results = serving_results[list(
            serving_column_mapping.keys())].rename(
                columns=serving_column_mapping)
    if not throughput_results.empty:
        throughput_results = throughput_results[list(
            throughput_results_column_mapping.keys())].rename(
                columns=throughput_results_column_mapping)

    processed_results_json = results_to_json(latency_results,
                                             throughput_results,
                                             serving_results)

    # get markdown tables
    latency_md_table = tabulate(latency_results,
                                headers='keys',
                                tablefmt='pipe',
                                showindex=False)
    serving_md_table = tabulate(serving_results,
                                headers='keys',
                                tablefmt='pipe',
                                showindex=False)
    throughput_md_table = tabulate(throughput_results,
                                   headers='keys',
                                   tablefmt='pipe',
                                   showindex=False)

    # document the result
    print(output_folder)
    with open(output_folder / "benchmark_results.md", "w") as f:

        results = read_markdown(markdown_template)
        results = results.format(
            latency_tests_markdown_table=latency_md_table,
            throughput_tests_markdown_table=throughput_md_table,
            serving_tests_markdown_table=serving_md_table,
            benchmarking_results_in_json_string=processed_results_json)
        f.write(results)
31 changes: 31 additions & 0 deletions benchmarks/scripts/perf_result_template.md
@@ -0,0 +1,31 @@
## Online serving tests

- Input length: randomly sample 200 prompts from the [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json) and [lmarena-ai/vision-arena-bench-v0.1](https://huggingface.co/datasets/lmarena-ai/vision-arena-bench-v0.1/tree/main) (multi-modal) datasets (with fixed random seed).
- Output length: the corresponding output length of these 200 prompts.
- Batch size: dynamically determined by vllm and the arrival pattern of the requests.
- **Average QPS (query per second)**: 1, 4, 16 and inf. QPS = inf means all requests come at once. For other QPS values, the arrival time of each query is determined using a random Poisson process (with fixed random seed).
- Models: Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct
- Evaluation metrics: throughput, TTFT (median time to first token), ITL (median inter-token latency), TPOT (median time per output token).

{serving_tests_markdown_table}

## Offline tests
### Latency tests

- Input length: 32 tokens.
- Output length: 128 tokens.
- Batch size: fixed (8).
- Models: Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct
- Evaluation metrics: end-to-end latency.

{latency_tests_markdown_table}

### Throughput tests

- Input length: randomly sample 200 prompts from the [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json) and [lmarena-ai/vision-arena-bench-v0.1](https://huggingface.co/datasets/lmarena-ai/vision-arena-bench-v0.1/tree/main) (multi-modal) datasets (with fixed random seed).
- Output length: the corresponding output length of these 200 prompts.
- Batch size: dynamically determined by vllm to achieve maximum throughput.
- Models: Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-VL-7B-Instruct
- Evaluation metrics: throughput.

{throughput_tests_markdown_table}
14 changes: 8 additions & 6 deletions benchmarks/scripts/run-performance-benchmarks.sh
@@ -48,9 +48,10 @@ wait_for_server() {
# wait for vllm server to start
# return 1 if vllm server crashes
timeout 1200 bash -c '
until curl -X POST localhost:8000/v1/completions; do
until curl -s -X POST localhost:8000/v1/completions || curl -s -X POST localhost:8000/v1/chat/completions; do
sleep 1
done' && return 0 || return 1

}

get_cur_npu_id() {
@@ -241,11 +242,13 @@ run_serving_tests() {
cleanup() {
rm -rf ./vllm_benchmarks
}

get_benchmarks_scripts() {
git clone -b main --depth=1 git@github.com:vllm-project/vllm.git && \
mv vllm/benchmarks vllm_benchmarks
rm -rf ./vllm
git clone --depth=1 --filter=blob:none --sparse https://github.com/vllm-project/vllm || return 1
cd vllm || return 1
git sparse-checkout set benchmarks || return 1
mv benchmarks ../vllm_benchmarks || return 1
cd .. || return 1
rm -rf vllm || return 1
}

main() {
@@ -287,7 +290,6 @@ main() {
END_TIME=$(date +%s)
ELAPSED_TIME=$((END_TIME - START_TIME))
echo "Total execution time: $ELAPSED_TIME seconds"

}

main "$@"
16 changes: 14 additions & 2 deletions benchmarks/tests/latency-tests.json
@@ -1,10 +1,22 @@
[
    {
        "test_name": "latency_llama8B_tp1",
        "test_name": "latency_qwen2_5_7B_tp1",
        "parameters": {
            "model": "LLM-Research/Meta-Llama-3.1-8B-Instruct",
            "model": "Qwen/Qwen2.5-7B-Instruct",
            "tensor_parallel_size": 1,
            "load_format": "dummy",
            "max_model_len": 16384,
            "num_iters_warmup": 5,
            "num_iters": 15
        }
    },
    {
        "test_name": "latency_qwen2_5vl_7B_tp1",
        "parameters": {
            "model": "Qwen/Qwen2.5-VL-7B-Instruct",
            "tensor_parallel_size": 1,
            "load_format": "dummy",
            "max_model_len": 16384,
            "num_iters_warmup": 5,
            "num_iters": 15
        }