Vit #7

Merged · 6 commits · Aug 8, 2023
68 changes: 68 additions & 0 deletions inference/benchmarks/bertLarge/README.md
@@ -0,0 +1,68 @@
### 1. Inference Dataset

* Download URL: `https://drive.google.com/drive/folders/1cywmDnAsrP5-2vsr8GDc6QUc7VWe-M3v`

```
File list:
results_text.tar.gz
bert_reference_results_text_md5.txt
```

* After extraction, place eval.txt in the <data_dir> directory

### 2. Model and Weights

* Model implementation
  * pytorch: transformers.BertForMaskedLM
* Weight download
  * pytorch: BertForMaskedLM.from_pretrained("bert-large/base-uncased")
* Weight selection
  * Use save_pretrained to save the loaded bert-large or bert-base weights under <data_dir>/<weight_dir>

### 3. Hardware/Software Configuration and Run Information (Reference)

#### 3.1 Nvidia A100

- ##### Hardware environment
  - Machine and accelerator model: NVIDIA_A100-SXM4-40GB
  - Inter-node network type and bandwidth: InfiniBand, 200 Gb/s

- ##### Software environment
  - OS version: Ubuntu 20.04
  - OS kernel version: 5.4.0-113-generic
  - Accelerator driver version: 470.129.06
  - Docker version: 20.10.16
  - Training framework version: pytorch-1.13.0a0+937e930
  - Dependency versions:
    - cuda: 11.8

- Inference toolkit

  - TensorRT 8.5.1.7

### 4. Results (BERT-Large)

* Metric definitions

| Metric                             | Key                | Notes                                                |
| ---------------------------------- | ------------------ | ---------------------------------------------------- |
| Data precision                     | precision          | fp32 or fp16                                         |
| Batch size                         | bs                 |                                                      |
| Device memory usage                | mem                | commonly called "VRAM", in GiB                       |
| End-to-end time                    | e2e_time           | total time, including Perf initialization etc.       |
| Validation throughput (whole)      | p_val_whole        | validated sequences divided by total validation time |
| Validation throughput (compute)    | p_val_core         | excludes IO time                                     |
| Inference throughput (whole)       | p_infer_whole      | inferred sequences divided by total inference time   |
| **Inference throughput (compute)** | **\*p_infer_core** | excludes IO time                                     |
| **Accelerator utilization**        | **\*MFU**          | model FLOPs utilization                              |
| Inference result                   | acc (infer/val)    | top-1 MaskedLM accuracy (acc1)                       |

* Metric values

| Toolkit  | precision | bs | e2e_time | p_val_whole | p_val_core | p_infer_whole | \*p_infer_core | \*MFU | acc         | mem       |
| -------- | --------- | -- | -------- | ----------- | ---------- | ------------- | -------------- | ----- | ----------- | --------- |
| tensorrt | fp16      | 32 | 1283.9   | 257.3       | 260.4      | 408.3         | 418.1          | 45.3% | 0.600/0.638 | 17.4/40.0 |
| tensorrt | fp32      | 32 | 1868.8   | 150.4       | 152.2      | 190.4         | 194.1          | 42.0% | 0.638/0.638 | 16.9/40.0 |


5 changes: 5 additions & 0 deletions inference/benchmarks/bertLarge/pytorch/__init__.py
@@ -0,0 +1,5 @@
from .dataloader import build_dataloader
from .model import create_model
from .export import export_model
from .evaluator import evaluator
from .forward import model_forward, engine_forward
64 changes: 64 additions & 0 deletions inference/benchmarks/bertLarge/pytorch/dataloader.py
@@ -0,0 +1,64 @@
from transformers import BertTokenizer
from torch.utils.data import DataLoader, Dataset
import torch
import random


class BertInferDataset(Dataset):

    def __init__(self, input_ids, label_ids, seq_length):
        self.input_ids = input_ids
        self.label_ids = label_ids
        self.seq_length = seq_length

    def __len__(self):
        # Non-overlapping windows of seq_length tokens; the tail is dropped.
        return len(self.input_ids) // self.seq_length

    def __getitem__(self, idx):
        start_idx = idx * self.seq_length
        chunk_input = self.input_ids[start_idx:start_idx + self.seq_length]
        chunk_label = self.label_ids[start_idx:start_idx + self.seq_length]

        chunk_input = torch.tensor(chunk_input).int()
        chunk_label = torch.tensor(chunk_label).int()

        return (chunk_input, chunk_label)


def build_dataset(config):

    random.seed(config.random_seed)

    with open(config.data_dir + "/" + config.eval_file, "r") as file:
        text = file.read()

    tokenizer = BertTokenizer.from_pretrained(config.data_dir + "/" +
                                              config.weight_dir)
    tokens = tokenizer.tokenize(text)

    # Ground-truth token ids, wrapped with [CLS]/[SEP].
    label_ids = tokenizer.convert_tokens_to_ids(tokens)
    label_ids = [tokenizer.cls_token_id] + label_ids + [tokenizer.sep_token_id]

    # Build the MaskedLM inputs by replacing tokens with [MASK] at the
    # configured ratio; special tokens are never masked.
    masked_tokens = []
    for token in tokens:
        if token != "[CLS]" and token != "[SEP]":
            masked_tokens.append(
                "[MASK]" if random.random() < config.mask_ratio else token)
    input_ids = tokenizer.convert_tokens_to_ids(masked_tokens)
    input_ids = [tokenizer.cls_token_id] + input_ids + [tokenizer.sep_token_id]

    dataset = BertInferDataset(input_ids, label_ids, config.seq_length)

    return dataset


def build_dataloader(config):
    dataset = build_dataset(config)
    loader = DataLoader(dataset,
                        batch_size=config.batch_size,
                        shuffle=False,
                        drop_last=True,
                        num_workers=config.num_workers,
                        pin_memory=True)

    return loader
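The dataset above indexes the token stream in non-overlapping windows and silently drops any tail shorter than seq_length. A minimal sketch of the same indexing in plain Python (hypothetical token ids and sizes, no torch):

```python
# Mirror of BertInferDataset's chunking: len(input_ids) // seq_length windows,
# each starting at idx * seq_length; the tail remainder is dropped.
input_ids = list(range(10))  # hypothetical token ids
seq_length = 4

num_chunks = len(input_ids) // seq_length  # 10 // 4 -> 2; tail [8, 9] dropped
chunks = [input_ids[i * seq_length:(i + 1) * seq_length]
          for i in range(num_chunks)]

print(num_chunks)  # 2
print(chunks)      # [[0, 1, 2, 3], [4, 5, 6, 7]]
```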
11 changes: 11 additions & 0 deletions inference/benchmarks/bertLarge/pytorch/evaluator.py
@@ -0,0 +1,11 @@
import torch


def evaluator(pred, x, y):
    # 103 is the [MASK] token id in the standard BERT vocabulary; accuracy
    # is measured only on masked positions.
    mask = x == 103
    masked_pred = pred[mask]
    masked_y = y[mask]

    correct = masked_pred[masked_pred == masked_y]

    return len(correct), len(masked_y)
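The same masked-position counting, re-implemented with plain lists as a quick sanity check (103 stands in for the [MASK] id; all other ids below are made up):

```python
MASK_ID = 103  # [MASK] token id in the standard BERT vocabulary

def count_masked_correct(pred, x, y):
    # Count predictions at masked input positions that match the labels,
    # mirroring evaluator()'s tensor indexing with plain lists.
    correct = 0
    total = 0
    for p, xi, yi in zip(pred, x, y):
        if xi == MASK_ID:
            total += 1
            if p == yi:
                correct += 1
    return correct, total

x = [101, 103, 2003, 103, 102]       # two masked positions
y = [101, 7592, 2003, 2088, 102]     # ground-truth ids
pred = [101, 7592, 2003, 9999, 102]  # one of the two masks predicted right

print(count_masked_correct(pred, x, y))  # (1, 2)
```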
30 changes: 30 additions & 0 deletions inference/benchmarks/bertLarge/pytorch/export.py
@@ -0,0 +1,30 @@
import torch
import os


def export_model(model, config):
    # Reuse a pre-built ONNX file when one is configured.
    if config.exist_onnx_path is not None:
        return config.exist_onnx_path

    filename = config.case + "_bs" + str(config.batch_size)
    filename = filename + "_" + str(config.framework)
    filename = filename + "_fp16" + str(config.fp16)
    filename = "onnxs/" + filename + ".onnx"
    onnx_path = config.perf_dir + "/" + filename

    dummy_input = torch.ones(config.batch_size, config.seq_length).int().cuda()

    dir_onnx_path = os.path.dirname(onnx_path)
    os.makedirs(dir_onnx_path, exist_ok=True)

    with torch.no_grad():
        torch.onnx.export(model,
                          dummy_input,
                          onnx_path,
                          verbose=False,
                          input_names=["input"],
                          output_names=["output"],
                          training=torch.onnx.TrainingMode.EVAL,
                          do_constant_folding=True)

    return onnx_path
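For reference, the naming scheme above produces paths like the following (a sketch with made-up config values; perf_dir is hypothetical):

```python
# Reproduce export_model's filename construction with example values.
case = "bertLarge"
batch_size = 32
framework = "pytorch"
fp16 = True
perf_dir = "/path/to/perf"  # hypothetical

filename = case + "_bs" + str(batch_size)
filename = filename + "_" + str(framework)
filename = filename + "_fp16" + str(fp16)
filename = "onnxs/" + filename + ".onnx"
onnx_path = perf_dir + "/" + filename

print(onnx_path)  # /path/to/perf/onnxs/bertLarge_bs32_pytorch_fp16True.onnx
```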
113 changes: 113 additions & 0 deletions inference/benchmarks/bertLarge/pytorch/forward.py
@@ -0,0 +1,113 @@
from loguru import logger
import torch
import numpy as np
import time
from tools import torch_sync


def cal_perf(config, dataloader_len, duration, core_time, str_prefix):
    # qps = repeat * batches * batch_size / elapsed seconds.
    model_forward_perf = config.repeat * dataloader_len * config.batch_size / duration
    logger.info(str_prefix + "(" + config.framework + ") Perf: " +
                str(model_forward_perf) + " qps")
    model_forward_core_perf = config.repeat * dataloader_len * config.batch_size / core_time
    logger.info(str_prefix + "(" + config.framework + ") core Perf: " +
                str(model_forward_core_perf) + " qps")
    return round(model_forward_perf, 3), round(model_forward_core_perf, 3)


def model_forward(model, dataloader, evaluator, config):
    if config.no_validation:
        return None, None, None
    start = time.time()
    core_time = 0.0

    # Start at 1 to avoid division by zero if no masked position is seen.
    correct = 1
    whole = 1

    for times in range(config.repeat):

        logger.debug("Repeat: " + str(times + 1))

        for step, (x, y) in enumerate(dataloader):
            torch_sync(config)
            core_time_start = time.time()

            if step % config.log_freq == 0:
                logger.debug("Step: " + str(step) + " / " +
                             str(len(dataloader)))

            with torch.no_grad():
                x = x.cuda()
                y = y.cuda()

                pred = model(x)
                torch_sync(config)
                core_time += time.time() - core_time_start

                pred = pred[0]
                pred = torch.argmax(pred, dim=2)
                correct_iter, whole_iter = evaluator(pred, x, y)

                correct += correct_iter
                whole += whole_iter

    acc = correct / whole

    logger.info("MaskedLM Acc: " + str(acc))

    duration = time.time() - start
    model_forward_perf, model_forward_core_perf = cal_perf(
        config, len(dataloader), duration, core_time, "Validation")

    return model_forward_perf, model_forward_core_perf, round(acc, 3)


def engine_forward(model, dataloader, evaluator, config):
    start = time.time()
    core_time = 0.0
    foo_time = 0.0  # engine-internal overhead excluded from timing below

    # Start at 1 to avoid division by zero if no masked position is seen.
    correct = 1
    whole = 1

    for times in range(config.repeat):

        logger.debug("Repeat: " + str(times + 1))

        for step, (x, y) in enumerate(dataloader):
            torch_sync(config)
            core_time_start = time.time()

            if step % config.log_freq == 0:
                logger.debug("Step: " + str(step) + " / " +
                             str(len(dataloader)))

            with torch.no_grad():

                outputs = model([x])
                pred = outputs[0]
                foo_time += outputs[1]

            torch_sync(config)
            core_time += time.time() - core_time_start

            pred = pred[0]
            pred = pred.reshape(config.batch_size, config.seq_length, -1)
            pred = torch.argmax(pred, dim=2)
            pred = pred.cpu()
            correct_iter, whole_iter = evaluator(pred, x, y)

            correct += correct_iter
            whole += whole_iter

    acc = correct / whole

    logger.info("MaskedLM Acc: " + str(acc))

    duration = time.time() - start - foo_time
    model_forward_perf, model_forward_core_perf = cal_perf(
        config, len(dataloader), duration, core_time - foo_time, "Inference")

    return model_forward_perf, model_forward_core_perf, round(acc, 3)
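The qps figures cal_perf reports are plain arithmetic over the run; a worked example with made-up timings:

```python
# qps = repeat * batches * batch_size / elapsed seconds, as in cal_perf.
repeat = 2
dataloader_len = 100   # batches per repeat (hypothetical)
batch_size = 32
duration = 16.0        # wall-clock seconds for the whole run (hypothetical)
core_time = 12.8       # seconds excluding IO (hypothetical)

whole_qps = repeat * dataloader_len * batch_size / duration
core_qps = repeat * dataloader_len * batch_size / core_time

print(round(whole_qps, 3))  # 400.0  <- analogous to p_val_whole
print(round(core_qps, 3))   # 500.0  <- analogous to p_val_core
```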
13 changes: 13 additions & 0 deletions inference/benchmarks/bertLarge/pytorch/model.py
@@ -0,0 +1,13 @@
from transformers import BertForMaskedLM


def create_model(config):
    model = BertForMaskedLM.from_pretrained(config.data_dir + "/" +
                                            config.weight_dir,
                                            torchscript=True)
    model.cuda()
    model.eval()
    if config.fp16:
        model.half()

    return model
1 change: 1 addition & 0 deletions inference/benchmarks/bertLarge/pytorch/requirements.txt
@@ -0,0 +1 @@
transformers
13 changes: 7 additions & 6 deletions inference/benchmarks/resnet50/README.md
@@ -71,16 +71,17 @@ find ./val -name "*JPEG" | wc -l
| Device memory usage | mem | commonly called "VRAM", in GiB |
| End-to-end time | e2e_time | total time, including Perf initialization etc. |
| Validation throughput (whole) | p_val_whole | validated images divided by total validation time |
| Validation throughput (compute) | \*p_val_core | excludes IO time |
| Validation throughput (compute) | p_val_core | excludes IO time |
| Inference throughput (whole) | p_infer_whole | inferred images divided by total inference time |
| **Inference throughput (compute)** | **\*p_infer_core** | excludes IO time |
| **Accelerator utilization** | **\*MFU** | model flops utilization |
| Inference result | acc (infer/val) | top-1 classification accuracy (acc1) |

* Metric values

| Toolkit  | precision | bs  | e2e_time | p_val_whole | \*p_val_core | p_infer_whole | \*p_infer_core | acc       | mem        |
| -------- | --------- | --- | -------- | ----------- | ------------ | ------------- | -------------- | --------- | ---------- |
| tensorrt | fp16      | 256 | 613.4    | 1358.9      | 4263.3       | 1391.4        | 12406.0        | 76.2/76.2 | 19.7/40.0  |
| tensorrt | fp32      | 256 | 474.4    | 1487.3      | 2653.2       | 1560.3        | 6091.6         | 76.2/76.2 | 28.86/40.0 |
| torchtrt | fp16      | 256 | 716.4    | 1370.4      | 4282.6       | 1320.0        | 4723.0         | 76.2/76.2 | 9.42/40.0  |
| Toolkit  | precision | bs  | e2e_time | p_val_whole | p_val_core | p_infer_whole | \*p_infer_core | \*MFU | acc       | mem        |
| -------- | --------- | --- | -------- | ----------- | ---------- | ------------- | -------------- | ----- | --------- | ---------- |
| tensorrt | fp16      | 256 | 613.4    | 1358.9      | 4469.4     | 1391.4        | 12698.7        | 16.8% | 76.2/76.2 | 19.7/40.0  |
| tensorrt | fp32      | 256 | 474.4    | 1487.3      | 2653.2     | 1560.3        | 6091.6         | 16.1% | 76.2/76.2 | 28.86/40.0 |
| torchtrt | fp16      | 256 | 716.4    | 1370.4      | 4282.6     | 1320.0        | 4723.0         | 6.3%  | 76.2/76.2 | 9.42/40.0  |

7 changes: 5 additions & 2 deletions inference/benchmarks/resnet50/pytorch/forward.py
@@ -81,12 +81,15 @@ def engine_forward(model, dataloader, evaluator, config):
            with torch.no_grad():

                outputs = model([x])
-               pred = outputs[0][0]
+               pred = outputs[0]
+               foo_time += outputs[1]
-               pred = pred.float()

            torch_sync(config)
            core_time += time.time() - core_time_start

+           pred = pred[0].float()
+           pred = pred.reshape(config.batch_size, -1)
+           pred = pred.cpu()
            top1 = evaluator(pred, y)

            all_top1.extend(top1.cpu())