
E2E performance Rolling, PyTorch #388

Manually triggered: December 5, 2024, 05:16
Status: Success
Total duration: 2h 39m 5s
Artifacts: 45

e2e-performance.yml

on: workflow_dispatch
Print inputs (2s)
Matrix: Run test matrix
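
The 45 artifacts listed below correspond to a test matrix over three benchmark suites (huggingface, timm_models, torchbench), five precisions (amp_bf16, amp_fp16, bfloat16, float16, float32), and three scenarios (inference, inference with freezing disabled, training), i.e. 3 x 5 x 3 = 45 combinations. A minimal sketch of how such a matrix could be declared in e2e-performance.yml is shown below; only the matrix dimensions are inferred from the artifact names, while the job name, runner label, benchmark script, and log path are assumptions.

name: E2E performance Rolling, PyTorch

on:
  workflow_dispatch:

jobs:
  run-test-matrix:
    # Hypothetical sketch: matrix values are taken from the artifact names;
    # the runner label, script, and paths below are placeholders.
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        suite: [huggingface, timm_models, torchbench]
        dtype: [amp_bf16, amp_fp16, bfloat16, float16, float32]
        mode: [inference, inference-no-freezing, training]
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmark
        run: ./run_benchmark.sh "${{ matrix.suite }}" "${{ matrix.dtype }}" "${{ matrix.mode }}"
      - name: Upload logs
        uses: actions/upload-artifact@v4
        with:
          name: logs-${{ matrix.suite }}-${{ matrix.dtype }}-${{ matrix.mode }}-performance
          path: logs/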

Artifacts

Produced during runtime
Name  Size
logs-huggingface-amp_bf16-inference-no-freezing-performance  20.2 KB
logs-huggingface-amp_bf16-inference-performance  19.5 KB
logs-huggingface-amp_bf16-training-performance  21.1 KB
logs-huggingface-amp_fp16-inference-no-freezing-performance  20.2 KB
logs-huggingface-amp_fp16-inference-performance  19.5 KB
logs-huggingface-amp_fp16-training-performance  21.1 KB
logs-huggingface-bfloat16-inference-no-freezing-performance  20.2 KB
logs-huggingface-bfloat16-inference-performance  19.5 KB
logs-huggingface-bfloat16-training-performance  21 KB
logs-huggingface-float16-inference-no-freezing-performance  20.2 KB
logs-huggingface-float16-inference-performance  19.4 KB
logs-huggingface-float16-training-performance  21 KB
logs-huggingface-float32-inference-no-freezing-performance  20.2 KB
logs-huggingface-float32-inference-performance  19.5 KB
logs-huggingface-float32-training-performance  21 KB
logs-timm_models-amp_bf16-inference-no-freezing-performance  14.2 KB
logs-timm_models-amp_bf16-inference-performance  13.8 KB
logs-timm_models-amp_bf16-training-performance  14.7 KB
logs-timm_models-amp_fp16-inference-no-freezing-performance  14.2 KB
logs-timm_models-amp_fp16-inference-performance  13.8 KB
logs-timm_models-amp_fp16-training-performance  14.7 KB
logs-timm_models-bfloat16-inference-no-freezing-performance  14.2 KB
logs-timm_models-bfloat16-inference-performance  13.8 KB
logs-timm_models-bfloat16-training-performance  14.7 KB
logs-timm_models-float16-inference-no-freezing-performance  14.1 KB
logs-timm_models-float16-inference-performance  13.8 KB
logs-timm_models-float16-training-performance  14.6 KB
logs-timm_models-float32-inference-no-freezing-performance  14.1 KB
logs-timm_models-float32-inference-performance  13.8 KB
logs-timm_models-float32-training-performance  14.6 KB
logs-torchbench-amp_bf16-inference-no-freezing-performance  11.2 KB
logs-torchbench-amp_bf16-inference-performance  10.9 KB
logs-torchbench-amp_bf16-training-performance  11.6 KB
logs-torchbench-amp_fp16-inference-no-freezing-performance  11.2 KB
logs-torchbench-amp_fp16-inference-performance  10.9 KB
logs-torchbench-amp_fp16-training-performance  11.6 KB
logs-torchbench-bfloat16-inference-no-freezing-performance  11.1 KB
logs-torchbench-bfloat16-inference-performance  10.9 KB
logs-torchbench-bfloat16-training-performance  11.6 KB
logs-torchbench-float16-inference-no-freezing-performance  11.1 KB
logs-torchbench-float16-inference-performance  10.9 KB
logs-torchbench-float16-training-performance  11.5 KB
logs-torchbench-float32-inference-no-freezing-performance  11.1 KB
logs-torchbench-float32-inference-performance  10.9 KB
logs-torchbench-float32-training-performance  11.5 KB