
E2E performance Rolling, PyTorch #393

Manually triggered: December 10, 2024 05:16
Status: Failure
Total duration: 2h 49m 42s
Artifacts: 44

e2e-performance.yml

on: workflow_dispatch
Print inputs (0s)
Matrix: Run test matrix
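The run above was dispatched manually (`workflow_dispatch`) and fans out into a per-suite/dtype/mode test matrix. A workflow of this shape might be declared roughly as follows; this is a hypothetical sketch, not the actual `e2e-performance.yml` — the input names, runner labels, and benchmark script are assumptions, while the matrix dimensions and artifact naming follow the suite/dtype/mode pattern visible in the artifact list below:

```yaml
# Hypothetical sketch of a workflow_dispatch-triggered performance run.
name: E2E performance Rolling, PyTorch

on:
  workflow_dispatch:
    inputs:
      pytorch_ref:                    # assumed input; actual inputs unknown
        description: PyTorch ref to benchmark
        required: false
        default: main

jobs:
  print-inputs:
    runs-on: ubuntu-latest
    steps:
      - run: echo '${{ toJSON(inputs) }}'

  run-test-matrix:
    needs: print-inputs
    strategy:
      fail-fast: false               # let other cells finish if one runner drops
      matrix:
        suite: [huggingface, timm_models, torchbench]
        mode: [inference, training]
        dtype: [amp_bf16, amp_fp16, bfloat16, float16, float32]
    runs-on: self-hosted             # actual runner labels unknown
    steps:
      - uses: actions/checkout@v4
      - run: ./run_benchmark.sh "${{ matrix.suite }}" "${{ matrix.dtype }}" "${{ matrix.mode }}"  # hypothetical script
      - uses: actions/upload-artifact@v4
        with:
          name: logs-${{ matrix.suite }}-${{ matrix.dtype }}-${{ matrix.mode }}-performance
          path: logs/
```

With `fail-fast: false`, a single lost runner (as in the annotation below) fails only its own matrix cell, which is consistent with 44 of the expected artifacts still being produced.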

Annotations

1 error
Run test matrix (timm_models, inference, bfloat16) / Test timm_models bfloat16 inference performance
The self-hosted runner: triton-1550-5 lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.

Artifacts

Produced during runtime
Name (Size)
logs-huggingface-amp_bf16-inference-no-freezing-performance (20.2 KB)
logs-huggingface-amp_bf16-inference-performance (19.5 KB)
logs-huggingface-amp_bf16-training-performance (21.1 KB)
logs-huggingface-amp_fp16-inference-no-freezing-performance (20.2 KB)
logs-huggingface-amp_fp16-inference-performance (19.5 KB)
logs-huggingface-amp_fp16-training-performance (21.1 KB)
logs-huggingface-bfloat16-inference-no-freezing-performance (20.2 KB)
logs-huggingface-bfloat16-inference-performance (19.5 KB)
logs-huggingface-bfloat16-training-performance (21 KB)
logs-huggingface-float16-inference-no-freezing-performance (20.2 KB)
logs-huggingface-float16-inference-performance (19.4 KB)
logs-huggingface-float16-training-performance (21 KB)
logs-huggingface-float32-inference-no-freezing-performance (20.2 KB)
logs-huggingface-float32-inference-performance (19.4 KB)
logs-huggingface-float32-training-performance (21 KB)
logs-timm_models-amp_bf16-inference-no-freezing-performance (14.2 KB)
logs-timm_models-amp_bf16-inference-performance (13.8 KB)
logs-timm_models-amp_bf16-training-performance (14.7 KB)
logs-timm_models-amp_fp16-inference-no-freezing-performance (14.2 KB)
logs-timm_models-amp_fp16-inference-performance (13.8 KB)
logs-timm_models-amp_fp16-training-performance (14.7 KB)
logs-timm_models-bfloat16-inference-no-freezing-performance (14.2 KB)
logs-timm_models-bfloat16-training-performance (14.7 KB)
logs-timm_models-float16-inference-no-freezing-performance (14.1 KB)
logs-timm_models-float16-inference-performance (13.8 KB)
logs-timm_models-float16-training-performance (14.6 KB)
logs-timm_models-float32-inference-no-freezing-performance (14.2 KB)
logs-timm_models-float32-inference-performance (13.8 KB)
logs-timm_models-float32-training-performance (14.6 KB)
logs-torchbench-amp_bf16-inference-no-freezing-performance (11.2 KB)
logs-torchbench-amp_bf16-inference-performance (11 KB)
logs-torchbench-amp_bf16-training-performance (11.6 KB)
logs-torchbench-amp_fp16-inference-no-freezing-performance (11.2 KB)
logs-torchbench-amp_fp16-inference-performance (11 KB)
logs-torchbench-amp_fp16-training-performance (11.6 KB)
logs-torchbench-bfloat16-inference-no-freezing-performance (11.2 KB)
logs-torchbench-bfloat16-inference-performance (10.9 KB)
logs-torchbench-bfloat16-training-performance (11.6 KB)
logs-torchbench-float16-inference-no-freezing-performance (11.2 KB)
logs-torchbench-float16-inference-performance (10.9 KB)
logs-torchbench-float16-training-performance (11.5 KB)
logs-torchbench-float32-inference-no-freezing-performance (11.2 KB)
logs-torchbench-float32-inference-performance (10.9 KB)
logs-torchbench-float32-training-performance (11.6 KB)