
[fix] added support for vlm in offline inference (#3548) #556

Triggered via push on February 14, 2025 at 21:27
Status: Cancelled
Total duration: 1m 8s
Artifacts: none listed

Workflow: pr-test-amd.yml (on: push)
Jobs:
- accuracy-test-1-gpu-amd (0s)
- mla-test-1-gpu-amd (0s)

Annotations (2 errors)
- accuracy-test-1-gpu-amd: Canceling since a higher priority waiting request for 'pr-test-amd-refs/heads/main' exists
- mla-test-1-gpu-amd: Canceling since a higher priority waiting request for 'pr-test-amd-refs/heads/main' exists
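
Both jobs were cancelled by GitHub Actions' concurrency control: a newer run joined the same concurrency group ('pr-test-amd-refs/heads/main') and superseded this one. The sketch below shows the kind of concurrency block that produces this message; the group expression, triggers, and runner label are assumptions for illustration and may differ from what pr-test-amd.yml actually contains.

```yaml
# Hypothetical sketch of a concurrency group keyed to the workflow name and ref.
# When a newer run starts for the same group, the older queued or in-progress run
# is cancelled with "Canceling since a higher priority waiting request ... exists".
name: pr-test-amd

on:
  push:

concurrency:
  # For a push to main, github.ref is "refs/heads/main", yielding the group
  # name "pr-test-amd-refs/heads/main" seen in the annotations above.
  group: pr-test-amd-${{ github.ref }}
  cancel-in-progress: true

jobs:
  accuracy-test-1-gpu-amd:
    runs-on: ubuntu-latest  # placeholder; the real jobs target AMD GPU runners
    steps:
      - uses: actions/checkout@v4
```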