[fix] added support for vlm in offline inference (#3548) #556
pr-test-amd.yml
on: push
accuracy-test-1-gpu-amd (0s)
mla-test-1-gpu-amd (0s)
finish (0s)
Annotations
2 errors
accuracy-test-1-gpu-amd
Canceling since a higher priority waiting request for 'pr-test-amd-refs/heads/main' exists
mla-test-1-gpu-amd
Canceling since a higher priority waiting request for 'pr-test-amd-refs/heads/main' exists
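The cancellation messages above are GitHub Actions' standard output when a `concurrency` group cancels a queued or in-progress run in favor of a newer push to the same ref. A minimal sketch of the kind of workflow configuration that produces this behavior follows; the group name is inferred from the message `pr-test-amd-refs/heads/main` and is an assumption, not taken from the actual `pr-test-amd.yml`:

```yaml
# Hypothetical excerpt of pr-test-amd.yml (assumed, for illustration).
# With a concurrency group keyed on the workflow name and ref, a new push to
# the same branch cancels the older run, producing the annotation:
#   "Canceling since a higher priority waiting request for
#    'pr-test-amd-refs/heads/main' exists"
name: pr-test-amd
on: push

concurrency:
  # Assumed group expression matching 'pr-test-amd-refs/heads/main'
  group: pr-test-amd-${{ github.ref }}
  cancel-in-progress: true
```

Because the runs were cancelled before the jobs started, each job shows a duration of 0s.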