System Info
transformers 4.31.0, Windows OS, Python 3.10.12
I have tried running this model on my own machine and it works, but processing is very slow because my GPU is not very powerful. However, I have a server with a strong GPU. Is it possible to install the model on the server, keep running my code on my machine, and have it send the task to the server when it reaches the video-processing stage? The server would run the inference and send back the result, and my machine would then print and display the answer. If so, how can I do it?
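A minimal sketch of what this could look like on the server side, assuming you wrap the model in a small HTTP service with FastAPI and uvicorn. The model ID, the "video-classification" pipeline task, the `/process` endpoint name, and port 8000 are all placeholders to adapt to whatever model you are actually running:

```python
# Hypothetical server sketch: load the model once on the server GPU and
# expose it over HTTP. MODEL_ID and the pipeline task are placeholders.
import tempfile

import torch
import uvicorn
from fastapi import FastAPI, File, UploadFile  # File(...) needs python-multipart installed
from transformers import pipeline

MODEL_ID = "your-video-model"  # placeholder, not a real checkpoint name

app = FastAPI()
pipe = pipeline(
    "video-classification",           # assumption: swap for your actual task
    model=MODEL_ID,
    device=0 if torch.cuda.is_available() else -1,
)

@app.post("/process")
async def process(video: UploadFile = File(...)):
    # Write the uploaded video to a temporary file the pipeline can read from disk.
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        tmp.write(await video.read())
        path = tmp.name
    result = pipe(path)
    return {"result": result}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```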
Expected behavior
I expect it to work in a hybrid way between my computer and the server to achieve faster results.
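On the local machine, the "hybrid" part could then be as simple as posting the video to that endpoint and printing whatever comes back. This is only a sketch under the same assumptions as above; `SERVER_URL` and the file name are placeholders:

```python
# Hypothetical client sketch: upload the video to the GPU server and
# print the returned result locally.
import requests

SERVER_URL = "http://<your-server-ip>:8000/process"  # assumption: replace with your server address

with open("example.mp4", "rb") as f:
    response = requests.post(SERVER_URL, files={"video": f})
response.raise_for_status()

print(response.json()["result"])
```

With this split, the heavy inference stays on the server GPU and the local machine only handles uploading the video and displaying the answer.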