
How to run the model on another machine and send the answer to another machine. #213

Open
ixn3rd3mxn opened this issue Jan 2, 2025 · 0 comments

Comments

@ixn3rd3mxn
System Info
transformers 4.31.0, Windows, Python 3.10.12

I have tried running this model on my own machine and it works, but processing is very slow because my GPU is not powerful. However, I have a server with a strong GPU. What I would like is this: I install the model on the server, run my code on my machine, and when the code reaches the video-processing stage it sends the task to the server; the server runs inference and sends the result back, and my machine then prints and displays the answer. Is this possible? If so, how can I do it?
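This is possible with a simple client/server split: the server machine loads the model once and exposes an HTTP endpoint, and the local machine sends each task as a request and prints the response. Below is a minimal sketch using only the Python standard library. The `run_model` function, the host/port, and the JSON fields are all illustrative placeholders, not part of any particular library's API; on a real server you would replace `run_model` with your actual transformers inference call.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

def run_model(payload):
    # Placeholder: on the real server this would call the loaded
    # transformers model on the received video/task description.
    return {"answer": f"processed {payload.get('video', '?')}"}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON task sent by the client machine.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        result = json.dumps(run_model(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):
        pass  # silence per-request logging

def main():
    # Server side (on the strong-GPU machine): serve in the background.
    server = HTTPServer(("127.0.0.1", 8765), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side (on the local machine): send the task, print the answer.
    body = json.dumps({"video": "clip.mp4"}).encode()
    req = request.Request("http://127.0.0.1:8765", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["answer"])

    server.shutdown()

if __name__ == "__main__":
    main()
```

In practice the two halves run on different machines (replace `127.0.0.1` with the server's address), and for a production setup a framework such as Flask or FastAPI, or a dedicated inference server, would be a more robust choice than this hand-rolled handler.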

Expected behavior
I expect the workload to be split between my computer and the server so that results come back faster.
