
mutil gpus #507

Closed
stevenXS opened this issue May 20, 2020 · 6 comments
Comments

@stevenXS

Has anyone solved the multi-GPU training problem?

@MinaGabriel

Did you find a solution to this?

@ScottHoang

I have found a solution to this. I leverage the PyTorch distributed package with NVIDIA Apex to train on 4 RTX 2080 Ti GPUs with a batch size of 80.
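A minimal sketch of the approach ScottHoang describes, using PyTorch's `DistributedDataParallel` (DDP). To keep it runnable on a single CPU machine, this fakes a world of size 1 with the `gloo` backend; on 4 GPUs you would launch one process per GPU (e.g. with `torchrun`) and use `nccl`. The Apex mixed-precision part is omitted here (Apex's `amp` has since been superseded by `torch.cuda.amp`), and the `Linear` model is a hypothetical stand-in for the detector.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# These variables are normally set by the launcher (torchrun); we set them
# manually so the sketch initializes a single-process group on one machine.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(10, 2)        # hypothetical stand-in for the real model
ddp_model = DDP(model)                # gradients are all-reduced across ranks
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

x = torch.randn(8, 10)                # each process gets its own mini-batch
loss = ddp_model(x).sum()
loss.backward()                       # DDP synchronizes gradients here
optimizer.step()

result = float(loss)
dist.destroy_process_group()
print("step ok, loss =", result)
```

With 4 processes, each GPU holds a replica of the model and a quarter of the global batch (80 / 4 = 20 per GPU), and DDP averages gradients across them on every backward pass.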

@jxhno1

jxhno1 commented Jun 17, 2020

How do you "leverage the PyTorch distributed package with Nvidia Apex to train on 4 RTX 2080 Ti with a batch size of 80"? Is that easy or not? Thank you!

@ScottHoang

@jxhno1 it is definitely not very hard to do, just time-consuming for a first-timer. You will have to dig deep into the PyTorch CUDA API. But once you are able to deploy one model this way, you can do it for all of them. It is something everyone should pick up, since it applies generally.
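One concrete piece of the setup ScottHoang alludes to is sharding the dataset so each process sees a disjoint slice. A small sketch using PyTorch's `DistributedSampler`, passing `num_replicas` and `rank` explicitly (so it runs without an initialized process group); the 80-sample dataset and per-GPU batch size of 20 mirror the batch size of 80 across 4 GPUs mentioned above:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

dataset = TensorDataset(torch.arange(80).float())  # 80 samples = global batch

shards = []
for rank in range(4):  # one rank per GPU
    # shuffle=False makes the sharding deterministic for this demonstration
    sampler = DistributedSampler(dataset, num_replicas=4, rank=rank, shuffle=False)
    loader = DataLoader(dataset, batch_size=20, sampler=sampler)  # 80 / 4 GPUs
    batch = next(iter(loader))[0]
    shards.append(batch)

# The four shards together cover all 80 samples exactly once.
all_idx = sorted(int(v) for s in shards for v in s)
print(len(all_idx), len(set(all_idx)))  # 80 80
```

In a real multi-GPU run each process constructs only its own sampler (with its rank taken from the process group) and calls `sampler.set_epoch(epoch)` each epoch so shuffling differs between epochs.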

@genqiaolynn

Can you solve multi-GPU training? Thanks!

@Flova Flova mentioned this issue Feb 2, 2021
@Flova
Collaborator

Flova commented Aug 2, 2021

Duplicate of #520

@Flova Flova marked this as a duplicate of #520 Aug 2, 2021
@Flova Flova closed this as completed Aug 2, 2021