
Training on Multiple GPU #301

Closed
vahidk opened this issue Apr 7, 2014 · 2 comments
vahidk commented Apr 7, 2014

Can we use multiple GPUs for training with Caffe? What's the best strategy for doing this? I couldn't find any documentation about it.

vahidk closed this as completed Apr 7, 2014
vahidk reopened this Apr 7, 2014
shelhamer (Member) commented

Caffe doesn't (yet) support this out of the box, though see #194 for how CUDA 6 can distribute the BLAS computation across GPUs.

A DIY version of distributed solving could run training minibatches across GPUs, aggregate the diffs, then update the params of the replicated models (or just average the weights every now and again). How to do this effectively is still a research (+ engineering) question. None of the core Caffe devs are working on this at the moment, but that doesn't mean you shouldn't if you're interested.
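
As a framework-agnostic illustration of that DIY pattern (this is not Caffe code; the function and variable names are made up for the example), a minimal NumPy sketch of the synchronous variant, where each replica computes gradients on its own minibatch and the averaged diff drives one shared update, might look like:

```python
import numpy as np

def sgd_step_data_parallel(params, replica_grads, lr=0.1):
    """Average the per-replica diffs and apply one SGD update,
    keeping every model replica's params identical."""
    avg_grad = np.mean(replica_grads, axis=0)  # aggregate the diffs
    return params - lr * avg_grad              # update the shared params

# Toy run: 4 simulated devices, each computing a least-squares gradient
# on its own minibatch of a shared linear-regression problem.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
params = np.zeros(3)
for step in range(100):
    grads = []
    for _ in range(4):  # one minibatch per simulated GPU
        X = rng.normal(size=(32, 3))
        y = X @ true_w
        grads.append(2.0 * X.T @ (X @ params - y) / len(y))
    params = sgd_step_data_parallel(params, grads)
print(params)  # converges toward true_w
```

The other option mentioned above, averaging the weights every so often rather than the diffs at every step, trades synchronization cost for some staleness between replicas.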

See #65 for others who are interested in this line of work.

vahidk (Author) commented Apr 7, 2014

Thanks for the information, Evan.
