
Hardware Recommendation and Choosing CPU Cluster or GPUs #423

Closed
escorciav opened this issue May 17, 2014 · 4 comments

@escorciav

Hi all,

I'm new to this kind of deep learning algorithm, and I would like to know what hardware specs, beyond a powerful GPU, are needed for a machine capable of training big models such as those for the ImageNet challenge and many others.
minimum RAM?
minimum number of cores?
minimum HDD?

Thank you for sharing this valuable piece of code and creating an active community!

@Yangqing
Member

Yangqing commented Jun 6, 2014

Hardware requirements really depend on many factors. If you just want to do a hobby project, a machine with 4-8 GB of memory and a few cores would be sufficient. Any hard disk would do, and given the current low prices you probably want to buy a big disk anyway.

My ImageNet model is trained with an i5-4570 CPU and 4 GB of memory, if that helps.
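To compare a machine against the rough baseline described above (a few cores, 4-8 GB of RAM), one can query the OS directly. This is a minimal sketch assuming a Linux host, since it relies on POSIX `sysconf` keys:

```python
# Sketch: check whether this machine meets the hobby-project baseline
# mentioned above (a few cores, 4-8 GB RAM). Assumes Linux/POSIX.
import os

cores = os.cpu_count()
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024 ** 3
print(f"{cores} cores, {ram_gb:.1f} GB RAM")
```

On other platforms a cross-platform library such as `psutil` would be the usual substitute.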

@Yangqing Yangqing closed this as completed Jun 6, 2014
@Cheng-Wang

Dear all,

I am going to conduct some experiments based on Caffe, training on up to 5 million images. I have the chance to apply for access to hardware resources from our lab. I have two options:

(1) A 1000-core compute cluster comprising 25 nodes, each with 40 cores and 1 TB of RAM
(2) An HP server with 2 TB of RAM, 64 cores, and an NVIDIA Tesla K20X

I can choose one of them. Can you give me some suggestions regarding this choice, from the perspective of computing capability as well as the amount of configuration work? If I choose the first option, how can I distribute my training work across different nodes?

Thank you in advance !

@sguada
Contributor

sguada commented Jun 21, 2014

Currently Caffe is not distributed, so using multiple nodes would require some non-trivial extensions.
It depends on whether you plan to train many different models in parallel or just one. For training multiple models you could use CPUs, though I'm not sure how slow that would be. For training a single model, the GPU should be faster.
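The "many different models in parallel" option does not need a distributed framework: each training run is an independent process, so a node can simply launch one job per model. A minimal sketch, where `launch_parallel` is a hypothetical helper and the `echo` stand-ins would in practice be real `caffe train --solver=...` commands:

```python
# Sketch: run several independent training jobs in parallel on one node.
# In practice each command would be e.g. "caffe train --solver=solver_a.prototxt";
# the echo commands below are stand-ins so the sketch runs without Caffe.
import subprocess

def launch_parallel(commands):
    """Start every command as its own process, then wait for all of them.

    Returns one exit code per command (0 means success).
    """
    procs = [subprocess.Popen(cmd, shell=True) for cmd in commands]
    return [p.wait() for p in procs]

exit_codes = launch_parallel(["echo training model_a", "echo training model_b"])
print(exit_codes)
```

For a cluster, the same idea extends to one job per node via the scheduler (e.g. one queue submission per solver file), since the jobs never need to communicate.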

@shelhamer shelhamer changed the title Hardware Recommendation Hardware Recommendation and Choosing CPU Cluster or GPUs Jun 22, 2014
@Cheng-Wang

Thank you for your suggestions, @sguada. They're very helpful!
