
hardware resources recommendation #519

Closed
Cheng-Wang opened this issue Jun 19, 2014 · 1 comment

@Cheng-Wang

Dear all,

I am going to conduct some experiments based on Caffe, training on up to 5 million images. I have the chance to apply for access to hardware resources from our lab, and I have two options:

(1) A 1000-core compute cluster comprising 25 nodes, each with 40 cores and 1 TB of RAM
(2) An HP server with 64 cores, 2 TB of RAM, and an NVIDIA Tesla K20X

I can choose only one of them. Can you give me some suggestions from the perspective of computing capability as well as the amount of configuration work? If I choose the first option, how can I distribute my training work across the different nodes?

Thank you in advance!

@Yangqing
Member

It seems that the former and the latter are not really comparable... I guess the answer is "it depends" - Caffe runs on a single machine with a GPU (and hopefully with multiple GPUs in the future), so the second option fits this purpose better. However, the first option is apparently more powerful and may serve many other tasks - distributed computing, computation without a GPU, etc.

(Disclaimer: I am just giving my best guess, so kindly don't hold me responsible for any decisions. You might want to consult some domain experts in your field.)
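
For context on the single-machine GPU workflow mentioned above: in later Caffe releases the Python bindings let you select GPU mode and drive a solver on one machine. A minimal sketch, assuming the pycaffe interface and a placeholder `solver.prototxt` that defines the net and training schedule:

```python
import caffe

# Select the single GPU (e.g. the Tesla K20X) before building any nets.
caffe.set_device(0)      # GPU id 0 on this machine
caffe.set_mode_gpu()     # use caffe.set_mode_cpu() instead if no GPU is present

# 'solver.prototxt' is a placeholder path; it specifies the network definition,
# learning rate, snapshot schedule, etc. for the training run.
solver = caffe.SGDSolver('solver.prototxt')

# Run the full optimization on this one machine; Caffe at this point does not
# shard training across cluster nodes.
solver.solve()
```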
