Double precision #17

Open
ipanchenko opened this issue Sep 7, 2015 · 3 comments

@ipanchenko

Is it possible to use double-precision floating point on the GPU?
As I understand it, it is impossible in cutorch, since CudaTensor is a single-precision floating-point tensor.
And what about clTorch? Do you plan to add this?
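
(Worth noting for the OpenCL side: double precision is an optional OpenCL feature exposed through the cl_khr_fp64 extension, so clTorch support would also depend on the device. A minimal standalone capability probe, independent of clTorch's own code, might look like the sketch below.)

```c
/* fp64check.c -- hypothetical standalone probe, not part of clTorch.
 * Build: gcc fp64check.c -lOpenCL -o fp64check */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char extensions[8192] = "";

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1,
                       &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    /* CL_DEVICE_EXTENSIONS is a space-separated list of extension names. */
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                    sizeof(extensions), extensions, NULL);

    if (strstr(extensions, "cl_khr_fp64")) {
        /* Kernels using double would still need:
         *   #pragma OPENCL EXTENSION cl_khr_fp64 : enable */
        printf("double precision supported (cl_khr_fp64)\n");
    } else {
        printf("no double precision on this device\n");
    }
    return 0;
}
```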

@hughperkins
Owner

I wasn't planning on adding it... mostly because I'm targeting neural nets, and the latest trend there is towards lower precision, i.e. fp16. Do you mind if I ask what your use-case is? That doesn't mean I will suddenly jump up and do it, but it would be good to at least understand what you are trying to achieve.

@genixpro

genixpro commented Jun 5, 2016

I would be in favour of making the precision selectable, including both 64-bit doubles and 16-bit halves. As someone relatively new to neural networks and Torch, I like to experiment to see what the differences are. I try to be scientific in how I approach this, so I would like to run my network at 16-bit, 32-bit, and 64-bit precision and see what the differences are. I don't like to just take people at their word that smaller is better; I want to see it for myself. It helps me to learn and understand.

@hughperkins
Owner

@genixpro

Ok, sounds good. The underlying cutorch codebase already provides different precisions, so one could plausibly use a similar technique here: https://github.com/torch/cutorch/blob/master/lib/THC/THCGenerateAllTypes.h
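
For anyone reading along: that header stamps out one copy of each generic operation per element type, by re-#including a generic source file with the `real` macro redefined before each inclusion. A minimal single-file sketch of the same idea, using a macro instead of repeated #include (all names here are illustrative, not cutorch's):

```c
#include <stdio.h>

/* One generic implementation, instantiated once per precision --
 * the same effect THCGenerateAllTypes.h gets by re-including a
 * generic .c file with `real` redefined each time. */
#define DEFINE_SUM(real, Suffix)                            \
    static real sum##Suffix(const real *data, int n) {      \
        real total = 0;                                     \
        for (int i = 0; i < n; i++) total += data[i];       \
        return total;                                       \
    }

DEFINE_SUM(float,  Float)   /* generates sumFloat  */
DEFINE_SUM(double, Double)  /* generates sumDouble */

int main(void) {
    float  f[] = {0.1f, 0.2f, 0.3f};
    double d[] = {0.1,  0.2,  0.3};
    printf("float sum : %.17g\n", (double)sumFloat(f, 3));
    printf("double sum: %.17g\n", sumDouble(d, 3));
    return 0;
}
```

Running it shows the two precisions diverging in the low bits, which is exactly the kind of difference selectable precision would let users measure for themselves.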
