Double precision #17
I wasn't planning on adding it, mostly because I'm targeting neural nets, and the latest trend is to go to lower precision, i.e. fp16. Do you mind if I ask about your use-case? It doesn't mean I will suddenly jump up and do it, but it would be good to at least understand what you are trying to achieve.
I would be in favour of just making the precision selectable, including both 64-bit doubles and 16-bit halves. As someone relatively new to neural networks and Torch, I like to experiment to see what the differences are. I'm quite scientific in how I approach this, so I would like to run my network at 16-bit, 32-bit and 64-bit precision and compare the results. I don't like to just take people at their word that smaller is better; I want to see it for myself. It helps me to learn and understand.
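As an aside, the basic fp32-vs-fp64 difference is easy to observe outside of any framework. A tiny standalone C program along these lines (purely illustrative, unrelated to cltorch) shows how single-precision accumulation drifts much further from the exact result than double precision:

```c
#include <stdio.h>

int main(void) {
    /* Add 0.1 ten million times; 0.1 is not exactly representable
       in binary floating point, so rounding error accumulates. */
    const long n = 10000000;
    const double step = 0.1;

    float  sum32 = 0.0f;
    double sum64 = 0.0;
    for (long i = 0; i < n; i++) {
        sum32 += (float)step;   /* single-precision accumulation */
        sum64 += step;          /* double-precision accumulation */
    }

    printf("fp32 sum:  %f\n", sum32);
    printf("fp64 sum:  %f\n", sum64);
    printf("expected:  %f\n", (double)n * step);
    return 0;
}
```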
Ok, sounds good. The underlying codebase, i.e. cutorch, already provides different precisions, so you could plausibly use a similar technique here: https://github.com/torch/cutorch/blob/master/lib/THC/THCGenerateAllTypes.h
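For reference, the pattern behind THCGenerateAllTypes.h is to write the tensor code once against a placeholder element type and then re-include that generic source once per concrete type. A minimal single-file sketch of the same idea (the names `real`, `FUNC` and `GENERIC_PASS` are simplified stand-ins, not the actual THC macros) could look like this:

```c
/* Minimal sketch of the per-type generation pattern used by
 * THCGenerateAllTypes.h: a generic body is re-included once per type.
 * Compile from this file's directory, e.g. `cc sketch.c`. */
#include <stdio.h>

#ifdef GENERIC_PASS
/* ---- generic section: compiled once per element type ---- */
real FUNC(sum)(const real *data, long n) {
    real total = 0;
    for (long i = 0; i < n; i++)
        total += data[i];
    return total;
}
/* ---------------------------------------------------------- */
#else

#define GENERIC_PASS

/* Instantiate the generic section for float ... */
#define real float
#define FUNC(name) Float_##name
#include __FILE__   /* standard "computed include" of this same file */
#undef real
#undef FUNC

/* ... and again for double. */
#define real double
#define FUNC(name) Double_##name
#include __FILE__
#undef real
#undef FUNC

int main(void) {
    float  f[3] = {0.1f, 0.2f, 0.3f};
    double d[3] = {0.1, 0.2, 0.3};
    printf("float  sum: %.17g\n", (double)Float_sum(f, 3));
    printf("double sum: %.17g\n", Double_sum(d, 3));
    return 0;
}

#endif
```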
Is it possible to use double-precision floating point on the GPU?
As I understand it, this is not possible in cutorch, since CudaTensor is a single-precision floating point tensor.
What about ClTorch? Do you plan to add this?
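Whether fp64 is usable on the GPU at all also depends on the device itself: in OpenCL (which cltorch targets), a device advertises double support via the cl_khr_fp64 extension. A minimal standalone query, unrelated to cltorch's own API, might look like this:

```c
/* Standalone OpenCL query (not part of cltorch): list each device and
 * whether it advertises cl_khr_fp64, i.e. double support.
 * Link with -lOpenCL. */
#include <stdio.h>
#include <string.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256] = {0};
            char exts[4096] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_EXTENSIONS, sizeof(exts), exts, NULL);
            printf("%s: fp64 %s\n", name,
                   strstr(exts, "cl_khr_fp64") ? "supported" : "not supported");
        }
    }
    return 0;
}
```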