
Use single precision floats #107

Merged 3 commits into master from fix/float on Oct 11, 2019
Conversation

@inejc (Member) commented Oct 6, 2019

Fixes #104.

@inejc added the bug (Something isn't working), work in progress, and high priority labels Oct 6, 2019
@inejc inejc requested a review from matejklemen October 6, 2019 20:21
@inejc inejc self-assigned this Oct 6, 2019
@matejklemen (Member) commented Oct 6, 2019

Just double-checking to be sure: does this fix remove doubles because (most of the time) there's no need for that level of precision, or is there some other bug in the library caused by using doubles?

I'll take a closer look tomorrow (I guess it's mostly about making sure every use of doubles is found), but for now I've noticed (besides the failing test visible in the CI log) that the gradient tests for linear/logistic/Poisson regression and softmax are failing; I assume that's because the tolerance is too strict.
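For readers following along, the tolerance problem can be reproduced outside the library. This is an illustrative NumPy sketch (not the project's Scala/Breeze test code, and the `rtol` values are made-up examples) of why a tolerance picked for float64 rejects float32 results:

```python
import numpy as np

exact = 1.0 / 3.0                  # a float64 reference value
single = float(np.float32(exact))  # the same value rounded to float32

# float32 carries only ~7 significant decimal digits (relative error up
# to ~6e-8), so a float64-grade relative tolerance rejects the result...
assert not np.isclose(single, exact, rtol=1e-9, atol=0.0)

# ...while a float32-grade relative tolerance accepts it
assert np.isclose(single, exact, rtol=1e-6, atol=0.0)
```

The same effect shows up in any test that compares float32 computations against float64-grade tolerances.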

Edit: never mind, I didn't see the WIP tag; I'll check back later.

@picnicml picnicml deleted a comment Oct 7, 2019
@picnicml picnicml deleted a comment Oct 7, 2019
@inejc (Member, Author) commented Oct 8, 2019

@matejklemen yeah, we should sacrifice precision (I think ~7 significant digits is plenty for most cases) for lower memory consumption (it should be halved). The problem with the failing tests was Breeze's function for computing approximate gradients, which we compare against the analytical solution: it used the one-sided version of the finite difference formula. I changed that to the two-sided estimate and also had to relax the tolerance for the comparisons. The integration tests for linear models show a negligible change in performance. The tests are passing now, but I want to run this on larger datasets to confirm I didn't break anything.
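The one-sided vs two-sided distinction matters because the central (two-sided) difference has O(h²) truncation error, versus O(h) for the forward (one-sided) difference. A minimal NumPy sketch of the idea (illustrative only; the project itself uses Breeze in Scala, and `f`, `x`, and `h` here are made-up example values):

```python
import numpy as np

def forward_diff(f, x, h):
    # one-sided estimate: truncation error is O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # two-sided estimate: truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def f(x):
    return x ** 3            # analytical gradient: 3 * x ** 2

x = np.float32(1.5)
h = np.float32(0.01)         # step kept fairly large: in float32,
                             # smaller steps drown in rounding noise
exact = 3 * x ** 2           # = 6.75

err_one_sided = abs(forward_diff(f, x, h) - exact)
err_two_sided = abs(central_diff(f, x, h) - exact)
# the two-sided estimate lands much closer to the exact gradient
```

With these values the one-sided error is roughly 0.045 while the two-sided error is around 1e-4, which is why a gradient check built on the one-sided estimate needs a much looser tolerance to pass.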

@inejc inejc changed the title [WIP] Use single precision floats [IN TESTING] Use single precision floats Oct 8, 2019
@inejc inejc mentioned this pull request Oct 8, 2019
@inejc inejc merged commit 5a0c1fa into master Oct 11, 2019
@inejc inejc removed the in testing label Oct 11, 2019
@inejc inejc changed the title [IN TESTING] Use single precision floats Use single precision floats Oct 11, 2019
@inejc inejc deleted the fix/float branch October 11, 2019 09:12
Labels: bug (Something isn't working), high priority
Successfully merging this pull request may close: Use single precision floating points (#104)
2 participants