Add algorithm for difference of squares and move is_monotonic_increasing() to utils #1082

Merged: 2 commits merged into celeritas-project:develop from helper-functions on Jan 16, 2024

Conversation

amandalund (Contributor)

This pulls out a couple small changes from #1080.

amandalund added the core (Software engineering infrastructure) and minor (Minor internal changes or fixes) labels on Jan 15, 2024
amandalund requested a review from sethrj on January 15, 2024 at 19:36
sethrj (Member) left a comment

Thanks!

sethrj merged commit 2d06ea4 into celeritas-project:develop on Jan 16, 2024
20 checks passed
amandalund deleted the helper-functions branch on January 16, 2024 at 16:40
sethrj (Member) commented on Jan 26, 2024

By the way, I originally thought the expression they used was to eliminate roundoff error when the two numbers have different magnitudes, but inspired by @paulromano's use of mpmath it's clear that's not the case (shown here for single-precision arithmetic):

[plot: single-precision error comparison]

https://gist.github.com/sethrj/e0b1a0e92d4b89e11e14ffed2b3fee01

So I guess those routines were written that way as a microoptimization to turn one multiply (*) into an addition (+)?
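For concreteness, here are the two candidate formulations side by side (a standalone sketch with made-up names, not the actual Celeritas helpers):

```cpp
#include <cstdio>

// "Usual" form: two multiplies and a subtraction. A compiler may be able to
// contract this into a single multiply plus a fused multiply-subtract.
float diff_sq_direct(float a, float b)
{
    return a * a - b * b;
}

// Rearranged form: one multiply, one addition, one subtraction. The factoring
// rules out that contraction, which may be where the extra instruction comes from.
float diff_sq_factored(float a, float b)
{
    return (a - b) * (a + b);
}

int main()
{
    float a = 3.0f;
    float b = 4.0f;
    std::printf("direct = %g, factored = %g\n",
                diff_sq_direct(a, b),
                diff_sq_factored(a, b));
    return 0;
}
```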

Yet it looks like it takes an extra register and an extra operation to do! (See godbolt for both arm and x86.)

Maybe we should follow up by writing the expression as God intended it...

@pcanal Is there some edge case I'm missing where the (a - b) * (a + b) formulation might have some advantage?

amandalund (Contributor, PR author)
I thought your original guess was right, except that the intent was to reduce the error when the two numbers are close in magnitude. In practice, though, I didn't really see it make a difference either. I tried to come up with some examples when I wrote the test (and also played around with mpmath), but didn't have much luck finding a case where rearranging the expression helped, so I'd also support just writing it the usual way.
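For example, a probe of the close-in-magnitude case might look something like this (just a sketch; the values are arbitrary):

```cpp
#include <cstdio>
#include <initializer_list>

int main()
{
    // a and b nearly equal, so a*a and b*b almost cancel
    float a = 1.0f;
    for (float b : {0.9999999f, 0.999999f, 0.99999f})
    {
        float direct = a * a - b * b;
        float factored = (a - b) * (a + b);
        double exact = double(a) * a - double(b) * b;  // higher-precision reference
        std::printf("b=%.8g: direct=%.9g factored=%.9g exact=%.17g\n",
                    b, direct, factored, exact);
    }
    return 0;
}
```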

sethrj (Member) commented on Jan 26, 2024

Oh right. Well, once a and b get stored as floating-point values, there still seems to be no advantage:
[plot: single-precision error comparison]

pcanal (Contributor) commented on Jan 29, 2024

@pcanal Is there some edge case I'm missing where the (a - b) * (a + b) formulation might have some advantage?

In the cases where a and b are far apart (orders of magnitude) but not far enough (i.e., a - b != a), you might have (after numerical errors):

(a*a) - (b*b) == (a*a)
but
(a-b) != a
(a+b) != a
and thus
(a-b) * (a+b) != a*a
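A quick numerical check of that regime might look something like this (sketch only; the particular a and b values are just rough guesses at where the effect could show up):

```cpp
#include <cstdio>
#include <initializer_list>

int main()
{
    // b well below a in magnitude, but not so small that a - b rounds back to a
    float a = 1000.0f;
    for (float b : {1.0f, 0.1f, 0.01f})
    {
        float direct = a * a - b * b;          // may round all the way back to a*a
        float factored = (a - b) * (a + b);
        double exact = double(a) * a - double(b) * b;  // higher-precision reference
        std::printf("b=%g: direct=%.9g factored=%.9g exact=%.17g\n",
                    b, direct, factored, exact);
    }
    return 0;
}
```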

sethrj (Member) commented on Jan 29, 2024

If they're far apart, the squares are even further apart... so the result doesn't change 😕
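For instance, in single precision with a = 10^4 and b = 1: the exact difference is 99 999 999, but floats near 10^8 are spaced 8 apart, so a*a - b*b rounds to 10^8 == a*a, and (a - b) * (a + b) = 9999 * 10001 = 99 999 999 rounds to the same 10^8, which is also the correctly rounded exact answer. Neither form comes out ahead.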
